Interview With Nvidia's David Kirk

GameSpot talks with Nvidia's chief scientist about past trends and future technologies in the topsy-turvy world of 3D graphics.

There's always a lot going on in the world of 3D graphics, and the past year has been no exception. We had the chance to speak with Nvidia's chief scientist, David Kirk, to get his perspective on recent twists, like the newest GeForce3 driver features, as well as upcoming efforts, like the push toward OpenGL 2.0.

GameSpot: You've seen quite a few graphics chip designs since you joined Nvidia in 1997. Which generations stand out most in your mind as being key for Nvidia's past and future success?

David Kirk: They have all been exciting, but in different ways. RIVA 128 was exciting because it was three to five times faster than previous products. Amazing fun. TNT2 drastically improved image quality and introduced the idea of rendering either 2 pixels per clock with a single texture or 1 pixel per clock with two textures, making maximum use of the pipeline. In fact, it was such a great product that we still sell a lot of them today. GeForce changed graphics by integrating hardware transform and lighting into our pipeline, allowing PCs to process enormous amounts of geometry for the first time. GeForce3 introduced programmability to graphics pipelines and brought programmable shading to the mainstream. Each product has, in its own way, been very exciting to me, so it's hard to choose.

GS: What have been some of the challenges for Nvidia as the company has grown rapidly over the last couple of years and expanded into new markets, such as mobile devices, multimedia graphics cards, and integrated chipsets?

DK: I think that the biggest challenge has been to try not to do too much. While Nvidia is entering new markets, we are also continuing to produce a new high-performance GPU every six months. Now, we are also producing portable (laptop) chips, professional workstation products, platform processors (graphics-integrated system core logic), game console (Xbox) chips, and products for the mainstream and budget PC markets. That's a lot! As Nvidia grows larger and we bring our technology to additional complementary markets, we want to make sure not to lose focus, and to continue serving our core market and customers.

GS: As recent graphics cores have featured increasing raw power, the bottleneck has shifted to memory bandwidth in many situations. What has Nvidia done to improve memory performance, and what are other ways to get around the problem as current graphics architectures mature?

DK: Nvidia has developed a lot of techniques for aggressive bandwidth reduction. Based on this effort, we are using a lot less bandwidth per pixel and per triangle. Bandwidth per triangle is reduced by improved technology for clipping and culling--triangles that are partly or entirely offscreen, and triangles and pixels that are invisible, don't need to be drawn. The GeForce3 occlusion-culling technology can avoid drawing invisible geometry very efficiently.
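To make the idea concrete, here is a minimal sketch of the kind of culling test an engine can run before submitting geometry at all. The types and the isOutsideFrustum function are illustrative assumptions for this sketch, not Nvidia's hardware implementation, which performs its occlusion culling transparently inside the chip.

```cpp
#include <array>

struct Plane  { float a, b, c, d; };      // plane equation: ax + by + cz + d = 0
struct Sphere { float x, y, z, radius; }; // bounding sphere of a mesh

// Returns true when the sphere lies entirely outside one of the six frustum
// planes (normals assumed to point into the view volume), meaning none of
// its triangles can be visible and nothing needs to be drawn or fetched.
bool isOutsideFrustum(const Sphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        float distance = p.a * s.x + p.b * s.y + p.c * s.z + p.d;
        if (distance < -s.radius)
            return true;  // fully behind this plane: cull the whole object
    }
    return false;         // potentially visible: submit it to the GPU
}
```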

Other optimizations are used to reduce the bandwidth consumed by pixel rendering. The first of these technologies to be seen was texture compression--compressed textures are smaller, so they consume less frame buffer space and less memory bandwidth. In addition, the GeForce3 uses lossless-compression technology to read and write less data from the frame buffer.
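As a rough illustration of the arithmetic: DXT1-style compression stores each 4x4 block of texels in 8 bytes, so a 256x256 32-bit texture shrinks from 256KB to 32KB, an 8-to-1 saving in both storage and fetch bandwidth. Below is a minimal sketch of uploading such a precompressed texture through OpenGL's standard compressed-texture entry point; the helper name is illustrative, and on older systems the function may have to be fetched through the extension mechanism.

```cpp
#include <GL/gl.h>
#include <GL/glext.h>  // for GL_COMPRESSED_RGBA_S3TC_DXT1_EXT

// Uploads a DXT1-compressed image: one 8-byte block per 4x4 texel tile.
void uploadDxt1Texture(const unsigned char* blocks, int width, int height) {
    GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 8;
    glCompressedTexImage2D(GL_TEXTURE_2D, /*level*/ 0,
                           GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                           width, height, /*border*/ 0,
                           imageSize, blocks);
}
```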

GS: The release of the Detonator XP drivers exposed new features: shadow buffers and 3D texture support. How do you see these being used in games?

DK: As we do with every new chip release, we have been lobbying game developers for some time and doing a lot of education on shadow buffers and 3D textures. Just as we have done in the past, we bring them in for developer kitchens and give them the tools that make adopting these new features less painful. The strategy worked great for vertex and pixel shaders, as witnessed by the 57 titles we have listed on our Web site. You can expect the same aggressive campaign for shadow buffers and 3D textures. A great use of shadow buffers was in the movie Final Fantasy: The Spirits Within, which we use to show off the technique. Shadow buffers make lifelike shadows, with soft edges and realistic response to light, possible with very little hit to game performance. That adds a ton to the realism of a game.
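The core of the technique is a simple depth comparison. Here is a minimal sketch expressed on the CPU for clarity; the buffer layout, names, and bias value are illustrative assumptions, while GeForce3-class hardware performs the equivalent comparison per pixel against a depth texture.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };  // a point already projected into light space

// Depth of the nearest occluder seen from the light, one float per texel.
float sampleLightDepth(const std::vector<float>& lightDepth, int size,
                       int u, int v) {
    u = std::clamp(u, 0, size - 1);
    v = std::clamp(v, 0, size - 1);
    return lightDepth[v * size + u];
}

// A point is lit only if nothing sits between it and the light: its depth
// must match the stored occluder depth to within a small bias.
bool isLit(const Vec3& p, const std::vector<float>& lightDepth, int size) {
    const float bias = 0.002f;  // guards against self-shadowing artifacts
    int u = static_cast<int>(p.x * size);
    int v = static_cast<int>(p.y * size);
    return p.z <= sampleLightDepth(lightDepth, size, u, v) + bias;
}
```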

Even when compressed, 3D textures can be a little cumbersome, so I think their use will be more conservative. If you have a key area in a game where a 3D texture can add to the environment, or an object that will be smashed, broken, or split and is critical to the scene, I think you will see 3D textures applied there. It's not likely that a game will be done entirely with 3D textures; however, I believe that you will see some very exciting volumetric lighting and fog effects created using 3D textures.
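For a sense of what such a volumetric effect involves, here is a minimal sketch that builds a small fog-density volume and uploads it with OpenGL 1.2's glTexImage3D. The falloff formula and the function name are illustrative assumptions; on some platforms the entry point must be fetched through the extension mechanism.

```cpp
#include <GL/gl.h>
#include <GL/glext.h>
#include <algorithm>
#include <cmath>
#include <vector>

// Builds a 32x32x32 single-channel volume whose fog is densest at the
// center and thins toward the edges, then uploads it as a 3D texture.
void createFogVolume() {
    const int N = 32;
    std::vector<unsigned char> density(N * N * N);
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                float dx = x / (N - 1.0f) - 0.5f;
                float dy = y / (N - 1.0f) - 0.5f;
                float dz = z / (N - 1.0f) - 0.5f;
                float r = std::sqrt(dx * dx + dy * dy + dz * dz);
                float d = std::max(0.0f, 1.0f - 2.0f * r);
                density[(z * N + y) * N + x] =
                    static_cast<unsigned char>(255.0f * d);
            }
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, N, N, N, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, density.data());
}
```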

GS: Some have said that in an era of programmable graphics processors, GPU manufacturers won't compete on proprietary features as in the past but rather on pure performance and standards implementation. How do you see programmability changing the competitive landscape?

DK: Nvidia has led the push to provide more programmability in graphics pipelines, first with GeForce2's pixel shading. Then, Nvidia's GeForce3 changed the landscape of graphics programming by making the vertex pipeline completely programmable and by offering a more programmable pixel pipeline than had ever been available before. This evolution will continue, and it's a very exciting time to be in the graphics world. The beauty of programmability is that it frees game developers from the idiosyncrasies of the old API-based hardwired pipeline. This newfound freedom will give developers the means to apply their creativity more effectively to their art: creating great games. It's the best thing to happen to game development in a long time.
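To see what that freedom amounts to, here is a minimal sketch of what a short vertex program computes, written on the CPU for clarity. The names are illustrative; on GeForce3-class hardware these few operations run per vertex inside the pipeline itself, and the developer can replace them with any formula at all.

```cpp
#include <algorithm>

struct Vec4 { float x, y, z, w; };

float dot4(const Vec4& a, const Vec4& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// rows[] holds the combined modelview-projection matrix, one row per entry;
// this is the classic fixed-function transform, now just four dot products
// that a vertex program is free to keep, bend, or discard entirely.
Vec4 transform(const Vec4 rows[4], const Vec4& position) {
    return { dot4(rows[0], position), dot4(rows[1], position),
             dot4(rows[2], position), dot4(rows[3], position) };
}

// An example per-vertex computation: a simple clamped diffuse lighting term.
float diffuse(const Vec4& normal, const Vec4& lightDir) {
    return std::max(0.0f, dot4(normal, lightDir));
}
```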

One important thing to understand about programmability is that compatibility and compliance with the programmable instruction set are extremely important. Think in terms of a CPU. Would you buy a CPU that doesn't run the programs that run on your Athlon or Pentium PC? Of course you wouldn't. You couldn't run your games or other applications. The same is true for programmable vertex and pixel shading. GeForce3, GeForce3 Ti 500, GeForce3 Ti 200 (a really exciting product, because it brings GeForce3 technology to the mainstream), and the Xbox are all 100 percent compatible implementations of vertex and pixel programmability. That represents an outstanding virtual platform for developers--they can create stuff that just works. Competing hardware products face a heavy burden: they must be compatible with GeForce3.

Looking into the future, I see even more excitement in programmable graphics pipelines. Nvidia will continue to innovate in this direction. As the technology leader in the graphics market, it is our responsibility to drive this technology forward in a way that the industry can adopt.

GS: Since Siggraph 2001, there has been some real talk of hardware vendors coming together to produce an OpenGL 2.0 spec. What's Nvidia's involvement with this process, and how do you see the future of OpenGL as an API for game graphics?

DK: Nvidia is strongly committed to the future viability of OpenGL, both for professional applications and as a game development API. OpenGL allows Nvidia to bring technology to developers and consumers immediately when new hardware is available, independent of others. That results in a great experience for all. Nvidia is an active participant in the OpenGL 2.0 effort. The future of OpenGL involves the adoption of increased levels of programmability, as discussed above. OpenGL 2.0 is an opportunity to shed the old legacy interfaces and hardwired API features of the original OpenGL. Those hardwired features just don't make sense anymore with modern hardware and, in fact, are superseded by programmability. Ideally, in OpenGL 2.0, all of the older features should be implementable using programmability. In that way, all of the old crust is gone, but nothing is lost in terms of functionality. I am very excited about this effort.

At the same time, the future is a long time, and we actually live in the present. While we define the future of OpenGL 2.0, it's also important to keep OpenGL 1.x alive and viable in parallel. As an example, we need to unify competing hardware vendors' extensions for vertex programming into a single, compatible API extension, working toward OpenGL 1.4. To this end, Nvidia has (in my opinion, generously) offered to license our patented vertex programming technology to members of the OpenGL ARB [Architecture Review Board], as we have to Microsoft for DX8. In my opinion, it is not possible to define and implement a vertex programming extension for OpenGL without infringing on Nvidia's patents. However, we are committed to advancing OpenGL, so we are willing to share this technology for the good of the industry. I hope that other ARB members are willing to make the same effort and commitment to the API. We are also willing to consider similar arrangements for the OpenGL 2.0 effort.
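For context, this is roughly what the vendor-specific route looks like today: a sketch of loading a vertex program through Nvidia's NV_vertex_program OpenGL extension, the kind of interface the ARB effort aims to unify. The entry points and !!VP1.0 syntax follow the published extension; the helper name is illustrative, and on Windows the functions must be fetched through the extension mechanism, which this sketch omits.

```cpp
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>

// A minimal vertex program: transform the position by the tracked
// modelview-projection matrix in c[0]..c[3] and pass the color through.
static const char* kProgram =
    "!!VP1.0\n"
    "DP4 o[HPOS].x, c[0], v[OPOS];\n"
    "DP4 o[HPOS].y, c[1], v[OPOS];\n"
    "DP4 o[HPOS].z, c[2], v[OPOS];\n"
    "DP4 o[HPOS].w, c[3], v[OPOS];\n"
    "MOV o[COL0], v[COL0];\n"
    "END\n";

void bindExampleVertexProgram() {
    GLuint id;
    glGenProgramsNV(1, &id);
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                    (GLsizei)std::strlen(kProgram),
                    (const GLubyte*)kProgram);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
    // Track the current modelview-projection matrix into constants c[0]..c[3].
    glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0,
                    GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);
    glEnable(GL_VERTEX_PROGRAM_NV);
}
```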

GS: The most recent generation of graphics chips has more than doubled in complexity in terms of transistors. Will future generations be able to grow as quickly without production breakthroughs, or will it be an issue of using available real estate more efficiently?

DK: Moore's law rules! We expect the growth of transistor count on graphics chips to continue. The opportunity here is to make truly great and exciting products with all of the additional graphics horsepower embodied in those transistors. There's something really exciting here, though, that is not completely obvious. Moore's law describes not simply the growth in transistor count and speed, but also the growth in computing power for a CPU. That's the key point: for a CPU. For a GPU, there is a much better opportunity to harness those transistors effectively toward solving graphics problems, because graphics work is inherently parallel. Consequently, we can easily exceed Moore's law, doubling every six months instead of every 18 months--a pace that compounds to a fourfold increase each year rather than roughly 1.6 times.

GS: Where do you see the most important trends in game graphics for the next few years?

DK: I think that the most important trends will have to do with developers aggressively adopting programmability. By the middle of next year, I expect to see more than 5 million GeForce3 family GPUs in the hands of consumers, maybe as many as 10 million. That's phenomenal! That number of programmable GPUs in the consumer market represents a tremendous opportunity for developers to create special effects in lighting, shadows, reflections, and realistic materials. Programmable shading will completely change the look of games over the next year.

GS: Thanks for your time, David.
