DirectX 11 Tessellation—what it is and why it matters
With the recent buzz around DirectX 11, you’ve probably heard a lot about one of its biggest new features: tessellation. As a concept, tessellation is fairly straightforward—you take a polygon and dice it into smaller pieces. But why is this a big deal? And how does it benefit games? In this article, we’ll take a look at why tessellation is bringing profound changes to 3D graphics on the PC, and how the NVIDIA® GeForce® GTX 400 series GPUs provide breakthrough tessellation performance.
In its most basic form, tessellation is a method of breaking down polygons into finer pieces. For example, if you take a square and cut it across its diagonal, you’ve “tessellated” this square into two triangles. By itself, tessellation does little to improve realism. For example, in a game, it doesn’t really matter if a square is rendered as two triangles or two thousand triangles—tessellation only improves realism if the new triangles are put to use in depicting new information.
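To make the idea concrete, here is a minimal sketch (in Python, purely for illustration—real tessellation runs in fixed-function GPU hardware) of one level of uniform subdivision, splitting a triangle into four smaller triangles by connecting its edge midpoints. The function names are made up for this example.

```python
def midpoint(a, b):
    """Average two 3D points component-wise."""
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(tri):
    """Split one triangle (a, b, c) into four by its edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

# Each pass multiplies the triangle count by four.
tris = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
for _ in range(3):
    tris = [t for tri in tris for t in subdivide(tri)]
print(len(tris))  # 64 triangles after three passes
```

Note that every new vertex still lies on the original flat triangle—which is exactly the point made above: subdivision alone adds triangles, not detail.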
|When a displacement map (left) is applied to a flat surface, the resulting surface (right) expresses the height information encoded in the displacement map.|
The simplest and most popular way of putting the new triangles to use is a technique called displacement mapping. A displacement map is a texture that stores height information. When applied to a surface, it allows vertices on the surface to be shifted up or down based on the height information. For example, the graphics artist can take a slab of marble and shift the vertices to form a carving. Another popular technique is to apply displacement maps over terrain to carve out craters, canyons, and peaks.
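The displacement step itself is simple to sketch. The toy Python below (names and data layout are invented for illustration; a real implementation samples a displacement texture in the domain shader) pushes each vertex of a flat patch along the surface normal by the height stored in a small "displacement map":

```python
def displace(vertices, normal, height_map, scale=1.0):
    """Offset each vertex along `normal` by the map value at its (i, j) index."""
    out = []
    for (i, j, pos) in vertices:
        h = height_map[i][j] * scale
        out.append(tuple(pos[k] + normal[k] * h for k in range(3)))
    return out

# A 2x2 patch of a flat surface (z = 0) with its up-facing normal.
verts = [(0, 0, (0.0, 0.0, 0.0)), (0, 1, (1.0, 0.0, 0.0)),
         (1, 0, (0.0, 1.0, 0.0)), (1, 1, (1.0, 1.0, 0.0))]
heights = [[0.0, 0.5], [0.25, 1.0]]
print(displace(verts, (0.0, 0.0, 1.0), heights, scale=2.0))
```

With only four vertices, the "relief" this produces is crude—which is exactly why displacement mapping needs tessellation to supply a dense mesh, as the next section explains.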
Like tessellation, displacement mapping has been around for a long time, but until recently, it has never really caught on. The reason is that for displacement mapping to be effective, the surface must be made up of a large number of vertices. To take the example of the marble carving—if the marble block were made up of eight vertices, no amount of relative displacement between them can produce the relief of a dragon. A detailed relief can be formed only if there are sufficient vertices in the base mesh to depict the new shape. In essence—displacement mapping needs tessellation, and vice versa.
With DirectX 11, tessellation and displacement mapping finally come together in a happy union, and already, developers are jumping on board. Popular games like Aliens vs. Predator and Metro 2033 use tessellation to produce smooth-looking models, and developers at Valve and id Software have done promising work on applying these techniques to their existing game characters.
|After a coarse model (left) goes through tessellation, a smooth model is produced (middle). When displacement mapping is applied (right), characters approach film-like realism. © Kenneth Scott, id Software 2008|
Because the DirectX 11 tessellation pipeline is programmable, it can be used to solve a large number of graphics problems. Let’s look at four examples.
Perfect Bump Mapping
At its most basic, displacement mapping can be used as a drop-in replacement for existing bump mapping techniques. Current techniques such as normal mapping create the illusion of bumpy surfaces through better pixel shading. All of these techniques work only in particular cases, and even then are only partially convincing. Take the case of parallax occlusion mapping, a very advanced form of bump mapping. Though it produces the illusion of overlapping geometry, it works only on flat surfaces and only in the interior of the object (see image above). True displacement mapping has none of these problems and produces accurate results from all viewing angles.
|PN-Triangles enable automatic smoothing of characters without artist input. Both geometry and lighting realism are improved.|
The other natural partner to tessellation is the refinement algorithm. A refinement algorithm takes a coarse model and, with the help of tessellation, creates a smoother looking model. A popular example is PN-Triangles (also known as N-patches). The PN-Triangles algorithm converts low resolution models into curved surfaces which are then redrawn as a mesh of finely tessellated triangles. Many of the visual artifacts that we take for granted in today’s games (blocky character joints, polygonal-looking car wheels, coarse facial features) can be eliminated with the help of such algorithms. For example, PN-Triangles is used in S.T.A.L.K.E.R.: Call of Pripyat to produce smoother, more organic looking characters.
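The core of the PN-Triangles scheme (Vlachos et al., 2001) is to promote each triangle, using only its corner positions and normals, to a cubic Bézier patch that the tessellator then samples. Below is a rough Python sketch of that construction—helper names like `pn_patch` and `pn_eval` are invented here, and a real implementation would live in an HLSL hull/domain shader rather than on the CPU:

```python
from math import factorial

def vadd(*ps):
    """Component-wise sum of 3D vectors."""
    return tuple(sum(cs) for cs in zip(*ps))

def vscale(p, s):
    return tuple(c * s for c in p)

def vdot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pn_patch(P, N):
    """Build the ten cubic Bezier control points from corner positions P
    and unit corner normals N, following the PN-Triangles construction."""
    def edge(i, j):
        # Take the 1/3 point along edge i->j, pulled toward the
        # tangent plane at corner i.
        w = vdot(vadd(P[j], vscale(P[i], -1)), N[i])
        return vscale(vadd(vscale(P[i], 2), P[j], vscale(N[i], -w)), 1 / 3)
    b = {(3, 0, 0): P[0], (0, 3, 0): P[1], (0, 0, 3): P[2],
         (2, 1, 0): edge(0, 1), (1, 2, 0): edge(1, 0),
         (0, 2, 1): edge(1, 2), (0, 1, 2): edge(2, 1),
         (1, 0, 2): edge(2, 0), (2, 0, 1): edge(0, 2)}
    # Center point: average of the edge points, pushed away from the
    # flat centroid by half the difference.
    E = vscale(vadd(*(b[k] for k in b if sorted(k) == [0, 1, 2])), 1 / 6)
    V = vscale(vadd(*P), 1 / 3)
    b[(1, 1, 1)] = vadd(E, vscale(vadd(E, vscale(V, -1)), 0.5))
    return b

def pn_eval(b, u, v):
    """Evaluate the cubic patch at barycentric (w, u, v), w = 1 - u - v."""
    w = 1.0 - u - v
    pt = (0.0, 0.0, 0.0)
    for (i, j, k), cp in b.items():
        bern = factorial(3) // (factorial(i) * factorial(j) * factorial(k))
        pt = vadd(pt, vscale(cp, bern * w**i * u**j * v**k))
    return pt
```

The patch interpolates the original corners exactly, and when the corner normals disagree (as at a blocky joint), the interior bows outward into a smooth curve—no artist input required.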
Seamless Level of Detail
In games with large, open environments you have probably noticed distant objects often pop in and out of existence. This is due to the game engine switching between different levels of detail, or LOD, to keep the geometric workload in check. Up until this point, there has been no easy way to vary the level of detail continuously, since it would require keeping many versions of the same model or environment. Dynamic tessellation solves this problem by varying the level of detail on the fly. For example, when a distant building first comes into view, it may be rendered with only ten triangles. As you move closer, its prominent features emerge and extra triangles are used to outline details such as its windows and roof. When you finally reach the door, a thousand triangles are devoted to rendering the antique brass handle alone, where each groove is carved out meticulously with displacement mapping. With dynamic tessellation, object popping is eliminated, and game environments can scale to near limitless geometric detail.
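The heart of such a scheme is choosing a per-patch tessellation factor from camera distance. The sketch below shows one simple way this could look (in a real engine this runs in the hull shader's patch-constant function, and the constants here are purely illustrative):

```python
def tess_factor(distance, near=1.0, far=100.0, max_tess=64.0):
    """Map camera distance to a continuous tessellation factor in [1, max_tess]."""
    # Clamp the distance into [near, far], then interpolate linearly:
    # patches at the near plane get max_tess, patches at the far plane
    # fall back to a single untessellated triangle.
    d = min(max(distance, near), far)
    t = (far - d) / (far - near)  # 1.0 up close, 0.0 far away
    return 1.0 + t * (max_tess - 1.0)

print(tess_factor(1.0))    # 64.0 at the near plane
print(tess_factor(100.0))  # 1.0 at the far plane
```

Because the factor varies continuously as the camera moves, the triangle count ramps up and down smoothly instead of jumping between discrete LOD models—which is what eliminates the popping described above.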
For developers, tessellation greatly improves the efficiency of their content creation pipeline. In describing their motivation for using tessellation, Jason Mitchell of Valve says: “We are interested in the ability to author assets which allow us to scale both up and down. That is, we want to build a model once and be able to scale it up to film quality…Conversely, we want to be able to naturally scale the quality of an asset down to meet the needs of real-time rendering on a given system.” This ability to create a model once and use it across various platforms means shorter development times, and for the PC gamer, the highest possible image quality on their GPU.
How GeForce GTX 400 GPUs handle Tessellation
Traditional GPU designs use a single geometry engine to perform tessellation. This approach is analogous to early GPU designs, which used a single pixel pipeline to perform pixel shading. Seeing how pixel pipelines grew from a single unit to many parallel units, and how pixel shading came to dominate 3D realism, we designed our tessellation architecture to be parallel from the very beginning.
GeForce GTX 400 GPUs are built with up to fifteen tessellation units, each with dedicated hardware for vertex fetch, tessellation, and coordinate transformations. They operate with four parallel raster engines which transform newly tessellated triangles into a fine stream of pixels for shading. The result is a breakthrough in tessellation performance—over 1.6 billion triangles per second in sustained performance. Compared to the fastest competing product, the GeForce GTX 480 is up to 7.8x faster as measured by the independent website Bjorn3D.
After many years of trial and error, tessellation has finally come to fruition on the PC. Stunning games like Metro 2033 already show the potential of tessellation. In time, tessellation will become as crucial and indispensable as pixel shading. Realizing its importance, NVIDIA has jump-started the process by building a parallel tessellation architecture from the get-go. The result is the GeForce GTX 400 family of GPUs—a true breakthrough in geometric realism and tessellation performance.