Game Graphics 101: Textures, UV Mapping, and Texture Filtering

 
[[This is Chapter 17(f) from “beta” Volume V of the upcoming book “Development&Deployment of Multiplayer Online Games”, which is currently being beta-tested. Beta-testing is intended to improve the quality of the book, and provides a free e-copy of the “release” book to those who help with improving it; for further details see “Book Beta Testing“. All the content published during Beta Testing is subject to change before the book is published.

To navigate through the book, you may want to use Development&Deployment of MOG: Table of Contents.]]

As noted at the beginning of this Chapter, please keep in mind that

in this book you will NOT find any advanced topics related to graphics.

What you will find in this Chapter is the very, very basics of graphics – just enough to start reading other books on the topic, AND (last but not least) to understand other things which are essential for network programming and for the game development flow.

Bottom line:

if you’re a gamedev with at least some graphics experience – it is probably better to skip this Chapter to avoid reading about those-things-you-know-anyway.

This Chapter is more oriented towards those developers who are coming from radically different fields such as, for example, webdev or business app development (and yes, a switch from webdev into gamedev does happen).

Textures

Ok, we’ve got our meshes, but even if you render them, they will come out completely uncolorized. To deal with this, one Really Common Way1 is to use textures. A texture is that good old raster 2D image (which we’ve discussed above in the [[TODO]] section), which is applied to our mesh (more strictly – to the triangles/polygons our mesh is made of).

A few notes about textures as they’re usually used in 3D engines:

  • Texture pixels are often referred to as “texels” – just to distinguish them from screen pixels
  • Size-wise, textures are HUGE. Actually, in a typical 3D game, 90%+ of the space on disk (and of GPU bandwidth/RAM) is used by textures.
    • Putting on a hat borrowed from an esteemed Captain Obvious, a practical consequence: don’t even THINK of transferring textures over the network in real-time. Actually, even meshes are usually out of the question for over-the-network transfers, but textures are even more so (and by orders of magnitude too).
  • 3D engine textures almost universally use so-called “mipmapping”. “Mipmaps” are a series of raster images representing the same picture, but with progressively reduced resolutions.
    • The idea here is similar to LOD which we’ve discussed earlier: for distant enough objects, there is no need to try rendering a 1000×1000 texture (a 30×30 one will do). This allows saving on all the stuff such as GPU/memory bandwidth, GPU load, etc. In addition, it helps with avoiding so-called “subsampling” issues – see, for example, [McShaffryGraham] for further discussion.
    • Unlike LOD meshes, mipmaps can usually be generated automatically (phew).
    • As a rule of thumb, mipmaps increase the size of your textures by about 33% (each mip level is 1/4 the size of the previous one, so the whole chain adds 1/4 + 1/16 + 1/64 + … ≈ 1/3 on top of the original).
  • Textures within 3D engines do NOT use common compression methods such as JPEG or PNG.
    • Actually, it is not even about the format supported by your engine, but about the format supported by your GPU (as textures are rendered directly by the GPU). These days, most GPUs support compressed textures – however, they’re using fixed-rate compression algorithms (such as DXTn or PVRTC) – see the loading sketch after this list for how such a texture typically gets to the GPU.
    • Theoretically speaking, nothing prevents you from shipping your game with JPEG textures on disk, and then decompressing them into a format-suitable-for-your-GPU during texture loading – but this is rarely done in practice (in particular, decoding all those texture JPEGs while the program is running may cause your CPU to become a bottleneck).
      • Instead, if you’re really concerned about your downloadable size – you may want to download your textures as JPEGs, and then – while installing (or as a post-install step on the first run) – convert them into whatever-format-your-3D-engine-needs. This conversion may also involve creating mipmaps. NB: of course, any such trickery – at least as long as JPEGs are involved – will involve loss of visual quality; how big this quality loss is, and whether the gains in download times are worth it – is for you to decide.

1 In fact, it is that common that I don’t know of any others 😉
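To make the “GPU-native formats plus mipmaps” point a bit more concrete: below is a minimal sketch (C++ over OpenGL, GLEW used just for the headers) of uploading a DXT5/BC3-compressed texture with a pre-built mipmap chain. The MipLevel struct and the function name are purely illustrative – a real engine would get this data from its own asset format.

```cpp
// Minimal illustrative sketch: uploading a DXT5/BC3-compressed texture with a
// pre-built mipmap chain. Assumes an OpenGL context is current and the
// EXT_texture_compression_s3tc extension is available.
#include <GL/glew.h>
#include <cstdint>
#include <vector>

struct MipLevel {
    int width  = 0;
    int height = 0;
    std::vector<uint8_t> dxt5Blocks;  // pre-compressed texel blocks for this level
};

GLuint uploadCompressedTexture(const std::vector<MipLevel>& mips) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // Upload every pre-built mip level; level 0 is the full-resolution image.
    for (size_t level = 0; level < mips.size(); ++level) {
        const MipLevel& m = mips[level];
        glCompressedTexImage2D(GL_TEXTURE_2D, static_cast<GLint>(level),
                               GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                               m.width, m.height, /*border=*/0,
                               static_cast<GLsizei>(m.dxt5Blocks.size()),
                               m.dxt5Blocks.data());
    }
    // Tell the driver how many mip levels are actually present.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL,
                    static_cast<GLint>(mips.size()) - 1);
    return tex;
}
```

And if you do go the “download JPEGs, convert at install time” route mentioned above – it is the install-time converter that decodes the JPEGs, re-compresses the texels into DXT/PVRTC blocks, and builds the mip chain, so that a runtime loader along the lines of the sketch never touches JPEG at all.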

 

UV Mapping

At this point, we’ve got our meshes and our textures intended for those meshes. The only teensy-weensy problem is how to apply these textures to these meshes. The process of this mapping is known as “UV mapping”.

In the “UV mapping” name, “UV” refers to coordinates within the texture (they’re traditionally named u and v to avoid naming collisions with x, y, z – the traditional coordinates in 3D space). When our textured model (i.e. mesh + texture + UV mapping) is rendered, these u,v coordinates within our texture are mapped to the vertices of our model (and the texture is interpolated between the vertices).
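In code, those u,v coordinates usually just travel alongside the vertex positions; here is a minimal illustrative sketch of such a vertex layout (the names are mine, not from any particular engine):

```cpp
// Illustrative vertex layout: each mesh vertex carries both its position in
// 3D space (x,y,z) and its texture coordinates (u,v).
#include <cstdint>

struct TexturedVertex {
    float x, y, z;   // position in model space
    float u, v;      // texture coordinates, usually within [0,1]
};

// A triangle of the mesh just references three such vertices; when the GPU
// rasterizes the triangle, it interpolates (u,v) across it and samples the
// texture at the interpolated coordinates.
struct Triangle {
    uint32_t indices[3];  // indices into the vertex array
};
```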

By convention, (u,v) of (0,0) corresponds to the bottom-left corner of the texture, and (u,v) of (1,1) – to the top-right one. However, in some cases u and v can go beyond the [0,1] range. In such cases, the so-called “texture addressing mode” starts to apply, telling the GPU how to extend the texture beyond its original size; well-known texture addressing modes are “wrap”, “mirror”, “clamp”, and “border color”.
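Taking OpenGL as an example, the addressing mode is just a per-texture parameter; a quick sketch of setting each of the modes mentioned above (assuming GL headers are included and the texture object is already created and bound):

```cpp
// Illustrative only: setting texture addressing ("wrap") modes in OpenGL.
// Assumes a texture is currently bound to GL_TEXTURE_2D.

// "wrap" (repeat the texture) along u ("S") and v ("T"):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

// "mirror":
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);

// "clamp" (stretch the edge texels outwards):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

// "border color" (everything outside [0,1] gets a fixed color):
const float borderColor[4] = { 1.0f, 0.0f, 1.0f, 1.0f };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);
```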

UV mapping is normally done by 3D artists.2 What matters for us as programmers is that UV mapping is done long before we come into the picture (phew 🙂 ). And even better – I haven’t seen a single practical case where UV mapping (or textures in general) affected anything beyond pure visualization; while theoretically you might have your gameplay depend on “what is the color of this thing when seen by the PC”, I’ve never seen it done via rendering textures (it would be too expensive for the Server-Side, and – depending on the nature of this requirement – there are usually much less expensive ways of doing an equivalent thing).


2 as in “not by programmers”

 

Normal Maps

Last but not least when speaking about textures, we need to mention so-called “normal maps”. Very briefly – a “normal map” is yet another texture (also UV-mapped – and generally using the same UV mapping as the usual texture). However, this “normal map” texture is intended to represent not colours – but rather the “shape” of the surface in the immediate vicinity of our mesh.

Using normal maps allows working with lower-poly models while keeping a similar degree of visual realism; and as we noted in the [[TODO]] section above, the fight for low-poly models never ends in 3D graphics, so normal maps provide very substantial help in this regard. As a result, normal maps are used in pretty much any game with serious 3D graphics. They’re even used in shader-based lighting of 2D scenes (see, for example, [Carlin]).

“Normal maps” should be distinguished from “bump maps”; on one hand, from a 50’000-feet point of view, both “bump maps” and “normal maps” serve the same purpose – to create “bumps” on the surface of our mesh. On the other hand, implementations of the two are rather different. A “bump map” is merely a single-channel (grayscale) bitmap, which says “how far from the original flat mesh surface the real point is”. A “normal map” has three “colors” per texel, with the three “colors” together encoding the normal vector at this point; this allows “normal maps” to describe more complicated surfaces than simple “bump maps”.
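To illustrate the difference, here is a sketch of the usual offline conversion from a bump/height map into a normal map: take finite differences of the height field, build a normal vector from them, and pack its three components into the R, G, and B channels (the 0.5*n+0.5 packing is the common convention for 8-bit normal maps; the function and parameter names below are illustrative only):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// height: grayscale bump map, one float per texel in [0,1]
// out:    RGB normal map, 3 bytes per texel
void heightMapToNormalMap(const std::vector<float>& height,
                          int w, int h, float bumpStrength,
                          std::vector<uint8_t>& out) {
    out.resize(static_cast<size_t>(w) * h * 3);
    auto H = [&](int x, int y) {
        // clamp at the edges so differences can be taken everywhere
        x = std::max(0, std::min(w - 1, x));
        y = std::max(0, std::min(h - 1, y));
        return height[static_cast<size_t>(y) * w + x];
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // finite differences of the height field
            float dx = (H(x + 1, y) - H(x - 1, y)) * bumpStrength;
            float dy = (H(x, y + 1) - H(x, y - 1)) * bumpStrength;
            // normal of the surface z = height(x,y)
            float nx = -dx, ny = -dy, nz = 1.0f;
            float len = std::sqrt(nx * nx + ny * ny + nz * nz);
            nx /= len; ny /= len; nz /= len;
            // pack [-1,1] components into [0,255] bytes (the usual convention)
            size_t i = (static_cast<size_t>(y) * w + x) * 3;
            out[i + 0] = static_cast<uint8_t>((nx * 0.5f + 0.5f) * 255.0f);
            out[i + 1] = static_cast<uint8_t>((ny * 0.5f + 0.5f) * 255.0f);
            out[i + 2] = static_cast<uint8_t>((nz * 0.5f + 0.5f) * 255.0f);
        }
    }
}
```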

Texture Filtering

One important issue which arises when we’re rendering textured objects is so-called “texture filtering” (and the related concept of “anti-aliasing”; see also the [[TODO]] section on 2D anti-aliasing above). The question we’re trying to answer is the following: what should be the color of the screen pixel at (x,y) screen coordinates?

As we’ve already described our 3D scene in GPU terms, the GPU “knows” which object is projected onto this screen point, “knows” the polygon which maps there, “knows” the texture on the polygon, and “knows” the UV mapping (i.e. how the texture is applied to the polygon).

Still, there is a question of “how do we map those texels the GPU has in its 3D representation, into the screen pixels we need?” To answer this question, we usually select an appropriate mipmap from the texture, and then apply one of the “texture filtering” methods to calculate screen pixels from the texels of the selected mipmap. The following texture filtering methods are usually supported by modern graphics APIs (a code sketch follows the list):

  • nearest neighbor (known as GL_NEAREST_MIPMAP_NEAREST in the OpenGL world). The crudest (and the fastest) one. This method is conceptually similar to “nearest neighbor” 2D scaling, and also tends to produce rather “jagged” results; keep in mind, though, that for fast-moving objects, eyes are usually MUCH more forgiving than for static ones, so for fast-moving stuff you might be able to get away even with nearest neighbor.
  • bilinear (GL_LINEAR_MIPMAP_NEAREST in OpenGL). Effectively it is a 2D linear interpolation of the texels within the selected mipmap to get the pixel on screen (and is conceptually similar to 2D bilinear scaling). It usually provides a BIG quality improvement compared to nearest-neighbor, though not without a (relatively modest) cost.
  • Trilinear (GL_LINEAR_MIPMAP_LINEAR in OpenGL). In addition to bilinear interpolation, trilinear filtering uses two different mipmaps (one of a bit higher resolution and one of a bit lower resolution than our mipmap selection algorithm has calculated) – and makes a linear interpolation between them too. Useful for removing artifacts on the boundaries between different mipmap levels.
  • Anisotropic filtering takes into account that our textures (more strictly – the polygons the textures are attached to) may be far from parallel to the screen (and for quite a few textures – very much not parallel). Quite expensive.
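For OpenGL, the choice between these boils down to a per-texture parameter; here is a sketch (assuming the bound texture already has its full mip chain, e.g. via glGenerateMipmap, and – for the last part – that the EXT_texture_filter_anisotropic extension is present):

```cpp
// Illustrative only: selecting texture filtering in OpenGL. Assumes GL headers
// (with extension constants) are included and a mipmapped texture is bound.

// nearest neighbor (crudest/fastest):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);

// bilinear (linear filtering within the nearest mip level):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);

// trilinear (linear filtering within levels AND between two adjacent levels):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

// magnification (texture closer than 1:1) has no mip levels to choose from:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// anisotropic filtering, if EXT_texture_filter_anisotropic is present:
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
```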

Anti-Aliasing

Mathematically, texture filtering can be seen as an anti-aliasing technique (as mentioned in [Wikipedia.TextureFiltering]) – and it does reduce aliasing artifacts for sure. However, in the gamedev world, the term “anti-aliasing” usually means something quite different.

Usually, 3D anti-aliasing can be considered as being applied after texture filtering; in a certain sense, texture filtering deals with aliasing effects “within” the polygons, while 3D anti-aliasing deals with the “jagged” polygon edges.

3D anti-aliasing algorithms can be divided into two large groups: “proper” anti-aliasing (the kind which tries to avoid aliasing in the first place), and “post-processing” anti-aliasing (the kind which creates an aliased image – and then post-processes it to make it look better).

“Proper” anti-aliasing is represented by SSAA, MSAA, and CSAA – and one way or another, they increase the number of samples used to render each pixel; as a result, they’re usually expensive (up to “Damn Expensive”). “Post-processing” ones are represented by FXAA, MLAA, and SMAA; essentially they’re “smart blurring” algorithms, working over the already-rendered screen to reduce jagged/aliased edges.
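Just to show how little application-side code “proper” anti-aliasing needs: MSAA is typically requested when the window/framebuffer is created. A minimal sketch using GLFW (GLFW is purely my choice for illustration here – nothing above requires it):

```cpp
// Illustrative only: requesting a 4x MSAA framebuffer via GLFW.
#include <GLFW/glfw3.h>

int main() {
    glfwInit();
    // Ask for a 4x multisampled default framebuffer BEFORE creating the window.
    glfwWindowHint(GLFW_SAMPLES, 4);
    GLFWwindow* window = glfwCreateWindow(1280, 720, "MSAA example", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    // Enable multisample rasterization (often on by default, but being explicit
    // doesn't hurt). GL_MULTISAMPLE needs OpenGL 1.3+ headers (e.g. glad/GLEW).
    glEnable(GL_MULTISAMPLE);

    // ... render loop would go here ...

    glfwTerminate();
    return 0;
}
```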

In theory, “proper” anti-aliasing has an edge in quality over the “post-processing” kind; however, as in games we’re always operating under real-time restrictions – we need to balance quality with speed all the time, and in quite a few cases (with some developers arguing that it is in “most” cases) the “post-processing” ones have an advantage from this balance-of-speed-and-quality point of view.

[[To Be Continued…

This concludes beta Chapter 17(f) from the upcoming book “Development and Deployment of Multiplayer Online Games (from social games to MMOFPS, with stock exchanges in between)”. Stay tuned for beta Chapter 17(g), where we’ll continue our very cursory discussion of 3D into lighting, camera, and frustum.]]



Acknowledgement

Cartoons by Sergey Gordeev IRL from Gordeev Animation Graphics, Prague.
