Game Graphics 101: Rendering Pipeline & Shaders

Job Title: Sarcastic Architect
Hobbies: Thinking Aloud, Arguing with Managers, Annoying HRs, Calling a Spade a Spade, Keeping Tongue in Cheek

#DDMoG, Vol. V
[[This is Chapter 17(h) from “beta” Volume V of the upcoming book “Development&Deployment of Multiplayer Online Games”, which is currently being beta-tested. Beta-testing is intended to improve the quality of the book, and provides a free e-copy of the “release” book to those who help with improving it; for further details see “Book Beta Testing”. All the content published during Beta Testing is subject to change before the book is published.

To navigate through the book, you may want to use Development&Deployment of MOG: Table of Contents.]]

As noted at the beginning of this Chapter, please keep in mind that

in this book you will NOT find any advanced topics related to graphics.

What you will find in this chapter is the very, very basics of graphics – just enough to start reading other books on the topic, AND (last but not least) to understand other things which are essential for network programming and the game development flow.

Bottom line:

if you’re a gamedev with at least some graphics experience – it is probably better to skip this Chapter to avoid reading about those-things-you-know-anyway.

This Chapter is more oriented towards those developers who are coming from radically different fields such as, for example, webdev or business app development (and yes, switching from webdev into gamedev does happen).

Rendering Pipeline

By now, we’ve described lots of the different concepts which are necessary to perform 3D rendering. Now it is time to put all of them into perspective, discussing how the whole thing works. As always within this Chapter, we’re not going into details – and are going to leave out many optional things (notably – geometry shaders), but this should be sufficient to start understanding the whole picture; for a good and much more detailed description – see, for example, [Gregory].

Modern GPUs process things along the following lines (NB: we’ll be using terminology from [Gregory]; keep in mind that in some cases, essentially the same things may be described using different terms):

Fig. XIV.5

Of course, it is a greatly simplified diagram, but for our purposes it will do. Let’s take a closer look at the stages of the pipeline:

  • Originally, all we have is our 3D scene in a system of coordinates known as “world space” – i.e. in meters or yards of the physical world we’re dealing with.1
  • Then, we’re doing Vertex Processing; at this stage, we’re at the very least performing:
    • some projection transformation from world space into a “homogeneous clip space” (see the [[TODO]] section above for a description of the “homogeneous clip space”). Very shortly – it is a camera-centered 3D space.
    • per-vertex lighting
    • In addition, optionally a lot of stuff can be performed here, including, but not limited to:
      • per-vertex texturing calculations
      • procedural animation (such as moving leaves on trees)
    • After Vertex Processing, we usually have our vertices in “homogeneous clip space”.
  • Rasterization stage takes those vertices in “homogeneous clip space” and rasterizes them into “fragments” (usually pretty much the same as “pixels”). The Rasterization stage includes:
    • Clipping (which is trivial to do in “homogeneous clip space” 🙂 )
    • Screen Mapping (3D-to-2D coordinate mapping, also trivial in “homogeneous clip space”)
    • Triangle Traversal – generating fragments (usually “fragment”=“pixel”) from triangles; each fragment has a bunch of attributes, which are simply interpolated from the corresponding vertex attributes.
    • Optionally, it may also include an “early Z-test” – discarding some of the invisible fragments early.
  • At this point, we’ve already got rid of 3D vertices and are working with 2D pixels/fragments. However, instead of a color, at this point each pixel/fragment still has a bunch of other attributes (such as lighting).
  • Pixel Processing stage. At this stage, we’re working with those 2D fragments which have associated attributes – and need to calculate a color out of those attributes. It is at this point that textures are usually applied. In addition, lots of other stuff can happen here, including, but not limited to:
    • Lighting (bump mapping/shadows/…)
    • Anti-aliasing
    • All kinds of post-processing effects
  • Merge stage. In general, it (a) filters some fragments out (based on a number of potential criteria, including the Z-test if it hasn’t been performed earlier), and (b) blends/merges all the fragments/pixels into the frame buffer.
  • Phew, we’ve finally got our next frame – and can start the whole process again for the next one 🙂 .
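To make the coordinate-space part of the stages above more tangible, here is a toy CPU-side sketch of the path from world space to screen pixels (projection into homogeneous clip space, perspective divide, and Screen Mapping). All the function names here are made up for illustration; a real GPU does this in hardware and in parallel, not in Python.

```python
import math

def perspective_matrix(fov_y_deg, aspect, near, far):
    # a simple OpenGL-style perspective projection matrix (4x4, row-major)
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def world_to_screen(world_pos, proj, width, height):
    # 1. project into homogeneous clip space
    clip = mat_vec(proj, world_pos + [1.0])
    # 2. perspective divide -> normalized device coordinates (NDC), each in [-1, 1]
    ndc = [clip[i] / clip[3] for i in range(3)]
    # 3. Screen Mapping: NDC -> pixel coordinates
    x = (ndc[0] * 0.5 + 0.5) * width
    y = (ndc[1] * 0.5 + 0.5) * height
    return x, y, ndc[2]  # ndc[2] is the depth later used for Z-testing

proj = perspective_matrix(90.0, 1.0, 0.1, 100.0)
# a point 1m to the right and 10m in front of the camera (camera looks down -Z)
x, y, depth = world_to_screen([1.0, 0.0, -10.0], proj, 800, 600)
```

Note how Clipping becomes trivial in this representation: before the divide, a vertex is inside the view volume exactly when each of its clip-space coordinates lies within ±w.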

1 Alternatively, coordinates can be expressed in “model space”, with coordinates relative to the models in a scene graph. Where exactly we’re performing the conversion from “model space” into “world space” is not that important for our purposes now, especially as the conversion between the two is trivial.
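To illustrate how trivial that model-space-to-world-space conversion is, here is a sketch (names made up for illustration) where a model’s placement in the world is just a rotation around the vertical axis plus a translation; in general it would be one 4x4 “model matrix”:

```python
import math

def model_to_world(vertex, yaw_deg, position):
    # rotate a model-space vertex around the Y (vertical) axis,
    # then translate it to the model's world position
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    x, y, z = vertex
    rx = c * x + s * z
    rz = -s * x + c * z
    return [rx + position[0], y + position[1], rz + position[2]]

# a model's local vertex (1, 0, 0), placed at world position (10, 0, 5),
# rotated by 90 degrees
wx, wy, wz = model_to_world([1.0, 0.0, 0.0], 90.0, [10.0, 0.0, 5.0])
```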


Highly Parallel Nature of Rendering Pipeline

One thing which needs to be noted with regards to the rendering pipeline is that each of its stages operates on a large set of items (vertices or fragments/pixels). When it comes to specific numbers for real-world scenarios, we’re speaking about at least hundreds of thousands of vertices/fragments, and more likely about millions. This means that splitting this load over several thousand cores is not a problem; and this is exactly what modern GPUs (aided by APIs) are doing.
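The key property that makes this splitting possible is that each per-vertex (or per-fragment) job depends only on its own input item, not on its neighbours. A toy sketch (the transform itself is a made-up stand-in) – note that the result is identical regardless of how many workers we throw at it:

```python
from concurrent.futures import ThreadPoolExecutor

def vertex_stage(v):
    # a per-vertex job: it depends ONLY on its own input vertex,
    # so any number of such jobs can run at the same time
    x, y, z = v
    return (x * 2.0, y * 2.0, z * 2.0)  # stand-in for a real transform

vertices = [(float(i), 0.0, 0.0) for i in range(100_000)]

# the answer is the same with 1 worker or with thousands - this independence
# is exactly what lets a GPU spread the load over thousands of cores
with ThreadPoolExecutor(max_workers=8) as pool:
    transformed = list(pool.map(vertex_stage, vertices, chunksize=4096))
```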

We’ll need this observation a bit later, when we start speaking about shaders.

Fixed-Function Pipeline vs Programmable Pipeline. Shaders

The pipeline shown on Fig. XIV.5 is very generic; this is the way 3D was processed 15 years ago, and the way it is processed now. However, over time, there have been quite significant changes to this process.

Originally, the rendering pipeline (such as the one shown on Fig. XIV.5) was a fixed-function pipeline – each stage was doing exactly what was pre-programmed by the hardware guys, and while the stages were configurable – and were able to produce quite good graphics too – 3D developers were craving more 😉 .

In the early 2000s, hardware manufacturers started to produce GPUs with programmable pipelines; as a first step, the Vertex Processing and Pixel Processing stages were made programmable.2 In short – we supply a (micro-)program (known as a “shader”) which does all the necessary work to transform one single vertex (for Vertex Processing) or one single pixel (for Pixel Processing) – and then this program is run millions of times for each frame (on all those hundreds or thousands of GPU cores).

Traditionally, a vertex shader is a (micro-)program which takes a vertex and produces a vertex – with no ability to create additional vertices (this is left to geometry shaders). The vertex can have quite a few attributes – which will then be interpolated by Triangle Traversal (sitting within Rasterization) while it converts vertices to fragments; these attributes will then be fed to the pixel shader.
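Conceptually, a vertex shader is just a pure function “one vertex in, one vertex out”. Here is a toy Python model of that contract (the “projection” and attribute names are made up for illustration; real vertex shaders are written in GLSL/HLSL and run on the GPU):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vertex_shader(position, normal, light_dir):
    # one vertex in, one vertex out - no way to create extra vertices here
    # (a stand-in for a real projection into homogeneous clip space)
    clip_pos = [position[0], position[1], position[2], -position[2]]
    # per-vertex Lambert lighting: this "brightness" attribute will be
    # interpolated across the triangle by Triangle Traversal
    brightness = max(0.0, dot(normal, light_dir))
    return {"clip_pos": clip_pos, "brightness": brightness}

out = vertex_shader([1.0, 2.0, -3.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0])
```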

A pixel shader is another (micro-)program, which takes a pixel/fragment with these interpolated attributes, and produces a pixel with a color (which will be written to the frame buffer, subject to some restrictions).
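The pixel-shader contract can be sketched the same way: interpolated attributes in, one color out. In this made-up example, instead of sampling a real texture image we “sample” a procedural checkerboard and modulate it by the interpolated per-vertex lighting:

```python
def pixel_shader(u, v, brightness):
    # interpolated attributes in (texture coordinates + lighting), RGB color out
    # "sample" a procedural 8x8 checkerboard instead of a real texture image
    texel = 255 if (int(u * 8) + int(v * 8)) % 2 == 0 else 40
    # modulate the texel by the interpolated per-vertex brightness
    shade = texel * brightness
    return (int(shade), int(shade), int(shade))
```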

Adding shaders has enabled tons and tons of new effects, and these days it is difficult to find a serious 3D game which doesn’t use shaders. Actually, even if a program uses fixed-pipeline stuff these days, that fixed pipeline is simulated over shaders anyway ;-).

2 GeForce 3 was probably the first game-oriented example of a programmable pipeline – specifically, vertex shaders – and Radeon 8500 was the first game-oriented card with real fragment shaders [OpenGL]


What should you use? Fixed-pipeline vs Shaders

If you have a question about fixed-pipeline vs shaders – you are probably in the same camp as myself (i.e. not exactly coming from 3D ;-)). This means that dealing with shaders can feel overwhelming.

Still, the current state of things is such that you’d better start with a shader-based pipeline – even if with the simplest possible shaders (effectively mimicking the fixed-function pipeline for the time being). The reason for it is twofold.

The first reason to go with shaders from the very beginning lies with the current implementation of fixed-pipeline processing: these days, all fixed-pipeline implementations are merely simulated on top of shaders anyway – and sometimes there are rather weird restrictions. In one example, Mac OS X implements the OpenGL “compatibility profile” (the one providing the fixed-pipeline stuff) only for the outdated “legacy” OpenGL 2.1 – and OpenGL 4.1, while available, doesn’t implement the “compatibility profile” at all. In another example, OpenGL ES (the one intended for non-PC devices) doesn’t support fixed pipelines at all, starting from OpenGL ES 2.0.

The second reason to use a shader-based pipeline is that, more likely than not, you will need shaders anyway at the end of the day; and migration from simplistic shaders to more complicated ones is much more straightforward than migration from a fixed pipeline to shaders.

The only advantage of using the fixed pipeline is to support really ancient devices (which don’t have shaders at all) – but as of 2016, such devices are extremely rare. With shaders available on smartphones starting from the iPhone 3GS and Samsung Wave (that’s around 2009–2010), and also on the Raspberry Pi and as a part of WebGL, it is indeed pretty difficult to find a GPU without at least some shader support. As a result, even I myself3 cannot really argue for developing a game specifically aimed at non-shadered GPUs.4

Bottom line: these days, if going 3D, I would suggest keeping away from fixed-function pipelines and starting with shaders right away. Take the simplest shader you can find on the Internet – and go from there. Alternatively, tools such as [ShaderGen] or [DirectXTK] may help.

On the other hand,

if you’re reading this (i.e. you’re not skipping this entire Chapter as “too trivial”) – it is usually better to take a 3rd-party 3D rendering engine

(and make an isolating Logic-to-Graphics layer around it, as described in Chapter VI); otherwise, you’re going to spend LOTS of time before you have something presentable 🙁 . While I am the kind of guy who’s known for being on the “DIY everything in sight” side of things – a 3D engine is one thing which is complicated enough that it is simpler to integrate one (which won’t be a picnic either(!)) than to develop it ourselves 🙁 .

If going this way, you will still need to understand the basic concepts we’ve discussed here, so your time spent reading this Chapter wasn’t wasted ;-).

3 and I am known for arguing in favour of not-so-cutting-edge stuff; in one example, I was arguing for supporting Win 9x well beyond 2005
4 creating a version which would work on non-3D-enabled devices at all is a somewhat different story; such versions MAY (and often DO) have their own merits (which were discussed in the [[TODO]] section above) – though I shall admit that in the 3D community there is quite significant resistance to them 😉 .


Rendering Pipeline and MOGs

As this book is about MOGs, I’m trying to relate all the 3D stuff to the multi-player aspects of the game. With the rendering pipeline, one thing helps: as we can see, all the data moves from the left side (logic/physics) to the right side (screen), with no information coming back.5 It means that, as a rule of thumb, we do not need to care about 3D rendering affecting our physics; in other words, we can simulate our physics world6 and feed the results of this simulation to the rendering pipeline. Phew 🙂 .
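This one-way flow can be expressed directly in code: the simulation hands the renderer a read-only snapshot, and nothing the renderer does can leak back into the physics state. A minimal sketch (all class and function names here are made up for illustration):

```python
import copy

class PhysicsWorld:
    def __init__(self):
        self.positions = {"player": [0.0, 0.0, 0.0]}

    def simulate(self, dt):
        # authoritative simulation step: move the player at 1 m/s along X
        self.positions["player"][0] += 1.0 * dt

    def snapshot(self):
        # a read-only copy handed to the renderer: nothing the rendering
        # side does can flow back into the simulation
        return copy.deepcopy(self.positions)

def render(snapshot):
    # stand-in for the whole rendering pipeline; consumes the snapshot
    # and (crucially) never touches PhysicsWorld
    return {name: tuple(pos) for name, pos in snapshot.items()}

world = PhysicsWorld()
world.simulate(1.0 / 60.0)
frame = render(world.snapshot())
```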

5 while there can be exceptions, such as the output of a geometry shader being fed back to the vertex shader, this won’t affect the overall picture from a MOG perspective
6 on our authoritative server, and if we’re using stuff such as Client-Side Prediction described in Chapter III, there will be another simulation of the physics within our Client


[[To Be Continued…

This concludes beta Chapter 17(h) from the upcoming book “Development and Deployment of Multiplayer Online Games (from social games to MMOFPS, with stock exchanges in between)”. Stay tuned for beta Chapter 17(i), where we’ll close our very cursory discussion of 3D with a very brief discussion of animation.]]




Cartoons by Sergey Gordeev from Gordeev Animation Graphics, Prague.
