| Author: | “No Bugs” Hare |

| Job Title: | Sarcastic Architect |

| Hobbies: | Thinking Aloud, Arguing with Managers, Annoying HRs, Calling a Spade a Spade, Keeping Tongue in Cheek |

*This is Chapter 17(e) from “beta” Volume V of the upcoming book “Development&Deployment of Multiplayer Online Games”, which is currently being beta-tested. Beta-testing is intended to improve the quality of the book, and provides free e-copy of the “release” book to those who help with improving; for further details see “Book Beta Testing“. All the content published during Beta Testing, is subject to change before the book is published.*

*To navigate through the book, you may want to use Development&Deployment of MOG: Table of Contents.*

As was noted at the beginning of this Chapter, please keep in mind that

> What you will find in this chapter is the very, very basics of graphics – just enough to start reading the other books on the topic, AND (last but not least) to understand other things which are essential for network programming and for the game development flow.

Bottom line:

> This Chapter is mostly oriented towards those developers who are coming from radically different fields such as, for example, webdev or business app development (and yes, switching from webdev into gamedev does happen).

## 3D Very Basics

As we’re done with 2D graphics, it is time to get to 3D. However, once again, a word of warning –

3D graphics (unlike 2D) is such a huuuge topic that going into 3D details would almost instantly mean starting to write another 3 volumes just about 3D. A different side of the same coin is that if you plan to write an at-least-half-decent 3D graphics engine, you’ll need to know MUCH more than I can fit here.^{1} As a result, we’ll concentrate on introducing very basic concepts and terms, and on providing references, so that on the one hand you know where to look further if you need it – and on the other hand we can refer to “meshes”, “textures”, “frustum”, and so on, in the other Chapters of this book.

^{1}and, to be honest, MUCH more than I know myself

### 3D Maths Bits and Pieces

I was shielding my readers from maths for as long as possible, but by now it has become inevitable: 3D is pretty much hopeless without at least basic (and not-so-basic) trigonometry.

I won’t go into math details here, except for noting a few obvious things:

- 3D coordinates are often represented as a vector of 3 elements; this is such a common thing that it often has its own class (often named *vec3*, *Vector3*, etc.).
  - Strictly speaking, ‘point’ and ‘vector’ can be seen as subtly different things – or as the same thing (as a ‘point’ can always be seen as a ‘vector’ coming from point (0,0,0)). Whether your library sees points and vectors as different or the same – depends, though in practice I’ve usually seen the same class (such as *Vector3*) used to represent both vectors and points.
  - So-called “polar coordinates” (in the case of 3D, often referred to as “spherical coordinates”) are rarely used in 3D engines, so vectors are usually interpreted as a “cartesian” tuple (x,y,z) (as opposed to a “polar”/“spherical” tuple (length,polarAngle,azimuthAngle)).
- Rotations in 3D are commonly represented in at least *four* different ways: (a) rotation matrices, (b) quaternions, (c) axis-angle (a normalised vector symbolising the axis of rotation, plus a scalar rotation angle around this axis), and (d) Euler angles (which can be thought of as yaw, pitch, and roll^{2}). What’s important is that *all four representations are equivalent and can be converted to each other* [Wikipedia.ConversionQuaternionsEulerAngles].
  - Within 3D engines, quaternions and/or rotation matrices (and I won’t go into a lengthy argument about which of them is better for a specific task) are routinely used. However, for the purposes of MOG network communications (as in “to pass the orientation of a rigid body calculated on the Server, to be rendered on the Client”), I usually suggest using Euler angles (as they’re the most compact representation – and the most resilient to precision loss too; see, for example, the discussion in Chapter III and/or [McShaffryGraham]).^{3} Sure, it means having an extra conversion (both on the Server-Side and on the Client-Side), but most of the time traffic is a more valuable resource than CPU here (and BTW, extra traffic implies more CPU usage too). In other words – within your 3D engine, keep whichever form is better for you, but – as a rule of thumb – convert rotations to Euler angles for transmission over the network.
- Actually, rotation is just one case of the more generic *transformation*. Transformations of practical interest include linear transformations, affine transformations, and quite a few others. Linear transformations (including rotation, reflection, and scaling) can be represented as a multiplication of our (x,y,z) vector by a 3×3 *transformation matrix*. Affine transformations (including projections onto a plane) can be represented as a multiplication of the (x,y,z) vector by a 3×3 matrix, *plus* a constant vector; an alternative way to deal with affine transformations is to use so-called *homogeneous coordinates* and to multiply an (x,y,z,1) vector by a 4×4 transformation matrix.
  - Perspective projection transformation (the one which gets you a perspective view of a 3D scene) is neither a linear nor even an affine transformation, but it can still be implemented via multiplication of (x,y,z,1) by a 4×4 transformation matrix (see, for example, [Wikipedia.TransformationMatrix]).
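As the rule of thumb above amounts to “quaternions inside the engine, Euler angles on the wire”, a conversion is needed at both ends. Here is a minimal sketch of the Euler-to-quaternion direction, using the standard formula for yaw-pitch-roll in radians with ZYX rotation order; `Quat` and `eulerToQuat` are illustrative names, not any particular engine’s API:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Converts yaw (around Z), pitch (around Y), roll (around X), all in radians,
// to a unit quaternion; standard ZYX (Tait-Bryan) conversion formula.
inline Quat eulerToQuat(float yaw, float pitch, float roll) {
    float cy = std::cos(yaw * 0.5f),   sy = std::sin(yaw * 0.5f);
    float cp = std::cos(pitch * 0.5f), sp = std::sin(pitch * 0.5f);
    float cr = std::cos(roll * 0.5f),  sr = std::sin(roll * 0.5f);
    return {
        cr * cp * cy + sr * sp * sy,  // w
        sr * cp * cy - cr * sp * sy,  // x
        cr * sp * cy + sr * cp * sy,  // y
        cr * cp * sy - sr * sp * cy   // z
    };
}
```

The reverse (quaternion-to-Euler) direction exists too; both are spelled out in [Wikipedia.ConversionQuaternionsEulerAngles].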

I won’t go into any more detailed discussion of 3D-related maths; for that, I suggest referring to [McShaffryGraham], to [Eberly04] or [Eberly06], or – probably the best of all when it comes to game-related maths – to [VanVerthBishop].

^{2}Strictly speaking, yaw-pitch-roll belong not to the “classic Euler angles” as introduced by Euler in the XVIII century, but rather to “Tait-Bryan angles”; still, in practice the differences between “classic Euler angles” and “Tait-Bryan angles” are small enough to consider all three-angle tuples more or less similar; in this book, we’ll call all of them “Euler angles”

^{3}Moreover, as discussed in Chapter III, for rendering purposes each of the angles can be often represented as 10 bits, so the whole 3D orientation of a rigid-body object will fit into 4 bytes.
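A sketch of how such a 3-angles-into-4-bytes packing might look; the quantization to 1/1024ths of a full turn follows the 10-bits-per-angle idea above, but all names and the exact scheme are illustrative:

```cpp
#include <cstdint>

// Quantizes an angle in [0, 2*pi) to a 10-bit value, i.e. to 1/1024ths
// of a full turn (~0.35 degree resolution).
inline uint32_t quantizeAngle(float radians) {
    const float TWO_PI = 6.28318530718f;
    return static_cast<uint32_t>(radians / TWO_PI * 1024.0f) & 0x3FFu;
}

// Packs three 10-bit angles into one 32-bit word (with 2 bits to spare),
// so the whole rigid-body orientation fits into 4 bytes on the wire.
inline uint32_t packEulerAngles(float yaw, float pitch, float roll) {
    return (quantizeAngle(yaw) << 20) | (quantizeAngle(pitch) << 10) | quantizeAngle(roll);
}
```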

#### Floating-Point Implications

Usually, all those vectors and matrices are represented with some kind of floating-point numbers, with 4-byte IEEE 754 single precision (often named *float*) and 8-byte IEEE 754 double precision (often named *double*) being the most common, albeit certainly not the only, representations. And for the purposes of a 3D engine^{4} this is perfectly fine, as long as you remember that floating-point calculations are inherently approximate, and often involve rounding.^{5}

In particular, it means that the order of floating-point calculations matters. It might come as a surprise for some of the developers out there, but –

`float tmp = a + b; float result = tmp + c;`

can produce a result which is different from the one produced by

`float tmp = a + c; float result = tmp + b;`

This happens because, besides the perfectly linear addition in the mathematical sense, the floating-point ‘+’ operator also includes rounding, which is inevitably non-linear. Moreover, in some cases, the order of calculations can lead to significantly different results. For example, if all our *a*, *b*, and *c* have type float, with *a=1e30*, *b=1*, and *c=-1e30*, the two pieces of code above will produce results 0 and 1 respectively.

To make things worse, simple

`float result = a + b + c;`

can be translated (depending on compiler, compiler settings, etc.) into either of the samples above (and into quite a few others too).

That being said, in practice such things, while sometimes quite unpleasant in physics calculations, relatively rarely cause problems in 3D rendering. Still, if you see that something *really strange* happens with your numbers – it can be this kind of problem, known as *loss of significance*.
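A runnable illustration of the effect described above, with the values straight from the text (the function names are mine):

```cpp
// Demonstrates loss of significance: with floats, (a + b) + c and
// (a + c) + b can differ dramatically.
inline float sumABthenC(float a, float b, float c) { return (a + b) + c; }
inline float sumACthenB(float a, float b, float c) { return (a + c) + b; }

// With a = 1e30f, b = 1.0f, c = -1e30f:
//   sumABthenC() returns 0  (1e30f + 1.0f rounds back to 1e30f, so b is lost)
//   sumACthenB() returns 1  (a and c cancel exactly, leaving b intact)
```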

^{4}for transfers over the network, see Chapter III on using fixed-point numbers for over-the-network transfers to save on traffic

^{5}It should be noted that while precise calculations are not affected by this kind of problems, mere use of integer or fixed-point arithmetic doesn’t guarantee your calculations to be precise.

### Meshes

With maths aside, we can proceed to real stuff 😉 – to vertices, polygons, and meshes. In practice, most of the time^{6} you won’t be working directly with vertices and polygons; instead, you will be working with meshes. Most of the time, meshes are assets which are created by artists, and represent an important part of your asset pipeline.

Very roughly, a mesh can be described as a (usually rather complex) polyhedron, consisting of faces, edges, and vertices. As it is a polyhedron, edges are straight, and faces are flat.^{7}

Theoretically, faces can be any combination of triangles, quads, and n-gons. However, in practice, n-gons are rarely used; as for quads and triangles, it usually goes along the following lines:

- For modeling (i.e. for your artists), they usually prefer mostly-quad models. The reason for this is that such models are usually simpler to modify.^{8}
- However, for rendering, all the faces of the mesh are triangulated (at the very least – at GPU level). A common practice is to triangulate in the code (i.e. even before feeding the faces to the GPU).^{9}

> Wavefront .obj file: OBJ (or .OBJ) is a geometry definition file format first developed by Wavefront Technologies for its Advanced Visualizer animation package. The file format is open and has been adopted by other 3D graphics application vendors. For the most part it is a universally accepted format.
> — Wikipedia —

For meshes (and unlike for most of the other 3D things out there 🙁 ), there is a common interchange format which is understood across the tools pretty well; it is the so-called Wavefront .obj file. It is a text-based file which describes vertices and faces (it can also describe UV-mapping, which we’ll discuss below in the [[TODO]] section). .obj meshes are NOT guaranteed to be consistent (i.e. it is perfectly possible to have an .obj which has vertices and faces, but which is not a polyhedron, and is not renderable); it is also perfectly possible to have non-flat faces within an .obj (which may at least cause different rendering by different renderers). In other words – the .obj user has to beware of ugly tools producing ugly .obj files… On the other hand, as you’re not likely to use .obj on your Client-Side “as is”, you’ll need to make a conversion tool anyway, and the very same tool can (and SHOULD) check the .obj for validity (in the sense in which “validity” is understood by your rendering engine).
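To give a feel for the format, here is a minimal sketch of reading `v` and `f` lines from an .obj stream; a real conversion tool must also handle `vt`/`vn` lines, negative indices, and the validity checks discussed above (all names here are illustrative):

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<std::vector<int>> faces;  // 0-based vertex indices
};

// Reads 'v' (vertex) and 'f' (face) lines; everything else is skipped.
Mesh loadObj(std::istream& in) {
    Mesh mesh;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 v;
            ls >> v.x >> v.y >> v.z;
            mesh.vertices.push_back(v);
        } else if (tag == "f") {
            std::vector<int> face;
            std::string vert;
            while (ls >> vert) {
                // a face entry looks like "i", "i/j", or "i/j/k"; stoi()
                // parses only the leading vertex index i
                face.push_back(std::stoi(vert) - 1);  // .obj indices are 1-based
            }
            mesh.faces.push_back(face);
        }
        // vt, vn, o, g, usemtl, comments, etc. are ignored in this sketch
    }
    return mesh;
}
```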

^{6}With a notable exception of vertex shaders, which will be briefly discussed a bit later

^{7}In practice, faces may end up not really flat, but if that happens it will cause quite a few visual problems with triangulation and subsequent rendering, so non-flat faces SHOULD be avoided

^{8}As I am not an artist, I cannot possibly validate this claim, but well – I’ve heard about it quite a lot from people who I can trust

^{9}On the other hand, I’ve heard arguments about quads being better for tessellation purposes; unfortunately, I cannot validate this claim, so you’re on your own here. However, if you’re into this kind of detail, you probably need a MUCH more graphics-oriented book than this one 😉

#### The Curse of 3D Games: Polygon Counts and Low-Poly Meshes

One thing which persistently haunts 3D developers is polygon count. With all the increased capabilities of GPUs, there are still at most a few million polygons which can be rendered in real time. With good models (intended for a close-up view) of a human character taking over 100K polygons, showing several such characters at the same time starts to cause issues, and crowds tend to become quite a problem; good models for the environment tend to eat hundreds of thousands of polygons quite easily too (with the whole environment taking anywhere from 5 to 30 million polygons [polycount]).

Therefore, the problem of “how to reduce the number of polygons rendered” tends to be one of the most basic ones for the 3D artist – and for the programmer too. For the artist, the job is to do the best she can within <insert-number-of-polygons-you-can-afford-here>. However, quite a few improvements can be made on the programming side too:

- **Culling.** The idea behind culling is simple – we don’t need to render whatever-is-not-visible-on-the-screen. The three most common types of culling include:
  - *Frustum culling.* We’ll define the term “frustum” a bit later; for now, let’s say that frustum culling is removing from rendering all the objects which don’t intersect with the “pyramid of vision” of the camera which renders the scene.
  - *Backface culling.* With meshes being polyhedrons, almost half of the faces will happen to face away from the camera; in many cases, they don’t need to be rendered.
  - *Occlusion culling.* Removes from rendering those objects which are completely covered by other objects.
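The backface test from the list above boils down to a single dot product. A minimal sketch, assuming counter-clockwise winding for front-facing triangles (all names are mine, not any engine’s API):

```cpp
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A triangle with counter-clockwise winding faces away from the camera if its
// normal points in (roughly) the same direction as the camera-to-triangle vector.
bool isBackface(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 cameraPos) {
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0));
    Vec3 toTriangle = sub(v0, cameraPos);
    return dot(normal, toTriangle) >= 0.0f;  // facing away (or edge-on): cull
}
```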

- **Level of Detail** (commonly referred to as **LOD**). The idea of LOD is also simple – there is no point in rendering all the 100K polygons of a human character when she’s 100m away; for the 100m view, we can usually get away with something like 1K polygons instead of 100K.
  - The most common implementation of LOD in 3D games is so-called “discrete LOD”. In the case of “discrete LOD”, the burden of creating several different models (with different numbers of polygons) lies with the artists. The engine just takes the appropriate model from the list of available ones.
  - In general, it might be possible to create LOD models algorithmically (either offline or even online). However, fully automated solutions are still rather ugly, so at the moment, as far as I know, at best it is an offline semi-automated process (with artists tuning this process to get better models).
  - While historically the most common use of LOD was for terrain, LOD is not limited to terrain, and can be applied to characters too.
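Engine-side, discrete-LOD selection is little more than a distance-threshold table. A sketch (entries assumed sorted by distance; `LodEntry` and `selectLod` are illustrative names):

```cpp
#include <vector>

// Each entry: which artist-made model to use, and up to which camera
// distance to use it.
struct LodEntry {
    int meshId;
    float maxDistance;  // use this LOD while distance <= maxDistance
};

// lods is assumed non-empty and sorted by maxDistance ascending
inline int selectLod(const std::vector<LodEntry>& lods, float distanceToCamera) {
    for (const LodEntry& e : lods)
        if (distanceToCamera <= e.maxDistance)
            return e.meshId;
    return lods.back().meshId;  // beyond the last threshold: coarsest model
}
```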

#### Server-Side Models: Ultra-Low Poly

As was briefly noted in Chapter III, if we have a 3D simulation game, we’re likely to have both Client-Side 3D models and Server-Side 3D models. And Server-Side 3D models tend to be very different from the Client-Side ones.

The reason here is simple: on the Server-Side, we don’t need to render anything (all the rendering is performed on the Client-Side); on the Server-Side we need just enough to simulate the *physics* of the process (so while they’re 3D models, they’re not really graphics models, though they do have quite a few similarities).

As, most of the time, physics within games is rather rudimentary, the 3D models can be greatly simplified too.

Let’s consider the simplest example – an RPG where all the simulation is about restricting PCs/NPCs from moving through walls (while allowing them to move under low-hanging objects while crouching). In this case, on the Server-Side you might need a 3D model of both the PC and the room; however, the character model can be as simple as a hexagonal prism (changing its height to show the difference between ‘walking’ and ‘crouching’), and the room can be as simple as a rectangular box with openings of the appropriate height for the doors (and rectangular beams representing those low-hanging objects). See the picture above for an illustration of this concept.

These ultra-low-poly Server-Side models allow reducing the amount of work on the Server-Side dramatically (which is exactly the point of this change).
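To make the example above a bit more concrete: if both the character prism and the obstacles are approximated with axis-aligned boxes, the whole Server-Side “is this move blocked?” question becomes a handful of comparisons. A sketch (the hexagonal prism is simplified further, to its bounding box; all names and numbers are illustrative):

```cpp
// Axis-aligned box: Y is "up" in this sketch.
struct Box {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Strict-inequality overlap test between two axis-aligned boxes.
inline bool overlaps(const Box& a, const Box& b) {
    return a.minX < b.maxX && b.minX < a.maxX &&
           a.minY < b.maxY && b.minY < a.maxY &&
           a.minZ < b.maxZ && b.minZ < a.maxZ;
}

// Crouching merely lowers the top of the character's box, which is how the
// "move under low-hanging objects while crouching" check falls out for free.
inline Box characterBox(float x, float z, float halfWidth, float height) {
    return { x - halfWidth, 0.0f, z - halfWidth,
             x + halfWidth, height, z + halfWidth };
}
```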

### [[To Be Continued…

This concludes beta Chapter 17(e) from the upcoming book “Development and Deployment of Multiplayer Online Games (from social games to MMOFPS, with stock exchanges in between)”. Stay tuned for beta Chapter 17(f), where we’ll continue our very cursory discussion of 3D into textures, UV mapping, and normals.]]


### References

### Acknowledgement

Cartoons by Sergey Gordeev from Gordeev Animation Graphics, Prague.

qm2k says:

> There’s one more important representation of rotation in 3D – the so-called exponential one, known from group theory: http://www.tandfonline.com/doi/abs/10.1080/10867651.1998.10487493