Tuesday, February 7, 2017

Particles and dynamic lights

My original plan for particles was to implement them on the GPU with a vertex shader or a compute shader.  However, after finishing texture mapping, I realized that particles and dynamic lights were the only things left before reaching feature parity with the software rasterizer.  So I adjusted my plan and decided to implement particles and dynamic lights in the most straightforward way, so that I could reach the feature-parity milestone as quickly as possible, and worry about all the fancy stuff later.

Particles became almost trivial once I decided to do them the easy way.  All the particle emitting and updating logic lives in r_part.c.  All I needed to rewrite were three functions:

D_StartParticles: where I clear the previous frame's particles from my particle container.

D_DrawParticle: where I append one particle to my particle container.

D_EndParticles: where I upload the particle container as a vertex buffer and issue a glDrawArrays(GL_POINTS, ...) call.
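
Roughly, the three hooks boil down to something like this (just a sketch: the container, the vertex struct and the PaletteToRGBA helper are made-up names for illustration, only the D_* entry points and the glDrawArrays call are the real interface):

// Sketch: g_particles, g_particleVBO and PaletteToRGBA are made up for
// illustration; the D_* entry points are the hooks r_part.c already calls.
#include <vector>

struct ParticleVertex {
    float pos[3];
    float rgba[4];
};

static std::vector<ParticleVertex> g_particles;
static GLuint g_particleVBO;

void D_StartParticles(void) {
    g_particles.clear();                      // drop last frame's particles
}

void D_DrawParticle(particle_t *p) {
    ParticleVertex v;
    v.pos[0] = p->org[0];
    v.pos[1] = p->org[1];
    v.pos[2] = p->org[2];
    PaletteToRGBA((int)p->color, v.rgba);     // hypothetical palette lookup
    g_particles.push_back(v);
}

void D_EndParticles(void) {
    if (g_particles.empty())
        return;
    glBindBuffer(GL_ARRAY_BUFFER, g_particleVBO);
    glBufferData(GL_ARRAY_BUFFER,
                 g_particles.size() * sizeof(ParticleVertex),
                 g_particles.data(), GL_STREAM_DRAW);
    // ...bind the particle program and vertex attributes here...
    glDrawArrays(GL_POINTS, 0, (GLsizei)g_particles.size());
}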

The shaders are also extremely easy.  All the vertex shader needs to do is adjust the point sprite size gl_PointSize based on the distance from the particle to the viewpoint.  And the fragment shader simply copies the input color to the output.
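
Something like this pair (a GLSL 3.3 sketch embedded as C++ string literals; the attribute/uniform names and the size falloff constants are mine, and gl_PointSize only takes effect with GL_PROGRAM_POINT_SIZE enabled):

// Sketch: names and falloff constants are made up; remember to
// glEnable(GL_PROGRAM_POINT_SIZE) so gl_PointSize is honored.
static const char *kParticleVS = R"(
    #version 330 core
    layout(location = 0) in vec3 in_position;
    layout(location = 1) in vec4 in_color;
    uniform mat4 u_mvp;
    uniform vec3 u_viewOrigin;
    out vec4 v_color;
    void main() {
        gl_Position = u_mvp * vec4(in_position, 1.0);
        // shrink the point sprite as the particle gets farther away
        float dist = distance(in_position, u_viewOrigin);
        gl_PointSize = clamp(2048.0 / dist, 1.0, 16.0);
        v_color = in_color;
    }
)";

static const char *kParticleFS = R"(
    #version 330 core
    in vec4 v_color;
    out vec4 out_color;
    void main() {
        out_color = v_color;   // just pass the particle color through
    }
)";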


The last missing feature is dynamic lights.

Dynamic lights sound difficult, but they really aren't.  In fact, I would even say they're one of the easiest features to implement.

All it takes is uploading the dynamic light positions to a uniform array (in R_PushDlights(); there are up to 32 dlights in Quake).  Then, in the fragment shader, go through all the dlights and compute each light's contribution as a dot product of the normal vector and the light direction, attenuated by a falloff based on the square of the light's distance to the current fragment.  This is actually already quite a bit better than what the original Quake did in its software renderer, where it applied the dlights to the low-res light maps without using the normal vector or computing the falloff.
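
The fragment shader loop is only a handful of lines.  A sketch of what it can look like (the uniform names and the attenuation formula are my own choices; only the 32-entry limit mirrors Quake's MAX_DLIGHTS):

// Sketch: uniform names and the attenuation are made up; only the 32-light
// limit comes from Quake's MAX_DLIGHTS.
static const char *kDlightSnippet = R"(
    #define MAX_DLIGHTS 32
    uniform int  u_numDlights;
    uniform vec4 u_dlights[MAX_DLIGHTS];   // xyz = position, w = radius

    float AddDynamicLights(vec3 fragPos, vec3 normal)
    {
        float light = 0.0;
        for (int i = 0; i < u_numDlights; ++i) {
            vec3  toLight = u_dlights[i].xyz - fragPos;
            float distSq  = dot(toLight, toLight);
            float radius  = u_dlights[i].w;
            // N.L term, attenuated by the square of the distance
            float ndotl   = max(dot(normal, normalize(toLight)), 0.0);
            float falloff = max(1.0 - distSq / (radius * radius), 0.0);
            light += ndotl * falloff;
        }
        return light;
    }
)";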

Although the implementation is simple,  the result looks quite nice:



Monday, January 23, 2017

Textures Textures Textures

So I started with diffuse maps on level surfaces, which are fairly straightforward.  I did have to rearrange the rendering loop a little bit, because to minimize texture binding switches I needed to group polygons with the same texture together.
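
The reshuffled draw loop is conceptually just bucketing by texture, something like this sketch (GetGLTexture and DrawSurfacePolygon are stand-ins, not the actual function names):

// Sketch: bucket the visible surfaces by GL texture so every texture is
// bound only once per frame.
#include <unordered_map>
#include <vector>

void DrawWorldByTexture(const std::vector<msurface_t *> &visibleSurfaces)
{
    std::unordered_map<GLuint, std::vector<msurface_t *>> byTexture;
    for (msurface_t *surf : visibleSurfaces)
        byTexture[GetGLTexture(surf)].push_back(surf);

    for (auto &bucket : byTexture) {
        glBindTexture(GL_TEXTURE_2D, bucket.first);   // one bind per texture
        for (msurface_t *surf : bucket.second)
            DrawSurfacePolygon(surf);                 // or batch into one draw call
    }
}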



It didn't take me very long to get a shot like the above.  As you can see all the static level surfaces were textured.  It looked a lot more like Quake, even though sky, water, lava, portal and entity textures were still missing.

As I mentioned in the previous post, Quake uses a rather peculiar way to store the level texture coordinates.  Each polygon surface has two vectors that describe the two axes of the UV plane, plus two offset values.  To get a texture coordinate, we take the dot product of the vertex coordinate with each axis and add the offset (see this link http://www.flipcode.com/archives/Quake_2_BSP_File_Format.shtml under the Texture Information Lump section).
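
In code the per-vertex computation is just two dot products.  A sketch using the field names from Quake's in-memory texinfo struct (the helper function itself is mine):

// Sketch: Quake's texinfo stores two axes plus offsets in vecs[2][4];
// vecs[i][0..2] is the axis, vecs[i][3] is the offset.
void ComputeSurfaceUV(const float vertex[3], const mtexinfo_t *tex,
                      float *s, float *t)
{
    *s = vertex[0] * tex->vecs[0][0] + vertex[1] * tex->vecs[0][1] +
         vertex[2] * tex->vecs[0][2] + tex->vecs[0][3];
    *t = vertex[0] * tex->vecs[1][0] + vertex[1] * tex->vecs[1][1] +
         vertex[2] * tex->vecs[1][2] + tex->vecs[1][3];
    // divide by the texture's width/height afterwards to get 0..1 UVs
}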

The next thing to implement was entity textures.  Each entity in Quake has only one texture image, which usually has two parts: the left half is for the front side of the object and the right half for the back side.  In entity models, vertexes are shared across triangles, so each triangle has a flag indicating whether it is a front-side or back-side triangle, and a vertex's texture coordinates need to be adjusted if that vertex 1) is shared by both front-side and back-side triangles AND 2) is being used to render a back-side triangle.

I ended up with an entity model renderer that renders each model in two steps: first render all the front-side triangles, then flip a uniform flag and render all the back-side triangles.
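
A sketch of what the back-side pass can look like in the vertex shader (GLSL snippet embedded as a C++ string; the attribute and uniform names are mine, the half-skin shift follows Quake's alias skin layout):

// Sketch: during the back-side pass u_backFace is 1.0; vertices on the skin
// seam then shift their S coordinate by half the skin width so they sample
// the back half of the skin.
static const char *kAliasUVSnippet = R"(
    in vec2  in_texcoord;        // S,T in texels
    in float in_onseam;          // 1.0 if this vertex sits on the skin seam
    uniform float u_backFace;    // 0.0 = front pass, 1.0 = back pass
    uniform vec2  u_skinSize;    // skin width/height in texels

    vec2 AliasUV()
    {
        float s = in_texcoord.x + u_backFace * in_onseam * (u_skinSize.x * 0.5);
        return vec2(s, in_texcoord.y) / u_skinSize;
    }
)";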


Quake uses the flag SURF_DRAWTURB to describe surfaces that have water, lava or portal textures.  It means the texture should be drawn with a turbulence effect.  This is fairly easy to implement in a fragment shader by passing the global time as a uniform and adding an offset to the UV coordinates based on a sine curve.
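
For example, something along these lines in the fragment shader (a sketch; the amplitude and frequency constants are just picked to look about right, not the exact values the software renderer uses):

// Sketch: warp each UV with a sine of the *other* axis plus time, which is
// roughly what Quake's software turbulence does.
static const char *kTurbSnippet = R"(
    uniform float u_time;

    vec2 TurbUV(vec2 uv)
    {
        return uv + 0.05 * sin(uv.yx * 6.2831 + u_time);
    }
)";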

The last and most difficult texture type is SURF_DRAWSKY.  Quake's sky texture is like this


The left part is the foreground layer and the right part the background.  The two parts scroll at different speeds to create an illusion of depth.

Now the tricky part is that Quake doesn't have a sky box.  It just seals all the openings in the level with polygons carrying the SURF_DRAWSKY flag.  Even though these polygons face in different directions and sit at different distances, they all need to look like openings onto the same uniform sky.  This means the polygons' own texture coordinates are completely useless; if I used them to address the sky texture I would end up with seams across the sky polygons.

So I need to calculate the UVs entirely from world coordinates.  I did this in the fragment shader by casting a ray from the viewpoint through the fragment's world coordinate up to a virtual sky plane:


scale = (SkyY - ViewPointY) / (FragmentY - ViewPointY)
Sky = ViewPoint + (Fragment - ViewPoint) * scale
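
In shader form it's only a few lines (a sketch; the names, tiling factor and the two layer scroll speeds are my own guesses, but the ray/plane intersection is the formula above, keeping its Y-up convention):

// Sketch: intersect the view ray with a sky plane at height u_skyY, then use
// the intersection's X/Z as UVs; the two halves of the sky texture scroll at
// different speeds.  Assumes the front layer was uploaded with alpha = 0 in
// its "empty" pixels.
static const char *kSkySnippet = R"(
    uniform vec3      u_viewOrigin;
    uniform float     u_skyY;        // height of the virtual sky plane
    uniform float     u_time;
    uniform sampler2D u_skyTexture;  // left half = front layer, right half = back

    vec4 SkyColor(vec3 fragWorldPos)
    {
        vec3  dir   = fragWorldPos - u_viewOrigin;
        float scale = (u_skyY - u_viewOrigin.y) / dir.y;
        vec3  sky   = u_viewOrigin + dir * scale;

        vec2 uv    = sky.xz / 128.0;                  // arbitrary tiling factor
        vec4 front = texture(u_skyTexture,
                             vec2(fract(uv.x + u_time * 0.010) * 0.5, uv.y));
        vec4 back  = texture(u_skyTexture,
                             vec2(fract(uv.x + u_time * 0.005) * 0.5 + 0.5, uv.y));
        return mix(back, front, front.a);
    }
)";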

So here we go, sky texture!




Monday, January 9, 2017

My new quest of rebuilding Quake's renderer with modern OpenGL

I have wanted to do this for a long time, but it's only recently that I finally gathered enough motivation to start writing some actual code.  Now that two months have passed since the first commit, it's time to look back at the journey.

The first commit happened on Nov 14 2016.  I cloned the official id Quake repo and started by removing the stuff I didn't need.  I know there are more advanced source ports available, but I wanted to start my work from zero, so the official repo was the perfect starting point for me.

Code cleanup took almost an entire month.  I aggressively removed all platform-specific code and replaced it with SDL2 or the C++ standard library.  By Dec 15, I had fully ported sound/music/input and the software renderer to SDL.  All platform-specific files and unused #if blocks were also removed.  I was happy with the much cleaner code base, but also a bit unsure about whether I'd be able to finish this project, as it seemed to be a lot of work.

Two days later I had my first model on screen.


There are 3 kinds of models in Quake: the world model, which is the static part of the level; brush models, which are similar to the world model but are smaller, movable parts of the level (doors, elevators etc.); and alias models (monsters and projectiles).

I went with alias models first because their data structures looked the easiest to me.  At first my renderer just drew a static model frame on screen while the game ran its simulation and audio in the background.  The highly modular design of the engine was really impressive: the graphics subsystem was completely ripped out, yet none of the other subsystems were affected at all.

It took me another few days to get model animations working.  But once I could draw one model, I could draw many: just throw in the world and viewpoint transformations and here we go:


It was quite an exciting moment to see the monsters and objects appear on screen all at the right spot.

Level geometries were a bit harder than alias models, mainly due to the way they were stored in the data structures. I eventually got the level on screen on Dec 29.


Above is one of the first screenshots, and you can see some polygons had the wrong facing.  This was later fixed and it looked much better.


Here's another shot

Oh yeah, the above one was a shot taken on a Mac.  I initially started with OpenGL 4.5 because direct state access is so nice.  But then I decided Mac support was worth having, so I had to rewrite the low-level GL code to target OpenGL 3.3, which I thought would be well supported on a wide range of systems, including VMware (it turned out VMware's OpenGL driver wasn't good enough to run it, even though they claim 3.3 support).

At first I just dumped the entire level to the GPU without using the BSP tree at all.  But it actually wasn't very hard to walk the BSP tree and render only the polygons in the PVS of the current leaf node, so I got this done over the new year holidays.  Walking the BSP tree also had the nice side effect that I could add some of the small objects (mainly torches) to the rendering as I passed through the leaf nodes, because these objects are stored inside the BSP tree.
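
Conceptually the traversal is short.  A sketch (Mod_PointInLeaf and Mod_LeafPVS are the stock Quake helpers; RecursiveWorldNode stands in for the actual drawing recursion):

// Sketch: mark every leaf in the current leaf's PVS, then walk the tree and
// emit only surfaces that belong to marked leafs.
void DrawWorld(model_t *world, vec3_t viewOrigin)
{
    mleaf_t *viewLeaf = Mod_PointInLeaf(viewOrigin, world);
    byte    *pvs      = Mod_LeafPVS(viewLeaf, world);

    ++r_visframecount;
    for (int i = 0; i < world->numleafs; ++i) {
        if (pvs[i >> 3] & (1 << (i & 7)))
            world->leafs[i + 1].visframe = r_visframecount;  // leaf 0 is the solid leaf
    }
    // (Quake also marks the parent nodes so the recursion can skip whole
    //  invisible subtrees.)

    RecursiveWorldNode(world->nodes, viewOrigin);
}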

The next thing was rendering brush models; these are the doors and moving floors.  Thanks to the holidays I had time to implement them in just a couple of days.  Once these were in, I had reached a milestone of sorts: all 3D objects were rendered.  Monsters, projectiles, torches (with animated flames), ammo boxes, moving floors and other level mechanisms were all there.  So I captured a video of it running the opening demo.


The next big challenge was texturing.  Quake has diffuse textures and up to 4 light maps.  Light maps seemed a bit more complex but also more interesting, so I went for them first.

The big problem with light maps is that every polygon has its very own light map.  To render the polygons with their light maps I would have had to render them one by one, binding each light map texture in turn, instead of putting everything in a vertex buffer and rendering it in one draw call.

So I went with the obvious solution: add all the light maps to a texture atlas.  I had to write a texture atlas builder that lets me add sub-textures, returns the assembled atlas, and translates texture coordinates into coordinates within the atlas.
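
At the heart of such a builder is just a free-space allocator; a row-scan scheme in the spirit of GLQuake's AllocBlock is enough (a sketch, the class and names are mine):

// Sketch: a first-fit row allocator.  Returns the top-left corner of a free
// w x h region inside the atlas, or false if nothing fits.
#include <algorithm>
#include <vector>

class AtlasBuilder {
public:
    AtlasBuilder(int width, int height)
        : width_(width), height_(height), rowHeights_(width, 0) {}

    bool Allocate(int w, int h, int *outX, int *outY) {
        int bestY = height_;
        int bestX = -1;
        for (int x = 0; x + w <= width_; ++x) {
            // the region's Y is the tallest column already used under it
            int y = *std::max_element(rowHeights_.begin() + x,
                                      rowHeights_.begin() + x + w);
            if (y + h <= height_ && y < bestY) {
                bestY = y;
                bestX = x;
            }
        }
        if (bestX < 0)
            return false;
        std::fill(rowHeights_.begin() + bestX,
                  rowHeights_.begin() + bestX + w, bestY + h);
        *outX = bestX;
        *outY = bestY;
        return true;
    }

private:
    int width_, height_;
    std::vector<int> rowHeights_;
};

The returned (x, y) and the atlas dimensions are all that's needed to remap each polygon's light map coordinates into the atlas afterwards.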

The way Quake stores its texture coordinates is also rather strange.  Instead of storing UV coordinates together with the vertexes, Quake stores a texture-plane equation with each polygon, so I had to project each vertex coordinate onto that plane to get its texture coordinate.

I was quite surprised that it only took me two days to get light maps working.  Perhaps I'm getting better at this now.  Oh yeah, weapon models were added too, but those were fairly easy: they're just alias models, so it was only a matter of setting up the correct transformation matrices.


The first screenshot above still had an off-by-one bug: if you look closely, some shadows don't align well with the geometry.  The light map is 1/16 the resolution of the diffuse map, so I just divided all texture coordinates by 16, but it turned out that wasn't enough.  I actually had to add 1, so it's s = s / 16 + 1 instead of just s = s / 16.  I don't understand where this +1 comes from, but once it was fixed, everything aligned perfectly.  Here's another shot with texture filtering turned off.


This is where I'm at right now.  It's quite an adventure for a graphics noob like me, but I've learned a hell of a lot and, more importantly, had great fun doing it.  Now I'm more confident than ever that I will be able to finish it, in the sense of reaching feature parity with the software renderer and then going beyond.

These are the things that still remain to be done:
  •  Diffuse maps
  •  Procedural textures (sky, water, lava)
  •  Particles
  •  Dynamic lighting

Thursday, June 11, 2015

Fizz buzz with template meta programming: C++ vs D




C++ compile time:






D run time:



D compile time:




The D compile-time version is almost identical to the run-time version, except that 'if' is replaced with 'static if' and function calls are replaced with template instantiations.  And it is so much more readable than the C++ counterpart.