Particles on the GPU and Winning

These last couple of days I’ve decided to dive into what might be the most important technique for games of the future: computing on the GPU.

Now like you, I like particles. I remember doing a simple 2D particle system back in the Amiga days. It was basically a star field on the Amiga. Not really 3D, other than that it looked it. (I always preferred faking.) I couldn’t draw a lot of particles (stars) before it started chugging. Then years later, in the mid 1990s in fact, I worked on a particle system again on the PC. This came on the back of numerous PC-related tasks, one of which was to convert an undocumented, unportable video decoder written in assembler that only worked on a 320×200 screen to one that ran in C on a 640×400 screen. In a weekend. Which I did. You should have seen the look on the assembler programmer’s face when I turned up on Monday (I was the Producer) and demonstrated it. He’d previously said it couldn’t be done. (Seeing a pattern here?)

So anyway, I worked on a particle system on a PC that chugged at a thousand particles, but was OK with 400 or so at a decent clip. I was not happy.

And so today, a few thousand particles on a CPU on top of everything else is good, but it’s not good enough for me. I’m thinking of hundreds of thousands of particles. The only way to achieve that is to do them on the GPU and exploit its enormous parallel compute power. Particles lend themselves to parallelisation, of course.

So I’ve been looking, learning, rewriting, testing, writing shaders, understanding shaders, and I’m getting somewhere.

Computing on the GPU is achieved through a technique called FBO ping-ponging. It sounds tough. It’s not all that tough. The reason you do this is because you cannot read from and write to a texture map at the same time. Why texture maps? We’ll come to that, but suffice to say they’re no longer acting as texture maps.

You use a texture map to store other data, in our case particle data: velocity, position, time to live and so on. You then read this data, perform your computations, and write the modified data out to the write-only texture map. You then flip the FBOs and carry on. With my test program I am initialising from the CPU, but there is no reason why you can’t initialise from the GPU, and that’s what I’ll be doing at some point. Ping-ponging textures like this is the only way to maintain state in what would otherwise be a stateless architecture. Exploiting the heavily parallel nature of modern GPUs is what gives these techniques such extraordinary performance over doing the same work on the CPU.
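
The loop is easier to see with the GL plumbing stripped away. Here’s a minimal CPU-side sketch of the ping-pong idea; the `Particle` struct and `PingPong` helper are hypothetical stand-ins for the texels of the two floating-point textures, and on the GPU the body of `step` would be a fragment shader run once per particle.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical particle state. On the GPU this lives in the texels of a
// floating-point texture, not in a struct.
struct Particle {
    float x, y;    // position
    float vx, vy;  // velocity
    float ttl;     // time to live, in seconds
};

// Two buffers standing in for the two FBO-attached textures. We only
// ever read from one and write to the other, because a texture cannot
// be read from and written to in the same pass.
struct PingPong {
    std::array<std::vector<Particle>, 2> buf;
    int readIdx = 0;

    const std::vector<Particle>& read() const { return buf[readIdx]; }
    std::vector<Particle>&       write()      { return buf[1 - readIdx]; }

    // One simulation step: the "fragment shader" runs once per particle.
    void step(float dt) {
        const std::vector<Particle>& src = read();
        std::vector<Particle>& dst = write();
        dst.resize(src.size());
        for (std::size_t i = 0; i < src.size(); ++i) {
            Particle p = src[i];
            p.x   += p.vx * dt;
            p.y   += p.vy * dt;
            p.ttl -= dt;
            dst[i] = p;
        }
        readIdx = 1 - readIdx;  // the flip: this frame's output is next frame's input
    }
};
```

The flip at the end is the whole trick: last frame’s write texture becomes this frame’s read texture, which is how state survives between passes.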

I have a lot to learn. Ray-marching, distance fields, ray-tracing, ambient occlusion in real time and other forms of optimised global illumination. It sounds heavy, but it’s just process and like anything else, if someone else has done it, you probably can too.

In the meantime, I am going to spend some more time studying ShaderToy and seeing just how they achieve such incredible effects. I’m planning on having Chimera looking amazing. At the beginning of the week, it was a crazy, foolish dream. Tonight, it is within sight.

One Step Beyond

Never, ever, ever give in. Sometimes, I accept that the day is done. And that the battle continues the next day. It hasn’t been lost, it’s just going to go on another day. Then I carry on. And on. And on. 

I’d made little visible progress tonight and I was about to call it a night. Then I pushed past and had the breakthrough. I now have the original Chimera drawing in 3D. The blocks are placeholders and all the same, and the camera needs adjusting, but it’s there. Lighting, shaders, perspective, 3D, C++, 3 days.

Never give up. 

Push through.


Screen Shot 2013 05 29 at 22 28 27

Screen Shot 2013 05 29 at 22 28 37

Screen Shot 2013 05 29 at 22 28 39

Day 3 of Mission “You’re Having a Laugh Mate”

May 29, 2013

9:42 AM

Towards a Full Game

Goals for today:

  • Create a class derived from VBO for static blocks
  • Render whole Chimera room with placeholders
  • Look at using z-buffers
  • Create graphics from original game, extruding sprite traced shapes
  • Get a basic particle system going
  • Investigate GPU based particle system
  • Investigate AO (look at shadertoy examples)
  • Look into Assimp
  • Create credits list

48 Hours

Screen Shot 2013 05 28 at 23 25 05

From a former programmer who never thought he could get past 2D to a family man doing 3D OpenGL graphics in C++, using vertex and fragment shaders, lighting, texturing, mesh export and import and a lot more, all in 48 hours. I have to be very, very happy with that.

Tomorrow I think I will have the Chimera map being drawn using placeholder objects in 3D. What’s blown my mind is that if I want to go for straight-up isometric, I can just use an isometric transform instead of a full perspective transform. And as for fragment shaders, well, I’m only using those for lighting so far, but I could go to town there.
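
To make that concrete, here’s a sketch of the classic 2:1 isometric mapping from grid coordinates to the screen. The tile sizes and the `isoProject` helper are illustrative, not taken from the real game.

```cpp
#include <cassert>

// A minimal 2:1 isometric projection, the kind of transform the
// original 8-bit Chimera faked with sprites.
struct ScreenPos { int sx, sy; };

const int kTileW = 32;  // on-screen width of one grid cell (illustrative)
const int kTileH = 16;  // on-screen height of one grid cell (illustrative)

// Map a grid cell (x east, y south, z up) to 2D screen coordinates.
// Moving east shifts right-and-down, moving south shifts left-and-down,
// and height simply lifts the cell up the screen.
ScreenPos isoProject(int x, int y, int z) {
    ScreenPos p;
    p.sx = (x - y) * (kTileW / 2);
    p.sy = (x + y) * (kTileH / 2) - z * kTileH;
    return p;
}
```

Because there is no perspective divide, this is just integer adds and shifts per cell, which is why a fixed isometric view is so cheap.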

I’ve not been this excited about something in a very long time.

Want to know how to keep young? Keep learning. Never be afraid to set stupidly high targets. And don’t be surprised when you hit them.

Screen Shot 2013 05 28 at 23 32 00

It’s Never Too Late

May 28, 2013

12:22 PM

It’s Never Too Late

This is the last fig leaf.

Yesterday, I went from never having really done any 3D programming to getting a cube up on a display with a simple geometry and fragment shader. That was a great start.

Today my ambition is to entertain some friends, whilst ignoring Twitter as much as I can, and to import a textured mesh and display it. If things go really well, I’d like to get some kind of ambient occlusion going.

I’m an optimist. And because it shouldn’t be possible, I’m going to do it. Most people confuse the “infeasible for most” with “impossible”. I like reaching for the stars, it’s less crowded up there too.

I understand I need to be able to load an OBJ file and that many 3D packages, like Blender for example, will export these files. I know that meshes have a winding order and that it’s better that a 3D package take care of that rather than do all of this manually. The winding order goes clockwise or counter-clockwise, and depending on that order, a face (triangle) either faces the camera, or doesn’t, in which case it can be culled. Byron assures me that hardware now takes care of that. Having done some preliminary reading on shadow maps, it might still make sense to draw back faces anyway, something to do with rounding errors, but we’ll cross that shadowed bridge when we come to it.
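
The winding test itself boils down to a 2D cross product on the projected triangle. Here’s a sketch of what the hardware cull computes in essence; the sign convention is an assumption (it depends on your coordinate system and the front-face setting).

```cpp
#include <cassert>

// Screen-space point of a projected triangle vertex.
struct Vec2 { float x, y; };

// Signed area of triangle (a, b, c): half the 2D cross product of the
// edges (b - a) and (c - a). In a y-up coordinate system, a positive
// value means the vertices run counter-clockwise.
float signedArea(Vec2 a, Vec2 b, Vec2 c) {
    return 0.5f * ((b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x));
}

// With counter-clockwise defined as front-facing, a non-positive area
// means the triangle is facing away and can be culled.
bool isCounterClockwise(Vec2 a, Vec2 b, Vec2 c) {
    return signedArea(a, b, c) > 0.0f;
}
```

Swapping any two vertices flips the sign, which is exactly why a consistent winding order from the 3D package matters.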

A 3D package will also take care of a normal map and the texture UVs. I’m always happy about a tool that takes away pain. There is no need to inflict pain on yourself. In fact, the first thing I did for Chimera in January 1985 was to write an isometric sprite editor for it, in Z80 assembler, as you do. I wasn’t going to use graph paper (though that had been my preferred method for creating bitmap fonts!)

The Cinder library includes an OBJ loader that allows me access to a TriMesh object directly for OpenGL draw calls. This is fantastic.

So my plan is to make some simple Chimera cubes and apply some simple texture to them, then spit them out of some draw package. Which draw package?

Well Blender is free, and free is good, but the interface is intimidating. It does however make it really easy to create voxel output, which is the kind of look I’ll need for Chimera, even though I don’t know if I’ll be using a voxel technique necessarily.

Then there’s Cheetah3D and there’s Milkshape. I’ve not really looked into Milkshape, but I have a problem with sloppy looking websites, and the Milkshape site looks like something from the ’90s. That doesn’t sound hopeful.

Cheetah3D looks very complete, is available for under £50 on the App Store (remember I’m on a Mac) and does OpenGL preview. Also, unlike Blender, the interface looks Mac native. That means a lot to me. It saves me time when I’m trying to pick something new up.

I might also be able to enlist the help of an architect friend in creating some Chimera objects.

To say I’m excited is an understatement.

I used to be afraid of water. I couldn’t swim most of my life. I’d do anything I could to avoid it. Then I made up my mind at 36 to fix it. Steven Shaw took away my fear of water and eventually, that fear turned into love. I’m not a strong swimmer, or even much of a swimmer, but I’d love to swim every day if it was more convenient. Becoming a swimmer changed my self-image completely. Before, I saw a limit. Suddenly, that limit was gone, along with potentially all other limits. I learned to juggle. Not very well, but I did it. I learned to sing. Not very well, but enough to record demos. Many other barriers have been smashed in my life.

And now I’m removing the last fig leaf. 3D graphics.

Feels good.

Not Bad for a Day

1:57 PM


Isometric allows me to make a lot of assumptions.

For each Chimera block, I need to draw 4,096 voxels. Each voxel is always going to have the same shape, or geometry, but not necessarily the same scale or size.

3:54 PM

Lots of learning about how to do meshes and stuff in Cinder.

I have a cube being drawn as a VBO. Next up, I need to set the normals and rotate it isometrically before it gets drawn.

6:10 PM

Pretty lost in experimentation. I keep oscillating between a simple isometric approach (fast) and a full-blown camera approach (flexible, and possibly not much slower).

11:05 PM

A breakthrough.

After pontificating one way and another, I took half an hour of Byron’s time (thanks Byron!) and received some valuable insights into how the rendering pipeline works.

I spent a few more hours working and you can see the results for yourself. This morning I had never done any real 3D programming in my life. I’m a family man and a business development person with lots of distractions and of course, today my Twitter timeline saw an unusually high level of activity… And now, I have shaders running, drawing a cube from a given camera position. There are still some kinks to be worked out, but I’m really happy with how far I came in a single day.

Chimera2 image002

Towards Basic Understanding

11:53 AM

Towards basic understanding

I just tried this code:

gl::translate( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0) );
gl::rotate( Vec3f(35,20,0) );
gl::drawCube( Vec3f::zero(), Vec3f(100,100,100) );

(Incidentally, I’m following the tutorial here)

Now the cube doesn’t rotate, because the rotation is not being added to the model view. I’m still confused as to where the model view is being stored. Perhaps the transforms are actually being applied to the world?

The tutorial I’ve used so far didn’t get me very far. So let’s have a look further afield.

12:54 PM

Think again…

I don’t need rotational transforms, as the world will not be rotated, only translated. I don’t need a perspective transform, as the perspective is fixed.

I will need to learn some vector maths for lighting I suspect.

It does seem silly to think of voxels. There is no projection to do; the faces presented will always be the same. Voxels make sense if we take each individual block in Chimera and turn those into voxels, with depth and so on.

In the original Chimera, each block was 16×16; voxel-based, this would translate to 16×16×16, or 4,096 voxels. Each one could have its own material. If we were to up the resolution, this would be 32×32×32 per block, or 32,768 voxels. Still manageable if we keep the total number of blocks down. So let’s have a think about how to draw this with flat lighting.
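
As a sketch of the “extrude the sprite” idea, here’s how one 16-pixel row of an original sprite could become a run of voxels. The bit layout and the `extrudeRow` helper are hypothetical, not how the real data is stored.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One cell of a block's voxel grid.
struct Voxel { int x, y, z; };

// Expand a 16-bit sprite row (bit 15 = leftmost pixel) into voxels at
// row y, extruded `depth` cells along z. A fully set row at depth 16
// yields 16 * 16 = 256 voxels; a solid 16x16 sprite yields 4,096.
std::vector<Voxel> extrudeRow(std::uint16_t rowBits, int y, int depth) {
    std::vector<Voxel> out;
    for (int x = 0; x < 16; ++x) {
        if (rowBits & (1u << (15 - x))) {
            for (int z = 0; z < depth; ++z) {
                Voxel v = { x, y, z };
                out.push_back(v);
            }
        }
    }
    return out;
}
```

Empty pixels produce no voxels at all, so in practice most blocks would need far fewer than the worst-case 4,096.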

1:21 PM

I don’t need to overthink this. What I want to do is not complex. It is simple. Strip it down. Strip it back. Think of the essence of the thing. Cubes. Three faces. They never present differently. Three faces. Six triangles. Three lighting values. This should be easy.

The very next thing I need to do is draw a Chimera map, but instead of drawing sprites, I draw voxels.

Baby Steps

11:39 AM

Baby Steps

I just added this:

void Basic3DApp::draw()
{
    // clear out the window with black
    gl::clear( Color( 0, 0, 0 ) );
    gl::rotate( Vec3f(0,1,0) );
    gl::drawCube( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0),
                  Vec3f(100,100,100) );
}

Now the cube is rotating in 3D space.

I have a solid cube rotating about the Y axis. This I achieved before lunchtime on my first day of experimenting with 3D in C++.

Into the Water

10:50 AM

Into the water

Three types of matrices are used to convert 3D geometry into a 2D image:

  • Model (maps object’s local space coordinates to world space coordinates)
  • View (maps world space coordinates to camera coordinates)
  • Projection (maps camera coordinates to 2D screen space)
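
Here’s a stripped-down sketch of that three-stage flow using plain functions instead of 4×4 matrices. The translations and the focal length `d` are illustrative; real code composes Model, View and Projection matrices and lets the GPU apply them per vertex.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Model: local -> world. Here the "model matrix" is just a translation
// placing the object somewhere in the world.
Vec3 modelToWorld(Vec3 local, Vec3 objectPos) {
    Vec3 w = { local.x + objectPos.x, local.y + objectPos.y, local.z + objectPos.z };
    return w;
}

// View: world -> camera space. For simplicity the camera sits at
// cameraPos, axis-aligned, looking down +z.
Vec3 worldToCamera(Vec3 world, Vec3 cameraPos) {
    Vec3 c = { world.x - cameraPos.x, world.y - cameraPos.y, world.z - cameraPos.z };
    return c;
}

// Projection: camera space -> 2D, via a simple perspective divide with
// an illustrative focal length d. An isometric view would skip the
// divide entirely and just drop z.
Vec2 project(Vec3 cam, float d) {
    Vec2 s = { cam.x * d / cam.z, cam.y * d / cam.z };
    return s;
}
```

Chaining the three stages takes a vertex all the way from the object’s own space to a point on screen.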

I’ve got a basic program up and running. Very simple.

gl::drawCube( Vec3f(getWindowWidth()/2,getWindowHeight()/2,0), Vec3f(100,100,100) ); 

Chimera2 image001

The above is my first ever cube drawn in OpenGL.

From humble beginnings. Remember, I’ve not done 3D programming before. And I am a suit. Bear that in mind, and if you’re an oldie like me, let’s do this together!