I started working on the engine for my next game, which will - if everything goes as planned - be the first game released under the label nXperience.
And because every engine needs a good particle system (amongst other things) I started with that.
- Each particle is a single vertex in a VBO that is rendered with a geometry shader to update its properties.
- The results of this process are stored in another VBO via Transform Feedback.
- This VBO is then rendered with another geometry shader to expand the points to quads.
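Sketched as GLSL, the update pass might look like the geometry shader below. This is a minimal sketch: the attribute names, the gravity rule, and the lifetime-based drop condition are my assumptions, not the engine's actual vertex format.

```glsl
#version 150

layout(points) in;
layout(points, max_vertices = 1) out;

// Passed through unchanged from a trivial vertex shader.
in vec3  vPosition[];
in vec3  vVelocity[];
in float vLife[];

// Captured into the second VBO via transform feedback.
out vec3  outPosition;
out vec3  outVelocity;
out float outLife;

uniform float dt;   // frame time in seconds

void main()
{
    float life = vLife[0] - dt;
    if (life <= 0.0)
        return;     // dead particle: emit nothing, so it is dropped

    outPosition = vPosition[0] + vVelocity[0] * dt;
    outVelocity = vVelocity[0] + vec3(0.0, -9.81, 0.0) * dt; // example rule
    outLife     = life;
    EmitVertex();
    EndPrimitive();
}
```

With GL_RASTERIZER_DISCARD enabled during this pass nothing is rasterized; the outputs named via glTransformFeedbackVaryings at link time are simply captured into the second VBO.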
At this point the particle system updates and renders 3 million particles at 20fps on a GTX 460, which is far from breathtaking.
On the other hand, it is anything but optimised.
Each vertex in the VBO is 64 bytes in size, which is much too big, and most of it isn't even used. But I won't cut down the size of a single vertex until I have played around with the system and know for certain what data needs to be available per vertex.
I'd also like to see how much influence triple-buffering of the update VBOs has on performance. But considering that a VBO holding 3 million particles weighs around 185MB, triple-buffering doesn't look feasible at the moment. That's another reason why the per-vertex size needs to go down.
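A quick back-of-the-envelope check of those numbers (a standalone sketch; the 3 million particles and 64 bytes per vertex are the figures from the text):

```c
/* Footprint of the update VBOs: particles * bytes-per-vertex. */
static long long vbo_bytes(long long particles, long long bytes_per_vertex)
{
    return particles * bytes_per_vertex;
}

static double to_mib(long long bytes)
{
    return bytes / (1024.0 * 1024.0);
}

/* vbo_bytes(3000000, 64) == 192000000 bytes, about 183 MiB per VBO.
   Triple buffering triples that to roughly 549 MiB; halving the vertex
   to 32 bytes would bring the triple-buffered total back to ~275 MiB. */
```

So the memory problem really is driven by the vertex size: every byte trimmed from the vertex saves ~9 MiB per buffered copy.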
A quick note on how particles are generated: they are not generated on the GPU.
The geometry shader that updates the particles only drops them under certain conditions.
New particles are spawned by a Lua script which inserts them into another VBO. This very VBO is then added to the double-buffered VBOs in one swoop via a single buffer-to-buffer copy.
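The post doesn't name the call used for that copy; one candidate in GL 3.1+ is glCopyBufferSubData, sketched below. This is only an illustration and needs a live GL context; spawnVBO, targetVBO, liveParticles, newParticles, and VERTEX_SIZE are placeholder names, not the engine's.

```c
/* Append freshly spawned particles (written by the Lua side into spawnVBO)
   to the live particle VBO in a single server-side copy. */
glBindBuffer(GL_COPY_READ_BUFFER,  spawnVBO);
glBindBuffer(GL_COPY_WRITE_BUFFER, targetVBO);
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                    0,                            /* read offset            */
                    liveParticles * VERTEX_SIZE,  /* write offset: append   */
                    newParticles  * VERTEX_SIZE); /* bytes to copy          */
```

The appeal of a server-side copy is that the particle data never has to round-trip through client memory once the spawn VBO is filled.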
Developer Support (rant)
I started developing the particle system on a Radeon HD3850 which wasn't really up to the task.
So I needed a new graphics card, and since the new Radeon 6xxx series is not available yet and I need a card from both vendors (AMD/nVidia) anyway, I bought a GTX 460.
So far, so good and it seems like a nice piece of hardware.
But having used ATI cards for the last few years, I had only ever heard about the great nVidia developer support and how dedicated they are to supporting OpenGL.
First of all, PerfHUD is D3D-only. Nothing new there. At least there is GLExpert, which... oh... hasn't worked on 64-bit Windows for quite some time now.
Ok, so that doesn't work. But there is still hope, because ARB_debug_output is all I need; it turned out to be an incredibly valuable and informative tool while I was still using my trusted ATI Radeon. Unfortunately, the current nVidia drivers don't support it.
Great! So I'm stuck with manually calling glGetError() after each call and then figuring out what exactly is wrong. :(
The beta drivers do support ARB_debug_output, but a debug output message looks like this: "GL_INVALID_OPERATION error generated." That is nice, but not really helpful when it comes to finding the source (code and reason) of the error.
Besides, it seems that DrawTransformFeedback is broken in the current drivers; at least I'm not the only one having problems with it.
So for the moment it's back to the HD3850, and to applying as a registered developer at nVidia so I can file a bug report.