I just realized that I never wrote an entry for January. It's the first month I have missed since I started this dev blog. To be honest, I didn't have much to report. I haven't really written any code, but I have been thinking a lot.
At first I was thinking about physics. I thought it would be nice to have actual collision detection with my terrain and possibly throw balls around, or maybe even drive a car. However, there was a big problem with this: how do I detect collisions with a mesh that is deformed entirely on the GPU? Obviously I would have to have some way of sending the physics data to the GPU, doing the collision detection there, and then somehow passing the resulting data back to the CPU.
Getting the data to the GPU is the easy part, I think. If I only use bounding spheres for all of the objects, then I can simply pass a normal Color texture to the GPU containing the position of each sphere in the RGB channels and its radius in the Alpha channel. It may even be possible to set a constant memory buffer (i.e. an array) with the data, which would be even easier.
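For the constant-array route, the HLSL side might look something like this; a minimal sketch, assuming a small fixed maximum sphere count (the names Spheres and SphereCount are my own placeholders, filled in from XNA with Effect.Parameters["Spheres"].SetValue(...)):

// Sphere data uploaded from the CPU each frame:
// xyz = sphere center in world space, w = sphere radius.
// SM3.0 constant registers are limited, so the array must stay small.
#define MAX_SPHERES 16

float4 Spheres[MAX_SPHERES];
int SphereCount; // how many entries of Spheres are actually in use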
Once I have this data, I can loop through each object in the vertex shader to see if it collides with the current vertex. The problem I ran into then is that I don't know how to get the data back to the CPU. I would obviously want to write the data to a render target, but render targets are written from the pixel shader, and the collision data is in the vertex shader. Pixel shaders cannot index into memory in XNA/DX9/SM3.0. [In DX10/SM4.0 they can index into constant memory; in DX11/SM5.0 both the vertex and pixel shaders can read and write dynamic resources.] I have no idea how I would pass the data from the vertex shader to the pixel shader.
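Just to make the idea concrete, here is a rough sketch of the per-vertex test; it reuses the placeholder Spheres/SphereCount constants from above (World and ViewProjection are assumed transform parameters), and the result ends up in a vertex output that I have no good way to route anywhere useful:

// Per-vertex collision check against the bounding spheres.
float4 Spheres[16];          // xyz = center, w = radius (placeholder)
int    SphereCount;
float4x4 World;
float4x4 ViewProjection;

struct VSOutput
{
    float4 Position : POSITION0;
    float  Hit      : TEXCOORD0; // collision result, stuck in the vertex output
};

VSOutput CollisionVS(float4 position : POSITION0)
{
    VSOutput output;
    float3 worldPos = mul(position, World).xyz;

    // Squared-distance check against every sphere (avoids the sqrt).
    output.Hit = 0;
    for (int i = 0; i < SphereCount; i++)
    {
        float3 d = worldPos - Spheres[i].xyz;
        if (dot(d, d) < Spheres[i].w * Spheres[i].w)
            output.Hit = 1;
    }

    output.Position = mul(float4(worldPos, 1), ViewProjection);
    return output;
}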
That means I would have to do the collision detection in the pixel shader instead. However, that means doing the checks for every object, for every pixel, which is massive overkill. I couldn't come up with a good solution, so I have pretty much given up on physics for now. It should be a cinch in DirectX 11 and Shader Model 5.0!
The next thing I was thinking about was efficiency. Currently I am calculating the normal in the pixel shader by doing 32 noise calculations per pixel, which is quite a strain on the GPU. I was reading an article about deferred rendering and I had a thought: if I only output the height of each pixel to a render target, then I could have another pass that reads the neighboring pixels in the render target in order to calculate the normal. That would be one pass doing 8 noise calculations per pixel, followed by a second pass doing 4 texture lookups per pixel. I imagine that would be a much faster way of doing things.
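As a sketch of what that second pass might look like (HeightMap and TexelSize are placeholder names, and I'm assuming the height is stored in a single channel of the render target), the normal can be reconstructed with central differences from the four neighbor lookups:

// Second pass: rebuild the normal from the height render target.
// TexelSize is 1 / (render target width, height).
texture HeightMap;
sampler HeightSampler = sampler_state { Texture = <HeightMap>; };
float2 TexelSize;

float4 NormalPS(float2 uv : TEXCOORD0) : COLOR0
{
    // The 4 texture lookups: heights of the left/right/down/up neighbors.
    float left  = tex2D(HeightSampler, uv - float2(TexelSize.x, 0)).r;
    float right = tex2D(HeightSampler, uv + float2(TexelSize.x, 0)).r;
    float down  = tex2D(HeightSampler, uv - float2(0, TexelSize.y)).r;
    float up    = tex2D(HeightSampler, uv + float2(0, TexelSize.y)).r;

    // Central differences; assumes one texel step equals one height unit,
    // otherwise scale the middle component accordingly.
    float3 normal = normalize(float3(left - right, 2.0, down - up));

    // Pack from [-1, 1] into [0, 1] so it fits in a Color render target.
    return float4(normal * 0.5 + 0.5, 1);
}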
I have yet to actually implement anything, though, so everything here is just speculation. Sorry for the lack of pretty pictures. I will try to get something worth showing off sometime soon.