Thursday, February 19, 2009

Fixed Lighting + Higher Res

As I mentioned in my previous post, I was getting some strange vertical lines appearing in my deferred lighting result. After I turned down the ambient light to make the lighting a bit more realistic, the lines became even more pronounced.

Setting that problem aside for a bit, I decided to increase the resolution because 800x600 just wasn't cutting it anymore. I went with a widescreen resolution because both my laptop and my desktop have widescreen displays. I settled on 1280x720, since 1920x1200 seemed like overkill for now.

The problem with increasing the resolution was that the lines got even worse! Now I was getting horizontal lines as well as vertical lines, so it looked like a big checkerboard mess. I spent several days trying to figure out what was going wrong. At first I thought it was a bad driver/GPU in my laptop, so I went to test it on my desktop, only to find out that my power supply was dead. Luckily my brother let me remote into his PC and run the app. I got the exact same results, so I knew it wasn't my GPU.

I then installed FX Composer to have a better debugging IDE, and I soon discovered that I was using the wrong texel offsets to sample neighbors in the world position texture. Fixing that removed the lines in FX Composer, but they were still appearing in XNA. I was messing around with my sampler filters when I finally fixed the problem by switching them from Point to Linear. While that does get rid of the lines, it comes at a cost: I am now getting about 18fps on average. Obviously the change in resolution figures into that as well.
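For anyone wondering what those texel offsets actually are: a one-texel step in UV space is just 1/width horizontally and 1/height vertically of the render target. Here's a minimal sketch in plain Python (standing in for the HLSL; the helper names are just illustrative, not from my actual shader), using the new 1280x720 target size:

```python
# Illustrative sketch: deriving the UV-space step needed to sample a
# neighboring texel in a full-screen pass. One texel equals 1/size in
# normalized UV coordinates.

def texel_size(width, height):
    """One texel expressed in UV space for a width x height target."""
    return (1.0 / width, 1.0 / height)

def neighbor_uv(uv, dx_texels, dy_texels, width, height):
    """UV of the texel dx/dy whole-texel steps away from `uv`."""
    sx, sy = texel_size(width, height)
    return (uv[0] + dx_texels * sx, uv[1] + dy_texels * sy)

# Step one texel right and one texel up from the screen center:
step  = texel_size(1280, 720)
right = neighbor_uv((0.5, 0.5), 1, 0, 1280, 720)
up    = neighbor_uv((0.5, 0.5), 0, -1, 1280, 720)
```

Using an offset that doesn't match the actual render target size (say, one left over from 800x600) makes every "neighbor" sample land on the wrong texel, which is exactly the kind of grid artifact I was seeing.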

I have some interesting new screenshots to share.

Sunday, February 8, 2009

Deferred Lighting

This weekend I implemented a lighting system like the one I talked about in my last two blog posts. I'm calling it deferred lighting because it doesn't do any lighting calculation until I have render targets for the scene. I have one render pass that has two targets: one containing the diffuse color of the scene, and the other containing the world position of each pixel. In a second render pass, I calculate the normal of each pixel by sampling the world positions of its neighboring pixels. I then simply do a standard lighting calculation using the normal and the diffuse color of the scene.
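To make the second pass concrete, here's a small CPU-side sketch of the normal reconstruction (plain Python standing in for the pixel shader, with illustrative helper names): take the world positions of the right and top neighbors, form two edge vectors from the center pixel, and cross them.

```python
# Illustrative sketch of the second pass: rebuild a per-pixel normal
# from the world-position render target by crossing the edge vectors
# to the right and top neighbors.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/length, v[1]/length, v[2]/length)

def pixel_normal(center, right, top):
    """Normal at `center` from the world positions of its right and
    top neighbors (tangent crossed with bitangent)."""
    tangent = sub(right, center)    # edge toward the +x neighbor
    bitangent = sub(top, center)    # edge toward the top neighbor
    return normalize(cross(tangent, bitangent))

# A flat patch in the xz-plane should give a straight-up normal:
n = pixel_normal((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, -1.0))
```

The winding of the two edge vectors decides which side the normal faces, so in the real shader the neighbor order has to match the coordinate convention or the lighting flips.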

It also has much better performance compared to the brute-force 32-noise-calculations method. At low altitudes I was getting 16fps with the noise method and 33fps with the deferred method. At high altitudes I was getting 12fps and 30fps, respectively. As you can see, I was getting at least double the framerate in every case.

Now for some pretty pictures. They are not much different from my previous lighting pictures; the important thing is that they are being rendered much faster now. I also fixed a slight bug in the previous lighting that made the light direction the same on every side of the planet (there was no dark side). There are some strange vertical lines appearing, which you can see in some of the screenshots below. I'm not sure why they are there, but I will continue to investigate.

In the last picture you can see the detailed designs that are being generated for the terrain itself. Just to show the difference between the lit and diffuse renderings, here is the diffuse texture alone for the last picture.

Thursday, February 5, 2009

Per Pixel Normal Calculation

Sorry, still no actual code or pretty pictures!

I just wanted to write up a quick note related to the second topic in my previous post. I did some Googling to see if other people have implemented a similar system, and indeed some have. In fact, I found an article by Microsoft that describes exactly what I was talking about.

In the article, they are creating procedural materials dynamically, so they have to calculate normals dynamically as well.

From the article:
"One solution is to run the shader multiple times and compute the difference in height at each sample point. If we calculated the height one pixel to the right of the currently rasterized pixel and one pixel above the currently rasterized pixel, we could compute tangent and bitangent vectors to the central pixel. Doing a cross product on these would give us the normal for that point."

What I found funny is how they start out talking about the ddx and ddy functions in HLSL, but in the end they still use the render target + second pass method.

"The solution that this sample uses by default is to render the perturbed heights of the objects in the scene into an off-screen render target. That render target is then read back in on another pass. For each pixel on the screen, its right and top neighbors are sampled. Tangent and bitangent vectors are created from the neighbors to the central pixel. A cross product between these will give the normal."
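Restating the quoted approach as a tiny CPU sketch (plain Python with illustrative names; heights run along the y axis, and the grid spacing is made up): three height samples — the current pixel plus its right and top neighbors — give a tangent and bitangent, and their cross product is the normal.

```python
# Illustrative sketch of the article's method: normal from the heights
# of a pixel and its right/top neighbors, height along the y axis.

def normal_from_heights(h_center, h_right, h_top, spacing=1.0):
    """Normal from three height samples on a regular grid."""
    tangent   = (spacing, h_right - h_center, 0.0)   # toward +x neighbor
    bitangent = (0.0, h_top - h_center, -spacing)    # toward top neighbor
    # normal = tangent x bitangent
    nx = tangent[1] * bitangent[2] - tangent[2] * bitangent[1]
    ny = tangent[2] * bitangent[0] - tangent[0] * bitangent[2]
    nz = tangent[0] * bitangent[1] - tangent[1] * bitangent[0]
    length = (nx*nx + ny*ny + nz*nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# Flat heights should give a straight-up normal:
flat = normal_from_heights(5.0, 5.0, 5.0)
```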

I now feel very confident about this method of doing things and I will proceed to implement lighting in this manner. I will probably branch off of my existing planet codebase so I can easily compare the differences between the brute-force noise calculation vs the "deferred" style.

Tuesday, February 3, 2009

Hello 2009!

I just realized that I never wrote an entry for January. It's the first month without an update since I started this dev blog. To be honest, I didn't have much to report. I haven't really written any code, but I have been doing a lot of thinking.

At first I was thinking about physics. I thought it would be nice to actually have collision detection with my terrain and possibly throw balls around, or maybe even drive a car. However, there was a big problem with this: how do I detect collisions with a mesh that is deformed entirely on the GPU? Obviously I would have to have some way of sending the physics data to the GPU, doing the collision detection there, and then somehow passing the resultant data back to the CPU.

Getting the data to the GPU is the easy part, I think. If I only use bounding spheres for all of the objects, then I can simply pass one normal Color texture to the GPU containing the position of each sphere in the RGB and the radius of the sphere in the alpha. It may even be possible to set a constant memory buffer (i.e. an array) with the data, which would be even easier.
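Here's a rough sketch of that packing scheme in plain Python. Since a normal Color texture only stores values in [0, 1], the sphere centers have to be remapped into a known world bound first; the bounds and names below are made up purely for illustration.

```python
# Illustrative sketch: pack one bounding sphere per RGBA texel,
# center in RGB and radius in alpha, all remapped to [0, 1].
# WORLD_MIN/WORLD_MAX/MAX_RADIUS are assumed bounds, not real values
# from the project.

WORLD_MIN, WORLD_MAX = -512.0, 512.0
MAX_RADIUS = 64.0

def pack_sphere(center, radius):
    """Map a sphere (center, radius) to a normalized RGBA tuple."""
    span = WORLD_MAX - WORLD_MIN
    r, g, b = ((c - WORLD_MIN) / span for c in center)
    return (r, g, b, radius / MAX_RADIUS)

def unpack_sphere(texel):
    """Recover (center, radius) from a packed RGBA tuple."""
    span = WORLD_MAX - WORLD_MIN
    center = tuple(WORLD_MIN + c * span for c in texel[:3])
    return center, texel[3] * MAX_RADIUS

texel = pack_sphere((0.0, 128.0, -256.0), 16.0)
center, radius = unpack_sphere(texel)
```

With an 8-bits-per-channel Color texture the positions would be heavily quantized (1024 units of world span / 256 steps = 4-unit precision), so a floating-point surface format would probably be needed in practice.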

Once I have this data, I can run through each object in the vertex shader to see if it collides with the current vertex. The problem I ran into then is that I don't know how to get the data back to the CPU. I would obviously want to write the data to a render target. Unfortunately, the collision data is in the vertex shader, and pixel shaders cannot index into memory in XNA/DX9/SM3.0. [In DX10/SM4.0 they can index into constant memory; in DX11/SM5.0 both vertex and pixel shaders can read and write dynamic resources.] I have no idea how I would pass the data from the vertex shader to the pixel shader.

That means I must somehow do the collision detection in the pixel shader. However, that means doing the checks for every object, for every pixel, which would be massive overkill. I couldn't come up with a good solution, so I have pretty much given up on physics for now. It should be a cinch in DirectX 11 and Shader Model 5.0!

The next thing I was thinking about was mainly efficiency. Currently I am calculating the normal in the pixel shader by doing 32 noise calculations per pixel, which is quite a strain on the GPU. I was reading an article about deferred rendering and I had a thought: if I only output the height of each pixel to a render target, then I could have another pass that reads the neighboring pixels in the render target in order to calculate the normal. That means one pass doing 8 noise calculations per pixel and a second pass doing 4 texture lookups per pixel. I imagine that would be a much faster way of doing things.
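As a sketch of what that second pass would compute (plain Python standing in for the shader; the grid spacing and names are illustrative), here is a normal built from four neighboring height lookups using central differences:

```python
# Illustrative sketch: normal from four height-texture lookups
# (left/right and down/up neighbors) via central differences,
# with height along the y axis.

def normal_from_height_taps(h_left, h_right, h_down, h_up, spacing=1.0):
    """Normal from four neighboring height samples on a regular grid."""
    dx = (h_right - h_left) / (2.0 * spacing)  # slope along x
    dz = (h_up - h_down) / (2.0 * spacing)     # slope along z
    # Gradient form of the cross-product normal: (-dh/dx, 1, -dh/dz).
    n = (-dx, 1.0, -dz)
    length = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
    return (n[0] / length, n[1] / length, n[2] / length)

# Flat terrain: all four taps equal, so the normal points straight up.
flat = normal_from_height_taps(2.0, 2.0, 2.0, 2.0)
```

Per pixel, this trades 24 noise calculations (32 minus the 8 still needed for the height) for 4 cheap texture reads, which is where the expected speedup comes from.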

I have yet to actually implement anything, though, so everything here is just speculation. Sorry for no pretty pictures. I will try to get something worth showing off soon.