There are five different ways (that I can think of) to address this problem.

1) Do absolutely nothing

For the longest time, this was the path I followed with my original LOD algorithm. If you are okay with having nasty cracks in your mesh, then this is definitely the way to go. But that is no longer acceptable to me.

2) Cheap fix (force all edges to be 1)

If no quad edges are subdivided at all, then there will be no cracks and it will be fast. The problem is that there is no detail at the edges, which quickly becomes very obvious and ugly.

3) Expensive fix (force all edges to be 64)

Here is the flip side of the previous option: all quad edges are subdivided to the maximum level. This ensures that the best detail will be used at each edge. However, it is far too expensive to do for every quad.
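Both fixed-factor options boil down to writing one constant into every tessellation factor in the patch constant function. A minimal sketch (the `output` struct with `SV_TessFactor`/`SV_InsideTessFactor` fields follows the usual D3D11 quad-patch layout; the names are illustrative, not from a specific implementation):

```hlsl
// 1.0f reproduces option 2 (no subdivision); 64.0f reproduces option 3.
static const float fixedFactor = 1.0f;

output.edges[0] = fixedFactor;
output.edges[1] = fixedFactor;
output.edges[2] = fixedFactor;
output.edges[3] = fixedFactor;
output.inside[0] = fixedFactor;
output.inside[1] = fixedFactor;
```

Because every patch gets identical factors, neighbouring edges always match and no cracks can appear; the trade-off is purely detail versus cost.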

4) Be smart about it (use adjacency information)

This is the method that Jack Hoxley uses and describes here. Basically, he builds a vertex buffer that contains the 4 vertices of the quad plus another 8 vertices representing the 4 adjacent quads. In the hull shader, he calculates the midpoint of each quad, then calculates the distance from the midpoint to the camera (and from that, a tessellation factor). He chooses the minimum factor for each edge so that adjacent quads match.

This is a pretty good solution, but it requires building a large vertex buffer containing adjacency information, as well as the additional midpoint calculation in the hull shader.
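A hedged sketch of that midpoint approach, to make the cost concrete (function and variable names here are illustrative assumptions, not Hoxley's actual code; `cameraPosition`, `minDistance`, `maxDistance`, `minLOD`, and `maxLOD` are assumed to come from a constant buffer):

```hlsl
// Distance-based factor computed from a quad's midpoint.
float FactorFromMidpoint(float3 a, float3 b, float3 c, float3 d)
{
    float3 midpoint = (a + b + c + d) * 0.25f;
    float t = saturate((distance(cameraPosition, midpoint) - minDistance)
                       / (maxDistance - minDistance));
    return lerp(minLOD, maxLOD, 1.0f - t);
}

// In the patch constant function: one midpoint calculation for this quad,
// plus one per neighbour (vertices 4..11 are the adjacency data), then the
// minimum of the two factors on each shared edge so the quads match.
float thisQuad  = FactorFromMidpoint(op[0].position, op[1].position,
                                     op[2].position, op[3].position);
float neighbour = FactorFromMidpoint(op[0].position, op[1].position,
                                     op[4].position, op[5].position);
output.edges[0] = min(thisQuad, neighbour);
// ...and likewise for the other three neighbours.
```

Five midpoint/distance evaluations per patch, plus the tripled vertex buffer, is the overhead the next option avoids.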

5) Do it right (calc factors from each vertex)

The next question is, can we do efficient watertight adaptive tessellation without adjacency information or the midpoint calculation? The answer is yes! If we calculate the tessellation factors from the vertices themselves, then we can guarantee that the surrounding quads will use the same factors (because they are using the same vertices).

The basic algorithm is this:

- Calculate the tessellation factor based on camera distance for each of the 4 vertices

```hlsl
float distanceRange = maxDistance - minDistance;

float vertex0 = lerp(minLOD, maxLOD, 1.0f - saturate((distance(cameraPosition, op[0].position) - minDistance) / distanceRange));
float vertex1 = lerp(minLOD, maxLOD, 1.0f - saturate((distance(cameraPosition, op[1].position) - minDistance) / distanceRange));
float vertex2 = lerp(minLOD, maxLOD, 1.0f - saturate((distance(cameraPosition, op[2].position) - minDistance) / distanceRange));
float vertex3 = lerp(minLOD, maxLOD, 1.0f - saturate((distance(cameraPosition, op[3].position) - minDistance) / distanceRange));
```

- Use the minimum value for each edge factor (pair of vertices)

```hlsl
output.edges[0] = min(vertex0, vertex3);
output.edges[1] = min(vertex0, vertex1);
output.edges[2] = min(vertex1, vertex2);
output.edges[3] = min(vertex2, vertex3);
```

- Use the overall minimum value for the inside tessellation factor

```hlsl
float minTess = min(output.edges[1], output.edges[3]);

output.inside[0] = minTess;
output.inside[1] = minTess;
```

Note: I originally thought the inside factor should be the maximum of the 4 vertices, but after viewing it in action, I felt that the minimum was better. (Since edges[1] is min(vertex0, vertex1) and edges[3] is min(vertex2, vertex3), minTess is the overall minimum of all 4 vertex factors.)
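Putting the three steps together, the whole thing fits in one small patch constant function. This is a sketch under assumed names: the constant buffer layout, the `PatchConstants` struct, and `VS_OUTPUT` are illustrative, not from the article; only the factor math and the min logic come from the steps above.

```hlsl
cbuffer TessellationBuffer
{
    float3 cameraPosition;
    float  minDistance;   // at or below this distance, maxLOD applies
    float  maxDistance;   // at or beyond this distance, minLOD applies
    float  minLOD;        // e.g. 1.0f
    float  maxLOD;        // e.g. 64.0f
};

struct PatchConstants
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

// Per-vertex factor: shared vertices yield identical factors in every
// patch that touches them, which is what makes the result watertight.
float FactorFromVertex(float3 position)
{
    float t = saturate((distance(cameraPosition, position) - minDistance)
                       / (maxDistance - minDistance));
    return lerp(minLOD, maxLOD, 1.0f - t);
}

PatchConstants ConstantHS(InputPatch<VS_OUTPUT, 4> op,
                          uint patchId : SV_PrimitiveID)
{
    PatchConstants output;

    // Step 1: a tessellation factor per vertex, from camera distance.
    float vertex0 = FactorFromVertex(op[0].position);
    float vertex1 = FactorFromVertex(op[1].position);
    float vertex2 = FactorFromVertex(op[2].position);
    float vertex3 = FactorFromVertex(op[3].position);

    // Step 2: each edge takes the minimum of its two endpoint factors.
    output.edges[0] = min(vertex0, vertex3);
    output.edges[1] = min(vertex0, vertex1);
    output.edges[2] = min(vertex1, vertex2);
    output.edges[3] = min(vertex2, vertex3);

    // Step 3: the overall minimum for both inside factors.
    float minTess = min(output.edges[1], output.edges[3]);
    output.inside[0] = minTess;
    output.inside[1] = minTess;

    return output;
}
```

No adjacency buffer, no midpoints: four distance evaluations per patch, and matching edge factors fall out of the shared vertices for free.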

That's it! Simple, fast, and easy watertight adaptive tessellation.

Check out the video of it in action: (I recorded the video at 1280x720, so be sure to view it at 720p to see the little details.)