However, I'm having trouble understanding a couple of points in the code.

In his energy minimization article, he shows a method to distribute samples evenly around a sphere. It's essentially equivalent to applying repulsive spring forces between the samples. The code is:

- Code: Select all
```
// Energy minimization
int iter = 100;
while( iter-- ) {
    for( int i = 0; i < samples; i++ ) {
        Point3 force;
        Point3 res = Point3( 0.0f, 0.0f, 0.0f );
        Point3 vec;
        float fac;

        vec = sampleSphere[ i ];

        // Minimize with other samples
        for( int j = 0; j < samples; j++ ) {
            force = vec - sampleSphere[ j ];
            fac = DotProd( force, force );
            if( fac != 0.0f ) {
                fac = 1.0f / fac;
                res.x += fac * force.x;
                res.y += fac * force.y;
                res.z += fac * force.z;
            }
        }

        res = res * 0.5f;
        sampleSphere[ i ] += res;
        sampleSphere[ i ] = Normalize( sampleSphere[ i ] );
    }
}
```
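For anyone without the 3ds Max SDK types, here's a minimal self-contained sketch of the same repulsion scheme. The `Vec3` struct and the deterministic start positions are my own stand-ins, not from the article:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot( const Vec3& a, const Vec3& b ) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

static Vec3 normalize( const Vec3& v ) {
    float len = std::sqrt( dot( v, v ) );
    return { v.x/len, v.y/len, v.z/len };
}

// Repulsion iteration as in the quoted code: each sample is pushed away
// from every other sample by the offset vector scaled by 1/|offset|^2
// (a 1/r falloff in the offset's direction), then re-projected onto the
// unit sphere.
static std::vector<Vec3> distributeOnSphere( int samples, int iter ) {
    std::vector<Vec3> s( samples );
    for( int i = 0; i < samples; i++ )  // arbitrary distinct start points
        s[i] = normalize( { std::cos( 1.0f + i ),
                            std::sin( 2.0f + i*i ),
                            std::cos( 3.0f + 2*i ) } );

    while( iter-- ) {
        for( int i = 0; i < samples; i++ ) {
            Vec3 res = { 0.0f, 0.0f, 0.0f };
            for( int j = 0; j < samples; j++ ) {
                Vec3 force = { s[i].x - s[j].x, s[i].y - s[j].y, s[i].z - s[j].z };
                float fac = dot( force, force );
                if( fac != 0.0f ) {
                    fac = 1.0f / fac;
                    res.x += fac * force.x;
                    res.y += fac * force.y;
                    res.z += fac * force.z;
                }
            }
            s[i] = normalize( { s[i].x + 0.5f*res.x,
                                s[i].y + 0.5f*res.y,
                                s[i].z + 0.5f*res.z } );
        }
    }
    return s;
}
```

After the iterations the samples should remain on the unit sphere and no two of them should end up clumped together.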

- Code: Select all
`fac = DotProd( force, force );`

The other question I've got is more a question of approach. Raytraced AO normally works by shooting a set of rays from the point being evaluated, distributed evenly over that point's hemisphere, and determining what occludes the point within a set distance. The ratio of rays that hit something nearby to rays that escape gives the amount of occlusion (shading) for that point.
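To make the ratio idea concrete, here is a small sketch of my own (not from the article) that fires uniformly distributed hemisphere rays against a single hypothetical horizontal occluder plane and returns the unoccluded fraction:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

static float dot( const Vec3& a, const Vec3& b ) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Uniform random direction on the unit sphere (rejection sampling).
static Vec3 randomDir() {
    for( ;; ) {
        float x = 2.0f*rand()/RAND_MAX - 1.0f;
        float y = 2.0f*rand()/RAND_MAX - 1.0f;
        float z = 2.0f*rand()/RAND_MAX - 1.0f;
        float l2 = x*x + y*y + z*z;
        if( l2 > 1e-6f && l2 <= 1.0f ) {
            float l = std::sqrt( l2 );
            return { x/l, y/l, z/l };
        }
    }
}

// Fraction of hemisphere rays from the origin (surface normal n) that do
// NOT hit the plane z = ceiling within maxDist. 1.0 means fully unoccluded.
static float ambientVisibility( const Vec3& n, float ceiling, float maxDist, int rays ) {
    int open = 0;
    for( int i = 0; i < rays; i++ ) {
        Vec3 d = randomDir();
        if( dot( d, n ) < 0.0f )            // flip into the upper hemisphere
            d = { -d.x, -d.y, -d.z };
        // Ray/plane intersection distance; negative means no hit.
        float t = ( d.z > 1e-6f ) ? ceiling / d.z : -1.0f;
        if( t < 0.0f || t > maxDist )
            open++;
    }
    return float( open ) / float( rays );
}
```

With the occluder far beyond `maxDist` the visibility comes out at 1.0; pull it in close and the visibility drops, exactly the hit-vs-miss ratio described above.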

SSAO is similar, but instead of raytracing, it selects random points within a sphere around the point of interest and tests, in screen space, whether those sample points are occluded. Sample points that fall behind the surface (outside the point's hemisphere) are discarded. This is typically done by taking the dot product of the screen point's normal (n) and a vector from the screen point to the sample point (s - p).

This is what I expected to see in the code - something like:

- Code: Select all
`float dp = DotProd( normal, Normalize( s - p ) );`
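As a quick sanity check of that test (my own illustration, with a hypothetical `hemisphereDot` helper): a sample directly in front of a surface with normal n gives dp = 1, while one directly behind it gives dp = -1 and would be discarded:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot( const Vec3& a, const Vec3& b ) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

static Vec3 normalize( const Vec3& v ) {
    float len = std::sqrt( dot( v, v ) );
    return { v.x/len, v.y/len, v.z/len };
}

// Cosine between the surface normal at p and the direction from p to the
// sample s. Negative means s lies behind the surface, outside p's hemisphere.
static float hemisphereDot( const Vec3& n, const Vec3& s, const Vec3& p ) {
    return dot( n, normalize( { s.x - p.x, s.y - p.y, s.z - p.z } ) );
}
```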

Instead, the relevant part of the code is:

- Code: Select all
```
// Get sample point, in camera space
Point3 samplePoint = p + vector;

// Get screen coordinate of the sample point
Point2 screenSamplePoint = MapCamToScreen( samplePoint );
int sx = int( screenSamplePoint.x + 0.5f );
int sy = int( screenSamplePoint.y + 0.5f );

// If sample point is outside the bitmap then ignore it.
// This code could potentially be skipped in a realtime scenario
if( sx < 0 || sy < 0 || sx >= w || sy >= h )
    continue;

// Get z-buffer depth at sample point
float sampleZ = zBuffer[ sx, sy ];

// Do nothing with samples that are the background
if( sampleZ <= -1.0E30f || sampleZ == 0.0f )
    return;

// Get the difference of the depth at the sample and the depth at p
float zd = sampleZ - z;

// Ignore samples with a depth outside the radius or further away than p
if( zd < radius )
{
    // Calculate difference in distance to sample point and the z depth at that point
    // Optimized by using squared, ok due to the nature of how we will use it
    // One could probably use samplePoint.z instead of length though.
    float zd2 = LengthSquared( samplePoint ) - sampleZ*sampleZ;

    // Check that the sample point is in front of the z-buffer depth at that point
    if( zd2 > 0.0f )
    {
        // Now get a new point that is samplePoint, but with an adjusted z depth
        Point3 p2 = Normalize( samplePoint ) * -sampleZ;

        // Get cosine of angle between the normal in p and a vector from p to p2
        float dp = DotProd( -normal, Normalize( p2 - p ) );

        // Check that the angle is inside the cone angle
        if( dp > coneAngle )
            occlusion += 1.0f;
    }
}
```

- Code: Select all
```
// Now get a new point that is samplePoint, but with an adjusted z depth
Point3 p2 = Normalize( samplePoint ) * -sampleZ;

// Get cosine of angle between the normal in p and a vector from p to p2
float dp = DotProd( -normal, Normalize( p2 - p ) );
```
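My reading of the first of those lines (an assumption on my part, not confirmed by the article): in 3ds Max-style camera space the camera looks down -z, so visible depths are negative, and `Normalize( samplePoint ) * -sampleZ` slides samplePoint along its view ray until its distance from the eye equals the magnitude of the z-buffer depth at that pixel, treating the z depth as a radial distance in the same way as the squared-length comparison just above it. A tiny sketch of that reconstruction:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length( const Vec3& v ) {
    return std::sqrt( v.x*v.x + v.y*v.y + v.z*v.z );
}

static Vec3 normalize( const Vec3& v ) {
    float len = length( v );
    return { v.x/len, v.y/len, v.z/len };
}

// Slide samplePoint along its view ray so that its distance from the eye
// equals -sampleZ (sampleZ is negative for visible points when the camera
// looks down -z).
static Vec3 adjustToDepth( const Vec3& samplePoint, float sampleZ ) {
    Vec3 d = normalize( samplePoint );
    return { d.x * -sampleZ, d.y * -sampleZ, d.z * -sampleZ };
}
```

So p2 is the point the z-buffer actually saw along the ray through samplePoint, which is then compared against the (negated) normal and the cone angle.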

I'd post a question in his blog, but it's filled up with spam posts.