sascha wrote: I don't know how spherical harmonics lighting works, but I suspect in RenderMan terms it would be a lightsource shader, is that correct?

More or less. It calculates the approximate light falling on the surface based on the surface normal, ignoring occlusion and viewing angle.

Spherical harmonics are just a set of 3D basis functions. You can think of them as a sort of spherical version of the sine waves used in the Fourier transform. Here's an image I ran across:

This only shows the first part of the series - it continues to infinity.

The first sphere is constant ambient lighting, from all directions. The level below it captures directional lighting from top/bottom and left/right. A coefficient can have a negative value, indicating that the opposite side is shaded.
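To make that concrete, here's a quick sketch (in Java, since that's what I'm porting to; class and method names are mine) evaluating the first two SH bands at a unit direction. Band 0 is the same for every direction, and each band-1 value is just a direction component times a constant, so a positive coefficient brightens one side of the sphere and automatically darkens the opposite one:

```java
// Sketch: the first two real SH bands evaluated at a unit direction.
// Band 0 is constant; band 1 is linear in the direction components,
// so its values flip sign between opposite directions.
public class ShBasis {
    static final float Y00 = 0.282095f; // band 0, constant
    static final float Y1  = 0.488603f; // band 1, linear

    /** Returns {Y00, Y1m1, Y10, Y11} for the unit vector (x, y, z). */
    static float[] firstBands(float x, float y, float z) {
        return new float[] { Y00, Y1 * y, Y1 * z, Y1 * x };
    }

    public static void main(String[] args) {
        float[] up   = firstBands(0f, 0f,  1f);
        float[] down = firstBands(0f, 0f, -1f);
        // Band 0 matches for both directions; band 1 (Y10) flips sign.
        System.out.println(up[0] + " " + up[2] + " " + down[2]);
    }
}
```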

Each level captures more detailed information. So the more levels you've got, the more accurate the reconstruction is going to be, just like with a Fourier transform. It turns out you can get a pretty good reconstruction of the scene's diffuse lighting - independent of the number of actual lights in the scene - with only 9 coefficients. For example, here's the dataset for the Grace Cathedral light probe:

```glsl
// Constants for Grace Cathedral lighting
const vec3 L00  = vec3( 0.78908,  0.43710,  0.54161);
const vec3 L1m1 = vec3( 0.39499,  0.34989,  0.60488);
const vec3 L10  = vec3(-0.33974, -0.18236, -0.26940);
const vec3 L11  = vec3(-0.29213, -0.05562,  0.00944);
const vec3 L2m2 = vec3(-0.11141, -0.05090, -0.12231);
const vec3 L2m1 = vec3(-0.26240, -0.22401, -0.47479);
const vec3 L20  = vec3(-0.15570, -0.09471, -0.14733);
const vec3 L21  = vec3( 0.56014,  0.21444,  0.13915);
const vec3 L22  = vec3( 0.21205, -0.05432, -0.30374);
```

Here's the rest of the SH shader from the Orange Book. You can see the equation has constant, linear, and quadratic terms:

```glsl
varying vec3 DiffuseColor;
uniform float ScaleFactor;

const float C1 = 0.429043;
const float C2 = 0.511664;
const float C3 = 0.743125;
const float C4 = 0.886227;
const float C5 = 0.247708;

void main(void)
{
    vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);

    DiffuseColor = C1 * L22 * (tnorm.x * tnorm.x - tnorm.y * tnorm.y) +
                   C3 * L20 * tnorm.z * tnorm.z +
                   C4 * L00 -
                   C5 * L20 +
                   2.0 * C1 * L2m2 * tnorm.x * tnorm.y +
                   2.0 * C1 * L21  * tnorm.x * tnorm.z +
                   2.0 * C1 * L2m1 * tnorm.y * tnorm.z +
                   2.0 * C2 * L11  * tnorm.x +
                   2.0 * C2 * L1m1 * tnorm.y +
                   2.0 * C2 * L10  * tnorm.z;

    DiffuseColor *= ScaleFactor;

    gl_Position = ftransform();
}
```
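For debugging on the CPU side, the same polynomial is easy to port to Java. This is my own sketch, not code from the book; the 9x3 `L` array holds the coefficients in the order L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22. A handy sanity check: with every coefficient zero except L00 = 1, the result collapses to C4 ≈ 0.886 regardless of the normal:

```java
// Sketch: CPU-side port of the Orange Book irradiance polynomial.
// L is a 9x3 coefficient array (one column per color channel), in the
// order L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22.
public class ShIrradiance {
    static final float C1 = 0.429043f, C2 = 0.511664f, C3 = 0.743125f,
                       C4 = 0.886227f, C5 = 0.247708f;

    /** Diffuse lighting for unit normal (x, y, z), one channel per call. */
    static float eval(float[][] L, int ch, float x, float y, float z) {
        return C1 * L[8][ch] * (x * x - y * y)
             + C3 * L[6][ch] * z * z
             + C4 * L[0][ch]
             - C5 * L[6][ch]
             + 2f * C1 * (L[4][ch] * x * y + L[7][ch] * x * z + L[5][ch] * y * z)
             + 2f * C2 * (L[3][ch] * x + L[1][ch] * y + L[2][ch] * z);
    }

    public static void main(String[] args) {
        float[][] L = new float[9][3];
        L[0][0] = 1f; // constant ambient only
        // Same value for any normal direction: just C4.
        System.out.println(eval(L, 0, 0f, 0f, 1f));
    }
}
```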

So SH are just a method of encoding the lighting information at any given point, and they're only an accurate encoding for that particular point. If you make some simplifying assumptions - the lights are all distant, and there's no occlusion - you can use a single set of coefficients to represent the entire scene.

SH don't represent a complete lighting solution. For example, they aren't good at representing specular lighting, because that requires too much information. They aren't good at representing nearby lights or shadowing, because that's too position-dependent. So a more general solution is to use them to capture diffuse lighting, and use other methods for specular and nearby lights.

Getting the coefficients for SH is basically running the equation in reverse: for each sample direction, feed in the lighting from that direction, scaled by that basis function's value and the sample's solid angle, and sum over the whole sphere. Fortunately for me, someone already did the math for spherical light probes, so it was just a matter of porting the code to Java. Here's the core routine:

```java
private void updateCoeffs(float[] hdr, float domega, float x, float y, float z) {
    /******************************************************************
     Update the coefficients (i.e. compute the next term in the
     integral) based on the lighting value hdr[3], the differential
     solid angle domega, and the cartesian components of the surface
     normal x, y, z.

     Inputs:  hdr    = L(x,y,z) [note that x^2+y^2+z^2 = 1],
                       i.e. the illumination from direction (x,y,z)
              domega = the solid angle at the pixel corresponding to
                       (x,y,z)
              x,y,z  = cartesian components of the surface normal

     Notes:   Of course, there are better numerical methods to do
              integration, but this naive approach is sufficient for
              our purpose.
    ******************************************************************/

    for (int col = 0; col < 3; col++) {
        float c; /* A different constant for each coefficient */

        /* L_{00}. Note that Y_{00} = 0.282095 */
        c = 0.282095f;
        coeff[0][col] += hdr[col] * c * domega;

        /* L_{1m}. -1 <= m <= 1. The linear terms */
        c = 0.488603f;
        coeff[1][col] += hdr[col] * (c * y) * domega; /* Y_{1-1} = 0.488603 y */
        coeff[2][col] += hdr[col] * (c * z) * domega; /* Y_{10}  = 0.488603 z */
        coeff[3][col] += hdr[col] * (c * x) * domega; /* Y_{11}  = 0.488603 x */

        /* The quadratic terms, L_{2m}, -2 <= m <= 2 */
        /* First, L_{2-2}, L_{2-1}, L_{21} corresponding to xy, yz, xz */
        c = 1.092548f;
        coeff[4][col] += hdr[col] * (c * x * y) * domega; /* Y_{2-2} = 1.092548 xy */
        coeff[5][col] += hdr[col] * (c * y * z) * domega; /* Y_{2-1} = 1.092548 yz */
        coeff[7][col] += hdr[col] * (c * x * z) * domega; /* Y_{21}  = 1.092548 xz */

        /* L_{20}. Note that Y_{20} = 0.315392 (3z^2 - 1) */
        c = 0.315392f;
        coeff[6][col] += hdr[col] * (c * (3 * z * z - 1)) * domega;

        /* L_{22}. Note that Y_{22} = 0.546274 (x^2 - y^2) */
        c = 0.546274f;
        coeff[8][col] += hdr[col] * (c * (x * x - y * y)) * domega;
    }
}
```
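As a quick check of the integration (my own sketch - it replaces the light-probe pixel loop with a naive latitude/longitude sampling of an analytic environment, purely for testing), a constant white environment should leave only the L00 coefficient. Its value is the integral of Y00 over the sphere, 0.282095 * 4π ≈ 3.5449, and every other coefficient cancels to roughly zero by orthogonality:

```java
// Sketch: integrate a constant environment L = 1 over the sphere with
// naive lat/long sampling. Only coeff[0] should survive, with value
// Y00 * 4*pi ≈ 3.5449; the other coefficients integrate to ~0.
public class ShCheck {
    static float[][] coeff = new float[9][3];

    // Compact equivalent of updateCoeffs: all nine basis values at once.
    static void updateCoeffs(float[] hdr, float domega,
                             float x, float y, float z) {
        float[] basis = {
            0.282095f,
            0.488603f * y, 0.488603f * z, 0.488603f * x,
            1.092548f * x * y, 1.092548f * y * z,
            0.315392f * (3 * z * z - 1),
            1.092548f * x * z,
            0.546274f * (x * x - y * y)
        };
        for (int col = 0; col < 3; col++)
            for (int i = 0; i < 9; i++)
                coeff[i][col] += hdr[col] * basis[i] * domega;
    }

    public static void main(String[] args) {
        int N = 256;
        float[] white = { 1f, 1f, 1f };
        for (int i = 0; i < N; i++) {          // theta: 0..pi
            for (int j = 0; j < 2 * N; j++) {  // phi: 0..2pi
                double theta = Math.PI * (i + 0.5) / N;
                double phi   = Math.PI * (j + 0.5) / N;
                float x = (float) (Math.sin(theta) * Math.cos(phi));
                float y = (float) (Math.sin(theta) * Math.sin(phi));
                float z = (float) Math.cos(theta);
                // Solid angle of this sample: sin(theta) * dtheta * dphi
                float domega = (float) (Math.sin(theta)
                        * (Math.PI / N) * (Math.PI / N));
                updateCoeffs(white, domega, x, y, z);
            }
        }
        System.out.println(coeff[0][0]); // should be close to 3.5449
    }
}
```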

If you check the code closely, you'll note the C values don't match up between the two routines. However, they match what's in the paper's shader, so I assume they're derived from the Y constants somehow. It would help if I were better at math.
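One plausible derivation (an assumption on my part, based on the irradiance-environment-map paper by Ramamoorthi and Hanrahan, if that's the paper in question): the C values are the Y constants multiplied by the cosine-lobe convolution factors Â0 = π, Â1 = 2π/3, Â2 = π/4, with the shader's explicit factors of 2 divided out. The numbers do line up:

```java
// Sketch (assumption): recovering the shader's C1..C5 from the Y_{lm}
// constants and the cosine-lobe convolution factors A0 = pi,
// A1 = 2*pi/3, A2 = pi/4 from Ramamoorthi & Hanrahan's irradiance
// formula. The /2 on C2 accounts for the shader's "2.0 * C2" terms.
public class ShConstants {
    public static void main(String[] args) {
        double A0 = Math.PI, A1 = 2 * Math.PI / 3, A2 = Math.PI / 4;

        double C1 = A2 * 0.546274;      // from Y22        -> 0.429043
        double C2 = A1 * 0.488603 / 2;  // linear terms    -> 0.511664
        double C3 = A2 * 0.315392 * 3;  // z^2 part of Y20 -> 0.743125
        double C4 = A0 * 0.282095;      // from Y00        -> 0.886227
        double C5 = A2 * 0.315392;      // const part of Y20 -> 0.247708

        System.out.printf("%.6f %.6f %.6f %.6f %.6f%n", C1, C2, C3, C4, C5);
    }
}
```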

sascha wrote: I've had a few thoughts about the shader model I'm looking for, and I've prepared a few drawings (showing both the inner workings and a possible GUI representation of the shaders). It's not finished yet; I'll post it tomorrow.

I'm looking forward to seeing it.