OpenGL Renderer

Ideas, enhancements, feature requests and development related discussion.

Re: OpenGL Renderer

Postby dcuny » Mon Dec 03, 2007 10:01 am

sascha wrote:I don't know how the spherical harmonics lighting works, but I suspect in RenderMan terms it would be a lightsource-shader, is that correct?

More or less. It calculates the approximated light falling on the surface based on the surface normal, not including occlusion or viewing angle.

Spherical harmonics are just a set of 3D basis functions. You can think of them as a sort of spherical version of the sine waves used in the Fourier transform. Here's an image I ran across:
Rotating_spherical_harmonics.gif
A rotating representation of the spherical harmonics.

This only shows the first part of the series - it continues to infinity.

The first sphere is constant ambient lighting from all directions. The level below it captures directional lighting along each axis - top/bottom, left/right and front/back. A coefficient can have a negative value, indicating that the opposite portion is shaded.

Each level captures more detailed information. So the more levels you've got, the more accurate the reconstruction is going to be, just like with a Fourier transform. It turns out you can get a pretty good reconstruction of the scene's diffuse lighting - independent of the number of actual lights in the scene - with only nine coefficients (each an RGB triple, so 27 numbers in all). For example, here's the dataset for the Grace Cathedral lightprobe:
Code: Select all
// Constants for Grace Cathedral lighting
const vec3 L00  = vec3( 0.78908,  0.43710,  0.54161);
const vec3 L1m1 = vec3( 0.39499,  0.34989,  0.60488);
const vec3 L10  = vec3(-0.33974, -0.18236, -0.26940);
const vec3 L11  = vec3(-0.29213, -0.05562,  0.00944);
const vec3 L2m2 = vec3(-0.11141, -0.05090, -0.12231);
const vec3 L2m1 = vec3(-0.26240, -0.22401, -0.47479);
const vec3 L20  = vec3(-0.15570, -0.09471, -0.14733);
const vec3 L21  = vec3( 0.56014,  0.21444,  0.13915);
const vec3 L22  = vec3( 0.21205, -0.05432, -0.30374);


Here's the rest of the SH shader from the Orange Book. You can see the equation has constant, linear, and quadratic terms:
Code: Select all
varying vec3  DiffuseColor;
uniform float ScaleFactor;

const float C1 = 0.429043;
const float C2 = 0.511664;
const float C3 = 0.743125;
const float C4 = 0.886227;
const float C5 = 0.247708;
void main(void)
{
    vec3 tnorm      = normalize(gl_NormalMatrix * gl_Normal);

    DiffuseColor    = C1 * L22 * (tnorm.x * tnorm.x - tnorm.y * tnorm.y) +
                      C3 * L20 * tnorm.z * tnorm.z +
                      C4 * L00 -
                      C5 * L20 +
                      2.0 * C1 * L2m2 * tnorm.x * tnorm.y +
                      2.0 * C1 * L21  * tnorm.x * tnorm.z +
                      2.0 * C1 * L2m1 * tnorm.y * tnorm.z +
                      2.0 * C2 * L11  * tnorm.x +
                      2.0 * C2 * L1m1 * tnorm.y +
                      2.0 * C2 * L10  * tnorm.z;

    DiffuseColor   *= ScaleFactor;

    gl_Position     = ftransform();
}
Surface normal in, RGB triple out. Porting it to any other shading language is trivial.

So SH are just a method of encoding lighting information at any given point, and they're only an accurate encoding for that particular point. If you make some simplifying assumptions - the lights are all distant, and there's no occlusion - you can use a single set of coefficients to represent the entire scene.

SH don't represent a complete lighting solution. For example, they aren't good at representing specular lighting, because it requires too much information. They aren't good at representing nearby lights or shadowing, because that's too position dependent. So a more general solution is to use them to capture diffuse lighting, and use other methods for specular and nearby lights.

Getting the SH coefficients is basically running the equation in reverse: for each direction, you take the incoming light and accumulate it into each coefficient, weighted by that basis function's value in that direction. Fortunately for me, someone already did the math for spherical light probes, so it was just a matter of porting the code to Java. Here's the core routine:
Code: Select all
   private void updateCoeffs(float[] hdr, float domega, float x, float y, float z) {

        /******************************************************************
         Update the coefficients (i.e. compute the next term in the
         integral) based on the lighting value hdr[3], the differential
         solid angle domega and cartesian components of surface normal x,y,z

         Inputs:  hdr = L(x,y,z) [note that x^2+y^2+z^2 = 1]
                  i.e. the illumination at position (x,y,z)

         domega = The solid angle at the pixel corresponding to
                  (x,y,z).  For these light probes, this is given by

                  domega = (2*pi/width)*(2*pi/height)*sinc(theta)

         x,y,z  = Cartesian components of surface normal

         Notes:   Of course, there are better numerical methods to do
                  integration, but this naive approach is sufficient for our
                  purpose.

        *********************************************************************/

        int col ;
        for (col = 0 ; col < 3 ; col++) {
          float c ; /* A different constant for each coefficient */

          /* L_{00}.  Note that Y_{00} = 0.282095 */
          c = 0.282095f;
          coeff[0][col] += hdr[col]*c*domega;

          /* L_{1m}. -1 <= m <= 1.  The linear terms */
          c = 0.488603f;
          coeff[1][col] += hdr[col]*(c*y)*domega ;   /* Y_{1-1} = 0.488603 y  */
          coeff[2][col] += hdr[col]*(c*z)*domega ;   /* Y_{10}  = 0.488603 z  */
          coeff[3][col] += hdr[col]*(c*x)*domega ;   /* Y_{11}  = 0.488603 x  */

          /* The Quadratic terms, L_{2m} -2 <= m <= 2 */

          /* First, L_{2-2}, L_{2-1}, L_{21} corresponding to xy,yz,xz */
          c = 1.092548f;
          coeff[4][col] += hdr[col]*(c*x*y)*domega ; /* Y_{2-2} = 1.092548 xy */
          coeff[5][col] += hdr[col]*(c*y*z)*domega ; /* Y_{2-1} = 1.092548 yz */
          coeff[7][col] += hdr[col]*(c*x*z)*domega ; /* Y_{21}  = 1.092548 xz */

          /* L_{20}.  Note that Y_{20} = 0.315392 (3z^2 - 1) */
          c = 0.315392f;
          coeff[6][col] += hdr[col]*(c*(3*z*z-1))*domega ;

          /* L_{22}.  Note that Y_{22} = 0.546274 (x^2 - y^2) */
          c = 0.546274f;
          coeff[8][col] += hdr[col]*(c*(x*x-y*y))*domega ;

        }
      }
The domega term scales each sample by its solid angle. There's really very little else to it.
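
For reference, the outer loop that drives updateCoeffs() looks roughly like this - a sketch assuming a square, Debevec-style angular-map light probe stored as a float[size][size][3] array (the array layout and everything except updateCoeffs() are placeholders, not the actual JPatch code):
Code: Select all
    // Sketch: integrate an angular-map light probe into the 9 SH coefficients.
    // probe[i][j] = {r, g, b} in linear HDR; calls the updateCoeffs() shown above.
    private void projectProbe(float[][][] probe) {
        int width = probe.length;
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < width; j++) {
                // Map the pixel to [-1, 1] x [-1, 1]
                float v = (width / 2.0f - i) / (width / 2.0f);
                float u = (j - width / 2.0f) / (width / 2.0f);
                float r = (float) Math.sqrt(u * u + v * v);
                if (r > 1.0f) continue;                  // outside the probe's circle

                // Angular map: radius maps linearly to the angle off the view axis
                float theta = (float) Math.PI * r;
                float phi = (float) Math.atan2(v, u);

                // Direction corresponding to this pixel
                float x = (float) (Math.sin(theta) * Math.cos(phi));
                float y = (float) (Math.sin(theta) * Math.sin(phi));
                float z = (float) Math.cos(theta);

                // Solid angle of the pixel: (2*pi/width)^2 * sinc(theta)
                float sinc = (theta == 0.0f) ? 1.0f : (float) (Math.sin(theta) / theta);
                float domega = (float) ((2.0 * Math.PI / width) * (2.0 * Math.PI / width)) * sinc;

                updateCoeffs(probe[i][j], domega, x, y, z);
            }
        }
    }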

If you look at the code closely, you'll notice the constants don't match up between the two routines. They do match what's in the paper's shader, though, so I assume one set is derived from the other somehow. It would help if I were better at math. :?
sascha wrote:I've had a few thoughts about the shader model I'm looking for. I've prepared a few drawings (showing both the inner workings and a possible GUI representation of the shaders). It's not finished yet, I'll post it tomorrow.

I'm looking forward to seeing it. :)

Re: OpenGL Renderer

Postby sascha » Mon Dec 03, 2007 11:35 am

I see.

My experience with RenderMan shaders is limited, but I think there is a distinction between surface and lightsource shaders: for each "pixel", the surface shader loops over the lightsources (in the surface shader's illuminance loop), which in turn results in calls to the light shader's solar or illuminate functions. So my guess was that such a spherical harmonics lightsource is best implemented as a light shader - that way it can be added to any scene without needing to change the surface shaders.

I see how the spherical harmonics work (it seems to be like using a Fourier transformation to convert a signal from the time to the frequency domain), but I still have an (obviously) stupid question: What's the benefit? Wouldn't an image-based solution be faster? Like procedural textures vs. image maps: if I happen to have an image of the texture at hand, I'd use it (instead of trying to find a way to synthesize the image procedurally).

Re: OpenGL Renderer

Postby dcuny » Mon Dec 03, 2007 12:44 pm

sascha wrote:What's the benefit? Wouldn't an image-based solution be faster?

No, not really.

First of all, you need to blur the image. But what size convolution should you use? A different size kernel on a different image will give you different results. Also consider that you're blurring a sphere, not a planar image. That means that you're introducing distortion to the spherical mapping.

You'll also want to smoothly interpolate data from the image across the surface of the object.

With spherical harmonics, you get that "for free" - the image is "averaged" into its primary components, and the results are smoothly interpolated across the surface of the object. There's no distortion added by the process, and it works for any size image - large or small. And the initial cost of converting the spherical imagemap is virtually nothing. If you still consider that too expensive, you can always cache the 27 numbers into a lookup value.

Plus, you aren't eating up a texturemap in the process. (Although you will be if you use the source image to generate specular highlights).


I've been hammering away at getting spherical texture mapping to work. I finally figured out why it wasn't working - it was conflicting with the textures used by the shadowmap. After fiddling with the shadowmap code, I got the spherical texture mapping to work.

But that led to a rewrite of the Uniform class which creates uniform variables for the shaders. The cleaned up code is much nicer, but unfortunately, it's broken the spherical mapping code again. I'm going to call it a night. :?

Edit: On the drive to work, it occurred to me that in cleaning up the Sampler2d in the Uniform class, I forgot to set the glActiveTexture back to the default texture. That's probably why the spherical mapping code is broken. I'll have to wait until I get home tonight to find out... :|

Re: OpenGL Renderer

Postby dcuny » Tue Dec 04, 2007 9:14 am

The problem with the renderer turned out to be the same stupid bug I was struggling with last night, and it was pretty easy to add spherical mapping. The only thing that threw me was that the image map has to be a power of 2.
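
(In case anyone else trips over the power-of-two restriction, rounding the size up before uploading the texture is easy enough - just a sketch, the name is made up:)
Code: Select all
    // Sketch: round a texture dimension up to the next power of two,
    // since this GL path only accepts 2^n image maps.
    private static int nextPowerOfTwo(int size) {
        int p = 1;
        while (p < size) {
            p <<= 1;
        }
        return p;
    }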

Here are some models with image based lighting that's a mixture of spherical harmonics and spherical mapping of an imagemap. There's also a single directional light providing shadows (but no actual illumination):
mix_01.png
Diffuse base with a lot of specular.

mix_02.png
Diffuse with a small amount of specular.

mix_03.png
Same as the prior, but with the shadow adjusted to a more sensible value.

mix_04.png
Very little diffuse, high specular.

The edges of the shadows are a bit jagged, so I'll need to get the soft shadow code working properly. The next thing on the "to do" list is to use the material's actual diffuse and specular values to provide values to the renderer.

Re: OpenGL Renderer

Postby dcuny » Tue Dec 11, 2007 8:42 am

After doing a bit more research, it seems I'm missing a step here: what I'm calling the "specular" map is actually a "reflection" map. I'm guessing the simplest way to do this is to split the LDRI image based on the luminosity of a given pixel - if it's below some value, it goes to the "diffuse" map, otherwise it goes to the "specular" map. I've got some code for calculating luminosity in the Filters module I wrote a while back:
Code: Select all
   private final static float luminance( float red, float green, float blue ) {
      // Rec. 601 luma weights (~0.30 R, 0.59 G, 0.11 B), scaled by 256
      return (76f*red + 150f*green + 29f*blue)/256f;
   }
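
Something along these lines, assuming the maps are plain float[width][height][3] RGB arrays - the method name and layout are just placeholders, not the actual Filters code:
Code: Select all
    // Sketch: split an environment map into "diffuse" and "specular" maps by
    // thresholding each pixel's luminance (using the luminance() routine above).
    private static void splitByLuminance(float[][][] source,
                                         float[][][] diffuseMap,
                                         float[][][] specularMap,
                                         float threshold) {
        for (int x = 0; x < source.length; x++) {
            for (int y = 0; y < source[x].length; y++) {
                float[] rgb = source[x][y];
                if (luminance(rgb[0], rgb[1], rgb[2]) < threshold) {
                    diffuseMap[x][y] = rgb.clone();    // dim pixels feed the diffuse map
                    specularMap[x][y] = new float[3];  // and are black in the specular map
                } else {
                    specularMap[x][y] = rgb.clone();   // bright pixels feed the specular map
                    diffuseMap[x][y] = new float[3];
                }
            }
        }
    }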

I'd forgotten some of those cool filters! It would be nice to have them incorporated into JPatch at some point. :P

The other thing I've been looking into is some sort of skin shader. I'm not going for accuracy so much as something that generally looks good, runs quickly, and doesn't require a lot of tweaking. Maybe even the Pharr shader... :?

One problem I've got is the OpenGL renderer doesn't have any real lights - it's all driven off image based lighting. There's a directional light, but it only provides shadows, not any actual illumination. I suppose that I could use the light's direction as the primary light to drive the skin shader. I've also found an example of a shader that looks a lot like the "average of shadow map" code I was playing with, so I might follow that up further.

Re: OpenGL Renderer

Postby pndragon » Tue Dec 11, 2007 4:52 pm

dcuny wrote:The other thing I've been looking into is some sort of skin shader.
Can renderman shaders be converted for your renderer? If they can, RudyCSkin.sl has parameters for color, pores, skin oil, skin roughness, blemishes, a texture map (I used this one to apply scales)....those are just the ones I used.

As a side note: Rudy Cortes (currently a TD for Disney) was forced to shut down his forum at http://www.rendermanacademy.com due to spam. He has, however, recently authored his own book on renderman shaders (it comes out on December 27), which Santa has pre-ordered for himself for Christmas this year...

--- Jim
"We're so sorry, Uncle Albert,
But we haven't done a bloody thing all day."
--- Paul McCartney

Re: OpenGL Renderer

Postby dcuny » Tue Dec 11, 2007 7:33 pm

pndragon wrote:Can renderman shaders be converted for your renderer? If they can, RudyCSkin.sl has parameters for color, pores, skin oil, skin roughness, blemishes, a texture map (I used this one to apply scales)....those are just the ones I used.

Thanks, I'll have a look at it. At this point, my code doesn't support u/v maps, so a lot of the features won't be usable. But I'm not really interested in getting photorealistic skin, I just want something that's got more appeal than a flat shader. I suspect adding a simple ramp shader and some wrap lighting (to add some red in the shadows) might be enough to do the trick.


pndragon wrote:He has, however, recently authored his own book on renderman shaders (it comes out on December 27), which Santa has pre-ordered for himself for Christmas this year...

Merry (early) Christmas! :)

I was rearranging furniture the other day so we'd have a place to put up the Christmas tree, and found a Borders gift certificate for $8. It was probably for one of the kids, but it didn't have any name signed to it, and no one claimed it. I also happened to have a 25% off coupon for Borders, and just happened to run across the Corpse Bride book the last time I was wandering through the store. So I ended up buying myself an early Christmas present as well. :mrgreen:

Something the book didn't really cover was the amount of post-processing that was done in the film. It was alluded to, but the big old Special Effects tome at the store on the shelf above had some cool shots from the movie showing off some of the green screen effects in Corpse Bride. There's a lot of compositing going on in that film. This is also mentioned in the Ice Age book: just about every CGI shot these days is composited together from multiple sources. We've touched on this before for JPatch, but we should hit on it again once the baseline version of JPatch is out: should compositing be a feature of JPatch and part of the basic pipeline, or something that's created with an external program?

But that can wait for a bit...

Screen Space Ambient Occlusion

Postby dcuny » Tue Dec 11, 2007 9:25 pm

I just ran across a Wikipedia article on Screen Space Ambient Occlusion. Because it's sampled, the technique is subject to a bit of noise. It's also got some visible artifacts.

Still, it's worth looking into.

Edit: There's quite a bit of information available for it, including a complete OpenGL shader. Of course, there's a tradeoff between speed and quality. Still, getting realtime ambient occlusion is nothing to sneeze at. 8)

Re: OpenGL Renderer

Postby dcuny » Wed Dec 12, 2007 10:29 am

There seem to be two basic ways to do this.

The first is the one laid out in the original paper - compare the depth of the local pixel against the neighborhood of pixels in the zbuffer to determine the amount of occlusion.

The second approach compares the normals of the nearby pixels. The more they face each other, the greater the occlusion. This tends to emphasize edges.

In both approaches, the resulting map is often darkened and blurred before combining it back with the original image. They both tend to get artifacts, but I get the impression the first approach is more susceptible than the second.
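
The combine step itself is simple - something like this, assuming the occlusion map and the frame are plain float arrays (the names, layout and box blur are just placeholders):
Code: Select all
    // Sketch: blur the raw occlusion map, then darken the rendered image by it.
    // ao[y][x] is 0 for fully open, 1 for fully occluded; image[y][x] = {r, g, b}.
    private static void applyOcclusion(float[][][] image, float[][] ao,
                                       int blurRadius, float strength) {
        int h = ao.length, w = ao[0].length;
        float[][] blurred = new float[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float sum = 0f;
                int count = 0;
                for (int dy = -blurRadius; dy <= blurRadius; dy++) {
                    for (int dx = -blurRadius; dx <= blurRadius; dx++) {
                        int sy = y + dy, sx = x + dx;
                        if (sy < 0 || sx < 0 || sy >= h || sx >= w) continue;
                        sum += ao[sy][sx];
                        count++;
                    }
                }
                blurred[y][x] = sum / count;           // simple box blur
            }
        }
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float shade = 1f - strength * blurred[y][x];  // 1 = untouched, lower = darker
                for (int c = 0; c < 3; c++) {
                    image[y][x][c] *= shade;
                }
            }
        }
    }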

I'd seen this sort of thing attempted in Blender but wasn't impressed by the results. :?

I ran across an interesting thread that linked to impressive pictures, but I suspect that the math is a bit beyond me. I'll read through the paper (3.4 Meg pdf) tonight and see what I can get out of it. :|

Re: OpenGL Renderer

Postby dcuny » Wed Dec 12, 2007 8:05 pm

All right, I made it through the paper. It's (mostly) not as complicated as I thought. Then again, I could be misunderstanding things.

Their method of calculating ambient occlusion is almost the same as the normal way of doing it, but they're doing it in screen space with the z-buffer and a normal buffer. They generate a z-buffer version of the scene from the camera's point of view, which is used to calculate the distance to the occluders. They also create a normal map of the scene from the camera's point of view, which is used to determine the direction each surface is facing.

Recall that ambient occlusion works by sending out a bunch of rays over a hemisphere from a point on a surface. You count the number of rays that collide with nearby geometry vs. those that don't, and the ratio between them tells you how much occlusion (shadowing) to apply to the surface.

The problem with doing this in screen space is that you haven't got a raytracer. However, with a zbuffer and a normal map you get the answers to enough questions to simulate the process:

  • Is this point too far away to be an occluder? Consider an indoor scene - all the rays are eventually going to run into something. So you want to be able to discard rays that extend past a certain distance.
  • Is this ray pointing away from the surface? If the ray's pointing down, it'll obviously collide with the surface you're trying to calculate the occlusion for. This is where the normal map comes in handy - just do a dot product to determine if the ray's pointing the right way. If it's not, discard it.
  • How far away is the point from the camera? For points far away, you only have to examine a small amount of screen space. For points closer to the camera, you'll need to check a larger range of neighbors in order to search within the ray's maximum distance.
In theory, this sort of thing could be retrofitted into Inyo, or any other renderer that can generate a zbuffer and normal map. The final occlusion is typically added as a post process to the final scene.
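
As a very rough sketch of the sampling step, assuming the depth and normal buffers have been read back into plain arrays - the layout, sample pattern, constants and sign conventions here are all simplified placeholders, not the paper's actual method:
Code: Select all
    // Sketch: estimate occlusion for one pixel from screen-space depth + normals.
    // depth[y][x] is eye-space depth, normal[y][x] = {nx, ny, nz} in eye space.
    private static float occlusionAt(int px, int py,
                                     float[][] depth, float[][][] normal,
                                     float maxDistance) {
        float centerDepth = depth[py][px];
        float[] n = normal[py][px];

        // Distant points only need a small screen-space neighborhood; nearby points
        // need a larger one to cover the same world-space radius (100 is arbitrary).
        int radius = Math.max(2, Math.round(maxDistance * 100f / centerDepth));
        int step = Math.max(1, radius / 4);

        int samples = 0, occluders = 0;
        for (int dy = -radius; dy <= radius; dy += step) {
            for (int dx = -radius; dx <= radius; dx += step) {
                int sx = px + dx, sy = py + dy;
                if ((dx == 0 && dy == 0) || sy < 0 || sx < 0
                        || sy >= depth.length || sx >= depth[0].length) continue;
                samples++;

                float dz = centerDepth - depth[sy][sx];     // > 0: sample lies in front of this point
                if (dz <= 0f || dz > maxDistance) continue; // behind the surface, or too far to occlude

                // Hemisphere test: the rough direction toward the sample should have a
                // positive dot product with the normal (pixel and depth units are mixed
                // here - fine for a sign test in a sketch, but not for a real shader).
                if (n[0] * dx + n[1] * dy + n[2] * dz <= 0f) continue;

                occluders++;
            }
        }
        return samples == 0 ? 0f : (float) occluders / samples; // 0 = open, 1 = occluded
    }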

Re: OpenGL Renderer

Postby sascha » Thu Dec 13, 2007 9:33 am

dcuny wrote:Recall that ambient occlusion works by sending out a bunch of rays

I haven't yet implemented AO anywhere, but in my understanding "sending out a bunch of rays" is true only metaphorically. You don't actually "send out rays", do you? (that is, there's no ray/shape intersection test involved). My understanding of the algorithm was:
Code: Select all
for each vertex {
    for each face {
        if (face is "in front of" vertex ¹) {
            if (face "faces to vertex" ²) {
                compute per face occlusion based on vertex-face distance, normal and face area
                add it to per-vertex occlusion value
            }
        }
    }
}
¹ based on the vertex's normal
² based on the face's normal

Am I wrong?

I'm not quite sure what this z-buffer approach is doing exactly. Is it using a rasterized "pixel" z-buffer, or is it just projecting all geometry into screen space? The problem with a pixel z-buffer is that it doesn't account for objects that are out of sight (above, below, alongside or behind the camera), isn't it?

Re: OpenGL Renderer

Postby dcuny » Thu Dec 13, 2007 12:08 pm

sascha wrote:I haven't yet implemented AO anywhere, but in my understanding "sending out a bunch of rays" is true only metaphorically. You don't actually "send out rays", do you?

In "pure" ambient occlusion, that's exactly what you do. The other methods are estimations, and fairly recent developments. They were developed because raytraced ambient occlusion is slow and noisy.

I'm not quite sure what this z-buffer approach is doing exactly. Is it using a rasterized "pixel" z-buffer, or is it just projecting all geometry into screen space?

It's estimating the occlusion based on the screen space. Thus the name SSAO - screen space ambient occlusion.

The problem with a pixel z-buffer is that it doesn't account for objects that are out of sight (above, below, alongside or behind the camera), isn't it?

That's right. One of the algorithms I've seen actually generates two sets of zbuffers, one for the front and one for the back. Another option is to apply a liberal amount of blur to the occlusion.

Keep in mind that (unlike shadows), ambient occlusion doesn't have to be exact - just close enough, and consistently so.


I got a chance to play with some stuff on the OpenGL renderer tonight. I tried separating the specular lighting from the ambient lighting. My first attempt was to split it into two spherical harmonic terms. That looked awful. My next attempt was to create a specular map from the image. Those results were much better:

reflection_map.png
A reflection map applied to the model.

specular_map.png
A specular map applied to the model.


While the reflection map looks cool, it's a bit distracting because it's reflecting geometry that isn't part of the scene. The specular map hasn't got that problem. I also did a quick test by applying a blur to the specular map:

blurred_specular_map.png
A specular map with blurring applied.


It gives the impression of a rougher surface, sort of like plastic. I suspect the blur would be better handled inside the shader, since different objects can have different amounts of blur - perhaps by averaging some area of the image map.

Hrm... Since the specular map is just a version of the reflection map, it was pretty trivial to move the logic into the shader. It just checks the luminosity of the pixels in the environment map, and if they fall below a certain threshold, it ignores them. Changing the threshold lets you move from a mirror-like metal to a high-gloss surface. Very cool. :) I'll see if I can move the blurring into the shader as well...

Re: OpenGL Renderer

Postby dcuny » Thu Dec 13, 2007 1:00 pm

I hacked support for specular blur into the shader, so I can control how bright a pixel on the reflection map has to be to count as a specular source, as well as the size of the blur:

blur.png
Various brightness cutoffs for the specular value, as well as various blur sizes.


I can also adjust how bright the specular value is. Here, they're about .4. In the prior post, I think they were about .6. (I also used a different lightprobe on the prior images).

Re: OpenGL Renderer

Postby dcuny » Tue Jan 22, 2008 11:56 pm

I see that Blender's added this method of ambient occlusion to their latest SVN release. They've also implemented the fixes mentioned in GPU Gems 3. Maybe I'll have a look at the source and see what's involved. :|

Re: OpenGL Renderer

Postby dcuny » Wed Jan 23, 2008 5:52 am

I finally got a hold of the High-Quality Ambient Occlusion paper, and have read through it a number of times. I've already talked before about how they solve a number of the artifacts, and those methods look pretty straightforward. I'm curious how much it would take care of some of the artifacts that are currently showing up.

The most important bit is how they solve the problem with under-tessellated meshes - they use pixel shaders. I just don't have that sort of coding skill. It's got me wondering about using Inyo to do the rendering, though. But that sort of defeats the whole "really, really fast approximation" point, so it's probably not a good idea. I guess I can live without good contact shadows (since that's the point of having that single lightsource).

One really cool thing about Blender's implementation is the use of spherical harmonics:
brecht wrote:The first solution I tried for this was to cluster disk together not only by position, but also by normal. While this worked well to get rid of artifacts, this resulted in traversal time that was perhaps slower than necessary, since disks were not clustered together spatially that well. The second approach I tried now approximates the sum of disks with spherical harmonics, as used in PRMan and explained in the Point-Based Graphics book, chapter 8.4 (sorry, again no direct link). This means we don’t have to cluster by normal anymore, though in the end it seems not much faster, but it does seem to avoid some artifacts at lower accuracy.

Although he notes that it didn't really accelerate the algorithm, it's pretty cool. So I'll have to do some research on that. Looks like I'll have to brush up on my math some more, as well. :|
