OpenGL Renderer


Postby dcuny » Fri Oct 05, 2007 1:58 am

I've started working on an OpenGL "quality" renderer. Part of my motivation is to learn OpenGL, and the other part is to see if I can write a faster renderer for JPatch.

I've got a JOGL program that can read simple .ply files (no UV or color) and render the object. I've been spending the last couple of days working on implementing ambient occlusion as described in Dynamic Ambient Occlusion and Indirect Lighting (pdf), a paper that was published in GPU Gems a couple of years ago.

The algorithm itself is pretty straightforward, but it's slow (one of those n^2 problems), and requires a lot of setup. So I'll either have something to post in a couple of days, or a lot of stupid OpenGL questions.
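
To give a feel for where the n^2 comes from, here's the basic shape of the pass. The weighting term is a simplified stand-in I made up for illustration - it's not the exact form factor from the paper:
Code: Select all
// One occlusion "element" per vertex: position, normal, disk area, and the result.
class Disk {
    float x, y, z;        // position
    float nx, ny, nz;     // unit normal
    float area;           // roughly 1/3 of the area of each adjacent polygon
    float accessibility;  // 1 = fully open, 0 = fully occluded
}

// O(n^2): every disk is tested as an occluder of every other disk.
// The weighting below is a simplified stand-in, not the paper's exact equation.
void computeOcclusion(Disk[] disks) {
    for (Disk receiver : disks) {
        float occlusion = 0;
        for (Disk emitter : disks) {
            if (emitter == receiver) continue;
            float dx = emitter.x - receiver.x;
            float dy = emitter.y - receiver.y;
            float dz = emitter.z - receiver.z;
            float d2 = dx * dx + dy * dy + dz * dz;
            float d = (float) Math.sqrt(d2);
            // how much the two disks face each other
            float cosE = Math.max(0, -(emitter.nx * dx + emitter.ny * dy + emitter.nz * dz) / d);
            float cosR = Math.max(0, (receiver.nx * dx + receiver.ny * dy + receiver.nz * dz) / d);
            // crude approximation of the solid angle covered by the emitter disk
            occlusion += (emitter.area * cosE * cosR) / ((float) Math.PI * d2 + emitter.area);
        }
        receiver.accessibility = Math.max(0, 1 - occlusion);
    }
}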

I wasted a good chunk of time last night trying to track down one bug, and finally decided to go to bed at around 6:00 in the morning (I'm home sick, one of the reasons I'm up so late). Settling down to sleep, it occurred to me that the problem I was running into was because the size of the model was less than 1 unit, so the various calculations that used squared values were coming up with nonsensical values.

The minute you stop thinking about a problem, the answer will come to you. :?

There are a number of features I want to add to the renderer, but unless I can get this feature working, there's no point in talking about any of the others.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Fri Oct 05, 2007 8:43 am

I'm only going to use GL as a previz renderer, since you'd just get the same quality as in the interactive (realtime) renderings.
For anything beyond that you'll need GLSL (the OpenGL Shading Language), which has only been supported since OpenGL 2.0 (and was available as an extension in 1.5) - not all graphics hardware/drivers support it, and the level of support varies heavily. My ATI driver falls back to a (very slow) software path when a compiled program is too long or uses too many variables (!), and some of the functions (like 3D noise) are simply unimplemented and thus always return 0 :?

Most of this GPGPU (General-Purpose Computation Using Graphics Hardware) stuff uses shaders in fancy ways - it's even possible to do raytracing with it. The trick is that the graphics hardware is in fact a SIMD (Single Instruction, Multiple Data) device, so whenever it comes to doing the same simple calculation on massive amounts of data the GPU will outperform the CPU by orders of magnitude.
I'd love to implement my subdivision code on the GPU one day (it's designed in such a way that it could, at least theoretically, run as a fragment shader).

But I've decided to wait for OpenGL 3.0 (they're about to revamp the entire API and make shaders a lot more flexible) before trying any of the fancy things. I'll maybe have another graphics card by then, and I suppose that shader support will be much better with newer hardware.

So my experience is limited to basically OpenGL 1.1 stuff, but I'll gladly help out if I can, so feel free to post a lot of stupid OpenGL questions ;-)

And get well soon!
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Fri Oct 05, 2007 9:59 am

I'm still debugging the AO code. It seems that for every step forward I go, I end up adding another bug to the code... :?

I've been reading up on GLSL, and I see you've got two shaders in JPatch that use it. I've got mixed feelings about using it - apparently there aren't very good tools available for debugging it, and locking out people who don't have "high end" cards doesn't do a lot of good.

The papers on Gelato have a number of interesting ideas, and the source code is available for a lot of the techniques. For example, you can create an image of any resolution you want by tiling it using glFrustum, downsampling each tile (to remove aliasing) and stitching the tiles together. So you aren't limited to a particular video resolution.
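
Something along these lines, I think (JOGL-style calls; I haven't actually written the tiling code yet, so treat the tile math as a sketch):
Code: Select all
import javax.media.opengl.GL;

// Set up the projection for one tile of a larger virtual image by narrowing
// the frustum. left/right/bottom/top/near/far describe the full image;
// (col, row) picks one tile out of a cols x rows grid. Each tile gets rendered
// at the full viewport size, then downsampled and stitched together afterwards.
void setTileFrustum(GL gl, int col, int row, int cols, int rows,
                    double left, double right, double bottom, double top,
                    double near, double far) {
    double tileW = (right - left) / cols;
    double tileH = (top - bottom) / rows;
    double l = left + col * tileW;
    double b = bottom + row * tileH;
    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glFrustum(l, l + tileW, b, b + tileH, near, far);
    gl.glMatrixMode(GL.GL_MODELVIEW);
}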

Another trick Gelato uses is rendering down to the micropolygon level, similar to how Reyes renderers work. That's got some interesting possibilities. Of course, if it ends up taking longer than doing it in Sunflow, there's not a lot of point to it. :?

I've seen the raytracing demo - it's pretty slick, but limited to a fixed image as the background.

I watched a 90-minute online Nvidia presentation the other day on "Cinematic Effects". It made a pretty compelling argument that lighting is really the key to effective CGI. I was later looking through the Ogre3D gallery at screenshots of professional games such as Ankh - Heart of Osiris and Jack Keane. They do some pretty compelling things with light and color. There's a push to be "realistic" in CGI, but that's only a means to an end. My goal is to have the images good enough that people don't think of them as CGI.

There were a number of "behind the scenes" shots of photoshoots making the point that what seemed to be "simple" cases of lighting often had complex lighting setups - and this was for "real life", where things like global illumination were already "solved". ;)

As a result, I spent a lot of time over the next couple of days looking at how things are actually lit, and noticed that sharp, raytraced shadows don't really occur naturally. I also noticed that most of the sorts of scenes I'm interested in lighting should do well with soft (zbuffer) shadows.

Along those lines, there's a paper I ran across which demonstrates how to get soft shadows (with percentage-closer filtering) and dynamic penumbras, so the contact shadows expand and get softer as the object gets further from the occluder. Very slick, and the implementation looked pretty straightforward. Still, I think I'll be happy with plain old zbuffer shadows.

But the absolute minimum for me was a sort of global illumination effect. The first time I ran across images made with the Arnold renderer, I was completely stunned. They were the first CG images I had seen that looked real, and it was the GI effects which made it happen.

I think this video demonstrates pretty effectively that even with image-based lighting, you still need ambient occlusion to make the image look "real". And I think the most "realistic" images that I created with Inyo were the ones that combined image-based lighting with AO.

One thing that's kept me away from non-raytraced AO is contact shadows. In order for them to work, the ground has to be sufficiently tessellated, and that leads to all sorts of issues. But I've come to the conclusion that for most cases you're going to have a "real" light providing a "real" shadow. So that issue goes away. If I really need a complicated effect, I can always fall back on Sunflow. :)

Anyway, that's the basic idea: use OpenGL to get (faster) the sort of quality renderings that I can get with raytracing. But I've got a lot of debugging to do before I can get to that point. :?
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby dcuny » Fri Oct 05, 2007 11:08 am

Woohoo! :)
Image
I finally gave up and simply dropped the occluder size out of the equation. Of course, I'll have to go back in and fix that (later), but at least I've got an image that shows off occlusion. It takes about 70 seconds to calculate the occlusion on this model, which only has 7,600 vertices. It takes a lot longer to load a high-poly mesh.

Actually, there's quite a bit of overhead just setting up the data structure. In order to do the calculations, it needs to know which vertices are shared. With that, it's able to calculate the average normal, and the size of the occluding disk at each vertex (the sum of 1/3 of the area of each polygon sharing that vertex).
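
Roughly, the setup step looks like this (using the javax.vecmath classes for the sketch, and assuming the mesh is already triangulated with shared, indexed vertices):
Code: Select all
import javax.vecmath.Point3f;
import javax.vecmath.Vector3f;

// For every vertex, average the normals of the faces that share it, and give it
// an occlusion-disk area of 1/3 of each adjacent triangle's area.
// 'positions' are the shared vertex positions, 'triangles' is an index list
// (three indices per triangle). Names are made up for the sketch.
void buildDisks(Point3f[] positions, int[][] triangles,
                Vector3f[] diskNormal, float[] diskArea) {
    for (int i = 0; i < positions.length; i++) {
        diskNormal[i] = new Vector3f();
        diskArea[i] = 0;
    }
    Vector3f e1 = new Vector3f(), e2 = new Vector3f(), n = new Vector3f();
    for (int[] tri : triangles) {
        e1.sub(positions[tri[1]], positions[tri[0]]);
        e2.sub(positions[tri[2]], positions[tri[0]]);
        n.cross(e1, e2);                  // face normal, length = 2 * triangle area
        float area = 0.5f * n.length();
        for (int v : tri) {
            diskNormal[v].add(n);         // accumulate (area-weighted) face normals
            diskArea[v] += area / 3.0f;   // each vertex gets a third of the face
        }
    }
    for (Vector3f normal : diskNormal) {
        if (normal.length() > 0) normal.normalize();
    }
}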

I'm loading models saved from Blender in .ply format, and none of that information is available. So for a 32,000 poly mesh, it takes over a minute just to figure out that initial information. It takes another huge chunk of time to calculate the occlusion between the vertices.

Of course, I haven't even started writing any optimization for the code. There's no point, since it's still not working correctly. ;)

JPatch should be able to hand over the mesh with a lot more information about the vertices, so a lot of the initial overhead would go away. (Inyo needed to know similar information). I'll put the vertices in a structure of some sort (octree or kd-tree) so it'll be a lot faster. The second step can be done in realtime by the GPU, but (from what I've read) the implementation is a bit fragile.

Once the occlusion is calculated, it's viewpoint independent. So for static objects, it can be baked into the mesh (or a UV map). For animated objects, it's got to be recalculated each time.
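
Baking it is about as simple as it gets - just drive a per-vertex grey level from the accessibility value (immediate-mode JOGL here, purely to show the idea):
Code: Select all
import javax.media.opengl.GL;

// For a static mesh, the per-vertex accessibility (1 - occlusion) can simply be
// baked into vertex colors and drawn as-is; no lights needed to preview it.
void drawBakedAO(GL gl, float[] positions, float[] accessibility, int[] triIndices) {
    gl.glBegin(GL.GL_TRIANGLES);
    for (int idx : triIndices) {
        float a = accessibility[idx];
        gl.glColor3f(a, a, a);
        gl.glVertex3f(positions[idx * 3], positions[idx * 3 + 1], positions[idx * 3 + 2]);
    }
    gl.glEnd();
}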

Here's the higher resolution mesh (31,500 vertices) that took for freaking ever to calculate:

Image

Yeah, it looks pretty awful. :D Also, keep in mind that AO isn't really supposed to be noticed on the image - it's mixed in at an almost subliminal level.

In this blog, the author claims he can calculate the AO for a 50,000 poly object in about 10 seconds (as opposed to 10 minutes for raytracing the same object). So I'm pretty confident this can be accelerated enough to make it useful, without resorting to GPU coding.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Fri Oct 05, 2007 11:28 am

I took a look at the Ankh - Heart of Osiris images. They seem to use a bloom effect, and although I generally like this effect, it's overdone (like in Elephants Dream).

sharp, raytraced shadows didn't really occur naturally

Agreed, but most raytracers offer area-lights these days. I agree that global illumination adds a lot to the realism of an image, but whether it's truly necessary for a cartoon I don't know. Of course one has to strive for utmost realism when combining CGI with live action footage, but for cartoons I prefer a more "surrealistic" look - you still set the mood with the lighting, but it doesn't need to look "real". On the contrary - photographers would love to have e.g. lights that don't cast shadows, or shadows without a lightsource, negative lightsources, etc. With CGI you have all these "unrealistic" tools at hand.

The shader source code in JPatch was a test for per-pixel-lighting in OpenGL. It's not used at the moment. If you want to get rid of the tessellated look in specular highlights and spotlight penumbras you'll need per-pixel-lighting (not the standard per-vertex-lighting).
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Fri Oct 05, 2007 6:25 pm

sascha wrote:They seem to use a bloom effect, and although I generally like this effect, it's overdone (like in Elephants Dream).
That's for sure! It's one of those effects that, if used at all, you only want a small touch of, so people won't really notice. It really dates a shot (like various effects on drums date a piece of music).

sascha wrote:
sharp, raytraced shadows didn't really occur naturally

Agreed, but most raytracers offer area-lights these days.

The problem with area lights in raytracers (as far as I understand) is that they require multiple samples over the area of the light. That means they're subject to noise if you undersample them. But Blender seems to have implemented area lights without raytracing. I've looked around, but haven't found out how they've done it (yet). Got any idea?
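
By "multiple samples" I mean something like this (generic sketch - traceShadowRay() is a made-up stand-in for whatever visibility test the raytracer actually uses):
Code: Select all
import java.util.Random;

// Visibility of a rectangular area light from point p: average a handful of shadow
// rays aimed at jittered points on the light. With too few samples the result
// jumps from pixel to pixel, which is exactly the noise you see in the penumbra.
float areaLightVisibility(float[] p, float[] corner, float[] edgeU, float[] edgeV,
                          int samples, Random rng) {
    int unblocked = 0;
    for (int i = 0; i < samples; i++) {
        float u = rng.nextFloat(), v = rng.nextFloat();
        float[] target = new float[] {
            corner[0] + u * edgeU[0] + v * edgeV[0],
            corner[1] + u * edgeU[1] + v * edgeV[1],
            corner[2] + u * edgeU[2] + v * edgeV[2] };
        if (traceShadowRay(p, target)) unblocked++;
    }
    return unblocked / (float) samples;
}

// Stand-in: the real renderer would test the segment p -> target against the scene.
boolean traceShadowRay(float[] p, float[] target) { return true; }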

I agree that global illumination adds a lot to the realism of an image, but whether it's truly necessary for a cartoon I don't know.

It's personally necessary for me. ;)

Most of the GI effects aren't necessary. When you do see a GI effect in a film, you can bet that someone's placed a fill/bounce light there to get exactly the lighting they wanted, instead of relying on nature.

... but for cartoons I prefer a more "surrealistic" look - you still set the mood with the lighting

That was exactly why I was pointing to the Ogre3D gallery. I remember reading that each character in Toy Story had their own sets of lights, to highlight specific features of the character. For example, Buzz Lightyear had a set of lights to highlight his helmet, even though there were no such lights in the scene.

The "Cinematic Effects" presentation I mentioned pointed out that most films try to light the main focus with different lighting to ensure that people look at them. The moral (for me) was that just because all these lighting effects come "for free" with a renderer, that doesn't mean that the shot's going to look good.

Well, the larger moral is that if you want things to look professional, you generally have to hire a professional. :P

In any event, I agree that people shouldn't be locked into a particular "look" by a renderer. (I suspect that we've both been reading pretty much the same material).

The shader source code in JPatch was a test for per-pixel-lighting in OpenGL. It's not used at the moment. If you want to get rid of the tessellated look in specular highlights and spotlight penumbras you'll need per-pixel-lighting (not the standard per-vertex-lighting).
Yeah, that was the main reason I had for looking at GLSL in the first place.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby dcuny » Sat Oct 06, 2007 7:57 am

I found the bug in the code - I had added a magic "scaling factor" to make up for the fact that the model was so small. But I didn't include it in the distance calculations, so the area of the triangles was being calculated much too small.

Image

Here's the high-resolution version (which still takes forever to calculate):

Image

I should note that there's no actual lighting in either of these objects - the color is purely from the ambient occlusion.

There's a fudge factor the Nvidia paper adds to the equation that I can't seem to get right, so I'll just drop it out. I've read a number of other sources, and they drop that part of the term out as well.

Next up on the list is writing some sort of spatial subdivision scheme to accelerate the calculations. The Nvidia method is pretty straightforward, but the paper Real-Time Dynamic Ambient Occlusion notes:
Although not mentioned in [Bum05], further analysis of the implementation (released a few weeks later) revealed that the tree-like structure was built using texture coordinate coherence for neighbor determination. The approach yields good results for correctly mapped uv-meshes but can be very fragile.

It was determined that a more general solution was to be found so a new space partitioning algorithm was developed. This algorithm had to allow for small clusters of arbitrary size, as well as adaptive clustering when the mesh is not evenly tessellated.
The paper goes into some level of detail on their implementation, so I'll be studying that.

Thad Beier's blog also has a bit of information on how he implemented his version, so I'll be looking at that as well.

It's a good thing it's a three day weekend. ;)
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Sat Oct 06, 2007 9:15 am

The samples look promising. Does it work for more than one object too - I mean, one object occluding the light causing the other object to appear darker?

But again, I'm looking for a much simpler solution:
1) Make the ambient light dependent on the surface normal - normals pointing upwards get more light (from the sky), normals pointing downwards get less light. It could use e.g. an HDRI image or a pre-rendered cube (environment) map, so it can be adapted for each setup.
2) Add "ambient shadows" - spherical regions that darken the ambient light (depending on the distance to the sphere's center). Such "shadows" could be attached to all larger objects, so they would darken the ambient light in their vicinity.
3) Allow tricks such as negative lightsources or shadowless lightsources to fine tune the lighting.

All in all this requires some setup, but it should give you greater control. And it shouldn't slow down rendering speed much.
I haven't really tried this yet, but I'll start with a few simple RenderMan shaders to see if it works.
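
Just to make 1) and 2) concrete, here's roughly what I have in mind (plain Java, all names and constants invented; a real version of 1) would look up an environment map instead of the simple up/down blend):
Code: Select all
import javax.vecmath.Point3f;
import javax.vecmath.Vector3f;

// Sketch of ideas 1 and 2: ambient light that depends on the surface normal
// (more from the "sky", less from below), darkened by spherical "ambient shadows"
// attached to larger objects. All names and constants here are invented.
class AmbientShadow {
    Point3f center = new Point3f();
    float radius;    // inside this radius the ambient light is fully darkened
    float falloff;   // distance over which the darkening fades back out
}

float ambientAt(Point3f p, Vector3f normal, float skyAmbient, float groundAmbient,
                java.util.List<AmbientShadow> shadows) {
    // 1) blend between ground and sky ambient, depending on how much the normal points up
    float up = 0.5f * (normal.y + 1.0f);   // 0 = pointing straight down, 1 = straight up
    float ambient = groundAmbient + up * (skyAmbient - groundAmbient);
    // 2) darken the ambient term near each "ambient shadow" sphere
    for (AmbientShadow s : shadows) {
        float t = (p.distance(s.center) - s.radius) / s.falloff;
        ambient *= Math.min(1.0f, Math.max(0.0f, t));
    }
    return ambient;
}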

The problem with area lights in raytracers (as far as I understand) is that they require multiple samples over the area of the light. That means they're subject to noise if you undersample them. But Blender seems to have implemented area lights without raytracing. I've looked around, but haven't found out how they've done it (yet). Got any idea?

I'm not sure if this will work, but you could probably modify the ray-triangle test to return not only the intersections, but also the closest distance from the ray to each triangle (when there is no direct intersection). This way you get a measure of how close the ray passed an object, which could be used to fake a penumbra. This won't be 100% accurate, but it should be fast as lightning and not cause any noise at all. Just an idea - when I've got more time I'll try to hack a proof of concept.
Blurred shadow maps are another option of course.
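
To illustrate the single-ray idea above (very crudely - here I only measure the miss distance to the triangle's centroid, not to the triangle itself, and the penumbraWidth mapping is pure guesswork):
Code: Select all
import javax.vecmath.Point3f;
import javax.vecmath.Vector3f;

// Crude sketch of the "near miss" idea: if a shadow ray misses a triangle, use how
// closely it passed by as a hint for a fake penumbra. The miss distance here is
// measured to the triangle's centroid only, which is much cruder than a proper
// ray/triangle distance, and the penumbraWidth parameter is a made-up fudge.
// Returns 0 for full shadow, 1 for fully lit.
float softShadowHint(Point3f origin, Vector3f dir /* normalized */,
                     Point3f centroid, float penumbraWidth) {
    Vector3f toCentroid = new Vector3f();
    toCentroid.sub(centroid, origin);
    float along = toCentroid.dot(dir);        // distance along the ray
    if (along <= 0) return 1.0f;              // triangle is behind the ray: no shadow
    Point3f closest = new Point3f(dir);       // closest point on the ray to the centroid
    closest.scale(along);
    closest.add(origin);
    float miss = closest.distance(centroid);  // how far the ray passed by
    return Math.min(1.0f, miss / penumbraWidth);
}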
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Sat Oct 06, 2007 9:41 am

sascha wrote:The samples look promising.
Thanks. :)

Does it work for more than one object too - I mean, one object occluding the light causing the other object to appear darker?
Well, there's no light in the scene. Occlusion is based on the premise that ambient light is coming from all directions. So it checks local geometry to see if anything nearby might be occluding.

Make the ambient light dependent of the surface normal - normals pointing upwards get more light (from the sky), normals pointing downwards get less light. It could use e.g. a HDRI image or a pre-rendered cube (environment) map, so it can be adapted for each setup.
There's a "cheat" that does that in Sunflow. I've played with it a bit. It's better than nothing, but a bit disappointing.

You can trivially add environment mapping, and that's on my "to do" list. I've mostly seen this applied to composite scenes. What I really want to find out is how well it works when the entire scene is CG. My guess is that it should be a big win - most of the benefits of GI lighting, without the rendering cost (because it's pretty much free).

That's my hope, anyway. :?
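
The quick-and-dirty version in fixed-function OpenGL is just sphere-map texture generation (assuming the environment image has already been loaded into envTexture):
Code: Select all
import javax.media.opengl.GL;

// Cheapest form of environment mapping in fixed-function OpenGL: sphere-map
// texture generation. 'envTexture' is assumed to be an already-loaded 2D
// environment texture.
void enableEnvironmentMap(GL gl, int envTexture) {
    gl.glBindTexture(GL.GL_TEXTURE_2D, envTexture);
    gl.glTexGeni(GL.GL_S, GL.GL_TEXTURE_GEN_MODE, GL.GL_SPHERE_MAP);
    gl.glTexGeni(GL.GL_T, GL.GL_TEXTURE_GEN_MODE, GL.GL_SPHERE_MAP);
    gl.glEnable(GL.GL_TEXTURE_GEN_S);
    gl.glEnable(GL.GL_TEXTURE_GEN_T);
    gl.glEnable(GL.GL_TEXTURE_2D);
}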

Add "ambient shadows" - spherical regions that darken the ambient light (depending on the distance to the sphere's center). Such "shadows" could be attached to all larger objects, so they would darken the ambient light in their vicinity.
I've read a number of papers that use a similar scheme. The problem with a spherical solution is that you end up getting unrealistic occlusion (that is, a casual observer will notice that it's wrong). The solutions I've seen use "occlusion fields", which are pre-calculated. That's tech-speak for "takes too long to generate in real time".

Allow tricks such as negative lightsources or shadowless lightsources to fine tune the lighting.
Yes, I expect to have the normal "tricks" of lights, including lights that are attached to particular objects, and so on. Of course, it's only good if JPatch supports it. ;)

I'm not sure if this will work, but probably you can modify the ray-triangle test to not return the intersections only, but the closest distance from the line to each triangle (if there is no direct intersection).
I said that badly. What I meant to say was that I was under the impression that Blender's area lights were not raytraced. I could easily be wrong about this, because I haven't looked into it too deeply. But (if they aren't raytraced), I'm curious to see how they pulled it off.

I'd mentioned it before, but didn't supply the link: Percentage-Closer Soft Shadows outlines a technique that combines percentage-closer filtering (for smooth shadows) with a variable penumbra, based on the distance of the occluder. There's a demo on the Nvidia site as well.

I haven't really had a chance to play with it, so I don't know if it's the sort of thing that works, but only in a very constrained environment. I'll find out when I try to implement it. But first, I need to add plain old z-buffered shadows.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby dcuny » Sun Oct 07, 2007 12:54 am

I've gone through and refactored the code a number of times now, so it's not nearly the kludge that it was yesterday. I've also managed to track down a number of stupid bugs, which had the program duplicating a lot of work.

I'm using a kd tree as an acceleration structure, and it seems to give pretty good performance. However, the actual image itself has artifacts that weren't in prior versions:

Image

For example, you can see the mouth isn't occluding properly. I think the problem is with the normals. When I display them, they seem odd. That's also what's happening with the ears and the eyebrows.

I suspect there's a unit scaling bug in there as well.

The code's now spending the vast majority of its time trying to figure out which vertex is connected to another vertex, so I need to write a custom file format. Waiting 3+ minutes for the file to load gets old really fast. :?
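
The connectivity part, at least, can be done in a single sweep over the face list with a map - something like this sketch (names invented):
Code: Select all
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One-pass adjacency build: map each vertex index to the faces that use it,
// instead of searching all faces for every vertex. Purely a sketch of the idea.
Map<Integer, List<Integer>> buildVertexToFaces(int[][] faces) {
    Map<Integer, List<Integer>> vertexToFaces = new HashMap<Integer, List<Integer>>();
    for (int f = 0; f < faces.length; f++) {
        for (int v : faces[f]) {
            List<Integer> list = vertexToFaces.get(v);
            if (list == null) {
                list = new ArrayList<Integer>();
                vertexToFaces.put(v, list);
            }
            list.add(f);
        }
    }
    return vertexToFaces;
}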
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Sun Oct 07, 2007 9:51 am

I haven't really had a chance to play with it, so I don't know if it's the sort of thing that works, but only in a very constrained environment. I'll find out when I try to implement it. But first, I need to add plain old z-buffered shadows.

I took a look at the percentage closer soft shadows, but they're using shadow maps. I was thinking about ray-traced soft shadows, but instead of shooting multiple rays at the area-light, I'd only shoot a single ray and compute the distance by which it misses the geometry, and use that as a hint for the shadow blurriness.
But I have to admit that concerning rendering techniques you're well ahead of me, so I'm afraid I can't be of much help here. For the time being I'll focus on RenderMan integration (e.g. create a workflow that allows automatic shader compilation, shadow maps creation, etc.). I'll leave the decision about what renderer and what rendering-features go into JPatch up to you :wink:
But let me know if you need JPatch to pass special parameters to the renderer. Attaching a lightsource to an object will be trivial using the scene-graph.
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Sun Oct 07, 2007 10:28 am

sascha wrote:For the time being I'll focus on RenderMan integration (e.g. create a workflow that allows automatic shader compilation, shadow maps creation, etc.). I'll leave the decision about what renderer and what rendering-features go into JPatch up to you
As well you should! ;)

I'm still thinking that integration with Sunflow is the way to go.

The occlusion code is (once again) completely broken. I've restructured things again with the intent of supporting a new file format. I've also added more debugging to the display, so it shows the normals and the occlusion disk normals.

The normals seemed a bit skewed, like they were all radiating from the center of the object outward. That's why, for example, the ends of the ears are occluded, even though there's nothing there to occlude them.

Anyway, I finally found my bug:
Code: Select all
   void addVertex(float x, float y, float z, float nx, float ny, float nz) {
      vertex[vertexCount] = new Point3f(x, y, z);
      normal[vertexCount] = new Point3f(x, y, z);   // the bug: built from the position, not nx/ny/nz
      vertexCount++;
   }
Which should have instead been:
Code: Select all
   void addVertex(float x, float y, float z, float nx, float ny, float nz) {
      vertex[vertexCount] = new Point3f(x, y, z);
      normal[vertexCount] = new Point3f(nx, ny, nz);
      vertexCount++;
   }
It's a wonder the code ever worked in the first place. Now I can get back to tracking down why all my occlusion values are coming back as NaN. I'll bet that's something painfully stupid, too. :P
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Sun Oct 07, 2007 10:42 am

normal[vertexCount] = new Point3f(nx, ny, nz);

:shock: Be careful with that. Points are not vectors! (and vice versa). The correct line would read normal[vertexCount] = new Vector3f(nx, ny, nz). Point3f and Vector3f have a common superclass called Tuple3f, which you can use in situations where you don't need to know whether the tuple is a point or a vector. Keep in mind that Matrix4f.transform(Point3f) and Matrix4f.transform(Vector3f) yield different results.
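
A quick example of the difference:
Code: Select all
import javax.vecmath.Matrix4f;
import javax.vecmath.Point3f;
import javax.vecmath.Vector3f;

// The same translation matrix moves a point but leaves a direction vector untouched.
public class PointVsVector {
    public static void main(String[] args) {
        Matrix4f m = new Matrix4f();
        m.setIdentity();
        m.setTranslation(new Vector3f(10, 0, 0));

        Point3f p = new Point3f(1, 2, 3);
        Vector3f v = new Vector3f(1, 2, 3);
        m.transform(p);  // -> (11.0, 2.0, 3.0): points are affected by the translation
        m.transform(v);  // -> (1.0, 2.0, 3.0): vectors (directions, normals) are not
        System.out.println(p + " " + v);
    }
}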
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Sun Oct 07, 2007 10:57 am

sascha wrote::shock: Be careful with that. Points are not vectors! (and vice versa).
Yeah, I know. :P

It's actually worse than you think - I've written my own Point3f class out of laziness. (I was replacing the prior code, which had used float[] to hold point values).

Of course, using the proper classes probably would have helped me find another stupid bug, which I just tracked down. Now, instead of returning NaN, it's saying that everything is occluding...

Anyway, don't worry: at some point, I'll rip out my custom classes and use the proper ones. It's just a holdover from wanting to get into coding as fast as possible, and being too lazy to track down the proper classes.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby dcuny » Sun Oct 07, 2007 11:09 pm

I've got occlusion working (mostly) again. There are some odd artifacts that I'm seeing, and I'm not sure why they are occurring.

I've created another file format, which stores information about the occlusion disks - the location, the normal, and the area. This loads a lot faster. Of course, it's not that useful if there's a bug in one of those precomputed values.

Which, of course, there was. :roll:

When I initially wrote the code, I split all quads into triangles. Then I got the (probably not so bright) idea that perhaps I should support quads as well, so I rewrote that portion of the code.

Naturally, I missed a couple of places. For example, when it checks to see what faces might contain a vertex. Or when calculating what the area of the occluding disk would be.

With this corrected, the occlusion looks better, but it doesn't take care of the bug I was seeing. So it's back to the drawing board.

What I might go back and do is convert the .ply to contain only triangles and see if the bug goes away. :?
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am
