
Irradiance Caching

PostPosted: Tue Jul 21, 2009 6:26 pm
by dcuny
One thing that bugged me about Inyo was that I never got irradiance caching to work. The other was that path tracing never quite worked right. So between the two, Inyo failed in its fundamental purpose: to be an open-source path tracer.

So I've decided to revisit the problem again. I found the book Practical Global Illumination with Irradiance Caching, which is fairly small (134 pages total), but the most in-depth book I've seen on the topic. I'm also revisiting a number of my other texts, to see if I can finally crack this nut.

It's been a long time since I've looked at Inyo, and I don't even remember if I've got a copy lying around which can be used without JPatch. Since the irradiance cache is independent of the renderer itself, I may just write a toy "spheres and plane" ray tracer to test it out with.

Unsurprisingly, the irradiance cache works equally well for accelerating ambient occlusion, so if I actually get this working, I'd more likely use it with ambient occlusion + spherical harmonics rather than a full path tracing solution.

I didn't know this, but the Radiance renderer is now open source. The binary is available for a number of platforms. So (in theory) JPatch could support it.

Re: Irradiance Caching

PostPosted: Mon Jul 27, 2009 8:41 am
by dcuny
I've run across an interesting paper on Irradiance Filtering. I'd seen it before, but never really paid attention to it. The authors argue that they can achieve the same results as irradiance caching, but at a much lower cost.

The paper notes that there are still flickering artifacts in animations rendered using this technique, so it may be problematic. The approach looks a lot like the one outlined in the Pixar paper Statistical Acceleration for Animated Global Illumination.

Edit: On re-reading the paper, it doesn't look at all like the Pixar approach. But I'm still puzzling my way through the paper. I really wish I was better at math... :?

Edit: Corrected the link to the paper. :roll:

Re: Irradiance Caching

PostPosted: Mon Jul 27, 2009 10:31 am
by pndragon
dcuny wrote:I really wish I was better at math...
There is no way that your math skills are as bad as mine.

Re: Irradiance Caching

PostPosted: Mon Jul 27, 2009 5:17 pm
by sascha
The link to the paper actually links to a video :wink:

Re: Irradiance Caching

PostPosted: Mon Jul 27, 2009 9:30 pm
by dcuny
sascha wrote:The link to the paper actually links to a video

Ooops! In my defense, it's a good game. ;)

I'm having trouble grokking the technique in the paper. Irradiance caching itself is pretty straightforward: you take a weighted average of nearby cached samples to estimate the irradiance at a new point, and only shoot a full set of sample rays when no cached samples are usable. This lets you amortize the cost of high-quality sampling over many pixels.
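
Here's a minimal sketch of that interpolation step, using Ward-style weights. The class and field names are my own invention (not Inyo's), and a real cache would store the records in an octree rather than a flat list:

Code:
import java.util.ArrayList;
import java.util.List;

class Vec3 {
    double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    double dist(Vec3 o) {
        double dx = x - o.x, dy = y - o.y, dz = z - o.z;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}

class CacheRecord {
    Vec3 position;      // where the high-quality sample was taken
    Vec3 normal;        // surface normal at that point
    double irradiance;  // the expensively-computed estimate (256+ rays)
    double harmonicR;   // harmonic mean distance to surrounding geometry
}

class IrradianceCache {
    List<CacheRecord> records = new ArrayList<CacheRecord>();
    double alpha = 0.3; // error tolerance: smaller means more new samples

    /** Returns an interpolated irradiance, or NaN if no cached record is
     *  close enough and a new high-quality sample must be computed. */
    double lookup(Vec3 p, Vec3 n) {
        double weightSum = 0, irradianceSum = 0;
        for (CacheRecord r : records) {
            // Ward's weight: penalizes distance and normal divergence.
            double denom = p.dist(r.position) / r.harmonicR
                         + Math.sqrt(Math.max(0, 1 - n.dot(r.normal)));
            double w = 1.0 / Math.max(denom, 1e-6);
            if (w > 1.0 / alpha) { // record is close enough to reuse
                weightSum += w;
                irradianceSum += w * r.irradiance;
            }
        }
        return weightSum > 0 ? irradianceSum / weightSum : Double.NaN;
    }
}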

From what I gather, the Arnold renderer used a slightly different technique. It would perform high-quality sampling (i.e. 256+ rays) in screen space (say, every 8×8 pixels), and then interpolate a value for the intervening pixels. It would then take a lower-quality sample (i.e. 32+ rays) at each of those pixels and compare it against the interpolated approximation. If there was too much variance, it would fall back to a high-quality sample instead.
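
If I understand it right, the loop would look something like the sketch below. Everything here is hypothetical: renderHQ and renderLQ stand in for the 256-ray and 32-ray hemisphere sampling, which the real renderer would supply:

Code:
abstract class GridShader {
    double tolerance = 0.05; // max allowed HQ/LQ disagreement

    abstract double renderHQ(int x, int y); // expensive estimate (256+ rays)
    abstract double renderLQ(int x, int y); // cheap estimate (~32 rays)

    double[][] shade(int w, int h, int grid) {
        double[][] img = new double[h][w];
        // 1. Take high-quality samples on a coarse grid (e.g. every 8x8 pixels).
        for (int y = 0; y < h; y += grid)
            for (int x = 0; x < w; x += grid)
                img[y][x] = renderHQ(x, y);
        // 2. Fill the intervening pixels by interpolation, verifying each
        //    against a cheap estimate and falling back to HQ on disagreement.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (y % grid == 0 && x % grid == 0) continue; // grid sample
                double approx = interpolate(img, x, y, grid, w, h);
                double check = renderLQ(x, y);
                img[y][x] = Math.abs(approx - check) > tolerance
                          ? renderHQ(x, y) : approx;
            }
        return img;
    }

    // Bilinear blend of the four surrounding grid samples (clamped at the
    // image border).
    double interpolate(double[][] img, int x, int y, int grid, int w, int h) {
        int x0 = (x / grid) * grid, y0 = (y / grid) * grid;
        int x1 = Math.min(x0 + grid, ((w - 1) / grid) * grid);
        int y1 = Math.min(y0 + grid, ((h - 1) / grid) * grid);
        double tx = x1 > x0 ? (double) (x - x0) / (x1 - x0) : 0;
        double ty = y1 > y0 ? (double) (y - y0) / (y1 - y0) : 0;
        double top = img[y0][x0] * (1 - tx) + img[y0][x1] * tx;
        double bot = img[y1][x0] * (1 - tx) + img[y1][x1] * tx;
        return top * (1 - ty) + bot * ty;
    }
}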

I get the general idea of filtering, but I'm just fuzzy on the specifics. It'll probably take me a couple of days of carefully reading through the paper until I finally really understand what's going on. One other bit of concern is the general lack of references on the Internet. If a method is really that much better than another, you'd think people would be adopting it. There's also this, which says:
Irradiance computation is done using an irradiance cache instead of using the irradiance filtering algorithm.
It makes me wonder if this method is ultimately a dead end. :?

Re: Irradiance Caching

PostPosted: Tue Jul 28, 2009 10:54 am
by sascha
I still wonder if Ambient Occlusion isn't sufficient for animation. It's fast, and you've got a great deal of control over the final look (and no noise!)
So, for the cartoon style I have in mind, I think using ray tracing for reflections and possibly shadows, AO for global illumination, and good old REYES for everything else is the way to go. 3Delight seems to deliver all of that, and it's just a matter of time until the open source renderers catch up.

Pixar and many other studios clearly aim for more photorealism in every new film, but I don't think that this leads to better films. Ok, if you look at Toy Story today, the rendering (not the animation) looks a bit dated, but since Monsters, Inc. the image quality has been gorgeous. Adding more photorealism is cool and sets the mark for everybody else, but otherwise isn't necessary in any way.

Re: Irradiance Caching

PostPosted: Tue Jul 28, 2009 6:23 pm
by dcuny
sascha wrote:I still wonder if Ambient Occlusion isn't sufficient for animation. It's fast, and you've got a great deal of control over the final look (and no noise!)

I'd add to that some sort of image based lighting as well. But yes, that pretty much covers the gamut.

So, for the cartoon style I have in mind, I think using ray tracing for reflections and possibly shadows, AO for global illumination, and good old REYES for everything else is the way to go. 3Delight seems to deliver all of that, and it's just a matter of time until the open source renderers catch up.

I think targeting 3Delight as the renderer of choice for JPatch is a good idea.

I'm rather disappointed that Marin is relatively slow at rendering simple scenes. It's currently in a "pretty much abandoned" state, but I'm sure that at some point in the future I'll have the urge to revisit it. Supporting JPatch, for example, would probably provide a bit of motivation. ;)

It was a real pain for me to get approximate occlusion working for my OpenGL renderer, but I've still got the code lying around. Blender's implementation is complicated a bit by following the approach outlined in GPU Gems 3, which ultimately resolves everything down to triangle-based geometry. That doesn't happen with the point cloud approach, so it might be a bit easier to implement. :?

Interestingly, the pixel cache in Blender's approximate ambient occlusion seems to be a plain vanilla screen-based interpolation, rather than some fancy irradiance cache. :|

Ok, if you look at Toy Story today, the rendering (not the animation) looks a bit dated, but since Monsters, Inc. the image quality has been gorgeous.

That's apparently one of the reasons they decided to go back and re-render Toy Story. According to Wikipedia, Toy Story was rendered at 1536 × 922 (1.42MP). The time to render one frame was typically around 2–3 hours, with ten times that for the most complex scenes. According to this link, the average frame from Cars took 15 hours, despite a 300x overall increase in compute power.

The big complaint about the Reyes architecture is that it never really accounted for global illumination. The strength of Reyes (dicing geometry down to just the resolution needed) was also the big problem when dealing with GI. I think point-based occlusion takes care of a lot of this. Still, most studios seem to be going with a dual REYES/ray tracer architecture, because it allows single- or multiple-bounce lighting effects. You can simulate this with approximate ambient occlusion, but it doesn't seem to be catching on with commercial studios.

Despite approximate ambient occlusion, you'll note that Pixar seems to be doing more and more ray tracing these days. Then again, they have the hardware to back it up, and they can afford huge render times!

Of course, having to essentially maintain two different renderers is a pain. Blue Sky seems to be the only studio using a pure ray tracing solution. The real problem with a ray tracer is holding a massive amount of geometry in memory.

Re: Irradiance Caching

PostPosted: Tue Jul 28, 2009 7:02 pm
by dcuny
pndragon wrote:There is no way that your math skills are as bad as mine.

Applied to me, the phrase "math skills" is an oxymoron. ;)

So what file format are you using to render with? JPatch's?

Re: Irradiance Caching

PostPosted: Tue Jul 28, 2009 9:26 pm
by sascha
dcuny wrote:The real problem with a ray tracer is holding a massive amount of geometry in memory.

Don't post this to the POV-Ray newsgroups unless you want to start a flame war :wink:
Their standard argument goes like this:
Imagine rendering a forest with, say, a million trees in it, and you have very detailed models of e.g. 50 different trees. Keeping the geometry of 50 different trees in memory seems doable, while keeping a million highly detailed models in memory certainly does not. But this still isn't a problem for a raytracer. All it needs to keep in memory is the location of the 1,000,000 trees and their bounding boxes. Only if a ray hits a tree's bounding box does the renderer transform the reference model to the actual world-space position and compute the exact ray-shape intersections for that tree. You can even break this down into a hierarchy: e.g. each tree has several thousand leaves, which again would only need the RAM to store e.g. 10 different leaf shapes and the position of each leaf.
In this example, a scene with a million trees would render nearly as fast as a scene with 50 trees, and it wouldn't use much more memory.
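
In code, the trick boils down to transforming each ray into the reference model's local space, rather than transforming the model out into world space. A minimal Java sketch (all the types here are invented helpers, not from any particular renderer; Vec3 is as in the sketch a few posts up):

Code:
class Ray {
    Vec3 origin, dir;
    Ray(Vec3 origin, Vec3 dir) { this.origin = origin; this.dir = dir; }
}

interface Matrix4 {
    Vec3 transformPoint(Vec3 p);  // full affine transform
    Vec3 transformVector(Vec3 v); // rotation/scale only, no translation
}

interface Box {
    boolean hit(Ray r); // a standard ray/AABB slab test
}

interface Model {
    double intersect(Ray localRay); // detailed geometry, shared by all instances
}

class Instance {
    Model reference;      // one of, say, 50 shared tree models
    Matrix4 worldToLocal; // the inverse of this instance's placement
    Box worldBounds;      // cheap conservative world-space test

    double intersect(Ray worldRay) {
        // Most rays never get past the bounding box, so the detailed
        // geometry is rarely touched.
        if (!worldBounds.hit(worldRay)) return Double.POSITIVE_INFINITY;
        // Transform the ray into the reference model's space; this is
        // equivalent to (and far cheaper than) instantiating the model
        // in world space.
        Ray local = new Ray(worldToLocal.transformPoint(worldRay.origin),
                            worldToLocal.transformVector(worldRay.dir));
        return reference.intersect(local);
    }
}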

Of course REYES (or any scanline renderer) can use sophisticated LOD (level of detail) techniques to achieve something similar, and the raytracing "trick" only works if you have several "clones" of a few reference objects in your scene.

Adaptive subdivision is another thing that can be done with a raytracer, and with some clever caching this can dramatically reduce the amount of geometry that has to be in memory (I think this technique is used by prMan and most other RenderMan raytracers). Basically you subdivide only after a bounding box has been hit, and keep the subdivided geometry in memory just in case the next ray happens to hit the same bounding box. Before you run out of memory, you discard the subdivided geometry from all bounding boxes that haven't been hit recently.
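
Something like the sketch below, where the least-recently-hit tessellations get evicted first. The names are invented, and this is certainly not how prMan actually implements it (Ray is again from the earlier sketch):

Code:
import java.util.LinkedHashMap;
import java.util.Map;

interface Mesh {
    double intersect(Ray r); // the diced micro-geometry
}

abstract class LazyPatch {
    static final int MAX_CACHED = 1024; // crude stand-in for a memory budget

    // An access-ordered LinkedHashMap gives us LRU eviction for free.
    static Map<LazyPatch, Mesh> cache =
        new LinkedHashMap<LazyPatch, Mesh>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<LazyPatch, Mesh> e) {
                return size() > MAX_CACHED; // discard least-recently-hit dice
            }
        };

    abstract Mesh dice();             // expensive: tessellate the patch
    abstract boolean hitBounds(Ray r);

    double intersect(Ray r) {
        if (!hitBounds(r)) return Double.POSITIVE_INFINITY;
        // Subdivide only after the bounding box is hit, and keep the result
        // around in case the next (hopefully coherent) ray hits it too.
        Mesh diced = cache.get(this);
        if (diced == null) {
            diced = dice();
            cache.put(this, diced);
        }
        return diced.intersect(r);
    }
}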

Re: Irradiance Caching

PostPosted: Tue Jul 28, 2009 10:51 pm
by dcuny
sascha wrote:Imagine rendering a forest with, say, a million trees in it, and you have very detailed models of e.g. 50 different trees.

Well, yes... Instancing is a very powerful technique, but it's got to be supported by the workflow.

Adaptive subdivision is another thing that can be done with a raytracer, and with some clever caching this can dramatically reduce the amount of geometry that has to be in memory (I think this technique is used by prMan and most other RenderMan raytracers.)

Yeah, I've been thinking about this, too. It's especially appropriate for use with JPatch SDS (tm), since you could defer tessellation until it's actually needed. Since first-hit rays tend to be coherent, this works to your advantage. And for some things like shadow rays, you can usually get away with lower levels of tessellation in the first place.

Re: Irradiance Caching

PostPosted: Wed Jul 29, 2009 3:22 am
by pndragon
dcuny wrote:So what file format are you using to render with? JPatch's?
I would assume JPatch's for now. I am willing to try out whatever is easiest.

I like the Animator GUI from the last JPatch, though, and would recommend that a scene builder GUI of some kind be included.

Re: Irradiance Caching

PostPosted: Thu Jul 30, 2009 1:55 pm
by John VanSickle
sascha wrote:
dcuny wrote:The real problem with a ray tracer is holding a massive amount of geometry in memory.

Don't post this to the POV-Ray newsgroups unless you want to start a flame war :wink:
Their standard argument goes like this:
Imagine rendering a forest with, say, a million trees in it, and you have very detailed models of e.g. 50 different trees.

I work at Wal-Mart. A REYES renderer could handle a Wal-Mart, with shelves and the 10,000 or so different items of merchandise in quantities of 1-100 each, without any trouble at all. POV-Ray would be thrashing my hard drive somewhere around 10% of the way through parsing, unless I replaced each object with sprite versions.

It is easily possible to simply pipe the output of a modeler (whether human-operated or algorithmic) into a REYES renderer, and even if the scene has many billions of primitives, the stream can be written so that the renderer only needs to keep a small portion of the scene in memory at any given time. Even instancing won't give ray tracing enough of a breather to compete with that.

Re: Irradiance Caching

PostPosted: Thu Jul 30, 2009 5:20 pm
by pndragon
John VanSickle wrote:A REYES renderer could handle a Wal-Mart
I like Aqsis and 3Delight, but to really use them effectively with the most current version of JPatch, you have to export the scene as a RIB file. Unfortunately, that function doesn't seem to be working in the final JPatch. I'm also having problems getting 3Delight set up as the RenderMan renderer. That's one of the reasons I'm waiting for the next version.

Re: Irradiance Caching

PostPosted: Fri Jul 31, 2009 12:00 am
by dcuny
Speaking of hard problems, I just ran across this summary of how fur was handled in Ice Age: The Meltdown. I'd seen an even briefer blurb prior to that, but nothing that went into any real detail. In the original Ice Age, fur was actually just a series of sprite cards attached to the characters.

There's an interesting discussion of the differences between ray tracing and scanline rendering here. The focus is on game rendering, but much of it is appropriate.

pndragon, which version of JPatch are you using? Could you point me to a link?

Re: Irradiance Caching

PostPosted: Fri Jul 31, 2009 5:52 am
by dcuny
I followed a number of the links in the article above, including Approximating Catmull-Clark Subdivision Surfaces with Bicubic Patches (PDF). There's also Next-Generation Rendering of Subdivision Surfaces, which shows it implemented in hardware.

I'd seen it last year, but the math's a bit beyond me. I'll have another go at it. Basically, it allows direct evaluation of a bicubic approximation to the Catmull-Clark surface. The approximating surface isn't smooth at extraordinary vertices, so a pair of tangent patches is created that can be evaluated to approximate smooth normals.
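
The evaluation itself is just a bicubic Bézier patch, which is cheap. Here's a sketch of that part only (the 4×4 control grid would be built from the Catmull-Clark control mesh, as the paper describes):

Code:
class BezierPatch {
    double[][][] cp = new double[4][4][3]; // 4x4 grid of (x,y,z) control points

    // The four cubic Bernstein basis functions at parameter t.
    static double[] bernstein(double t) {
        double s = 1 - t;
        return new double[] { s * s * s, 3 * s * s * t, 3 * s * t * t, t * t * t };
    }

    // Evaluate the patch position at (u, v) in [0,1]^2.
    double[] eval(double u, double v) {
        double[] bu = bernstein(u), bv = bernstein(v);
        double[] p = new double[3];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) {
                double w = bu[i] * bv[j];
                p[0] += w * cp[i][j][0];
                p[1] += w * cp[i][j][1];
                p[2] += w * cp[i][j][2];
            }
        return p;
    }
}

The two tangent patches would be evaluated the same way; you cross the u and v tangents to get an (approximately) smooth normal.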

I'm not sure how useful this is for a raytracer. As far as I know, there's not a good way to raytrace bicubic patches. On the other hand, I suspect it would be great for a micro-polygon renderer (like Marin ;)).

So I'll spend some time with the paper this weekend, and see if I can make sense of it.

Just when I was getting excited about raytracing, too... :roll: I'll still need to spend some time poking around to see where I've hidden my Inyo code.

I've got a week of vacation coming up soon. I'm not going anywhere, so I'll probably stay home and focus on a coding project. I'd been intending to do stuff with raytracing, but it might make more sense to work on getting Marin working with SDS.

Well, as much sense as writing my own renderer can make. There's no way I can realistically compete with something like 3delight.

If I start working on Marin, I'll pull out the programmable shading code. I think it was a mistake to start down that route. And there's always the option of adding raytracing to support secondary effects, such as transparency and shadows.