Reyes Renderer

Ideas, enhancements, feature requests and development related discussion.

Reyes Renderer

Postby dcuny » Wed Apr 30, 2008 10:01 am

I've decided to have another look at the REYES rendering algorithm. As usual, I'm not saying I'm going to code anything - just have a look at it, and see if I understand how it works. Even if I wrote a REYES renderer, it wouldn't be nearly as slick as jrMan, and I certainly wouldn't attempt to make it RenderMan compliant. :P

I've been reading a number of papers on the REYES architecture. One stumbling point for me had been the micropolygon grid: I hadn't realized that REYES typically describes primitives in uv space, not in camera space, so I'd been baffled about how you go about generating the micropolygons. Here's my current understanding of REYES. The general algorithm (a rough code sketch follows the list) is:

  • Split each object until it is sufficiently small,
  • Dice the split part into a grid micropolygons,
  • Shade the micropolygons, and
  • Hide the points which aren't visible
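Just to make the control flow concrete, here's a rough sketch of that outer loop in Java (everything here - Gprim, MicropolygonGrid, the method names - is made up, just to show the shape of the algorithm):
Code: Select all
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical outer loop: split until small enough, then dice, shade, hide.
Deque<Gprim> queue = new ArrayDeque<>(scene.gprims());
while (!queue.isEmpty()) {
    Gprim g = queue.pop();
    if (g.tooBigToDice()) {
        queue.addAll(g.split());          // split in parametric (uv) space
    } else {
        MicropolygonGrid grid = g.dice(); // tessellate into micropolygons
        grid.shade();                     // run surface/displacement shaders
        grid.hide(framebuffer);           // sample against the visibility buffer
    }
}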
The original implementation of REYES didn't use buckets. They were added for two reasons:

  • To reduce the memory requirements of the screen buffer, and
  • To perform occlusion culling.
The screen is broken down into the following hierarchy:

  • The screen is a grid of buckets.
  • Each bucket contains a grid of pixels, an unsplit queue and a split queue. An occlusion flag and a minimum depth are tracked, so that if all the pixels in the bucket are opaque, any geometry that would be occluded can be trivially rejected.
  • Each pixel contains a jittered grid of samples. Like the bucket, the pixel maintains an occlusion flag and minimum depth, so occluded geometry can be trivially rejected.
  • Each sample has an opaque color sample, and possibly a list of transparent color samples. Each sample has a flag indicating if it has any transparency.
  • Each color sample has a color, opacity and depth.
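To make that hierarchy concrete, here's roughly how I picture the data layout (a minimal sketch - the field names are my own):
Code: Select all
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class Bucket {
    Pixel[][] pixels;
    Deque<Gprim> unsplit = new ArrayDeque<>();
    Deque<Gprim> split   = new ArrayDeque<>();
    boolean fullyOpaque;   // set once every pixel in the bucket is opaque
    float   cullDepth;     // geometry entirely behind this depth can be rejected
}

class Pixel {
    Sample[] samples;      // jittered sub-pixel sample positions
    boolean fullyOpaque;
    float   cullDepth;
}

class Sample {
    float x, y;            // jittered position within the pixel
    ColorSample opaque;    // closest opaque hit so far (starts as the background)
    List<ColorSample> transparent = new ArrayList<>(); // hits in front of it
    boolean hasTransparency;
}

class ColorSample {
    float r, g, b;
    float opacity;
    float depth;
}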
REYES begins by loading geometric primitives (gprims) from a file. It bounds the gprim with a camera-space axis-aligned bounding box. It then checks if the gprim is outside the camera's bounding planes. If it is, it's immediately culled.

The gprim is then tested to see if it lies within the current bucket. If not, it is moved to the queue of the next bucket it intersects. If it fails to intersect any buckets, it's culled. (This shouldn't happen, since it has already been tested against camera space).

A size test is then performed on the gprim. If it is not "sufficiently small" (or will generate too many micropolygons if diced), it's split into two or more parts, and placed back in the queue. The queue is processed until it is empty. The gprim will either become sufficiently small, or be culled from the current bucket, and placed in the next bucket its bounding box intersects.

In fact, any gprim whose bounds extend beyond the current bucket is also moved on to the next bucket of interest, to prevent duplication of work.

All items in the queue are sorted by z depth, with the closest items being processed first. This allows occlusion culling to take place, preventing further processing of gprims that are occluded at the bucket or pixel level.

One interesting feature of REYES is that gprims are typically described in uv space, not eye space. This makes the process of splitting most items fairly trivial, since you can split the range into quarters and point back to the original gprim's data, instead of creating four completely new objects.
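For example, a "split" can be little more than narrowing a parametric range that points back at the shared surface data (a sketch with made-up names, not how any particular renderer lays it out):
Code: Select all
// Splitting in parametric space: each piece just narrows the (u,v) range
// and shares the parent gprim's data - nothing is copied.
class GprimSlice {
    final Gprim source;            // original surface data, shared by all slices
    final float u0, v0, u1, v1;    // parametric sub-range this slice covers

    GprimSlice(Gprim source, float u0, float v0, float u1, float v1) {
        this.source = source;
        this.u0 = u0; this.v0 = v0; this.u1 = u1; this.v1 = v1;
    }

    // Split into the four quarters of the uv range.
    GprimSlice[] split() {
        float um = 0.5f * (u0 + u1);
        float vm = 0.5f * (v0 + v1);
        return new GprimSlice[] {
            new GprimSlice(source, u0, v0, um, vm),
            new GprimSlice(source, um, v0, u1, vm),
            new GprimSlice(source, u0, vm, um, v1),
            new GprimSlice(source, um, vm, u1, v1)
        };
    }
}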

Once the gprim is "sufficiently small", it can be diced. REYES maps (u0,v0) - (u1,v1) into eye space to determine how large the micropolygons should be. The aim is to have micropolygons be about 1/4 the size of a pixel. REYES iterates over the uv space for the gprim, and creates a grid of micropolygons. One of the reasons for using a grid is that the vertices are shared between the micropolygons in the grid. Shading is then performed, which may change the position of the vertices. Each micropolygon vertex has a:

  • Position
  • Normal
  • Color
  • Opacity
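A dicer along those lines could just walk the uv range and evaluate one shared vertex per grid corner (again a sketch - Vertex, GprimSlice and eval() are made-up names):
Code: Select all
// Dice a parametric slice into an (nu x nv) grid of micropolygons that
// share their vertices. nu and nv would be chosen so each micropolygon
// covers roughly a quarter of a pixel in screen space.
Vertex[][] dice(GprimSlice slice, int nu, int nv) {
    Vertex[][] grid = new Vertex[nu + 1][nv + 1];
    for (int i = 0; i <= nu; i++) {
        for (int j = 0; j <= nv; j++) {
            float u = slice.u0 + (slice.u1 - slice.u0) * i / (float) nu;
            float v = slice.v0 + (slice.v1 - slice.v0) * j / (float) nv;
            grid[i][j] = slice.source.eval(u, v); // position, normal, color, opacity
        }
    }
    // Micropolygon (i,j) is the quad grid[i][j], grid[i+1][j],
    // grid[i+1][j+1], grid[i][j+1] - four corners, all shared with neighbours.
    return grid;
}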
Each micropolygon in the grid is then processed. It is bounded, checked for on-screen visibility, and backface culled. The micropolygon is tested against each pixel. If the bounding box lies within a pixel, the micropolygon is tested against each sample point in that pixel. The z depth of the micropolygon at the sample's position is calculated by interpolation. If the micropolygon is closer than the z depth currently stored in the sample, it replaces the color and depth of the sample. Any color samples in the transparency list that are now occluded are removed from the list. If the micropolygon has transparency, it is added to the sample's transparency list.
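As a sketch, testing one micropolygon against one sample might look like this (covers(), depthAt() and the colour accessors are made-up helpers; Sample and ColorSample are as sketched above):
Code: Select all
// Update a single sample with a single micropolygon.
void sampleMicropolygon(Micropolygon mp, Sample s) {
    if (!mp.covers(s.x, s.y)) return;              // sample isn't inside the micropolygon
    float z = mp.depthAt(s.x, s.y);                // interpolated depth at the sample
    if (z >= s.opaque.depth) return;               // already hidden by an opaque hit

    ColorSample hit = new ColorSample();
    hit.r = mp.red(); hit.g = mp.green(); hit.b = mp.blue();
    hit.opacity = mp.opacity();
    hit.depth = z;

    if (hit.opacity >= 1.0f) {
        s.opaque = hit;                            // new closest opaque surface
        s.transparent.removeIf(c -> c.depth > z);  // drop hits that are now occluded
    } else {
        s.transparent.add(hit);                    // keep for compositing later
        s.hasTransparency = true;
    }
}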

Like gprims, if the micropolygon grid isn't entirely held within the current bucket, it is also placed into the next bucket that the bounds overlap with, to prevent a duplication of work.

Once the split queue has been processed for a bucket, the transparency list for each sample is composited with the opaque color, and a final color for each sample is determined. The colors for all the samples are then averaged, generating a final color for that pixel.
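A minimal version of that resolve step could look like this (using the Sample/ColorSample layout sketched earlier and a plain box filter over the samples):
Code: Select all
// Composite each sample front-to-back, then average the samples into a pixel colour.
float[] resolvePixel(Pixel p) {
    float r = 0, g = 0, b = 0;
    for (Sample s : p.samples) {
        s.transparent.sort((x, y) -> Float.compare(x.depth, y.depth)); // nearest first
        float sr = 0, sg = 0, sb = 0, accum = 0; // accum = opacity accumulated so far
        for (ColorSample c : s.transparent) {
            float w = (1 - accum) * c.opacity;
            sr += w * c.r; sg += w * c.g; sb += w * c.b;
            accum += w;
        }
        // Whatever light is left comes from the opaque hit (or the background).
        sr += (1 - accum) * s.opaque.r;
        sg += (1 - accum) * s.opaque.g;
        sb += (1 - accum) * s.opaque.b;
        r += sr; g += sg; b += sb;
    }
    int n = p.samples.length;
    return new float[] { r / n, g / n, b / n };
}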

To handle motion blur, REYES calculates sample points along the motion path of the object. The micropolygon is then moved to that position when performing sampling. (This also requires calculating a bounding box that takes into account the motion path).

That's the algorithm as I understand it from the papers. Did I miss anything important? (I've left out raytracing on purpose). I've also glossed over handling nasty issues like cracks.

I'm also not quite sure how REYES determines if a micropolygon occludes a pixel's sample. The folk at jrMan split the micropolygon into two triangles, but I suspect that Renderman uses some sort of half-space calculation to determine if the sample point is occluded.
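For what it's worth, a half-space (edge function) test against a convex micropolygon would look something like this - just the standard edge-sign trick, not necessarily what PRMan actually does:
Code: Select all
// Point-in-convex-quad test using edge functions (half spaces). The four
// vertices must be in consistent winding order (screen-space x and y).
boolean insideQuad(float px, float py, float[] xs, float[] ys) {
    boolean anyNeg = false, anyPos = false;
    for (int i = 0; i < 4; i++) {
        int j = (i + 1) % 4;
        // Which side of edge (i -> j) does (px, py) lie on?
        float e = (xs[j] - xs[i]) * (py - ys[i]) - (ys[j] - ys[i]) * (px - xs[i]);
        if (e < 0) anyNeg = true;
        if (e > 0) anyPos = true;
    }
    // Inside (or exactly on an edge) if all edge functions have the same sign.
    return !(anyNeg && anyPos);
}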
Last edited by dcuny on Thu May 01, 2008 8:27 pm, edited 1 time in total.
Reason: Changed "half-edge" to "half space", added link
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Reyes Renderer

Postby John VanSickle » Wed Apr 30, 2008 2:33 pm

dcuny wrote:I've decided to have another look at the REYES rendering algorithm.

The algorithm also allows for focal blur in addition to motion blur.

I have been working on a REYES renderer of my own. I developed mine based on the description in one of the papers at graphics.pixar.com, although I do miss a few of the optimizations you describe (passing on slices and micropolygon patches to the next bucket that can use them is one such optimization that I missed).

I implement motion blur and focal blur by assigning a transform for every sample in each pixel (making the buckets smaller makes this feasible). The transform for each sample transforms the space used by the micropolygons so that the transformed polygon occupies the point (0,0) if it appears in the sample. Each sample's transform is built to include the jittering for motion blur, focal blur, and anti-aliasing. Also, each sample has a list of depth and transparency data for the micropolygons that have already hit that sample; any additional hits are inserted into this list at the proper depth.

I haven't gotten to the point where the app will render anything; the project has been on my back-burner since July.

I forget precisely how the guys at Pixar integrated ray-tracing into the REYES architecture, but I think that their focus was to use ray-tracing as a way of shading things, so that it could be shut off where it didn't contribute to the imagery. Pixar, as we know, makes very ambitious scenes, and one of the weaknesses of ray-tracing is that the algorithm cannot run efficiently unless the entire scene is in memory. They use a lot of tricks to get around this; for instance, a reflected ray is not traced to infinity, but only for a limited distance, beyond which the ray is fed to an environment map. Mirrored surfaces are either flat, in which case they show a subset of the scene at the same level of detail as the rest of the image, or curved, in which case they show more of the scene but at a greatly reduced resolution for each object; Pixar seems to have taken advantage of this as well in their implementation.
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: Reyes Renderer

Postby sascha » Wed Apr 30, 2008 3:07 pm

Yes, that's a nice summary of how REYES works :-)

There's one thing that's important though: when a primitive is split, the dicer can choose to dice the sub-primitives at different rates - e.g. one subdivided up to level 2, and an adjoining one at level 3. In a naive implementation this will cause visible artifacts (cracks at the micropolygon boundaries).

* Shade the micropolygons, and
* Hide the points which aren't visible

That's right (and important) - shading is done before hiding because a displacement shader can change the z-order.

Each micropolygon vertex has has a:

* Position
* Normal
* Color
* Opacity

One important feature of RenderMan is that you can specify arbitrary parameters. The algorithm treats them just like the default parameters and properly subdivides and dices them. This way it's possible to e.g. attach reference positions, u/v coordinates, etc. to each vertex, which the shader can use for shading. Optionally these additional parameters can also be "rendered" into the final image and stored as a separate channel, e.g. in OpenEXR format. This way it's easy to e.g. render a z-depth image for shadow mapping, but there are many more applications, e.g. providing x-, y- and z-velocity for each vertex, saving them as an extra channel and using them for post-processing (motion blur in this case).

It's not exactly part of the REYES architecture, but I think an implementation shouldn't treat position, normal, color, etc. specially. It should be able to process any number of parameters.

I think it works this way:
* You can specify any additional parameters when defining the geometry (on a per-vertex level).
* The additional parameters are interpolated (split, diced) just like everything else when creating the micropolygon grid.
* The additional parameters can be passed to a shader.
* The shader may produce even more additional parameters; these are also stored with each micropolygon vertex.
* When the final image is composited, the additional parameters can be saved as separate channels or to separate files.
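One way to stay agnostic about which parameters exist is to keep them in a name-to-value map on each vertex and interpolate every entry the same way (just a sketch of the idea, not RenderMan's actual machinery):
Code: Select all
import java.util.HashMap;
import java.util.Map;

// Every vertex attribute (P, N, Cs, Os, user data...) is treated uniformly,
// so splitting, dicing and output channels never need to know which ones exist.
class GridVertex {
    // Each named parameter is a float array (1 float for a scalar, 3 for a point/color...).
    Map<String, float[]> params = new HashMap<>();

    // Linear interpolation of every parameter, as the dicer would use it.
    static GridVertex lerp(GridVertex a, GridVertex b, float t) {
        GridVertex out = new GridVertex();
        for (Map.Entry<String, float[]> e : a.params.entrySet()) {
            float[] av = e.getValue();
            float[] bv = b.params.get(e.getKey());
            float[] ov = new float[av.length];
            for (int i = 0; i < av.length; i++) ov[i] = av[i] + t * (bv[i] - av[i]);
            out.params.put(e.getKey(), ov);
        }
        return out;
    }
}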
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Re: Reyes Renderer

Postby dcuny » Wed Apr 30, 2008 8:18 pm

John VanSickle wrote:The algorithm also allows for focal blur in addition to motion blur.

Yes, I should have included that. It's basically the same method used for motion blur - the circle of confusion for the object is calculated, and the object is jittered within that circle.
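For reference, the circle of confusion itself is just the standard thin-lens formula driven by the usual f-stop / focal length / focal distance parameters - a sketch, with the per-sample lens jitter hinted at in the comment:
Code: Select all
// Thin-lens circle of confusion (diameter) for a point at camera-space depth z.
float circleOfConfusion(float z, float fstop, float focalLength, float focalDistance) {
    float lensDiameter = focalLength / fstop;
    return Math.abs(lensDiameter * focalLength * (focalDistance - z)
                    / (z * (focalDistance - focalLength)));
}

// Each sample would then pick a jittered lens position (lensU, lensV) in the unit
// disc and offset the micropolygon by roughly 0.5 * coc * lensU, 0.5 * coc * lensV.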

I have been working on a REYES renderer of my own. I developed mine based on the description in one of the papers at graphics.pixar.com, although I do miss a few of the optimizations you describe (passing on slices and micropolygon patches to the next bucket that can use them is one such optimization that I missed).

The paper How PhotoRealistic RenderMan Works can be found in the Siggraph 2000 course notes Advanced RenderMan 2: To RI INFINITY and Beyond.

I implement motion blur and focal blur by assigning a transform for every sample in each pixel (making the buckets smaller makes this feasible). The transform for each sample transforms the space used by the micropolygons so that the transformed polygon occupies the point (0,0) if it appears in the sample. Each sample's transform is built to include the jittering for motion blur, focal blur, and anti-aliasing.

Nice. 8)

I haven't gotten to the point where the app will render anything; the project has been on my back-burner since July.

I know the feeling. :lol: I started working on a REYES renderer in Java last year, but got stuck trying to figure out how to create micropolygons. I kept falling back onto using rasterization, which was stupid. If I wanted that, I could simply render using OpenGL and downsize the image (which is what I eventually decided to do).

They use a lot of tricks to get around this...

Yeah, it's pretty amazing. Having to produce a renderer that works in the "Real World" will do that. I get the feeling that we've all seen the same sets of papers, including the raytracing one.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Reyes Renderer

Postby dcuny » Wed Apr 30, 2008 8:29 pm

sascha wrote:Yes, that's a nice summary of how REYES works

Thanks. :)

There's one thing that's important though: When a primitive was split, the dicer can choose to dice the sub-primitives at different rates - e.g. one subdivided up to level 2, and an adjoining one at level 3. In a naive implementation this will cause visible artifacts (cracks at the micropolygon boundaries).

Yeah, I'm being intentionally vague here. In the older version of Renderman, there was some sort of check to make sure that adjacent edges were diced at powers of two of each other. In the more current version, the cracks are "stitched up" by generating a triangle that fits into the crack. I believe this is the same approach taken by Aqsis.

One important feature of RenderMan is that you can specify arbitrary parameters.

Point taken. I'll add it to the "Vaporware Feature List." ;)

Most of the algorithm looks pretty straightforward. The part that I'm most leery about is handling cracks correctly - that was also the downfall of my zbuffer renderer. :?
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Reyes Renderer

Postby sascha » Wed Apr 30, 2008 9:54 pm

The part that I'm the most leery about is handling cracks correctly - that was also the downfall of my zbuffer renderer.

I see, but that's something completely different:
It's got something to do with drawing adjacent polygons without gaps or overlap, and is related to the problem that drawing a line from A to B should fill exactly the same pixels as drawing a line from B to A. While naive ad hoc solutions are likely to fail, I think that problem was solved decades ago and is well documented :P

My dicer implementation was able to stitch in the power-of-two case (and since I used the subdivision scheme I always got a power of two). When using the B-Spline patch approach you could theoretically dice the patch to a non-power-of-two number of vertices, but adapting the stitching algorithm should be straightforward (simply connect the triangles to the next closest point).
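A sketch of that stitching idea - walk both sides of the shared edge and always bridge to the nearest point (made-up names; squared distance is enough for the comparison):
Code: Select all
import java.util.ArrayList;
import java.util.List;

// Stitch a shared edge diced at two different rates with a strip of triangles.
// coarse and fine are the two rows of eye-space boundary points.
List<float[][]> stitchEdge(float[][] coarse, float[][] fine) {
    List<float[][]> tris = new ArrayList<>();
    int i = 0, j = 0;
    while (i < coarse.length - 1 || j < fine.length - 1) {
        boolean advanceFine;
        if (i == coarse.length - 1)      advanceFine = true;
        else if (j == fine.length - 1)   advanceFine = false;
        else // step along whichever side's next vertex is closer to the other side
            advanceFine = distSq(coarse[i], fine[j + 1]) < distSq(coarse[i + 1], fine[j]);
        if (advanceFine) {
            tris.add(new float[][] { coarse[i], fine[j], fine[j + 1] });
            j++;
        } else {
            tris.add(new float[][] { coarse[i], fine[j], coarse[i + 1] });
            i++;
        }
    }
    return tris;
}

float distSq(float[] a, float[] b) {
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}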
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Re: Reyes Renderer

Postby John VanSickle » Thu May 01, 2008 12:56 pm

Now that I think about it some more, the ray-tracing is optimized by generating an environment map for every object that has any reflectivity, and keeping in memory full-time only those objects that are within a certain distance of an object that reflects rays. Reflected rays that intersect nothing within that distance are then fed to the environment map, which for curved surfaces can have a very low resolution, and which can be built from the low-res version of the objects in the scene.
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: Reyes Renderer

Postby dcuny » Thu May 01, 2008 7:59 pm

sascha wrote:I see, but that's something completely different:

Yes, the problem is well documented. I just wasn't aware of how to do it. ;)

...but adapting the stitching algorithm should be straight forward (simply connect the triangles to the next closest point).

It's not that it's a hard problem, it's just one that I have a general idea of how to solve, but my eyes glazed over when looking at the specifics. :P I'll get around to comprehending the details at some point.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Reyes Renderer

Postby dcuny » Thu May 01, 2008 8:21 pm

John VanSickle wrote:Now that I think about it some more, the ray-tracing is optimized by generating an environment map for every object that has any reflectivity, and keeping in memory full-time only those objects that are within a certain distance of an object that reflects rays.

I don't think any REYES renderer I write is going to be production quality, so caching objects to disk might not be something I'll have to worry about. :P

While much of the geometry may be in memory for PR Renderman, it relies on disk to cache a lot of stuff too - specifically, textures. Production renderers have a truly massive amount of texture maps. One of the reasons Blue Sky went almost entirely with procedural textures was for the memory savings.

Still, I guess it's not a good idea to build limitations into the renderer, or you'll end up with something like Siren, which can do some amazing stuff (even raytraced shadows), but is ultimately limited to 64K. (Interestingly enough, although Siren is RenderMan compliant, it's not a REYES renderer).
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Reyes Renderer

Postby John VanSickle » Fri May 02, 2008 1:58 pm

dcuny wrote:Production renderers have a truly massive amount of texture maps.

Which led Pixar to whip up an optimization for that. They have taught their texture maps to only load as much of the texture as the current bucket will need, and allow anything left from previous buckets to be deleted from memory.
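A toy version of that idea - a tile cache that loads texture tiles on demand and evicts the least-recently-used ones once memory is full (this is just an illustration, nothing like Pixar's actual code):
Code: Select all
import java.util.LinkedHashMap;
import java.util.Map;

class TileCache {
    private final int maxTiles;
    private final LinkedHashMap<Long, float[]> tiles =
        new LinkedHashMap<Long, float[]>(16, 0.75f, true) {  // access order => LRU
            protected boolean removeEldestEntry(Map.Entry<Long, float[]> eldest) {
                return size() > maxTiles;
            }
        };

    TileCache(int maxTiles) { this.maxTiles = maxTiles; }

    // Fetch a tile, reading it from disk only the first time it's needed.
    float[] tile(int textureId, int tileX, int tileY) {
        long key = ((long) textureId << 40) | ((long) tileX << 20) | tileY;
        return tiles.computeIfAbsent(key, k -> loadTileFromDisk(textureId, tileX, tileY));
    }

    private float[] loadTileFromDisk(int textureId, int tileX, int tileY) {
        return new float[32 * 32 * 3]; // placeholder: read just this tile from the file
    }
}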

If they're still using C instead of something OO, their coders are probably half nuts on a regular basis.
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Motion Blur

Postby dcuny » Fri May 02, 2008 6:54 pm

I had a look at 3delight yesterday. The license is pretty liberal - you get one full copy free per machine. It's production quality, and seems to be really, really nice. So it's created a lot of disincentive to work on my own REYES renderer. But I'll probably slog through this project until I get a couple of images generated, just so I can prove that I finally understand how REYES works under the hood.

My current question is: how does motion blur work in REYES? That (and focal blur) add a lot to the realism in a scene, so it's sort of important. ;)

Here's what I've been able to find out so far:
'How PhotoRealistic RenderMan Works' wrote:1.2.5 Motion Blur and Depth of Field
Interestingly, very few changes need to be made to the basic Reyes rendering pipeline to support several of the most interesting and unique features of PRMan. One of the most often used advanced features is motion blur. Any primitive may be motion blurred either by a moving transformation or by a moving deformation (or both). In the former case, the primitive is defined as a single set of control points with multiple transformation matrices; in the latter case, the primitive actually contains multiple sets of control points. In either case, the moving primitive when diced becomes a moving grid, with positional data for the beginning and ending of the motion path, and eventually a set of moving micropolygons.

The only significant change to the main rendering pipeline necessary to support this type of motion is that bounding box computations must include the entire motion path of the object. The hidden-surface algorithm modifications necessary to handle motion blur are implemented using the stochastic sampling algorithm first described by Cook, et al. in 1984. The hidden surface algorithm’s point sample locations are each augmented with a unique sample time. As each micropolygon is sampled, it is translated along its motion path to the position required for each sample’s time.

PRMan only shades moving primitives at the start of their motion and only supports linear motion of primitives between their start and stop positions. This means that shaded micropolygons do not change color over time, and they leave constant-colored streaks across the image. This is incorrect, particularly with respect to lighting, as micropolygons will “drag” shadows or specular highlights around with them. In practice, this artifact is rarely noticed due to the fact that such objects are so blurry anyway.

Depth of field is handled in a very similar way. The specified lens parameters and the known focusing equations make it easy to determine how large the circle of confusion is for each primitive in the scene based on its depth. That value increases the bounding box for the primitive and for its micropolygons. Stochastically chosen lens positions are determined for each point sample, and the samples are appropriately jittered on the lens in order to determine which blurry micropolygons they see.

So if I want to motion blur an object, I need to supply the following parameters:

  • Change in position (xTranslate, yTranslate, zTranslate) of the primitive.
  • Number of samples to take along the path.

Given that, how do I go about actually implementing this?

The first thing I need to do is augment the bounding box, so in addition to putting point (x,y,z) inside, it also puts point (x+dx, y+dy, z+dz) inside. That's pretty straightforward. Nothing else changes, up to the point where the micropolygons are hidden. Instead of
Code: Select all
for( Micropolygon poly : microGrid ) {
    // render the poly
    poly.hide(...);
}
each micropolygon has to be moved along the path and resampled:
Code: Select all
for( Micropolygon poly : microGrid ) {
    // sample the poly over the shutter sample points
    for ( int i = 0; i < shutterSamples; i++ ) {
        // render the poly
        poly.hide(...);

       // move the poly along the path
        poly.translateBy( xTranslate/(float)shutterSamples, yTranslate/(float)shutterSamples, zTranslate/(float)shutterSamples );
    }
}

One problem with this is that you're going to end up with a solid object being rendered at multiple locations, something like:
[attachment: overlapping.png]

This isn't the nice blurry image that people expect to see (click for more detail):
[image: motion-blur example rendered in Siren]
The image above is rendered in Siren, which isn't even a REYES renderer. :roll:

Another possibility is to choose one sample over the motion path:
Code: Select all
for( Micropolygon poly : microGrid ) {
    // pick a random point along the path
    float xPath = xTranslate * Math.random();
    float yPath = yTranslate * Math.random();
    float zPath = zTranslate * Math.random();

    // move the poly to that point on the path
    poly.translateBy(xPath, yPath, zPath, ...);

    // render the poly
    poly.hide(...);
}


But this ignores the number of shutter samples. Multiple samples implies that each micropolygon is sampled at different places:
Code: Select all
for( Micropolygon poly : microGrid ) {
   // sample each micropolygon multiple times
   for (int i=0; i < shutterSamples; i++) {
        // pick a random point along the path
        float xPath = xTranslate * Math.random();
        float yPath = yTranslate * Math.random();
        float zPath = zTranslate * Math.random();

        // move the poly to that point on the path
        poly.translateBy(xPath, yPath, zPath, ...);

        // render the poly
        poly.hide(...);
    }
}


The other possibility would be to stratify the samples, so they would still be random, but each sample would be confined to its own slice of the shutter interval:
Code: Select all
for( Micropolygon poly : microGrid ) {
   // amount of distance that can be travelled per shutter sample
   float xPerShutter = xTranslate/(float)shutterSamples;
   float yPerShutter = yTranslate/(float)shutterSamples;
   float zPerShutter = zTranslate/(float)shutterSamples;

   // sample each micropolygon multiple times
   for (int i=0; i < shutterSamples; i++) {
        // pick a random point within this shutter's distance
        float xPath = xPerShutter * Math.random();
        float yPath = yPerShutter * Math.random();
        float zPath = zPerShutter * Math.random();

        // move the poly to that point on the path
        poly.translateBy(xPath, yPath, zPath, ...);

        // render the poly
        poly.hide(...);

        // move to the start of the next shutter position
        poly.translateBy(xPerShutter-xPath, yPerShutter-yPath, zPerShutter-zPath, ...);
    }
}


Thoughts? :?

And yes, there would be a test before the last translateBy in the loop to prevent it from running on the final shutter sample.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Motion Blur

Postby John VanSickle » Sun May 04, 2008 5:01 am

dcuny wrote:My current question is: how does motion blur work in REYES. That (and focal blur) add a lot to the realism in a scene, so it's sort of important. ;)

For motion blur, REYES requires a start-of-frame and end-of-frame transform for each object and the camera. Inverting the camera transforms and applying them to the object transforms gives start-of-frame and end-of-frame transforms for the objects in camera space.

In my own version, each sample gets a random value from 0 to almost 1, which specifies the time during the frame that the given sample is taken. This is used to interpolate the start-of-frame and end-of-frame transform of the object to yield a specific transform for the sample. Actually, it's already transformed for start-of-frame, and what I do after that is to apply an additional amount equal to the difference between the two transforms, scaled by the random time value. REYES, in order to make rendering faster, treats the SOF and EOF transforms as if they can be linearly interpolated. Since it's blurred, it's hard to notice any inaccuracies that may lurk in this method.
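A sketch of that per-sample interpolation, assuming plain 4x4 matrices (blending the matrix entries directly is only an approximation for rotations, but as noted above the blur hides it):
Code: Select all
// Blend start-of-frame and end-of-frame transforms by a per-sample time t in [0, 1).
float[][] transformAtTime(float[][] sof, float[][] eof, float t) {
    float[][] m = new float[4][4];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            m[r][c] = sof[r][c] + t * (eof[r][c] - sof[r][c]);
    return m;
}

// Per sample:
//   float t = random.nextFloat();                 // this sample's moment in the frame
//   float[][] m = transformAtTime(sof, eof, t);   // then transform the micropolygon by m
//                                                 // before doing the coverage test.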
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: Motion Blur

Postby dcuny » Mon May 05, 2008 7:21 pm

John VanSickle wrote:For motion blur, REYES requires a start-of-frame and end-of-frame transform for each object and the camera. Inverting the camera transforms, and applying it to the object transforms, gives start-of-frame and end-of-frame transforms for the objects in camera space.

Yes, my mistake: I left the camera transform out of the equation.

In my own version, each sample gets a random value from 0 to almost 1, which specifies the time during the frame that the given sample is taken.

Hrm... I'd been thinking about translating the micropolygon, but you've described that you're translating the sample itself. That would give better sampling, but don't you now have to check to make sure the sample is still on screen?


Since it's blurred, it's hard to notice any inaccuracies that may lurk in this method.

I think I can live with that. :)

I've spent a couple of evenings hacking away at my own REYES renderer, but it's not to the point where I can generate any images with it yet. I'm still missing a lot of basic stuff, and I've only implemented triangles, which are pretty much the worst objects for REYES to deal with.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Motion Blur

Postby John VanSickle » Thu May 08, 2008 2:18 am

dcuny wrote:Hrm... I'd been thinking about translating the micropolygon, but you've described that you're translating the sample itself. That would give better sampling, but don't you now have to check to make sure the sample is still on screen?


There are several transforms: one takes the object from world space to camera space, another from camera space to screen space, and another from screen space to sample space. This means that some samples, with their jittering and such, will point to areas that are off-screen without the jittering, but that's actually necessary for the features to work properly. What's important is that the object and patch culling should use buckets that take the various features into account.
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: Reyes Renderer

Postby sascha » Fri May 09, 2008 1:11 pm

I'm not really sure how the PrMan motion-blur implementation works, but here's how a naive REYES implementation could do it:
* Generate the micropolygon grid and shade each micropolygon.
* Rasterize into an accumulation buffer, transforming the micropolygon grid according to the motion-vectors (without re-shading!)

This should work, but has the following disadvantages:
* Shading is done only once (very fast!), but this will cause artifacts (e.g. highlights dragged with the blurred object)
* All the motion-blur strokes are linear (could be problematic e.g. for a rotating propeller, etc.)
According to Pixar, both problems can be safely ignored.

The approach described above has another disadvantage: The accumulated image will still exhibit distinct frames (i.e. samples in time), e.g. a sample rate of 8 will result in 8 distinct samples accumulated into one frame. Production renderers IMHO use some kind of stochastic sampling to get more "blurry" strokes. It could work by using different, random time samples when rasterizing each micropolygon.
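In code, the per-sample time idea might look roughly like this (a sketch only - samplesOverlapping, translatedBy and the sample's time field are made-up names, and motion is assumed to be linear):
Code: Select all
// Every sample keeps its own random time in [0, 1); each micropolygon is moved
// to that time before being tested against that one sample.
for (Micropolygon poly : microGrid) {
    for (Sample s : samplesOverlapping(poly.movingBounds())) {
        Micropolygon moved = poly.translatedBy(
            xTranslate * s.time, yTranslate * s.time, zTranslate * s.time);
        moved.hide(s); // usual depth test / colour update against this single sample
    }
}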

I don't know how to combine this approach with focal blur though, so I'd recommend another Google search...
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria
