SDS Raytracer


Re: SDS Raytracer

Postby John VanSickle » Wed Apr 23, 2008 12:50 pm

sascha wrote:
you can approximate the limit surface using Bezier bicubic patches...

...as long as all of the vertices are regular (i.e. of valence 4).

The method in the paper allows each patch to have a maximum of one irregular vertex.

The technique, however, is much more useful for passing an SDS to OpenGL (which is the reason for my interest in that paper).
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: SDS Raytracer

Postby sascha » Wed Apr 23, 2008 2:41 pm

I assume that, for the purposes of being able to dynamically edit a mesh, the half-winged-edge structure is superior.

Exactly. I wouldn't use your structure in a modeler, but if the mesh is static I don't see any disadvantages. But...
One thing to keep in mind (at least when coding in Java) is that you'll probably have to deal with millions of "patches" (a face plus its one-ring neighbors) to subdivide. If you allocate new objects for each subdivision step you have to create millions of new objects - the memory needs to be allocated, not to mention the high load on the garbage collector.

The dicer I've been using used pre-allocated arrays to do its job, but it had a fairly low maximum subdivision level and always diced an entire patch (no recursion).

For your application (ray-SDS intersection tests) I think you'd need some recursive algorithm. I don't know how yet, but you must avoid the new keyword in the main subdivision loop!
Any home-brewed memory management is likely to be inferior to the built-in garbage collector, so don't do that. The only ways I can think of are to either cleverly use some preallocated arrays, or to use only locally declared primitives, which are allocated on the stack and passed by value from one iteration to the next.
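
Just to illustrate the memory pattern I mean, here's a rough Java sketch (all names are made up, and the stencil is plain midpoint averaging rather than the real Catmull-Clark rules - the point is only that the inner loop never touches the new keyword):

    // Two scratch grids, allocated once and reused for every patch;
    // the subdivision loop itself only deals in primitives.
    final class GridRefiner {
        private static final int MAX_N = 65; // enough for our deepest level

        private final float[][][] a = new float[MAX_N][MAX_N][3];
        private final float[][][] b = new float[MAX_N][MAX_N][3];

        /** Refines an n x n control grid 'levels' times and returns the
         *  scratch buffer holding the result. */
        float[][][] refine(float[][][] src, int n, int levels) {
            copy(src, a, n);
            float[][][] cur = a, nxt = b;
            for (int l = 0; l < levels; l++) {
                int m = n * 2 - 1; // size of the refined grid
                for (int i = 0; i < m; i++)
                    for (int j = 0; j < m; j++)
                        for (int k = 0; k < 3; k++) {
                            float s = 0f; int c = 0;
                            // average the 1, 2 or 4 coarse-grid neighbors
                            for (int ci = i / 2; ci <= (i + 1) / 2; ci++)
                                for (int cj = j / 2; cj <= (j + 1) / 2; cj++) {
                                    s += cur[ci][cj][k]; c++;
                                }
                            nxt[i][j][k] = s / c;
                        }
                float[][][] t = cur; cur = nxt; nxt = t; // ping-pong buffers
                n = m;
            }
            return cur;
        }

        private static void copy(float[][][] src, float[][][] dst, int n) {
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    System.arraycopy(src[i][j], 0, dst[i][j], 0, 3);
        }
    }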

The method in the paper allows each patch to have a maximum of one irregular vertex.

I didn't realize that. See my previous message, however.
The technique, however, is much more useful for passing an SDS to OpenGL (which is the reason for my interest in that paper).

Ok, I see. Are you referring to GLU evaluators? I don't know much about them, but my impression was that they'd be implemented in software and thus don't run on the GPU - could be wrong though.
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Re: SDS Raytracer

Postby John VanSickle » Thu Apr 24, 2008 12:48 pm

sascha wrote:
The technique, however, is much more useful for passing an SDS to OpenGL (which is the reason for my interest in that paper).

Ok, I see. Are you referring to GLU evaluators? I don't know much about them, but my impression was that they'd be implemented in software and thus don't run on the GPU - could be wrong though.

Well, I dunno if the evaluation is done in hardware or software (some graphics cards might support it), but I can tell you that the evaluation won't be done by the application, which will save me coding time.

I used Beziers before with v1.6 of my modeler; I subdivided twice, shoved the vertices through a matrix to calculate the Bezier control points, and passed the control points to OpenGL. The control points for the triangular faces also had to be modified to take into account that OpenGL (as far as I could discern) does not support triangular Bezier patches.
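
For the regular patches that conversion matrix is just the standard change of basis from the uniform cubic B-spline to the Bezier form, applied once per row and once per column of the 4x4 control net. A rough Java sketch of what I mean (my own names, nothing from the paper):

    // A uniform cubic B-spline segment p0..p3 equals the Bezier segment
    //   b0=(p0+4p1+p2)/6, b1=(4p1+2p2)/6, b2=(2p1+4p2)/6, b3=(p1+4p2+p3)/6
    final class BsplineToBezier {
        private static void convertRow(double[][] p, double[][] b) {
            for (int k = 0; k < 3; k++) {
                b[0][k] = (p[0][k] + 4 * p[1][k] + p[2][k]) / 6;
                b[1][k] = (4 * p[1][k] + 2 * p[2][k]) / 6;
                b[2][k] = (2 * p[1][k] + 4 * p[2][k]) / 6;
                b[3][k] = (p[1][k] + 4 * p[2][k] + p[3][k]) / 6;
            }
        }

        /** g is the 4x4x3 B-spline control net around a regular face;
         *  the result is the 4x4x3 Bezier net of the same patch. */
        static double[][][] convert(double[][][] g) {
            double[][][] tmp = new double[4][4][3];
            double[][][] out = new double[4][4][3];
            for (int i = 0; i < 4; i++)
                convertRow(g[i], tmp[i]);              // u direction
            for (int j = 0; j < 4; j++) {              // v direction
                double[][] col = new double[4][3], res = new double[4][3];
                for (int i = 0; i < 4; i++) col[i] = tmp[i][j];
                convertRow(col, res);
                for (int i = 0; i < 4; i++) out[i][j] = res[i];
            }
            return out;
        }
    }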
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: SDS Raytracer

Postby John VanSickle » Thu Apr 24, 2008 1:06 pm

dcuny wrote:They also note that after subdivision, irregular patches contain at most a single extraordinary vertex, so there's really only a single special case to deal with.

This is slightly off, because the paper assumes that all faces are quadrilaterals. The Catmull-Clark scheme allows polygons with any number of sides.

If the hull contains a face with other than four sides, then the face vertex created in the first subdivision step will be an extraordinary vertex. If any of the vertices of the original face were also extraordinary, then the first level of subdivision will yield faces that have two extraordinary vertices (at opposing corners). Verifying whether this is so can be done on a case-by-case basis, so it may be possible to adaptively subdivide, or (in the worst case) subdivide the whole mesh again.

After the second level of subdivision, all faces have at most one extraordinary vertex.
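
The per-face test itself is trivial; in Java it would look something like this (assuming a hypothetical Face class that knows the valence of each of its corners):

    /** True if f still needs subdividing before the one-irregular-vertex
     *  machinery can be applied to it. */
    static boolean needsAnotherSubdivision(Face f) {
        if (f.vertexCount() != 4) return true;     // not yet a quad
        int extraordinary = 0;
        for (int i = 0; i < 4; i++)
            if (f.vertexValence(i) != 4) extraordinary++;
        return extraordinary > 1;  // e.g. two at opposing corners
    }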
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: SDS Raytracer

Postby dcuny » Mon Apr 28, 2008 7:26 pm

From what I've read, there are only two practical approaches to raytracing SDS:

  • Converting to a "smooth" triangle mesh, or
  • Converting to Bezier bicubic patches.
I haven't found anyone who's taken a "pure" approach to SDS raytracing, so I'm going to assume that it's just not practical. I think Sascha is right: SDS is very efficient with a REYES-style renderer. So using a scanline REYES with trace() is the way to go.

So I'm going to keep learning about SDS, but drop the idea of writing a "pure" SDS raytracer. :?
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: SDS Raytracer

Postby sascha » Tue Apr 29, 2008 11:28 am

So using a scanline REYES with trace() is the way to go.

If you just use the raytracer for secondary rays, a pre-tessellated triangle mesh (e.g. at subdiv level 2 or 3) might be sufficient, so you don't have to struggle with adaptive subdivision at render time. Let's recap what the rays could be used for:

  • Primary rays: No - REYES is more efficient and much easier to implement.
  • Shadow rays: Maybe - shadow maps have their own problems - I don't know.
  • Global illumination: No - the pointcloud ambient occlusion implementation of e.g. 3Delight produces better results (no noise) and is much faster than raytraced GI.
  • Reflections: Definitely yes. Environment maps might be useful in some cases, but raytracing is simpler to use and produces better results.
  • Refractions: Yes, same as reflections.
  • Photon mapping: Too slow. Fake caustics look good too - I don't know if caustics can be faked with REYES though (ideas?).

So, if we ignore raytraced shadows for a moment, what remains are reflection and refraction rays. Typically the object that reflects or refracts will be convex, and the objects visible in the reflection will appear quite small, so there's no need for high subdiv levels. There's still a need for du/dv information for shader antialiasing (or MIP-mapping), so I keep reiterating it: ray differentials are imperative.
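
To make the MIP-mapping connection concrete: once you track ray differentials (Igehy-style), picking the MIP level at a hit point is almost a one-liner. A Java sketch (names made up):

    /** dudx etc. are the texture-coordinate derivatives of the hit
     *  point with respect to a one-pixel step on screen, as delivered
     *  by the ray differentials. */
    static double mipLevel(double dudx, double dvdx,
                           double dudy, double dvdy,
                           int texWidth, int texHeight) {
        // footprint of one pixel in texel units, along both screen axes
        double fx = Math.hypot(dudx * texWidth, dvdx * texHeight);
        double fy = Math.hypot(dudy * texWidth, dvdy * texHeight);
        // level 0 is the full-resolution texture
        return Math.max(0.0, Math.log(Math.max(fx, fy)) / Math.log(2.0));
    }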


Speaking of open source REYES renderers with support for raytracing - Pixie seems to support pointcloud-based AO and raytracing; I think it's worth a closer look. My recollection is that it was slow, crashed often, and had problems compiling some shaders (all compared to 3Delight), but things might have changed.
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Re: SDS Raytracer

Postby John VanSickle » Tue Apr 29, 2008 2:10 pm

sascha wrote:For your application (ray SDS intersection tests) I think you'd need some recursive algorithm.

With Catmull-Clark, subdivide until all faces are quads and have no more than one extraordinary vertex. This requires two subdivisions at most.

Then, prepare a static data structure that can hold the final subdivision results for a single patch (one face and its first-degree neighbors). The structure will basically be a two-dimensional array of geometric coordinates and texture-mapping coordinates. Use this data structure to subdivide each patch when necessary. You can establish a cache of subdivided patches to speed up testing of patches that are intersected multiple times in a render.
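
In Java the cache could be as simple as this (a sketch; Patch and diceInto() are stand-ins for whatever the real mesh classes turn out to be):

    final class PatchCache {
        static final int N = 33; // samples per side of a diced patch

        // access-ordered map doubling as a small LRU cache
        private final java.util.LinkedHashMap<Patch, float[][][]> cache =
            new java.util.LinkedHashMap<Patch, float[][][]>(64, 0.75f, true) {
                @Override protected boolean removeEldestEntry(
                        java.util.Map.Entry<Patch, float[][][]> eldest) {
                    return size() > 256; // bound the memory use
                }
            };

        /** Returns the diced grid for p (grid[i][j] = {x,y,z,u,v}),
         *  computing it at most once while it stays cached. */
        float[][][] grid(Patch p) {
            float[][][] g = cache.get(p);
            if (g == null) {
                g = new float[N][N][5];
                p.diceInto(g);   // the actual subdivision happens here
                cache.put(p, g);
            }
            return g;
        }
    }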
John VanSickle
 
Posts: 189
Joined: Sat Feb 16, 2008 2:17 am

Re: SDS Raytracer

Postby dcuny » Tue Apr 29, 2008 9:00 pm

sascha wrote:If you just use the raytracer for secondary rays, a pre-tessellated triangle mesh (e.g. at subdiv level 2 or 3) might be sufficient, so you don't have to struggle with adaptive subdivision at render time.
I agree. I'm having another look at the REYES architecture.

To basically echo what you've said:
  • Primary rays: REYES better be faster, since that's its primary task.
  • Shadow rays: Raytraced shadows look much better.
  • Global Illumination: Unfortunately, it's still not doable for animation.
  • Ambient Occlusion: Cheaper and faster than GI. Approximated ambient occlusion is faster than raytraced AO and has fewer artifacts, although raytracing is simpler to implement.
  • Reflections: Definitely yes.
  • Refractions: Yes, same as reflections.
  • Photon mapping: Does anyone use this for animation anymore?
Two other approaches to GI are:
  • Instant Radiosity: This shoots out a single photon from a light source, marks where it lands after travelling a predecided maximum distance, and renders the scene lit by that single photon. Repeat this many times and you've got an approximation of the global illumination in the scene (see the sketch after this list). If you've got a hardware renderer, it can be very fast. It can be implemented in a raytracer as well - Sunflow has it as an option, and it's quite a bit faster than other "real" GI solutions. There's a nice explanation of it here, with some pictures to clarify things.
  • Lightcuts: This is a relatively new optimization algorithm. Essentially, it evaluates multiple light paths to determine how much they will contribute to the scene. If they are below a certain threshold, they are ignored. The result is a much faster rendering. It turns out that you can also evaluate multiple dimensions, so in addition to light paths, motion blur, participating media, depth of field and spatial anti-aliasing can all be handled.
These two methods can be combined. For example, one "Summer of Code" project is the addition of lightcuts to Blender.
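
Here's roughly what the instant radiosity loop looks like (a Java sketch; Scene, Light and the rest are made-up names, not from any particular renderer):

    /** Averages many renders of the scene, each lit by one virtual
     *  point light (VPL) dropped where a photon from the light lands. */
    static Image instantRadiosity(Scene scene, Light light, int passes) {
        Image sum = new Image(scene.width(), scene.height());
        java.util.Random rng = new java.util.Random(42);
        for (int p = 0; p < passes; p++) {
            Ray photon = light.emitRandomRay(rng);
            Hit hit = scene.trace(photon);
            if (hit == null) continue;      // photon left the scene
            PointLight vpl = new PointLight(hit.position(), light.power());
            sum.accumulate(scene.renderWith(vpl));
        }
        sum.scale(1.0 / passes);            // average the single-VPL images
        return sum;
    }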

So I keep reiterating it: ray differentials are imperative.
Yeah, Pixar agrees with you.
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: SDS Raytracer

Postby sascha » Wed Apr 30, 2008 9:58 am

Primary rays: REYES better be faster, since that's its primary task.

I disagree. While REYES might well be faster, it also has several other advantages:
* Smooth surfaces in REYES are rendered smooth, with sub-pixel precision, period.
* It can deal with huge amounts of geometry. Unlike a raytracer, it doesn't have to keep all geometry in memory; it can render one primitive after the other, just like any scanline renderer.
* du/dv comes for free (it's a scanline renderer), which can be used for advanced shaders (antialiasing, toon-shading, etc.)
* Antialiasing is cheap because it can be split into geometry antialiasing (removing jaggy edges) and shader antialiasing (removing Moiré-like artifacts from textures). You basically shade each pixel (i.e. micropolygon) only once, but then supersample the micropolygon grid during rasterization to get rid of the jaggy edges - see the sketch after this list. With traditional supersampling you'd have to run the shader multiple times per pixel.
* The micropolygon grid REYES produces as a byproduct can be used for other things (like an ambient occlusion pointcloud).
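
The shade-once/supersample split in (made-up) Java, just to show where the cost goes - the shader runs once per micropolygon, while the many subpixel samples only do cheap coverage tests:

    static void hide(Grid grid, Framebuffer fb, int samplesPerPixel,
                     java.util.Random rng) {
        grid.shadeOnce();           // expensive: one shader run per micropoly
        for (int y = 0; y < fb.height(); y++)
            for (int x = 0; x < fb.width(); x++)
                for (int s = 0; s < samplesPerPixel; s++) {
                    double sx = x + rng.nextDouble();  // jittered subpixel
                    double sy = y + rng.nextDouble();
                    Micropolygon mp = grid.find(sx, sy); // coverage only
                    if (mp != null)
                        fb.addSample(x, y, mp.color(), mp.z());
                }
    }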

So, raytracing can be faster than REYES, but raytracing an image of the same quality would take much longer.

Photon mapping: Does anyone use this for animation anymore?

I don't know. I'm pretty sure that the caustics in Nemo are fake - I guess it's just a clever shader that applies them in no time.
Sometimes you spot caustics near water glasses, etc. I doubt that they're photon-mapped - fake caustics might look just as good.

Yeah, Pixar agrees with you.

Or was it the other way round, who knows? :-)
The thing is that without ray differentials, raytraced images with complex textures can look much worse than images rendered with OpenGL in realtime using MIP mapping (unless the raytracer uses some insane amount of supersampling).
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Re: SDS Raytracer

Postby dcuny » Wed Apr 30, 2008 10:23 am

sascha wrote:It can deal with huge amounts of geometry. Unlike a raytracer, it doesn't have to keep all geometry in memory; it can render one primitive after the other, just like any scanline renderer.

There's a tradeoff between the "classic" version of RenderMan and the "enhanced" one.

In the "classic" version, you've got to keep the entire screen in memory, including all samples (16 per pixel, on average) and all transparency information, until the last primitive has been streamed in. That's a huge amount of information, so RenderMan generally doesn't do that.

Instead, the "enhanced" version divides the screen into buckets. Geometry that's not handled by the current bucket is passed on to the next bucket it intersects. You only discard a chunk of geometry once you've determined that there are no buckets that could bound it. So you can potentially load in a lot of geometry - most of the scene, in fact.

The reason the "enhanced" version doesn't choke on the huge database is that most of the primitives are stored as "high level" descriptions of the objects, rather than as highly tessellated triangle meshes. Splitting and dicing happen on demand, and occlusion testing can prevent much of it from being processed in the first place. But, in fact, a good chunk of the scene is held in memory.
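
The bucket bookkeeping is something like this in Java (a sketch; Prim, Bucket and their methods are invented):

    static void renderBuckets(java.util.List<Prim> prims, Bucket[] buckets) {
        // every primitive waits in the first bucket its bbox touches
        for (Prim p : prims) {
            int first = nextBucketTouching(p, buckets, 0);
            if (first >= 0) buckets[first].queue(p);
        }
        for (int i = 0; i < buckets.length; i++) {
            for (Prim p : buckets[i].drain()) {
                renderIntoBucket(p, buckets[i]); // split/dice/shade/hide
                int next = nextBucketTouching(p, buckets, i + 1);
                if (next >= 0) buckets[next].queue(p); // still needed later
                // else: no remaining bucket bounds p, so it can be freed
            }
            buckets[i].filterAndWritePixels();   // this bucket is finished
        }
    }

    static int nextBucketTouching(Prim p, Bucket[] buckets, int from) {
        for (int i = from; i < buckets.length; i++)
            if (buckets[i].bounds().overlaps(p.bounds())) return i;
        return -1;
    }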

I don't know. I'm pretty sure that the caustics in Nemo are fake - I guess it's just a clever shader that applies them in no time.

That's correct.

Sometimes you spot caustics near water-glasses, etc. I doubt that they're photon-mapped, fake caustics might look just as good.

Yes, there's little point in actually doing something that can be faked just as easily, and rendered many, many times faster.

The thing is that without ray differentials, raytraced images with complex textures can look much worse than images rendered with OpenGL in realtime using MIP mapping (unless the raytracer uses some insane amount of supersampling).

It's on my "To Do" list. ;)
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am
