Builtin Renderer

Postby dcuny » Mon Jul 21, 2008 9:26 pm

I've been working on my REYES renderer, mostly with the goal of understanding how the REYES algorithm works. But I've also been wondering whether it might be a good match for an internal renderer for JPatch. More and more, I suspect it won't be. Of course, that won't stop me from using it, which is one of the reasons I'm trying to make it compatible enough with RenderMan to parse the RIB files that JPatch can generate.

I also originally started working on Inyo to convince Sascha to add a built-in renderer to JPatch. I had been looking at Sunflow, because it generated really pretty pictures and was written in Java, but at the time development on Sunflow had stopped. Now that Sunflow is up and running again, there seems to be little point in continuing work on Inyo.

Anyway, here's a list I put together in thinking about various renderers.
  • It's got to be fast, because it's being used for animation: 3delight, Aqsis, jrMan.
  • It should support animation features such as motion blur: 3delight, Aqsis, Pixie.
  • It should be written in Java, because it avoids cross-platform issues, and could be integrated into JPatch: jrMan, Sunflow.
  • It should have no licensing issues that would prevent it from being bundled with JPatch: Aqsis, jrMan, Pixie, Sunflow.
  • It should be RenderMan compliant, and work well with SDS: Aqsis, jrMan, Pixie.
  • It should generate pretty pictures: 3delight, Aqsis, Pixie, Sunflow.
I've left a number of renderers off the list, probably unfairly:
  • POV-Ray: while it can be used for animation (see Sascha's "The Imposter"), it's not terribly fast.
  • Yafray and Yaf(a)ray: I've seen both of these used in animations, but again, they're raytracers.
  • Indigo and Kerkythea: AFAIK, neither are practical for animation.
  • Art of Illusion: At one point, there was talk of having it work as a stand-alone renderer, but I never followed up on that. Like the other renderers here, it's a raytracer and a bit slow. (There's a z-buffer being worked on, but I don't know much about it.)
  • Blender: I've actually written a Python script that allows Blender to be called from the command line. The problem is that a lot of the best features, such as ambient occlusion, had no API reference documentation. It was also a bit clunky. But it's a possible workflow.
Thoughts? Sascha, did you have anything in mind?

Re: Builtin Renderer

Postby sascha » Tue Jul 22, 2008 9:33 am

dcuny wrote:POV-Ray: while it can be used for animation (see Sascha's "The Imposter"), it's not terribly fast.
True, but more importantly, it doesn't support some "mandatory" features, such as (adaptive) subdivision, texture-reference space, subsurface-scattering, HDRI output, etc.
dcuny wrote:Thoughts? Sascha, did you have anything in mind?

Right now I need to get the modeler and animator done. For rendering, my primary focus is on RenderMan and REYES rendering; for the time being I'll use 3Delight as the "reference" renderer. The 1.0 version of JPatch should support both 3Delight and Aqsis out of the box (you'll need to provide the paths to the renderer and shader-compiler binaries, and JPatch should do the rest for you). I haven't decided how to support different shaders, but for the first versions I think you'll have to live with either very basic materials or external shaders (hand-written or created with tools like Shader-Man). What I'd like to automate are things like shadow- and environment-map generation, point-cloud generation for ambient occlusion, and rendering different layers and/or channels for later compositing.
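
Under the hood, the "JPatch does the rest" part shouldn't be much more than a ProcessBuilder wrapper. A minimal sketch, assuming the user has entered the two binary paths in the settings (the class and method names below are made up for illustration, not existing JPatch API):

Code:
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

/** Sketch: drive an external RenderMan renderer and shader compiler.
 *  Class and method names are hypothetical, not actual JPatch API. */
public class ExternalRenderer {
    private final File rendererBin;       // e.g. the aqsis or renderdl binary
    private final File shaderCompilerBin; // e.g. the aqsl or shaderdl binary

    public ExternalRenderer(File rendererBin, File shaderCompilerBin) {
        this.rendererBin = rendererBin;
        this.shaderCompilerBin = shaderCompilerBin;
    }

    /** Compile a shader, then render the exported RIB file. */
    public void render(File shaderSource, File ribFile) throws IOException, InterruptedException {
        run(shaderCompilerBin.getPath(), shaderSource.getPath());
        run(rendererBin.getPath(), ribFile.getPath());
    }

    private void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        for (String line; (line = r.readLine()) != null; )
            System.out.println(line);     // forward renderer messages to the console/log
        if (p.waitFor() != 0)
            throw new IOException("command failed: " + cmd[0]);
    }
}
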
I still think that a hybrid REYES/raytracing approach is best for animation, and if this proves to be too difficult to implement, I'd fall back to REYES. So I think that your REYES renderer is very interesting. Here's an (incomplete) list of features I'd consider important:
* Splitting and dicing of SDS (implemented?)
* High-quality rendering (e.g. no holes or cracks in the surfaces)
* Focal blur (implemented)
* Motion blur (implemented)
* Shader support (partially implemented)
* Point-cloud ambient occlusion
* Subsurface scattering
* Environment- and shadow-map support
* Image-mapping (texture-mapping) with MIP mapping
* HDRI input/output (ideally OpenEXR support)
* Basic RenderMan compatibility (should support passing arbitrary parameters to shaders). The only primitive absolutely required is SubdivisionMesh.
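
To make the last point concrete, here's roughly the RIB I'd expect JPatch to emit and the renderer to accept. This is only a sketch; the RibWriter class is made up, but the emitted SubdivisionMesh and Surface requests follow the RI spec:

Code:
import java.io.PrintWriter;

/** Sketch of a RIB emitter for the two requirements above: a Catmull-Clark
 *  SubdivisionMesh and a Surface call with arbitrary token/value pairs.
 *  RibWriter itself is hypothetical; the emitted requests follow the RI spec. */
public class RibWriter {
    private final PrintWriter out;

    public RibWriter(PrintWriter out) { this.out = out; }

    /** Surface "shader" "token" [value] ... -- unknown tokens are passed through untouched. */
    public void surface(String shader, Object... tokenValuePairs) {
        StringBuilder sb = new StringBuilder("Surface \"").append(shader).append('"');
        for (int i = 0; i < tokenValuePairs.length; i += 2)
            sb.append(" \"").append(tokenValuePairs[i]).append("\" ").append(array(tokenValuePairs[i + 1]));
        out.println(sb);
    }

    /** SubdivisionMesh "catmull-clark" [nverts] [verts] [tags] [nargs] [intargs] [floatargs] "P" [points] */
    public void subdivisionMesh(int[] nverts, int[] verts, float[] points) {
        out.println("SubdivisionMesh \"catmull-clark\" " + array(nverts) + " " + array(verts)
                + " [\"interpolateboundary\"] [0 0] [] [] \"P\" " + array(points));
    }

    private String array(Object value) {
        if (value instanceof String) return "[\"" + value + "\"]";
        StringBuilder sb = new StringBuilder("[");
        if (value instanceof int[])        for (int i : (int[]) value)     sb.append(i).append(' ');
        else if (value instanceof float[]) for (float f : (float[]) value) sb.append(f).append(' ');
        else sb.append(value).append(' ');   // single number
        return sb.append(']').toString();
    }
}

A call like writer.surface("plastic", "Kd", 0.8f, "float roughness", 0.1f) would then come out as Surface "plastic" "Kd" [0.8] "float roughness" [0.1], without the exporter having to know anything about the shader itself.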

I agree that having a built-in renderer greatly adds to the "user experience" - you don't have to download, install and set up a 3rd-party renderer and can start using JPatch immediately. On the other hand, good integration with 3rd-party products should make the renderer setup as simple as possible. JPatch could e.g. test whether a supported renderer is installed in its default location, so all you'd have to do is install the renderer.
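
The detection itself could be as simple as probing a handful of well-known install paths. A sketch (the paths below are only examples; the real defaults differ per platform and renderer version):

Code:
import java.io.File;

/** Sketch: probe a few typical install locations for a supported renderer.
 *  The candidate paths are examples only, not a definitive list. */
public class RendererLocator {
    private static final String[] CANDIDATES = {
        "/usr/bin/aqsis",                          // typical Linux package install
        "/usr/local/bin/renderdl",                 // 3Delight on Unix
        "C:\\Program Files\\Aqsis\\bin\\aqsis.exe" // a typical Windows location
    };

    /** Returns the first renderer binary found, or null if none is installed. */
    public static File findRenderer() {
        for (String path : CANDIDATES) {
            File f = new File(path);
            if (f.isFile()) return f;
        }
        return null;
    }
}
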
I think the main advantage of a built-in renderer from the development perspective is that it'd be a perfect playground for experiments. If the design allows it, one could e.g. replace the shader engine with OpenGL shaders, or experiment with "special effects" like SSS or global illumination.

Re: Builtin Renderer

Postby dcuny » Tue Jul 22, 2008 11:40 am

sascha wrote:Right now I need to get the modeler and animator done.

Well, yes. :mrgreen:

sascha wrote:What I'd like to automate are things like shadow- and environment-map generation, point-cloud generation for ambient occlusion, and rendering different layers and/or channels for later compositing.

That would be great. One of the complaints with RenderMan is that it's a pain to set up the shadow maps.
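
For what it's worth, I think the shadow pass is mechanical enough to automate. Here's a sketch of the two RIB passes JPatch could write per shadow-casting light; the Display/MakeShadow requests are standard RI, but the "shadowname" parameter belongs to shadow-aware light shaders like shadowspot, so that part depends on which light shaders end up being used:

Code:
import java.io.PrintWriter;

/** Sketch of an automated shadow-map setup: a depth pass rendered from the light,
 *  converted with MakeShadow, then referenced from the light source in the beauty pass.
 *  The writer class is hypothetical; "shadowname" is a light-shader parameter, not core RI. */
public class ShadowMapPass {

    /** Pass 1: render scene depth from the light's point of view and build the shadow map. */
    public static void writeDepthPass(PrintWriter rib, String light, String sceneArchive) {
        rib.println("FrameBegin 1");
        rib.println("  Display \"" + light + ".z\" \"zfile\" \"z\"");
        rib.println("  Format 1024 1024 1");
        rib.println("  Projection \"perspective\" \"fov\" [90]");
        // ...camera transform placing the eye at the light position goes here...
        rib.println("  WorldBegin");
        rib.println("    ReadArchive \"" + sceneArchive + "\"");
        rib.println("  WorldEnd");
        rib.println("FrameEnd");
        rib.println("MakeShadow \"" + light + ".z\" \"" + light + ".shad\"");
    }

    /** Pass 2 (beauty): reference the generated map from a shadow-aware light shader. */
    public static void writeLight(PrintWriter rib, String light, int handle) {
        rib.println("LightSource \"shadowspot\" " + handle
                + " \"string shadowname\" [\"" + light + ".shad\"]");
    }
}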

I was looking at Stop Motion: Craft Skills for Model Animation at the bookstore today. Although it deals with "real" 3D animation, there's a lot of compositing used. It's also a good way to reduce render times.

sascha wrote:I still think that a hybrid REYES/raytracing approach is best for animation, and if this proves to be too difficult to implement, I'd fall back to REYES.
I'm planning, at a minimum, to add support for raycast shadows to my renderer. If that's not too terribly slow, I'll consider other features. There are two issues here - speed and shading. The impression I get is that raytracing in REYES is much slower than the z-buffer algorithm. Plus, there are the complexities of trying to integrate raytracing and shading.
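
At least the per-sample cost is easy to pin down: it's one occlusion test per candidate triangle (or micropolygon), times the number of shadow rays. A minimal Moller-Trumbore style test, just to show the size of the inner loop (array-based vectors for brevity):

Code:
/** Sketch: shadow-ray occlusion test against one triangle (Moller-Trumbore).
 *  Returns true if the ray from 'orig' along 'dir' hits the triangle
 *  before reaching the light (0 < t < maxT). */
public final class ShadowRay {
    private static final double EPS = 1e-9;

    public static boolean occluded(double[] orig, double[] dir, double maxT,
                                   double[] a, double[] b, double[] c) {
        double[] e1 = sub(b, a), e2 = sub(c, a);
        double[] p = cross(dir, e2);
        double det = dot(e1, p);
        if (Math.abs(det) < EPS) return false;      // ray parallel to triangle plane
        double inv = 1.0 / det;
        double[] t = sub(orig, a);
        double u = dot(t, p) * inv;
        if (u < 0 || u > 1) return false;
        double[] q = cross(t, e1);
        double v = dot(dir, q) * inv;
        if (v < 0 || u + v > 1) return false;
        double dist = dot(e2, q) * inv;
        return dist > EPS && dist < maxT;           // hit between the surface and the light
    }

    private static double[] sub(double[] x, double[] y) { return new double[]{x[0]-y[0], x[1]-y[1], x[2]-y[2]}; }
    private static double dot(double[] x, double[] y) { return x[0]*y[0] + x[1]*y[1] + x[2]*y[2]; }
    private static double[] cross(double[] x, double[] y) {
        return new double[]{x[1]*y[2]-x[2]*y[1], x[2]*y[0]-x[0]*y[2], x[0]*y[1]-x[1]*y[0]};
    }
}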

Splitting and dicing of SDS
Next up on my list. Currently, I subdivide the entire mesh, which adds a lot of overhead. Subdivision on demand will hopefully speed things up a bit. At some future point, we'll have to look into using JPatch's SDS code.
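
For reference, the loop I'm aiming for is the usual REYES bound/split/dice cycle, roughly like this (the interfaces are placeholders, not the actual classes in my code):

Code:
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of the classic REYES bound/split/dice loop. All interfaces here are
 *  placeholders; the point is the control flow, not the actual renderer classes. */
public class ReyesLoop {
    /** A surface patch that can estimate its raster-space size, split, or dice itself. */
    interface Patch {
        double rasterSize();          // longest edge of the screen-space bounding box, in pixels
        Patch[] split();              // e.g. one Catmull-Clark step, or a split in u or v
        MicropolygonGrid dice();      // tessellate into a grid of micropolygons
        boolean isVisible();          // bounding-box test against the viewing volume / current bucket
    }
    interface MicropolygonGrid { /* shaded and sampled elsewhere */ }
    interface Shader  { void run(MicropolygonGrid g); }
    interface Sampler { void hide(MicropolygonGrid g); }

    /** Dice a patch only when it is small enough on screen; otherwise keep splitting. */
    public void render(Patch root, double maxGridSize, Shader shade, Sampler sample) {
        Deque<Patch> work = new ArrayDeque<Patch>();
        work.push(root);
        while (!work.isEmpty()) {
            Patch p = work.pop();
            if (!p.isVisible()) continue;             // cull early; never subdivide off-screen geometry
            if (p.rasterSize() <= maxGridSize) {
                MicropolygonGrid grid = p.dice();
                shade.run(grid);                      // displacement + surface shading on grid vertices
                sample.hide(grid);                    // z-buffer sampling into the frame buffer
            } else {
                for (Patch child : p.split()) work.push(child);
            }
        }
    }
}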

High-quality rendering (e.g. no holes or cracks in the surfaces)
No cracks so far. I'm sure my luck won't hold out in this regard. I suspect most of the cracks are going to come from displacement shaders. There's an interesting solution in the Aqsis wiki that I'll be looking into.

Focal blur
This will need to be re-implemented in the renderer. I can filch the code from the old renderer.
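
The sampling side of it is small, anyway. Here's a sketch of the usual thin-lens jitter (pick a random point on the lens per sample; rays re-converge at the focal plane, so only out-of-focus points get blurred); the names and camera setup are illustrative:

Code:
import java.util.Random;

/** Sketch of thin-lens sampling for focal blur (depth of field).
 *  Assumes a camera at the origin looking down +z; names are illustrative. */
public class ThinLens {
    private final double lensRadius;     // aperture radius in camera-space units
    private final double focalDistance;  // distance to the plane in perfect focus
    private final Random rng = new Random();

    public ThinLens(double lensRadius, double focalDistance) {
        this.lensRadius = lensRadius;
        this.focalDistance = focalDistance;
    }

    /** Given a pinhole ray from the origin with direction (dx, dy, 1), return
     *  { ox, oy, oz, dx, dy, dz } for one jittered lens sample. */
    public double[] sampleRay(double dx, double dy) {
        // Uniform sample on the lens disk (rejection sampling keeps the sketch simple).
        double lx, ly;
        do {
            lx = 2 * rng.nextDouble() - 1;
            ly = 2 * rng.nextDouble() - 1;
        } while (lx * lx + ly * ly > 1);
        lx *= lensRadius;
        ly *= lensRadius;

        // Point where the original pinhole ray crosses the focal plane (z = focalDistance).
        double fx = dx * focalDistance;
        double fy = dy * focalDistance;

        // New ray: from the lens sample towards the focal point.
        return new double[] { lx, ly, 0, fx - lx, fy - ly, focalDistance };
    }
}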


Motion blur
Again, this needs to be re-implemented. It shouldn't be a big deal (depending on how tricky RIB motion blocks are).
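
The RIB side is just MotionBegin [0 1] ... MotionEnd wrapped around two copies of the same request; the renderer side is a time per pixel sample plus interpolation between the shutter-open and shutter-close geometry. A sketch of that sampler side (names are placeholders):

Code:
import java.util.Random;

/** Sketch of sampler-side motion blur: each pixel sample gets a stratified time
 *  within the shutter interval, and vertex positions are interpolated between the
 *  two motion samples that a RIB MotionBegin/MotionEnd block provides. */
public class MotionBlur {
    private final double shutterOpen, shutterClose;
    private final Random rng = new Random();

    public MotionBlur(double shutterOpen, double shutterClose) {
        this.shutterOpen = shutterOpen;
        this.shutterClose = shutterClose;
    }

    /** Stratified sample time for sample 'i' of 'n' within the shutter interval. */
    public double sampleTime(int i, int n) {
        double jitter = (i + rng.nextDouble()) / n;
        return shutterOpen + jitter * (shutterClose - shutterOpen);
    }

    /** Linearly interpolate a vertex between its two motion samples at shutter time 't'. */
    public double[] positionAt(double[] pOpen, double[] pClose, double t) {
        double a = (t - shutterOpen) / (shutterClose - shutterOpen);
        return new double[] {
            pOpen[0] + a * (pClose[0] - pOpen[0]),
            pOpen[1] + a * (pClose[1] - pOpen[1]),
            pOpen[2] + a * (pClose[2] - pOpen[2])
        };
    }
}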

Shader support
My plan is to hard-code the "minimum" shaders. I'd like to get the renderer to a stable point before I start working on programmable shaders again.
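
By "minimum" I mean roughly the standard matte/plastic behaviour, hard-coded. Something like this sketch (Lambert diffuse plus a Blinn-style specular, evaluated per grid vertex; the Light struct and array-based vectors are just for illustration):

Code:
/** Sketch of a hard-coded "plastic"-style surface shader: Lambert diffuse plus a
 *  simple Blinn-style specular term. All vectors are assumed normalized. */
public class PlasticShader {
    double ka = 1.0, kd = 0.5, ks = 0.5, roughness = 0.1;
    double[] specularColor = {1, 1, 1};

    /** cs = surface color, n = shading normal, v = direction towards the eye,
     *  lights = light directions/colors at the shading point, ambient = ambient light color. */
    public double[] shade(double[] cs, double[] n, double[] v, Light[] lights, double[] ambient) {
        double[] ci = new double[3];
        for (int k = 0; k < 3; k++) ci[k] = ka * ambient[k] * cs[k];
        for (Light light : lights) {
            double ndotl = Math.max(0, dot(n, light.dir));
            if (ndotl <= 0) continue;
            // Blinn half-vector specular; the exponent grows as roughness shrinks.
            double[] h = normalize(add(light.dir, v));
            double spec = Math.pow(Math.max(0, dot(n, h)), 1.0 / roughness);
            for (int k = 0; k < 3; k++)
                ci[k] += light.color[k] * (kd * ndotl * cs[k] + ks * spec * specularColor[k]);
        }
        return ci;
    }

    static class Light { double[] dir; double[] color; }

    private static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    private static double[] add(double[] a, double[] b) { return new double[]{a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
    private static double[] normalize(double[] a) {
        double len = Math.sqrt(dot(a, a));
        return new double[]{a[0]/len, a[1]/len, a[2]/len};
    }
}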

Point-cloud ambient occlusion
Yeah, it's on my "To Do" list. We'll see if I can get good rendering times from it or not. I got the latest Pixar paper the other day, but didn't have a chance to read it in depth.
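
The brute-force version, before any of the clustering tricks, is just a gather over all the surfel disks in the cloud. Here's a sketch using one common disk-to-point form-factor approximation; this is the naive O(points x surfels) starting point, not the algorithm from the Pixar paper:

Code:
/** Sketch of a brute-force point-cloud ambient occlusion gather: every surfel disk
 *  contributes an approximate form factor to the receiving point. No octree or
 *  clustering yet, and the form-factor formula is one common approximation. */
public class PointCloudAO {
    /** One surfel: disk center, unit normal, and disk area. */
    static class Surfel { double[] p; double[] n; double area; }

    /** Approximate occlusion (0 = fully open, 1 = fully blocked) at point 'p' with normal 'n'. */
    public static double occlusion(double[] p, double[] n, Surfel[] cloud) {
        double occ = 0;
        for (Surfel s : cloud) {
            double[] d = { s.p[0] - p[0], s.p[1] - p[1], s.p[2] - p[2] };
            double dist2 = dot(d, d);
            if (dist2 < 1e-12) continue;                    // skip the surfel we're sitting on
            double dist = Math.sqrt(dist2);
            double[] dir = { d[0] / dist, d[1] / dist, d[2] / dist };
            double cosR = Math.max(0, dot(n, dir));         // receiver must face the surfel
            double cosE = Math.max(0, -dot(s.n, dir));      // surfel must face the receiver
            // Disk-to-point form factor; the +area term keeps it bounded at small distances.
            occ += (s.area * cosR * cosE) / (Math.PI * dist2 + s.area);
        }
        return Math.min(1, occ);
    }

    private static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
}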

Subsurface scattering
I suspect this is going to be pretty expensive to implement.

Once I've got the RIB loader somewhat working, I'll start thinking about releasing the code. I need to come up with a name for the renderer (jreyes is currently the top contender) and post it to Sourceforge.

Environment- and shadow-map support
I don't see any major problems with these.

Image-mapping (texture-mapping) with MIP mapping
I haven't really looked at MIP mapping in any depth.
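
From what I've read so far, though, the basic machinery is small: box-filter the texture down level by level, then pick a level from how many texels one micropolygon covers. A sketch, ignoring trilinear blending and proper filtering, grey-scale only to keep it short:

Code:
/** Sketch of minimal MIP mapping: a box-filtered pyramid plus level selection from
 *  the texel footprint of a micropolygon. No trilinear blending, no anisotropic
 *  filtering; single-channel float images for brevity. */
public class MipMap {
    final float[][] levels;   // levels[0] is the full-resolution image
    final int[] widths, heights;

    public MipMap(float[] image, int w, int h) {
        int n = 1;
        for (int s = Math.max(w, h); s > 1; s /= 2) n++;    // number of pyramid levels
        levels = new float[n][];
        widths = new int[n];
        heights = new int[n];
        levels[0] = image; widths[0] = w; heights[0] = h;
        for (int l = 1; l < n; l++) {
            int pw = widths[l - 1], ph = heights[l - 1];
            int cw = Math.max(1, pw / 2), ch = Math.max(1, ph / 2);
            float[] down = new float[cw * ch];
            for (int y = 0; y < ch; y++)
                for (int x = 0; x < cw; x++) {
                    int sx = Math.min(2 * x + 1, pw - 1), sy = Math.min(2 * y + 1, ph - 1);
                    down[y * cw + x] = 0.25f * (levels[l - 1][2 * y * pw + 2 * x]
                            + levels[l - 1][2 * y * pw + sx]
                            + levels[l - 1][sy * pw + 2 * x]
                            + levels[l - 1][sy * pw + sx]);   // 2x2 box filter
                }
            levels[l] = down; widths[l] = cw; heights[l] = ch;
        }
    }

    /** Point-sample at (u, v) in [0,1], picking the level whose texels roughly match
     *  'footprint' (the texture-space width of one micropolygon, in level-0 texels). */
    public float sample(double u, double v, double footprint) {
        int level = (int) Math.floor(Math.log(Math.max(1, footprint)) / Math.log(2));
        level = Math.max(0, Math.min(levels.length - 1, level));
        int x = (int) Math.min(widths[level] - 1, u * widths[level]);
        int y = (int) Math.min(heights[level] - 1, v * heights[level]);
        return levels[level][y * widths[level] + x];
    }
}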

HDRI input/output (ideally OpenEXR support)
I just re-read an excellent article on tone mapping, which emphasized the need for maintaining high precision in the rendering. I haven't looked into OpenEXR very much, but I'm aware of several libraries. (At a minimum, .png gives 16-bit support.)
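
As far as the renderer is concerned, the rule seems to be: keep the frame buffer in linear floats and only tone-map and quantize at output time. A small sketch using the simple global Reinhard operator and 16-bit quantization; the operator and the gamma value are just example choices:

Code:
/** Sketch: keep the frame buffer in linear floats; tone-map and quantize only at output.
 *  Uses the simple global Reinhard curve L/(1+L); operator and gamma are example choices. */
public class ToneMap {
    /** Map one linear HDR channel value to a 16-bit integer sample (e.g. for 16-bit PNG). */
    public static int toUint16(double linear, double exposure, double gamma) {
        double scaled = Math.max(0, linear * exposure);
        double compressed = scaled / (1.0 + scaled);         // Reinhard: [0, inf) -> [0, 1)
        double display = Math.pow(compressed, 1.0 / gamma);  // gamma-encode for display
        return (int) Math.round(Math.min(1.0, display) * 65535.0);
    }
}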

Basic RenderMan compatibility (should support passing arbitrary parameters to shaders).
Once SDS patches are working, I'll focus on the RIB reader.

sascha wrote:If the design allows it, one could e.g. replace the shader engine with OpenGL shaders, or experiment with "special effects" like SSS or global illumination.
Yes, the framework is rather open-ended. That's the main reason I went with the design from Production Rendering. If you wanted to write an "immediate mode" OpenGL renderer, it would fit right into the framework.

