Shaders


Postby sascha » Fri Nov 30, 2007 12:02 pm

This is a follow-up to the discussion about shaders in the OpenGL renderer thread, and is about requirements for OpenGL and RenderMan shaders.

I like the idea of channels, so each material (or, if you like, shader) should support 6 channels:
  • Ambient (Ka)
  • Diffuse (Kd)
  • Specular (Ks)
  • Reflection (Kr)
  • Emission (Ke)
  • Opacity
The job of the shader is to compute an RGB value for each channel at each pixel and sum them up to allow the renderer to finally assemble the image.
I think the channel names are self-explanatory. For complex materials, there could be a kind of pattern generator (e.g. Perlin noise) that generates the Ka, Kd, Ks, etc. values for each pixel, but for the sake of simplicity let's start with fixed values (that is, no textures, just colors).
The shader could then use the ambient occlusion data, compute the amount for the pixel in question, and subtract it from the ambient channel. It could use spherical harmonics or subsurface-scattering data to modify the diffuse channel. It would use shadow maps to create shadows (thus modifying the diffuse and specular channels), and finally would use environment maps to add light to the reflection channel. Emission is simply light emitted from the object (hence the distinction from the ambient channel), although this would most likely not be used to light other objects (unless it's used as the basis for some radiosity or ambient-occlusion algorithm). Opacity, finally, specifies which colors can pass through the surface (making it semi-transparent). Note that opacity is an RGB value, not just a scalar (like an alpha channel), so it can be used to filter light (e.g. let only red light pass through). This might not be possible with OpenGL, but RenderMan will use it.
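As a minimal Java sketch of the channel idea described above (class and method names are hypothetical - this is only the final summation step, with opacity kept separate because it filters rather than adds light):

```java
// Hypothetical sketch of the six-channel model: each channel contributes an
// RGB value, and the shader sums the additive channels to get the pixel color.
public class ChannelShader {

    /** Adds two RGB triples component-wise. */
    static float[] add(float[] a, float[] b) {
        return new float[] { a[0] + b[0], a[1] + b[1], a[2] + b[2] };
    }

    /**
     * Sums the five additive channels (Ka, Kd, Ks, Kr, Ke). Opacity is not
     * summed here - it would be passed through separately to the renderer.
     */
    public static float[] shade(float[] ka, float[] kd, float[] ks,
                                float[] kr, float[] ke) {
        float[] c = add(ka, kd);
        c = add(c, ks);
        c = add(c, kr);
        return add(c, ke);
    }
}
```

Each function feeding a channel (AO, shadow maps, environment maps) would modify its channel's RGB value before this summation.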

So my vision of a first test shader would be this:
The shader takes the 6 RGB values (one for each channel) as input parameters and computes the color and opacity values for each pixel. There'd be a "basic material" class in JPatch that defines just these 6 values for a material; this way we have simple, colored (but untextured) materials to start with.
Later, the values of each channel can be computed by shaders too, e.g. by calling a noise function.
There are some extra parameters needed, e.g. a shininess parameter (needed to compute the specular part) and a similar term for reflection (to allow for angle-of-incidence based reflection).
We should start with defining what extra parameters have to be passed to the shader to make all the fancy stuff possible that David has been working on in Inyo and with his new OpenGL renderer, and what extra attributes have to be added to each vertex.

Another thing I'd like to add is fake back/rim lights. This would add light, depending on the angle of incidence, to the diffuse (or specular?, or ambient?) channel, which gives a cheap but nice looking rim-light effect.
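The rim-light term could be as simple as raising (1 - |N·V|) to some exponent: strongest at grazing angles, zero when the surface faces the viewer head-on. A hypothetical sketch:

```java
// Hypothetical fake rim-light factor, driven only by the angle between the
// surface normal and the view direction (no real back light needed).
public class RimLight {

    /**
     * Returns a factor in [0, 1]: 0 where the surface faces the viewer,
     * approaching 1 at grazing angles. Both vectors are assumed normalized.
     * The exponent controls how tight the rim is.
     */
    public static double rimFactor(double[] n, double[] v, double exponent) {
        double nDotV = n[0] * v[0] + n[1] * v[1] + n[2] * v[2];
        double facing = Math.abs(nDotV);        // treat front and back alike
        return Math.pow(1.0 - facing, exponent);
    }
}
```

The resulting factor would be multiplied with a rim-light color and added to whichever channel (diffuse, specular, or ambient) ends up hosting the effect.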

The shader should be modular, so each step could be encapsulated in a function (e.g. computeAmbientOcclusion(), etc.). There would then be a main part that calls all required functions to compute the RGB values of each channel and finally sums them up.

What do you think?
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Re: Shaders

Postby dcuny » Fri Nov 30, 2007 5:43 pm

In general, I think it's a good idea, but I'd like to be a bit more clear on what the goal is here. Do you want to be able to use multiple renderers and get the same results, or something else?

Most "interesting" shaders depend on fairly complex lighting information that's not available until render time - for example, you shouldn't put a specular highlight on an area in shadow. I'm also not sure how this approaches the problem of complex shaders, like glass (which has IOR and Fresnel effects) or hair.

I like the approach that Brazil takes. Most common materials can be emulated by using a combination of diffuse shaders and specular shaders. So they've got the following diffuse shaders:

  • Default
  • Lambert
  • Oren-Nayar
And the following specular shaders:

  • Phong
  • Blinn
  • Sheen
This combination allows you to build up a general set of materials with a fair amount of flexibility. It's the same thing that Blender does.
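As a rough illustration of how a diffuse and a specular building block combine into one material, here's a hedged Java sketch of Lambert diffuse plus Phong specular (the names and conventions are mine, not Brazil's or Blender's):

```java
// Sketch of combining one diffuse model (Lambert) with one specular model
// (Phong). All vectors are assumed normalized and in the same space.
public class PlasticBRDF {

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /** Lambert diffuse term: max(0, N·L). */
    public static double lambert(double[] n, double[] l) {
        return Math.max(0.0, dot(n, l));
    }

    /** Phong specular term: max(0, R·V)^shininess, with R = 2(N·L)N - L. */
    public static double phong(double[] n, double[] l, double[] v,
                               double shininess) {
        double nl = dot(n, l);
        double[] r = { 2 * nl * n[0] - l[0],
                       2 * nl * n[1] - l[1],
                       2 * nl * n[2] - l[2] };
        return Math.pow(Math.max(0.0, dot(r, v)), shininess);
    }
}
```

Swapping Lambert for Oren-Nayar, or Phong for Blinn, would only replace one of these functions - which is exactly what makes the mix-and-match approach attractive.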

In addition, they implement a set of specialized shaders:

  • Glass
  • Chrome
  • Toon
  • Car Paint
  • Wax
  • Glow Worm
  • Velvet
  • Ghost
  • Skin
This hides the complexity from the user, because they can select a general class of shader and fiddle with the parameters. The actual implementation of the shader depends on the renderer, which can take advantage of specialized coding (if available) to render an effect.

Are these differing goals?
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: Shaders

Postby sascha » Fri Nov 30, 2007 7:49 pm

Do you want to be able to use multiple renderers and get the same results,

My primary focus is RenderMan. I think that OpenGL would be a nice previz renderer, and if GLSL can be used to emulate some of the shader functionality (to preview the shading), even better. For some projects OpenGL might even be sufficient as a final renderer, but I guess it will take some time until GL/GLSL offers everything RenderMan and RSL do.

I'm aware that different RenderMan renderers will produce different images, but that's nothing to worry about.

I like the approach that Brazil takes.

I like that. It could still use the channel approach, but you could select which function to call to e.g. compute the specular amount. One other option would be to offer different functions for reflection (e.g. one using environment maps, and another one using ray-tracing).

We could start implementing the functions in RSL; once they're working we could implement them in C (RenderMan shaders can call compiled C functions) for better performance. I'm sure that plenty of RSL code is available on the internet.

To test it, we'd need the following things:
  • a Java program that loads some .obj files and generates a RIB
  • A basic material definition, at first without textures
  • A skeleton implementation of the shader
I could start with all three of them, but would appreciate any help.
The Java program should eventually also call the renderer to create shadow- and env-maps.
Next would be
  • Add AO and other code to the Java program, include the precomputed data in the RIB (as vertex attributes)
  • Split the shader code into specialized functions (e.g. Phong or Blinn shading for the highlights). Depending on the material definition, have the Java code assemble the shader source and call the RSL compiler to compile the shader on the fly
That's where you come into play ;-)
And eventually
  • Add support for procedural textures and image maps
  • Integrate both the Java material definitions and the Java->RenderMan code into JPatch
  • Translate RSL functions to C

If the work could be done in parallel for GLSL and possibly Sunflow (I've read that it supports Java/Janino shaders), even better.

The goal should be a system that utilizes much of the power of RenderMan and RSL without requiring the user to be a RenderMan expert. Ideally, all the user must provide is a path to the renderer and shader-compiler executables.

Re: Shaders

Postby sascha » Fri Nov 30, 2007 8:48 pm

Rethinking it, it doesn't make much sense to write a separate Java application. I'll add basic RenderMan support to the version I plan to release at the end of this year, and we could then use this as a starting point.

Re: Shaders

Postby dcuny » Sat Dec 01, 2007 6:28 am

sascha wrote:My primary focus is RenderMan.

There's nothing the matter with that, so long as it doesn't preclude other renderers.

I think that OpenGL would be a nice previz renderer, and if GLSL can be used to emulate some of the shader functionality (to preview the shading), it's even better.

I think OpenGL would make a good previz renderer, as long as it doesn't require GLSL. That is, if GLSL isn't available, it would still run, but without the advanced shaders.

For some projects, OpenGL might even be sufficient as a final renderer, but I guess it will take some time until GL/GLSL offers everything RenderMan and RSL does.

The main reason I'm looking at OpenGL is for the speed of the rendering. I think it'll make a "good enough" renderer - the images I'm getting with AO and SH are pretty impressive. But I'd still like to see Sunflow as an option in JPatch. For one thing, the progressive refinement renderer would be a very handy option to have. Plus, it would make a good "software only" fallback renderer - it's actually quite fast if you don't use any advanced features.

One other option would be to offer different functions for reflection (e.g. one using environment maps, and another one using ray-tracing).

I agree, it would be good to be able to swap between the two.

We could start implementing the functions in RSL, once they're working we could implement them in C (RenderMan shaders can call compiled C functions) for better performance. I'm sure that plenty of RSL code is available on the internet.

So do we agree that the list I posted is a good "basic" list? Are there any general categories of materials that are missing - metals (copper, gold), woods (ash, oak), and things like that?

Once we've got the general categories set out, it should be pretty easy to find RSL examples on the Net.

Also, did you have any particular version of RenderMan you want to target? I haven't got any loaded on my machine right now.

Re: Shaders

Postby sascha » Sat Dec 01, 2007 9:01 am

So do we agree that the list I posted is a good "basic" list? Are there any general categories of materials that are missing? How about metals (copper, gold) or wood (ash, oak) and things like that?

The list is OK. Metals usually filter the reflected light, so if you've got a golden (metal) sphere, the reflections and highlights will be tinted (as opposed to plastic, where the highlights are white and the reflection isn't tinted, regardless of the color of the plastic). Thus, if your diffuse color is [1, 0.75, 0.25] (gold), set the reflection and specular channels to the same color to get a metallic look, or set them to white for a plastic look.
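That tinting rule is easy to sketch - a hypothetical `metalness` parameter blending the specular/reflection color between white (plastic) and the base color (metal):

```java
// Hypothetical metal-vs-plastic tint: metals color their highlights and
// reflections with the base color, plastics keep them white.
public class SpecularTint {

    /**
     * Blends linearly between white (metalness = 0, plastic look) and the
     * base color (metalness = 1, metallic look).
     */
    public static float[] specularColor(float[] base, float metalness) {
        float[] c = new float[3];
        for (int i = 0; i < 3; i++) {
            c[i] = (1.0f - metalness) + metalness * base[i];
        }
        return c;
    }
}
```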
One thing I'd add for metals is anisotropic scattering for the highlights, which gives a nice "brushed metal" look.
As for wood: You'd need procedural textures here. I can write the functions for some default patterns (noise, wood, onion, etc.), but to start with I'd focus on the "BRDF" part of the shader: Take the input colors, use the view-direction and loop over the lightsources to compute the output colors. The input channels are constant. Once that's working, it should be easy to use a pattern generator instead of a constant color for each channel, but I'd try to keep these two separate (BRDF and pattern-generation).

Once we've got the general categories set out, it should be pretty easy to find RSL examples on the Net.

I agree. I've also got a copy of "Advanced RenderMan", which has some nice example shaders (I guess they're also available online). There's a cartoon shader (they say it's not production quality, but it's a good starting point). It uses du/dv to "detect" edges and paints them black, but it only works with smooth surfaces - even the slightest crease is detected as an edge and painted black, which is why it didn't work well with patch models (which are inherently prone to creases). SDS are much smoother, so I'll try it again with an SDS model.

Also, did you have any particular version of Renderman you want to target? I haven't got any loaded on my machine right now.

I'd like to support the free-software renderers (Aqsis and Pixie, and jRMan once it supports SDS), but for reference I'd use 3Delight. It's commercial software, but AFAIK they still offer free licenses for non-commercial use - and it's pretty fast and supports ray-tracing :-)

Re: Shaders

Postby sascha » Mon Dec 03, 2007 12:05 pm

Here are some thoughts about the inner workings of the shaders (and what a GUI might look like):
Let's start with a simple plastic shader (click to enlarge):
plastic-shader.png

On the left side are the shader's input parameters. The black boxes in the middle are, you might have guessed it, black boxes. ;-) Based on some input parameters (a color and the light sources in this case) they compute the output color. The colors are then summed up, which yields the color of the pixel to be shaded. Transparency is, in this case, simply passed through from input to output.

There should be some pre-defined black boxes (e.g. for standard diffuse and specular lighting, for ray-traced and envmap reflections, etc.), but all in all these are just shader-language functions, so it should also be possible to insert a custom function here.

Note that we haven't got any patterns here, but if we simply say that a constant color is a pattern too, we get pattern generation on the left side and lighting computations on the right side of this diagram.

Now let's move on to the more complex example of a wood shader (click to enlarge):
wood-shader.png

The basic principle is the same: on the left side is pattern generation, on the right side lighting. The pattern generation is a bit more involved here, though: we've got a wood pattern, followed by a function (ramp, triangle, sine, etc.), which drives a color map. The output of the color map is then used for diffuse and ambient light.
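The pattern → function → color-map chain might be sketched like this (a toy version with hypothetical names; the real pattern generator would be richer):

```java
// Toy version of the wood-shader pattern chain: a ring pattern produces a
// scalar, a periodic function shapes it, and a color map turns it into RGB.
public class WoodPattern {

    /** Concentric-ring "wood" scalar: distance from the trunk axis, scaled. */
    public static double rings(double x, double y, double ringsPerUnit) {
        return Math.sqrt(x * x + y * y) * ringsPerUnit;
    }

    /** Triangle wave with period 1, output in [0, 1] - shapes the rings. */
    public static double triangle(double t) {
        double f = t - Math.floor(t);
        return f < 0.5 ? 2.0 * f : 2.0 * (1.0 - f);
    }

    /** Linear color map between a light and a dark wood color. */
    public static float[] colorMap(double t, float[] light, float[] dark) {
        return new float[] {
            (float) (light[0] + t * (dark[0] - light[0])),
            (float) (light[1] + t * (dark[1] - light[1])),
            (float) (light[2] + t * (dark[2] - light[2]))
        };
    }
}
```

The output of `colorMap` would then feed the diffuse and ambient inputs of the lighting side of the diagram.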

All input parameters are visible in JPatch as material attributes. They could be constant, but each of them could be animated too. The matrix part for the pattern is used to transform (rotate, scale, translate) the wood pattern. It could be attached to any node in the scenegraph. This shader also has a set of float/color mappings (for the color map) and two additional float parameters that specify the ambient and diffuse factors.

Note that I've already implemented a framework for pattern generation (for Inyo) which can do that and even more: there is a set of patterns (wood, gradient, Perlin noise, etc.), there are color maps, and there are even pigment maps (which don't map to colors directly, but to other pattern generators). This can be used to create quite complex patterns (e.g. the rock texture for the Moai).
What's missing are the black boxes on the right side (for the lighting part). That's where your work comes in handy - there could be black boxes for each of the shaders you've mentioned (e.g. an ambient-occlusion black box that takes AO data from the vertices and subtracts it from the resulting color).

For RenderMan, each black-box would be a function (with input and output parameters), and JPatch would simply generate a shader-source based on how the user has "wired" the shader in the material-editor GUI. Of course, other languages would be possible too (plain Java, GLSL, etc.)

What do you think?

Re: Shaders

Postby dcuny » Mon Dec 03, 2007 1:01 pm

It looks a lot like the shader builder that DarkTree has, or the one in Art of Illusion, only with RenderMan-specific parameters.

Have you looked at either of those?

Re: Shaders

Postby sascha » Mon Dec 03, 2007 1:31 pm

I've seen some of them, including AoI's and ShaderMan. IMHO this is the logical way to represent shaders.

I'll give it more thought once the more important features are implemented. For JPatch 0.8, general OpenGL-like material definitions will have to do, although I might add a bridge to RenderMan shaders (where JPatch looks up the input parameters of the shader and presents them as material attributes to the user).

JPatch 1.0 on the other hand should feature a nice GUI material editor.

Re: Shaders

Postby dcuny » Mon Dec 03, 2007 7:50 pm

sascha wrote:I've seen some of them, including AoI's and ShaderMan. IMHO this is the logical way to represent shaders.

I agree.

It would be nice if the GUI could automatically build a set of sliders for the controls. So you would have two possible views of the shader: the "editor" view that you're talking about, and a "slider" view that would present the user with a list of sliders for adjustable parameters. You could specify which parameters would be adjustable in the editor.

That way, a "normal" user could have the complexity of the shaders hidden from them, and only need to tweak relevant values. The "power" user could go into the shader editor and modify what they needed to, or create a new shader from scratch.

For JPatch 0.8, general OpenGL like material definitions must do, although I might add a bridge to RenderMan shaders (where JPatch looks up the input parameters of the shader and presents them as material attributes to the user).

That seemed to have worked fairly well before.

I'm assuming that the "input" parameters are those of the "core" shader, and if you selected a different type of shader (such as "glass"), you'll get another set of parameters, such as "index of refraction" and so on.

JPatch 1.0 on the other hand should feature a nice GUI material editor.

Another "nice to have" feature would be the option to construct a uv map from the shader. One reason for doing this would be to speed up rendering, especially if you had a complex shader. It would also provide a simple way to provide compatibility with other renderers that might not support complex shading but do support uv mapping.

Re: Shaders

Postby sascha » Tue Dec 04, 2007 8:58 am

That seemed to have worked fairly well before.

I'm assuming that the "input" parameters are those of the "core" shader, and if you selected a different type of shader (such as "glass"), you'll get another set of parameters, such as "index of refraction" and so on.

I meant parsing the shader source (or even the compiled shader) to look up the input parameters. These would then become material attributes (which could be animated).
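A rough sketch of that idea - scanning an RSL surface-shader signature for its parameter names with a (much simplified) regular expression. Real RSL declarations allow more forms (arrays, output parameters, shaders other than `surface`) than this toy handles:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified sketch: extract parameter names from an RSL surface shader's
// parameter list, so they can be presented as material attributes.
public class RslParamParser {

    public static List<String> parameterNames(String source) {
        List<String> names = new ArrayList<>();
        // Find the parameter list of a "surface <name>(...)" declaration.
        Matcher header = Pattern
            .compile("surface\\s+\\w+\\s*\\(([^)]*)\\)")
            .matcher(source);
        if (header.find()) {
            // Match "<type> <name>" pairs, with an optional storage class.
            Matcher p = Pattern
                .compile("(?:uniform\\s+|varying\\s+)?"
                       + "(?:float|color|string|point|vector|normal)\\s+(\\w+)")
                .matcher(header.group(1));
            while (p.find()) {
                names.add(p.group(1));
            }
        }
        return names;
    }
}
```

In JPatch, each extracted name would become a material attribute whose value is written back into the RIB or shader-instance parameters.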

Another "nice to have" feature would be the option to construct a uv map from the shader.

You mean baking the texture to a u/v map? That's a nice feature, I agree. Of course it will only work for the patterns, not for the lighting part of the shader. But procedural textures are not slow per se, and u/v image maps could become prohibitively large, so one has to be careful with that.

Re: Shaders

Postby sascha » Fri Dec 14, 2007 11:13 am

I've started to work on a very simple RenderMan shader. It uses (any number of) light sources and adds some (fake) skylight plus some (fake) rim lights. Here's how it looks, rendered with Aqsis (the artifacts are from the model - it's an exported patch model from 0.4, no SDS!).
Click to enlarge:
rsl_test.png
Top left: (yellowish) light from a standard distantlight (without shadows)
Top right: some fake (bluish) skylight
Bottom left: some fake (white) back/rimlights
Bottom right: Everything summed up

Because there is no AO involved, there's way too much light inside the mouth and the gloves, and of course shadows from the distantlight are missing. But I think it's a good starting point, what do you think?

Re: Shaders

Postby dcuny » Fri Dec 14, 2007 11:47 am

It looks nice! (Although there's an odd bright patch on the right wrist with the rim light shader. ;))

I've posted the AO code some time back, if you want to look at that, but I think you'll be disappointed at how slow it runs.

The spherical harmonics code would also be a trivial port to RenderMan, if you're interested.

Re: Shaders

Postby sascha » Fri Dec 14, 2007 12:29 pm

there's an odd bright patch on the right wrist with the rim light shader

I've noticed it too - it could be a bogus normal exported from JPatch 0.4. I'll have to try it with an SDS.

I've posted the AO code some time back, if you want to look at that, but I think you'll be disappointed at how slow it runs.

I'll have a look at it. I also keep thinking about how to speed this up. I'm not interested in any radiosity-like approach (i.e. shooting rays); I'm rather curious about the "approximate faces as disks and loop over them" approach. The problem is that a naive implementation is O(n^2) (for each vertex, loop over all other vertices). This will work for 1,000 vertices (1 million iterations), but not for 10,000 (100 million iterations) or more. So we'd need a way to tame the algorithm to O(n), or at least O(n log n).

I can think of two approaches that could be combined:
1st: Start with the base mesh (not subdivided). If a vertex is very close to other vertices, subdivide its adjacent faces. Repeat until the detail is sufficient or a maximum level has been reached. This should keep the vertex count in a reasonable range but provide the detail when needed. Note that this would still translate to probably more than 10,000 vertices for a typical scene, and we still have the O(n^2) problem.

Getting the data to the renderer could be a problem, though. If the renderer supports hierarchical meshes, there's no problem, but if not we'd have to subdivide each object to the maximum level the AO step encountered. This can be problematic: imagine a large ground plane and some small objects on it. AO would suggest subdividing the large plane only in the vicinity of the smaller objects, but if we can't pass a hierarchical mesh to the renderer, we need to subdivide the entire ground plane down to the level AO needs.

Because of this, I wouldn't start with that. I'd rather use a user-defined subdiv setting for AO, so the user can set the AO subdivision level on a per-object basis.

2nd: Use a spatial subdivision scheme to only search for nearby vertices. "Nearby" is relative - whether a vertex is nearby depends on its disk's size. I came up with the following idea:

Initialization:
* Categorize the vertices depending on the disk-area, e.g. into 4 categories, each one doubling the area of the previous one.
* Build 3d-grids of cubic cells, one for each category (the size of each cell is the "radius of influence" for the respective disk-area).
* Sort the vertices into the cells (cells stored in a hash-map to save memory - a grid could easily be 10000x10000x10000 cells, so we can't use arrays).

Actual AO loop (for each vertex):
* Fetch vertices from grid cells (using the vertex's own cell and all its neighbors - 3x3x3 = 27 cells) for each size category.
* Do AO computations only for those vertices.
* Fade out vertices as they approach the "radius of influence" boundary to avoid temporal aliasing.

In an optimal case, this should be O(n) - so the time needed for AO computation grows linearly with the number of vertices (not quadratically as in the original algorithm).

In a last step, to get rid of possible artifacts, blur the AO data with some filter kernel. E.g. the new AO value for a vertex could be 0.5 × its original value + 0.5 × the average of its neighbors (vertex-neighbor information is available in the mesh structure). This step could also be repeated multiple times to increase the blurring.
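The grid part of the idea above might look like this in Java - a hash-map-backed grid of cubic cells with a 3×3×3 neighbor lookup. (All names are hypothetical, and the key packing is a simplification: it can collide for coordinates more than 2^20 cells apart, which a real implementation would have to handle.)

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the spatial-hash grid for the AO neighbor search: cells are
// stored in a hash map, so a conceptually huge grid costs no memory for
// empty cells.
public class SpatialHashGrid {

    private final double cellSize;
    private final Map<Long, List<double[]>> cells = new HashMap<>();

    public SpatialHashGrid(double cellSize) {
        this.cellSize = cellSize;
    }

    /** Packs three cell indices into one long key (simplified, may collide). */
    private static long key(int ix, int iy, int iz) {
        return ((long) (ix & 0x1FFFFF) << 42)
             | ((long) (iy & 0x1FFFFF) << 21)
             |  (long) (iz & 0x1FFFFF);
    }

    private int cellIndex(double v) {
        return (int) Math.floor(v / cellSize);
    }

    public void insert(double[] p) {
        long k = key(cellIndex(p[0]), cellIndex(p[1]), cellIndex(p[2]));
        cells.computeIfAbsent(k, unused -> new ArrayList<>()).add(p);
    }

    /** Gathers all vertices in the 3x3x3 block of cells around p. */
    public List<double[]> neighbors(double[] p) {
        List<double[]> result = new ArrayList<>();
        int cx = cellIndex(p[0]), cy = cellIndex(p[1]), cz = cellIndex(p[2]);
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                for (int dz = -1; dz <= 1; dz++) {
                    List<double[]> cell =
                        cells.get(key(cx + dx, cy + dy, cz + dz));
                    if (cell != null) result.addAll(cell);
                }
        return result;
    }
}
```

One grid per disk-size category (with the cell size equal to that category's radius of influence) would give the O(n) behavior described above; the AO loop then only does disk computations for the vertices returned by `neighbors`.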

I'd like to give this one a try. Thoughts?

Re: Shaders

Postby sascha » Fri Dec 14, 2007 3:55 pm

Ok, I've got a naive AO approach working with RenderMan. Here's an example, this time using the SDS version of the rabbit:
(click to enlarge)
ao_test.png
Left: Just the same shader as before
Center: AO data
Right: Shader + AO

At the fold of the pullover (at the rabbit's hip), and possibly at the neck too, you actually see the backside of the surface. This means that the normals point in the wrong direction, which causes my AO code to run amok, but this can be fixed.
There are other artifacts visible (especially around the neck), I think because the mesh resolution was too low for the AO computation.

For AO, it used a mesh with 9,128 vertices, and the AO computation (without any performance hacks) took about 15 seconds.

What do you think?