IRTC anim topic: Impostor!


Postby sascha » Sun Jan 16, 2005 7:52 pm

The new animation topic is "Impostor!", deadline is April 15, 2005...
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Mon Jan 17, 2005 5:12 am

Something along the lines of "The Prince and the Pauper" comes to mind.
    Inside of a castle, or perhaps a cathedral. A man (referred to as "Imposter") and woman ("Princess") stand before some sort of official, perhaps a Bishop. There is some important ceremony going on - perhaps a marriage or a coronation. If one were to look at the face of the Imposter, one would notice a rather blank expression on his face.

    Official: And now, by the power vested in me...

    There is a large boom that echoes through the hall, and an off-camera voice - the "Prince" - calls out:

    Prince: Stop the ceremony!

    Startled, the official looks up, and the man and woman turn around. Cut to the Prince standing in the doorway, flanked by surprised guards. He points to the Imposter and yells:

    Prince: I am the true Prince!

    Cut back to the Imposter and Princess. The Princess swoons and collapses. There are gasps from the assembled crowd. The Imposter whirls around to face the Prince, and shouts back:

    Imposter: Guards, take him away!

    Medium shot of the Prince as he walks up towards the Imposter. The camera moves with him, and the Prince says:

    Prince: I have the Royal Birthmark!

    Medium shot of the Imposter and Official. The Imposter turns to the Official.

    Imposter: Royal Birthmark?

    Official: All royalty have them.

    Imposter: Oh.

    They turn their attention to the Prince again. Medium, above-the-waist shot of the Prince. He turns his back to the Official and Imposter, and struggles with his belt. He smiles with triumph as it comes loose, and holds the belt up to the camera. There is a "swish" as his pants fall down (mercifully, out of the frame), and a gasp from the people in the hall.

    Close up of the Official. He recoils in shock, then leans his head forward, blinks and recognizes him.

    Official: Your Majesty!

    Cut back to medium shot of Prince. He walks forward, holding his pants up with one hand, and reaches forward with the other.

    Side shot of Imposter. Prince advances to him, continuing to reach out.

    Prince: This man...

    Prince grabs Imposter's mask. There is a gasp from the crowd. He holds up the mask:

    Prince: ... is an imposter!

    Camera zooms in on Imposter's face to reveal a "Terminator" sort of robot head, with glowing red eyes and fierce, jagged metal teeth. It emits a low growl.

    Close up of Prince's face, as he realizes something is amiss.

    Prince: Oh, oh...
Thoughts? :D
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby pndragon » Mon Jan 17, 2005 6:36 am

How about a "Prisoner of Zenda" kind of thing...
pndragon
 
Posts: 591
Joined: Sun Dec 05, 2004 1:27 am
Location: North Carolina

Postby miyoken » Mon Jan 17, 2005 1:54 pm

dcuny wrote:
    Something along the lines of "The Prince and the Pauper" comes to mind.


The Cannes is waiting for you....
miyoken
 
Posts: 39
Joined: Mon Jun 07, 2004 11:16 am
Location: Japan

Postby sascha » Mon Jan 17, 2005 3:30 pm

I like David's script a lot! Difficult to realize though (or should I say challenging :) )

The Prince and the Pauper, The Prisoner of Zenda:
I have not yet read any of these books. Fortunately the texts are available at gutenberg.net:
The Prince and the Pauper
The Prisoner of Zenda

From a technical point of view, JPatch would need support for u/v mapping (faces, cloths, etc.), the bone system and (optionally) inverse kinematics.
I could start with u/v mapping and once it's done we could do all the shots that don't require bones (no walking, hands grasping, etc.).
If the bone system is ready in time, we can add the shots that depend on it.

I really like Inyo and will definitely continue to improve the integration between it and JPatch, but this time I'd like to use Aqsis: I've never made an animation with RenderMan and would like to gain some experience with it (like precomputing shadow- and environment-maps, writing/using sophisticated shaders, etc.)...

Postby pndragon » Mon Jan 17, 2005 4:09 pm

miyoken wrote:
    The Cannes is waiting for you....

It's a race with Pixar.... :P

Postby dcuny » Mon Jan 17, 2005 9:18 pm

My thoughts on what's needed in JPatch:Animator, most of them not Inyo-specific:
  • If you could get simple zbuffer output from JPatch, that would be great for fast pre-rendering.
  • Being able to play animation back in JPatch (i.e. playing a bunch of .png frames rendered by Inyo/Aqsis/POV-Ray) would be nice.
  • Integrating JLipSync's .avi creator would be nice.
  • An option for calling an external command-line routine to compile .avi or .mpg files from within JPatch would be nice.
  • Support for lights is a requirement.
  • Bones with FK are a requirement.
  • IK would be nice. Have you had a chance to use the pin and drag IK system in Art of Illusion yet? It's the best IK system out there, and the IK class looked pretty portable when I last looked at it.
You'll note that U/V mapping isn't on the list. It's an important feature, but I think you can get away with a cartoon-style character that doesn't use texture mapping:

Image

OK, so you probably need a good anisotropic shader for the hair. One popular solution is just to put hats on characters:

Image

For example, a Bishop's mitre on the official, a crown on the Imposter, a Robin Hood style hat on the Prince, and a conical hat on the Princess. They would still have some hair, but the hat would cover most of it, so there wouldn't really be a need to put a texture on it.

I think a better solution is to just create very stylized characters that don't need any hair. We're working in the animation genre, so why not take advantage of it?

Image

People are willing to forgive a lot with characters that aren't supposed to look realistic. They'll accept choppy animation for puppet animation (like the shots above), but are highly critical when looking at something like the Final Fantasy film.

Postby sascha » Tue Jan 18, 2005 8:47 am

dcuny wrote:
    If you could get simple zbuffer output from JPatch, that would be great for fast pre-rendering.

I'll start with a wireframe preview (saved as individual frames), and once the zbuffer works with perspective projection I'll include a shaded preview.
dcuny wrote:
    Being able to play animation back in JPatch

mplayer can do it:
Code:
mplayer mf://name*.png -mf fps=24

dcuny wrote:
    Integrating JLipSync's .avi creator would be nice.

Is it compressed or uncompressed? I'll have a look at it. Maybe JMF is also an option, I'll have a look at that too.
dcuny wrote:
    An option for calling an external command-line routine to compile .avi or .mpg files from within JPatch would be nice.

The only cross-platform tool I know is mplayer, but the .avi or .mpg files it creates don't play back in Windows Media Player. Maybe JMF is an option too.
dcuny wrote:
    Support for lights is a requirement.

Yes

Bones and U/V mapping:
I may implement them in parallel: I always wondered how automatic u/v mapping could work - you basically need to know around which axis to unwrap the geometry. That's where the bones come into play: if a point is attached to a bone, simply take that bone as the axis.
The result should be a nice u/v map that needs only a minimum of manual fine-tuning.
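A sketch of that idea (hypothetical helper names and plain arrays, not JPatch's actual data structures): treat the bone as the axis of a cylindrical projection, taking v from the position along the bone and u from the angle around it.

```java
public final class BoneUnwrap {
    /** Cylindrical u/v projection around a bone (illustration only, not
     *  JPatch's API). start/end are the bone's end points; returns {u, v}
     *  with v = normalized position along the bone and u in [0,1) taken
     *  from the angle around the bone axis. */
    public static double[] unwrap(double[] p, double[] start, double[] end) {
        double[] axis = normalize(sub(end, start));
        double[] d = sub(p, start);
        double along = dot(d, axis);
        double v = along / length(sub(end, start));        // 0 at start, 1 at end
        double[] radial = sub(d, scale(axis, along));      // part perpendicular to the bone
        // pick any reference vector not parallel to the axis to define u = 0
        double[] ref = Math.abs(axis[0]) < 0.9 ? new double[]{1, 0, 0}
                                               : new double[]{0, 1, 0};
        double[] e1 = normalize(sub(ref, scale(axis, dot(ref, axis))));
        double[] e2 = cross(axis, e1);                     // already unit length
        double angle = Math.atan2(dot(radial, e2), dot(radial, e1));
        double u = (angle / (2 * Math.PI) + 1.0) % 1.0;    // map [-pi,pi] to [0,1)
        return new double[]{u, v};
    }

    static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static double[] scale(double[] a, double s) { return new double[]{a[0]*s, a[1]*s, a[2]*s}; }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] cross(double[] a, double[] b) {
        return new double[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }
    static double length(double[] a) { return Math.sqrt(dot(a, a)); }
    static double[] normalize(double[] a) { return scale(a, 1.0 / length(a)); }
}
```

Points attached to the same bone would then land on a coherent cylindrical map, which is why only a minimum of manual fine-tuning should remain.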

Hair:
We could try RenderMan's curve primitive. As the patches now have a front and back side, it would be easy to use a patch as a "hair emitter". The hair would follow a number of user-defined curves - the control points of those curves could be animated, and later be moved using physics-based simulations (gravity, acceleration, wind, etc.).
As you mentioned, there are some papers out there about ray-curve intersection tests. It would be a nice feature for Inyo.

dcuny wrote:
    People are willing to forgive a lot with characters that aren't supposed to look realistic. They'll accept choppy animation for puppet animation (like the shots above), but are highly critical when looking at something like the Final Fantasy film.

I know, and realism isn't what I'm looking for (for realistic images use a camera, not a renderer :) )
Personally I like the mix of photorealism and cartoon style in most CG movies (Shrek, Nemo, etc...)

Postby dcuny » Tue Jan 18, 2005 10:31 am

JLipSync's .avi writer uses uncompressed .bmp, which makes for some massive files. If there's anything else out there, I'm sure it does a better job than what I've written.

I thought JMF was verboten, but since you mention it: yes, it's supposed to be able to create .avi and .mpg files. :)

I've read good things about A:M's approach to hair. Basically, they set up "guides" at the vertices of patches, and the hairs are interpolated from that. It's similar to what I've seen for other hair programs. For example, here's a screenshot from Maya's PaintFX:

Image

The A:M online reference on hair pretty well covers all the details. It looks like it would be fairly straightforward to implement a basic (i.e. no physics) hair system.

Before I forget, did I mention that bones should be part of the morph system? Animating bone-driven poses via sliders means you could use the current JPatch Animator module with the current user interface. :)

One of the issues in the story I suggested is the "crowd" that is alluded to a number of times. Having lots of people means spending lots of time rendering them... not a fun thing to do, because it'll eat up lots of processing time.

One solution is to only include five or six "extras", and line your shots up very carefully. It's like that old rule of film miniatures: never build what people won't see.

Another option is to create some low-resolution models that are obviously "dummies" for the shot. You've probably seen this sort of thing in puppet animation, where there are static characters standing around. You see them doing nothing, and you know you're supposed to ignore them.

Along those lines, you could render a static background of people, and composite the shot together. A touch of focal blur would help hide the fact that it's static, especially if the foreground had strong action occurring in it.

I'm also kicking around a "wolf in sheep's clothing" story as a backup. Sometimes, simpler is better.

Postby sascha » Tue Jan 18, 2005 2:11 pm

The JMF is not verboten :-)
It's just that I don't want to force users to download a lot of extra stuff. As long as it is optional, it is OK.
This means that JPatch must compile and run without the JMF installed, which makes accessing it a bit more difficult, but it can be done (I've already had optional Java3D support).
I will look into it, but the next step is png-frame output for pre-viz (ATM wireframe, but soon shaded).

I'm not sure I understand how bones should be part of the morph system. What I can imagine is that you can store a pose, and apply it using a slider (thus, being able to mix poses, or blend between them).

Btw, while making the Moai animation, I realized that you were right about thinking in frames: I only used the frame-step buttons... So I'll change the UI and the display, but I will add a possibility to access "fractional" frames (like turning off grid snapping).
Do frames start with 1 or 0? That is, should the counter run
from 0:00.00, 0:00.01 to 0:00.23, 0:01.00, 0:01.01... or
from 0:00.01, 0:00.02 to 0:00.24, 0:01.00,...
(the first two numbers are minutes:seconds, the number after the dot is the frame number). Don't worry, I'll add a simple frame-counter as well, it's just that I'd like to know the time without always having to divide by 24...
The display would then look like:
Time: 0:01.02 Frame: 26
(reads 0 minutes, 1 second, 2 frames OR frame 26)
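The conversion behind that display is just integer division at 24 fps; a small sketch (TimeCode is a made-up name) matching the example above, where frame 26 reads 0:01.02:

```java
public final class TimeCode {
    static final int FPS = 24;

    /** Format an absolute frame number as m:ss.ff at 24 fps,
     *  so frame 26 becomes "0:01.02" (0 minutes, 1 second, 2 frames). */
    public static String format(int frame) {
        int minutes = frame / (FPS * 60);
        int seconds = (frame / FPS) % 60;
        int frames  = frame % FPS;
        return String.format("%d:%02d.%02d", minutes, seconds, frames);
    }

    /** The inverse: minutes/seconds/frames back to an absolute frame number. */
    public static int toFrame(int minutes, int seconds, int frames) {
        return (minutes * 60 + seconds) * FPS + frames;
    }
}
```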

The animator can now load and save animations. I'll soon put it online.
I have a problem with paths: The animation file (it's an xml file too) refers to the models simply by filename, but that makes animations non-portable to other systems. A quick solution would be to ask the user for the directory to load the model from if it can't be found, but maybe you have a better idea.
I do not want to include the model-data in the animation-xml because it should be possible to change the model in the modeler but still use it in the animation.
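One common approach (just a sketch, not JPatch code; the file names are made up) is to store model references relative to the animation file and resolve them against its directory on load, so the whole project directory can be moved between systems:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public final class ModelRef {
    /** Resolve a model reference from an animation file: relative references
     *  are taken relative to the animation file's directory. */
    public static Path resolve(Path animationFile, String modelRef) {
        Path ref = Paths.get(modelRef);
        if (ref.isAbsolute()) return ref.normalize();
        Path dir = animationFile.toAbsolutePath().getParent();
        return dir.resolve(ref).normalize();
    }

    /** The inverse for saving: store the model path relative to the animation file. */
    public static String relativize(Path animationFile, Path model) {
        Path dir = animationFile.toAbsolutePath().getParent();
        return dir.relativize(model.toAbsolutePath()).toString();
    }
}
```

Falling back to a user prompt only when the resolved file is missing keeps the quick solution as a safety net.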

Postby dcuny » Tue Jan 18, 2005 7:47 pm

I was wondering if you'd give the option to use Java3D. :D

You can check out Art of Illusion for an example of bones being included in morphs (AoI refers to them as poses). They're just another chunk of information that is stored with the pose.

A:M is a bit more complex in its approach. There are Actions, which are reusable chunks of animation, and Choreography Actions, which are non-reusable, basically tied to the current timeline. A:M distinguishes between Muscle (morph) and Skeletal (bone) actions. One of the main differences between the AoI approach and A:M's is that in AoI, a pose is a single target, while in A:M, an action is a series of frames.

I think (for now, anyway) that taking the AoI approach of combining bone and morph information into a single pose that can be triggered by a slider is probably the simplest approach.

(I know that JPatch isn't trying to be an A:M clone, but you might want to spend time browsing through the A:M Online Reference. There's a lot of interesting information in there!)

Phoneme targets are a classic example of where you might want bones and morphs combined. For example, for many targets, you need to rotate the jaw open (bones) and shape the mouth (morphs).
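As a rough sketch of that combination (the Pose structure and field names are made up for illustration, not JPatch's or AoI's actual API), one slider weight could scale both the bone part and the morph part of a pose:

```java
public final class Pose {
    public final double jawAngle;    // bone part: jaw rotation (radians) at full pose
    public final double[][] deltas;  // morph part: per-point offsets at full pose

    public Pose(double jawAngle, double[][] deltas) {
        this.jawAngle = jawAngle;
        this.deltas = deltas;
    }

    /** Bone contribution at the given slider weight (0..1). */
    public double applyBone(double weight) {
        return jawAngle * weight;
    }

    /** Morph contribution: base points plus the scaled deltas. */
    public double[][] applyMorph(double[][] base, double weight) {
        double[][] out = new double[base.length][3];
        for (int i = 0; i < base.length; i++)
            for (int j = 0; j < 3; j++)
                out[i][j] = base[i][j] + weight * deltas[i][j];
        return out;
    }
}
```

Since both parts share one weight, a phoneme slider at 50% would half-open the jaw and half-shape the mouth in a single step.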

It would be nice if JPatch supported reusable action sequences at some point - especially for walk cycles.

Frames should begin from the number 1. Only programmers start counting from zero. ;)

With JLipSync, I pop up an error message if a file can't be found, and allow the user to locate the file. It might be good to have the user declare a JPatch "home" directory, and put all projects there.

You might want to rename the "Load File..." option in the Animator to "Include File..." so it's more clear what's going on.

Another option (at some point) to consider is a "Bundle" sort of file, which would be an XML file of all the files referred to in a project. It would wrap all the files used in an animation into a single file, which could then be saved, or sent to someone else.

Could you give me a link to your most current version of Inyo? I'd like to get them back in sync again.

Postby sascha » Tue Jan 18, 2005 8:20 pm

dcuny wrote:
    Another option (at some point) to consider is a "Bundle" sort of file, which would be an XML file of all the files referred to in a project. It would wrap all the files used in an animation into a single file, which could then be saved, or sent to someone else.

Yes, I thought about it. There is support for zip files in Java, so I think I'll use it for project files - the archive would then contain xml files and binary data such as images (for rotoscopes or textures), etc.
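A minimal sketch of such a zip-based bundle using only java.util.zip (the Bundle class and entry names are made up for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public final class Bundle {
    /** Write named byte entries (xml, images, ...) into a single zip project file. */
    public static void write(Path zip, Map<String, byte[]> entries) {
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zip))) {
            for (Map.Entry<String, byte[]> e : entries.entrySet()) {
                out.putNextEntry(new ZipEntry(e.getKey()));
                out.write(e.getValue());
                out.closeEntry();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Read a single named entry back out of the bundle. */
    public static byte[] read(Path zip, String name) {
        try (ZipFile zf = new ZipFile(zip.toFile())) {
            ZipEntry entry = zf.getEntry(name);
            try (InputStream in = zf.getInputStream(entry)) {
                return in.readAllBytes();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Since zip is just a container, the same archive can mix the animation xml, model xml files, and binary rotoscope or texture images.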

dcuny wrote:
    Could you give me a link to your most current version of Inyo? I'd like to get them back in sync again.

Yes: http://www.jpatch.com/temp/inyo_20050118.zip
This is as it was used to render the sunset shot. There are some quick and dirty hacks: I've hard-coded the lightsources and commented out the adaptive supersampling code (to always supersample the water). This reminds me that it would be a nice feature if a material could set a flag, telling Inyo to always supersample it.
I'm not sure if I changed anything else :? If your latest version already has the --skytexture option it is most likely up to date... :roll:

Postby dcuny » Tue Jan 18, 2005 8:52 pm

Yeah, the idea of a texture flagging that it needs oversampling occurred to me as well. I'll add a needsOversampling flag to RtPathNode that the texture can toggle to true.

I don't think I implemented the --skytexture, so I'll just do a file compare when I download your code. There are a number of other changes (such as focal blur support) that I added in the meantime.

The Inyo website needs to be updated (now that the export file format has sort of stabilized), and I need to add additional code credits for you and the other math library. :?

Did you get a chance to send me a list of criteria for judging the IRTC animations? I've written up some notes on each of them, and I can send them to you if you want.

Postby dcuny » Tue Jan 18, 2005 9:55 pm

Regarding raytracing hair, I think the A:M approach lays out all the details:
    When raytracing hair, first a bounding box around all of the hairs on a patch is calculated. If it is intersected, bounding boxes are hierarchically calculated around each individual hair. If one of them is intersected, bounding boxes are hierarchically calculated around the line segments (16) of the individual hair. If one of them is intersected, a polygon is determined that faces the intersecting ray, using the two end points determined by its line segment order, and the hair “thickness” specified. The line segments also increase in transparency the closer they are to the end of the hair spline, (100% transparent at the tip).
If Inyo were to follow a similar plan, hair would be grouped by patches, with each patch having n hairs, each hair having 16 segments. Inyo would then construct a bounding box around all the hair.

Constructing bounding boxes around each hair is deferred until a ray actually intersects a patch's hair bounding box. When that happens, each hair gets a separate bounding box, and is tested.

Bounding boxes around each hair segment are also deferred until a ray hits the hair's bounding box.

Only after a hair segment's bounding box is intersected is any actual geometry constructed for the hair. A plane perpendicular to the ray is constructed, and a polygon (two triangles) is constructed based on the segment, and that is tested for a ray intersection. This geometry is then discarded, because the next ray will approach from a different direction, and the triangles will lie in a different plane.

I assume the bounding boxes that were constructed are retained, but it's a tradeoff of speed against memory. I should probably allow the user to have it both ways.

Being lazy about constructing bounding boxes means that it only builds structures that are actually needed, saving a lot of time and memory.
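The per-level rejection that makes this hierarchy cheap is a standard ray/AABB "slab" test; a minimal version (an illustration, not Inyo's actual code) looks like:

```java
public final class Aabb {
    public final double[] min, max;

    public Aabb(double[] min, double[] max) {
        this.min = min;
        this.max = max;
    }

    /** Slab test: does origin + t*dir enter this box for some t in [0, tMax]?
     *  Each axis clips the ray to the slab between min[i] and max[i]; if the
     *  surviving interval ever becomes empty, the ray misses the box. */
    public boolean hit(double[] origin, double[] dir, double tMax) {
        double tNear = 0, tFar = tMax;
        for (int i = 0; i < 3; i++) {
            double inv = 1.0 / dir[i];               // +/-Infinity for axis-parallel rays is fine
            double t0 = (min[i] - origin[i]) * inv;
            double t1 = (max[i] - origin[i]) * inv;
            if (inv < 0) { double tmp = t0; t0 = t1; t1 = tmp; }
            tNear = Math.max(tNear, t0);
            tFar = Math.min(tFar, t1);
            if (tNear > tFar) return false;          // slab intervals don't overlap: miss
        }
        return true;
    }
}
```

Only rays that pass this test at the patch level would go on to build (or test) the per-hair and per-segment boxes, which is exactly where the lazy scheme saves its time.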

One very cool thing about A:M hair is that it can have varying thickness - each of the 16 segments can have varying widths:

Image

Since the geometry is built on the fly, it's not really any more expensive than regular hair, but you can do stuff like feathers, too:

Image

Hrm... There are clearly some differences between the Tech Note I quoted (posted 7/17/2003) and the new hair system (released in 2004). If (according to the Tech Note) the segments are created in a plane perpendicular to the incoming ray, that means that they are perpendicular to the camera. Since this isn't how the lighting appears on the image above, clearly something is different. There's no option to manually rotate a hair, so the Face Camera option must be the new bit here:
    Face Camera: The hairs are always rendered as strips. With this option on (100%), all hair strips will face the camera. This gives the illusion that the hairs are rendered as tubes (good for hair). If it is turned off (0%), they lie in the direction of the surface that they came out of (good for feathers or grass blades).
In any event, these are fairly advanced properties. For basic hair, the routine described should work just fine.

Postby sascha » Tue Jan 18, 2005 10:05 pm

dcuny wrote:
    I don't think I implemented the --skytexture

I did implement it - it should already be part of the last version. IIRC the most recent changes were only some quick hacks to achieve certain effects, so most likely you don't have to change anything in your version of Inyo.

dcuny wrote:
    now that the export file format has sort of stabilized

We need a tag for lightsources; I recommend something like
Code:
l <pos x> <pos y> <pos z> <radius> <r> <g> <b> <intensity> <shadows> <soft-shadows> <highlight> ...
This can be extended to support spotlights, light fading, etc.
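Parsing such a record would be straightforward; a sketch (the "l" format is only the proposal above, and the trailing shadow/highlight fields are left for the later extensions):

```java
public final class LightLine {
    public final double[] position = new double[3];
    public final double radius;
    public final double[] color = new double[3];
    public final double intensity;

    /** Parse "l <x> <y> <z> <radius> <r> <g> <b> <intensity> ...";
     *  fields beyond intensity are ignored here. */
    public LightLine(String line) {
        String[] t = line.trim().split("\\s+");
        if (!t[0].equals("l"))
            throw new IllegalArgumentException("not a light record: " + line);
        for (int i = 0; i < 3; i++) position[i] = Double.parseDouble(t[1 + i]);
        radius = Double.parseDouble(t[4]);
        for (int i = 0; i < 3; i++) color[i] = Double.parseDouble(t[5 + i]);
        intensity = Double.parseDouble(t[8]);
    }
}
```

Keeping the record whitespace-separated like the existing geometry records means the exporter and Inyo's reader can share the same tokenizing style.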

dcuny wrote:
    Did you get a chance to send me a list of criteria for judging the IRTC animations?

They have not yet set up a voting and comments page (this may take a week), but once they do, you can simply add your comments there - commenting is open to the public and not restricted to judges (but I think you have to register anyway).

To vote, every image/animation needs to be judged in three categories:
1) Artistic Merit,
2) Technical Merit,
3) Concept, Originality, Interpretation of Theme

For each animation, give it a rating between 1 and 20 (inclusive) in each of the three listed categories. 1 is the lowest rating, and 20 is the highest. I'll average your scores with mine and submit it. Just send me a textfile or spreadsheet with your scores via email.

It is important to focus on the animation (i.e. your vote should not be influenced by the soundtrack). Take a look at
the rules at ftp://ftp.irtc.org/pub/anims/RULES
especially paragraphs 2c, 3a-d, 5a-g
