What's Needed for a Complete Animation Solution?

Ideas, enhancements, feature requests and development related discussion.

What's Needed for a Complete Animation Solution?

Postby dcuny » Sat Aug 25, 2007 7:36 pm

Being lazy, I'd prefer to do as much as possible inside of JPatch. The most immediate things that JPatch is missing are lipsync and compiling frames into video.

There are tools (Papagayo) for doing lipsync, but I think the following would go a long way toward supporting lipsync:
  • Display of the waveform on a track. I don't think Java supports this well, but I've got some code around to facilitate doing this (see the sketch just after this list).
  • Tags in the timeline. This is for displaying the phoneme breakdown. We've already talked about adopting this.
  • Audio playback, with scrubbing. I've done this, but my code does this pretty badly.
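On the waveform point, here's a rough sketch of the kind of helper I have in mind. It assumes 16-bit PCM audio, reads only the first channel, and uses nothing beyond javax.sound.sampled; the class and method names are purely illustrative, not existing JPatch code.

Code:
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

// Reduces a 16-bit PCM audio file to per-pixel min/max pairs for drawing a waveform.
public class WaveformPeaks {

    // Returns peaks[pixel][0] = min sample, peaks[pixel][1] = max sample, in -1..1.
    public static float[][] compute(File audioFile, int pixels) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(audioFile);
        int frameSize = in.getFormat().getFrameSize();
        boolean bigEndian = in.getFormat().isBigEndian();
        long framesPerPixel = Math.max(1, in.getFrameLength() / pixels);
        float[][] peaks = new float[pixels][2];
        byte[] buf = new byte[(int) framesPerPixel * frameSize];
        for (int p = 0; p < pixels; p++) {
            int read = in.read(buf);
            if (read <= 0) {
                break;
            }
            float min = 1f, max = -1f;
            for (int i = 0; i + 1 < read; i += frameSize) {
                // first channel only, 16-bit signed sample
                int lo = buf[i + (bigEndian ? 1 : 0)] & 0xff;
                int hi = buf[i + (bigEndian ? 0 : 1)];
                float sample = ((hi << 8) | lo) / 32768f;
                if (sample < min) min = sample;
                if (sample > max) max = sample;
            }
            peaks[p][0] = min;
            peaks[p][1] = max;
        }
        in.close();
        return peaks;
    }
}

The drawing code would then just paint one vertical line per pixel, from min to max.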
Once the code's stabilized, I'd be happy to either work on this, or write some support code for JPatch.

As far as compiling frames goes, I haven't found any native Java tools which will create .mpeg files, although IBM may have some tools. According to this chart, Java already has support for .avi and .mov file formats.

Another option is to have the facility to call an external tool such as ffmpeg. This would be the simplest solution.
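To illustrate the external-tool route, here's a minimal sketch that shells out to ffmpeg via ProcessBuilder. The frame naming pattern, frame rate and output file are placeholder assumptions, and it expects the ffmpeg binary to be on the PATH.

Code:
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

// Compiles numbered frames (frame_0001.png, frame_0002.png, ...) into a video
// by invoking an external ffmpeg binary.
public class FrameCompiler {

    public static void compile(File frameDir, int fps, File output) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg",
                "-r", Integer.toString(fps),   // input frame rate
                "-i", "frame_%04d.png",        // numbered frame pattern
                "-y",                          // overwrite output without asking
                output.getAbsolutePath());
        pb.directory(frameDir);                // run inside the frame folder
        pb.redirectErrorStream(true);          // merge ffmpeg's stderr into stdout
        Process process = pb.start();

        // Drain ffmpeg's output so the process doesn't block on a full pipe.
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        int exitCode = process.waitFor();
        if (exitCode != 0) {
            throw new RuntimeException("ffmpeg failed with exit code " + exitCode);
        }
    }
}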

Compositing has been brought up before. You've got some IMP tools, and I've written a set of tools as well. If you want, I could work on putting together a prototype for that as well.

Thoughts? :?
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Sun Aug 26, 2007 9:07 am

I agree with all of that, but it will have to wait until more important features are ready. The first priority right now is bones, morphs and animation, but I think a fully functional SDS modeler is more important than a compositing tool.

Lip sync will be implemented sooner (I'll start with Yolo/Papagayo import, but I'll also try to incorporate it into the timeline).
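For the import side, here's a sketch of a parser, assuming a Moho-style switch export from Papagayo (a one-line header followed by one "frame phoneme" pair per line); the PhonemeTag class is purely illustrative.

Code:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

// One phoneme/viseme tag on the timeline.
class PhonemeTag {
    final int frame;
    final String phoneme;
    boolean hidden;              // lets the animator disable this tag without deleting it

    PhonemeTag(int frame, String phoneme) {
        this.frame = frame;
        this.phoneme = phoneme;
    }
}

// Reads a Moho-style switch file as exported by Papagayo:
// a header line followed by "frame phoneme" pairs.
public class PapagayoImport {

    public static List<PhonemeTag> read(File file) throws Exception {
        List<PhonemeTag> tags = new ArrayList<PhonemeTag>();
        BufferedReader in = new BufferedReader(new FileReader(file));
        String line = in.readLine();             // skip the header line
        while ((line = in.readLine()) != null) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 2) {
                tags.add(new PhonemeTag(Integer.parseInt(parts[0]), parts[1]));
            }
        }
        in.close();
        return tags;
    }
}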

The new application framework should be able to support both (lipsync editor and compositor), so it should be a lot easier to write both tools on top of it (with the additional benefit that they should integrate nicely into JPatch's GUI and get undo/redo support for free).
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Sun Aug 26, 2007 10:17 am

I'm not trying to push more features onto JPatch (at least, not in the current schedule), I'm just trying to make sure that we're not leaving anything important out.

I guess what I could do is put together a mockup of how the compositor might behave, to see if it matches what people might want, and what the framework will support. I'm thinking of Blender (since it's what I'm most familiar with).

While importing from Papagayo and company is important, lipsync is one of those places where you've convinced me that "less is more" - catching only a few mouth shapes is much more effective than trying to match each and every phoneme. :D
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Sun Aug 26, 2007 1:15 pm

I guess what I could do is put together a mockup of how the compositor might behave, to see if it matches what people might want, and what the framework will support. I'm thinking of Blender (since it's what I'm most familiar with).

I won't stop you :P
Seriously, that would be cool - adapting it to fit into JPatch should be possible at a later time.

While importing from Papagayo and company is important, lipsync is one of those places where you've convinced me that "less is more" - catching only a few mouth shapes is much more effective than trying to match each and every phoneme.

I'll have to do much more lip-syncing to gain more experience, but I too think that, especially with sentences that are spoken very quickly, it makes sense to just hit every 2nd or 3rd "viseme". Here's a 2-pass breakdown from Stop Staring:
Code:
fountain      F-AH-OO-N-T-IH-N      F-AH-OO-IH
photograph    F-OH-T-OH-R-AH-F      F-OH-T-AH-F
shepherd      SH-EH-P-R-D           SH-EH-P-R
stop staring  S-T-OH-P S-T-EH-R-NG  S-OH-P S-EH-R-IH

So I think the lipsync tool should have a feature to "hide" certain visemes to go from the first to the second pass.
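As a data-model sketch, "hiding" could just be a flag on each tag (reusing the illustrative PhonemeTag class from the import sketch above), with the second pass derived by filtering:

Code:
import java.util.ArrayList;
import java.util.List;

// Derives the "second pass" from the full breakdown by skipping hidden tags.
// Hidden tags stay in the document, so the full first pass can be restored at any time.
public class VisemePass {

    public static List<PhonemeTag> visibleTags(List<PhonemeTag> firstPass) {
        List<PhonemeTag> secondPass = new ArrayList<PhonemeTag>();
        for (PhonemeTag tag : firstPass) {
            if (!tag.hidden) {
                secondPass.add(tag);
            }
        }
        return secondPass;
    }
}

The animator would toggle the flag from the timeline; playback would only ever see the filtered list.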
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Sun Aug 26, 2007 7:03 pm

I'll have to play with it again, but my recollection was that you could pretty much look at the wave and see where it changed. That's the point where there's an audible phoneme.

So for the first pass, I'd block in the words that needed to be lipsynced, placing a tag at the start of each new wave. These would be useful for placing expressions into the animation as well. That's your first column.

For the second pass, I'd hit the start of each new wave within the word, marking where the phonemes fall. I wouldn't necessarily want to put a viseme on each phoneme, but I'd like to know where they fall so I know what options I could choose. That's the second column.

Now it's time to actually animate the mouth. I've suggested before that JPatch have a special "lipsync" track of mutually exclusive morphs, so that putting a morph on the track will automatically crossfade the prior morph out and the new morph in. That's the track I'd drop the visemes onto.
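A minimal sketch of what such a mutually exclusive track could evaluate per frame, assuming simple linear crossfades between adjacent keys (the names are illustrative, not an actual JPatch API):

Code:
import java.util.List;

// Evaluates a mutually exclusive morph track: between two keys, one viseme fades
// out while the next fades in, so the weights always sum to 1.
public class LipsyncTrack {

    // Returns the blend weight of the key at index i for the given frame.
    public static float weightAt(List<PhonemeTag> keys, int i, float frame) {
        PhonemeTag key = keys.get(i);
        PhonemeTag prev = (i > 0) ? keys.get(i - 1) : null;
        PhonemeTag next = (i + 1 < keys.size()) ? keys.get(i + 1) : null;

        if (prev != null && frame >= prev.frame && frame < key.frame) {
            // fading in: 0 at the previous key, 1 at this key
            return (frame - prev.frame) / (float) (key.frame - prev.frame);
        }
        if (next != null && frame >= key.frame && frame < next.frame) {
            // fading out: 1 at this key, 0 at the next key
            return 1f - (frame - key.frame) / (float) (next.frame - key.frame);
        }
        // full weight exactly on the key, zero everywhere else
        return (frame == key.frame) ? 1f : 0f;
    }
}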

But I think your suggestion to be able to "hide" (deactivate) visemes from the track is pretty clever, and would work nicely. In that case, there's no need for that second track.

If JPatch automatically dropped the visemes onto the lipsync track (like Papagayo does), you could quickly drag them into place, and then hide the ones you don't want to use.

One caveat: sometimes you get two visemes falling on the same frame.
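One simple way an importer could handle that (purely a suggestion): keep only the last viseme that lands on a given frame, so the track never contains a zero-length crossfade.

Code:
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Collapses colliding tags so that no two visemes share a frame;
// the last tag imported for each frame number wins.
public class FrameCollisions {

    public static List<PhonemeTag> resolve(List<PhonemeTag> tags) {
        Map<Integer, PhonemeTag> byFrame = new LinkedHashMap<Integer, PhonemeTag>();
        for (PhonemeTag tag : tags) {
            byFrame.put(Integer.valueOf(tag.frame), tag);
        }
        return new ArrayList<PhonemeTag>(byFrame.values());
    }
}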
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Postby sascha » Sun Aug 26, 2007 8:06 pm

but my recollection was that you could pretty much look at the wave and see where it changed

I think for a really convincing lip-syncing it would be best if there was a video of the voice recording session. Not only could you base the mouth shapes on those of the real actor, you'd also have a nice hint about all the body language involved.

One caveat: sometimes you get two visemes falling on the same frame.

Yes, some people manage to say an entire sentence within a single frame, but that's best handled using motion blur :-)
Seriously, I wouldn't worry - take a look at some live action footage, remember the dialog and watch it frame by frame. You'll be surprised how many phonemes are simply invisible.
For an example of how not to do it, watch my initial Moai animation - at some points the mouths rush from one shape to the next literally every frame, and it doesn't look very convincing. The first tests looked even worse; it got better after I simply dampened some of the mouth morphs to about 50% or less. The correct solution would have been to simply skip some of the shapes - of course it requires experience to know when to skip which shape, but I'm sure there's no black magic involved.
sascha
Site Admin
 
Posts: 2792
Joined: Thu May 20, 2004 9:16 am
Location: Austria

Postby dcuny » Sat Sep 01, 2007 10:57 am

I've split the tail of this post off onto a Video Editor topic, partly because it had become specific to the editor, and partly because I wanted to see how the phpBB Split option worked. :P
dcuny
 
Posts: 2902
Joined: Fri May 21, 2004 6:07 am

Re: What's Needed for a Complete Animation Solution?

Postby deloresi » Fri Jul 31, 2009 10:39 am

How do I make an animation that will last about thirty seconds and rotate the camera angle around my project? My friend needs an answer to this question for a school project.
deloresi
 
Posts: 2
Joined: Sat Jul 25, 2009 8:40 am

