OK. The basic workflow is covered in the manual on p. 248: you have a switch layer full of mouth shapes, you have imported a dialogue sound file, you assign this in the switch layer tab and voilà: the mouth opens and closes according to the sound. Let's explain this bit by bit.
The following example works with bitmaps, but it's the same with mouth shapes in vector layers. As you can see, my head isn't complete: there are holes for the eyes (a much easier setup than masking), and the chin is not drawn completely. Why? Because the chin (and cheeks) move while talking.
For each head view (I had 5 different ones) there's one set of mouth shapes, named mouth_0 (the closed one) to mouth_3. Nothing fancy here, no phonemes, but this works great for average dialogue and realistic characters.
Put the mouth shapes into a switch layer like this:
- mouth switch layer
- - mouth_3
- - mouth_2
- - mouth_1
- - mouth_0
The closed mouth always goes at the bottom; the top layer is the one with the mouth open widest.
Prepare your sound files. They should be normalized, so the sound uses the whole dynamic range. The sound files for lip sync don't have to be the same ones you will use for the final mix: they could be layout sound (by a different voice actor) or something clean and loud made especially for this automatic lip syncing, replaced by sound with effects in the final edit.
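Normalization here just means scaling the recording so its loudest peak fills the available range. A minimal sketch of that idea (this is the concept only, not anything Moho runs; samples are assumed to be floats in [-1.0, 1.0]):

```python
def peak_normalize(samples):
    """Scale samples so the loudest peak uses the full range [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)  # pure silence: nothing to scale
    return [s / peak for s in samples]

# A quiet recording peaking at 0.25 gets boosted 4x:
quiet = [0.1, -0.25, 0.2, 0.0]
print(peak_normalize(quiet))  # [0.4, -1.0, 0.8, 0.0]
```

In practice you would do this in your audio editor (e.g. a "normalize" command) before importing the file.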
The manual says you can control the sound file volume (in the audio layer tab), but I never used that. Sometimes I had a problem with loud voices where the mouth was wide open all the time. To adjust this, I duplicated the mouth_1 layer one or two times, so mouth_3 was hit less often.
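Why the duplication trick works: if (and this is an assumption about the internals, purely for illustration) the automatic assignment splits the loudness range into equal bins, one per stacked layer, then extra mouth_1 copies narrow the bin that triggers mouth_3. A hypothetical sketch (pick_shape is my own name, not a Moho function):

```python
def pick_shape(amplitude, layer_names):
    """Map a normalized loudness (0.0-1.0) to one of the stacked mouth
    layers by equal-width bins, bottom layer = quietest."""
    index = min(int(amplitude * len(layer_names)), len(layer_names) - 1)
    return layer_names[index]

plain  = ["mouth_0", "mouth_1", "mouth_2", "mouth_3"]
biased = ["mouth_0", "mouth_1", "mouth_1", "mouth_2", "mouth_3"]  # mouth_1 duplicated

# The same fairly loud moment opens the mouth less in the biased stack:
print(pick_shape(0.78, plain))   # mouth_3
print(pick_shape(0.78, biased))  # mouth_2
```

With the duplicate in place, only the very loudest moments (roughly the top fifth of the range instead of the top quarter) still reach mouth_3.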
You assign the sound file to the switch layer, hit OK, and it fills the timeline with keys. Usually you will need to adjust those a bit: erase some when the mouth gets too busy, assign a more closed shape for consonants which are too loud, and so on. This shouldn't take much time. Take care to really have the mouth closed at the end of a phrase.
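That cleanup pass can be pictured like this (tidy_lip_keys and the minimum key gap are my own names for illustration, not a Moho feature — in Moho you do this by hand in the timeline): drop keys that come too fast, then make sure the phrase ends closed.

```python
def tidy_lip_keys(keys, min_gap=2):
    """Thin over-busy auto-generated keys and close the mouth at the end.
    keys: sorted list of (frame, shape) tuples."""
    cleaned = []
    for frame, shape in keys:
        if cleaned and frame - cleaned[-1][0] < min_gap:
            continue  # mouth too busy: skip keys that come too fast
        cleaned.append((frame, shape))
    # make sure the phrase ends with a closed mouth
    if cleaned and cleaned[-1][1] != "mouth_0":
        cleaned.append((cleaned[-1][0] + min_gap, "mouth_0"))
    return cleaned

keys = [(1, "mouth_2"), (2, "mouth_3"), (4, "mouth_1"), (5, "mouth_2")]
print(tidy_lip_keys(keys))  # [(1, 'mouth_2'), (4, 'mouth_1'), (6, 'mouth_0')]
```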
I often had sound files with different speakers, but that's easy to solve: assign the same sound to the mouth switch layer of all characters, then go into each timeline and erase the parts which are not for this character.
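Conceptually, the per-character erasing amounts to keeping only the keys inside that character's speaking ranges. A hypothetical sketch (the function and the frame ranges are mine, for illustration — in Moho you select and delete the keys in the timeline):

```python
def keep_keys_for_speaker(keys, speaking_ranges):
    """Keep only the switch keys that fall inside this character's lines.
    speaking_ranges: list of (start_frame, end_frame) where this
    character is talking; everything else gets erased."""
    def speaks_at(frame):
        return any(start <= frame <= end for start, end in speaking_ranges)
    return [(frame, shape) for frame, shape in keys if speaks_at(frame)]

keys = [(10, "mouth_1"), (40, "mouth_3"), (90, "mouth_2")]
# This character only talks during frames 0-50:
print(keep_keys_for_speaker(keys, [(0, 50)]))  # [(10, 'mouth_1'), (40, 'mouth_3')]
```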
"But what about phonemes?" you may ask. Well, you have to decide: it's either phonemes or automatic, not both. Well ...
There is a way, but it makes your character setup more complex. See this:
- top mouth switch layer
- - automatic mouth switch layer
- - - mouth_3
- - - mouth_2
- - - mouth_1
- - - mouth_0
- - phoneme mouth switch layer
- - - FV
- - - L
- - - O
- - - WQ
- - - smile
- - - scream
In this setup, you could assign a sound file to the automatic mouth switch layer and a switch data file to the phoneme mouth switch layer. You use the top mouth switch layer to manually switch between those two sets. And you can still select special expressions like smile or scream.
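How the nested switches resolve at any given frame can be sketched like this (mouth_at_frame and the key dicts are hypothetical names; the behavior assumed is that a switch layer holds the last key at or before the current frame):

```python
def mouth_at_frame(frame, top_keys, auto_keys, phoneme_keys):
    """Resolve which mouth drawing is visible at a frame.
    Each *_keys is a dict {frame: value}; the active value is the
    last key at or before the frame."""
    def active(keys, default):
        past = [f for f in keys if f <= frame]
        return keys[max(past)] if past else default

    branch = active(top_keys, "automatic mouth switch layer")
    if branch == "automatic mouth switch layer":
        return active(auto_keys, "mouth_0")  # sound-driven set
    return active(phoneme_keys, "mouth_0")   # switch-data / manual set

top_keys = {0: "automatic mouth switch layer", 100: "phoneme mouth switch layer"}
auto_keys = {0: "mouth_0", 10: "mouth_2"}
phoneme_keys = {100: "smile"}

print(mouth_at_frame(50, top_keys, auto_keys, phoneme_keys))   # mouth_2
print(mouth_at_frame(120, top_keys, auto_keys, phoneme_keys))  # smile
```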
This works fine with bitmaps and equally well with vectors. But with vector mouth shapes it's better to work with bones instead of switch layers.
If you have Moho Pro, you can use Smart Bones to control mouth movement automatically. I create a mouth action (from closed to wide open) and assign a smart bone to it. I import the audio as usual. Now I select that smart bone and select Script/Sound/Bone Audio Wiggle, then assign the sound file. There are two more options: frame interval (I use 2, that's good enough), and Max Angle. This one is kind of a volume control: 360 means full movement, 180 means half the movement at the loudest part of the sound file. I use this to fine-tune the mouth movements according to emotion.
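The way I understand Max Angle, it simply scales the bone movement in proportion to loudness. A hypothetical sketch of that relationship (wiggle_angle is my own name; the exact formula Moho uses is an assumption):

```python
def wiggle_angle(amplitude, max_angle=360.0):
    """Bone angle in degrees for a normalized loudness sample (0.0-1.0).
    max_angle acts as a volume control: 360 uses the full movement
    range, 180 gives only half the movement at the loudest moment."""
    return amplitude * max_angle

print(wiggle_angle(1.0, 360.0))  # 360.0 -> mouth fully open at the loudest peak
print(wiggle_angle(1.0, 180.0))  # 180.0 -> same peak, half the movement
print(wiggle_angle(0.5, 180.0))  # 90.0
```

So for a quiet, restrained scene you would pick a smaller Max Angle, and for shouting you would go toward 360.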