Lip sync: Switch layers vs Smart Bones

sfz95
Posts: 4
Joined: Fri Jun 13, 2014 3:11 am

Lip sync: Switch layers vs Smart Bones

Post by sfz95 »

Which method do most people prefer?

With Switch Layers, you only have to set up each phoneme once, and if you don't mess around with the number of points, you can interpolate between them as well. You can use the same method for other non-speaking mouth shapes, such as a smile, frown, or grimace. The one disadvantage of this method is that if you want your character's mouth to show more than one emotion, you have to set up a separate set of mouth shapes for each emotion, possibly leading to switch layers nested inside switch layers.

With smart bones, on the other hand, you have to fiddle with several dials for each and every mouth position, but you can also have one dial that controls the "happiness" of the mouth, so there's less setup required.

So in short, one requires more setup but is faster to animate with; the other needs less setup but involves more dials and fiddling when it comes to the actual animating. Is that about right? Which setup do you prefer?

(I'm using AS Pro 9.5 Free Trial, and I will probably buy version 9.5 rather than version 10 once the trial ends due to the high price of the current version. This is also my first post here, so hey everyone!)
Danimal
Posts: 1584
Joined: Thu Nov 15, 2007 3:06 pm
Location: The Danimal Kingdom

Re: Lip sync: Switch layers vs Smart Bones

Post by Danimal »

I prefer Switch Layers with no interpolation. Way easier and looks better to my eye as well.
~Danimal
Furpuss
Posts: 80
Joined: Mon Jan 13, 2014 11:53 pm

Re: Lip sync: Switch layers vs Smart Bones

Post by Furpuss »

I've tried both systems and I prefer interpolated switch layers to bone dials. As my characters use about 25 mouth shapes, it's a lot of work but it gets the best results. There may be some combination of the two which is even better but I haven't tried it yet.

Furpuss
ddrake
Posts: 274
Joined: Mon Nov 11, 2013 9:25 pm

Re: Lip sync: Switch layers vs Smart Bones

Post by ddrake »

Furpuss wrote:There may be some combination of the two which is even better but I haven't tried it yet.
At the beginning of my current project I attempted a Switch Layer/Smart Bone combo, which was successful to some degree.

I abandoned it in favor of just smartbones, because I didn't have time to work out all the kinks. But I think for a mouth constructed in a simple enough way, the combo could be pretty effective to allow a lot of variation.

The concept was straightforward enough (I think):

Put a switch layer in a bone layer with nested bone control on.

Design your resting mouth shape, stacked with all the components for other shapes (teeth/tongue), in a single vector layer inside the switch layer, and create a few simple smart bone actions for it: up/down for the corners, width scaling, position, etc. (Simplicity, I think, is key; my first approach had quite a lot.)

Then duplicate this layer (now all copies will be influenced by the smartbone action) and create your main basic phonemes from the duplicates.

Then just use the switch layer as you normally would; the outer bone layer will simultaneously adjust every switch sub-layer, and you can use the smart bones to alter the mouth corners and whatnot after your main switch-layer lip sync is done.

Now, I'm not saying it would be efficient for everyone, or anyone, but I tend to like to tinker with things as I go, and this approach could give some added control without tossing a bunch of new "happy" or "sad" mouth shapes into your switch.


***Edit***

In case that sounded like nonsense, here's a quick proof of concept I whipped up in about five minutes with very simple shapes as an example. (It's done in ASP 10, but I swear my first test worked in 9; I just don't recall how, without nested bone control.)
https://www.dropbox.com/s/dkpmm2fkvva8m ... Mouth.anme
-ddrake
dueyftw
Posts: 2174
Joined: Thu Sep 14, 2006 10:32 am
Location: kingston NY

Re: Lip sync: Switch layers vs Smart Bones

Post by dueyftw »

If you just have a simple mouth, like most anime, I would use one or a few smart bones. If the mouth has something like seven phonemes, a switch layer and Papagayo work well. Sometimes a combination of the two. It's a matter of need and what will work.

Dale
Telemacus
Posts: 189
Joined: Thu Oct 10, 2013 3:15 pm
Location: OZ_ually.

Re: Lip sync: Switch layers vs Smart Bones

Post by Telemacus »

sfz95 wrote:(...) and if you don't mess around with the number of points (...)
Which is, in my opinion, one of several reasons why AS should have a point counter.
Another would be to alert you when a layer is starting to accumulate a huge number of points.
dueyftw
Posts: 2174
Joined: Thu Sep 14, 2006 10:32 am
Location: kingston NY

Re: Lip sync: Switch layers vs Smart Bones

Post by dueyftw »

Telemacus wrote:
sfz95 wrote:(...) and if you don't mess around with the number of points (...)
Which is, in my opinion, one of several reasons why AS should have a point counter.
Another would be to alert you when a layer is starting to accumulate a huge number of points.
Just counting the points won't help. Each point effectively has a hidden label (its order in the layer), and shapes interpolate point-for-point by that order. Draw a square in one direction, then draw a rectangle in the opposite direction, and morph between the two to see what happens.
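
To see why, here is a tiny sketch in plain Lua (nothing Moho-specific, and the coordinates are just made up) of two shapes drawn in opposite directions being blended point-by-point by their order:

Code: Select all

-- Square drawn counter-clockwise, starting at the lower-left corner.
local square = { {0, 0}, {1, 0}, {1, 1}, {0, 1} }
-- Rectangle drawn clockwise from the same corner, so its corners pair up badly.
local rect   = { {0, 0}, {0, 1}, {2, 1}, {2, 0} }

-- Halfway through the morph, each point is blended with its same-index partner.
local t = 0.5
for i = 1, #square do
    local x = (1 - t) * square[i][1] + t * rect[i][1]
    local y = (1 - t) * square[i][2] + t * rect[i][2]
    print(string.format("point %d: (%.2f, %.2f)", i, x, y))
end
-- Prints (0,0), (0.5,0.5), (1.5,1), (1,0.5): a thin sliver, not a clean blend,
-- even though both end shapes look fine on their own.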

Dale
Pesto
Posts: 107
Joined: Wed May 29, 2013 5:32 pm

Re: Lip sync: Switch layers vs Smart Bones

Post by Pesto »

To interpolate between switch layers you just check the box on the Switch tab of the Layer Settings dialog, but how do you adjust the interpolation in the motion graph? As far as I can tell, you can't. Am I correct? If not, can someone tell me how to do it, because I can't figure it out.

Thanks
heyvern
Posts: 7035
Joined: Fri Sep 02, 2005 4:49 am

Re: Lip sync: Switch layers vs Smart Bones

Post by heyvern »

Pesto wrote:To interpolate between switch layers you just check the box on the Switch tab of the Layer Settings dialog, but how do you adjust the interpolation in the motion graph? As far as I can tell, you can't. Am I correct? If not, can someone tell me how to do it, because I can't figure it out.

Thanks
This is one reason I don't like using switch layers for lip sync: they only allow one type of interpolation on the switch keys. You can only use "step" or "cycle".

Another issue for lip sync is that there are so many layers to edit if you need to change something, even something simple like a new shape or reconnecting some points. A tiny change has to be made EXACTLY the same way on every single layer. Another limitation, which seems like a contradiction, is that each mouth shape IS only one layer of the switch. You can't spread your mouth shapes across multiple layers; each one has to be on a single layer. And if you need to mask a mouth switch layer, you have to duplicate the switch, set the duplicate as a mask, and keep all the keys for both switches synced up.

I can't tell you how many times I've had a character all set up, mouth switch layers ready to go... then... OOPS! I realize I messed up one of the shapes, or need to disconnect and reconnect some points, or add points. I get that sinking, sick feeling in my stomach thinking about all of those layers that have to be either fixed individually or started over from scratch.

(A way around this is to create all the mouth shapes as keys on one master layer that I keep, then duplicate that layer for each switch sub-layer, copy the relevant key to frame 0, and delete the other keys... but that's still a lot of effort.)

I prefer using bones or smart bones for lip sync. There is just "one" set of mouth points, whether on multiple layers or with masking, so there's only one set of points to worry about. Editing and changing things is simpler, and you have access to all of the keyframe interpolation options, not just one.
EHEBrandon
Posts: 125
Joined: Mon May 26, 2014 2:16 am

Re: Lip sync: Switch layers vs Smart Bones

Post by EHEBrandon »

It depends on the style. If I'm doing a Japanese anime style, I'll use switch layers. If you've ever watched anime, the lip sync is very simple, so with switch layers you only make three different mouths: open, half open, and closed. The same goes for the different expressions. If I'm doing a more toon style, I'd use smart bones, since with so many different mouth shapes that's usually the best way to go.
Phazor
Posts: 22
Joined: Tue Oct 06, 2020 9:48 pm

Re: Lip sync: Switch layers vs Smart Bones

Post by Phazor »

:idea: In my opinion, this is why we need people who know how to write scripts to come together and create a solution for automatic smart bone lip syncing. I don't think we would be discussing this issue if such a script existed.

Papagayo is great, and so is the bone Audio Wiggle script, but there's one specific reason these options are not full solutions compared to switch layers: they require manual labor from the user to set the keyframes. Switch layers don't require manually placing keyframes in the appropriate spots on the timeline, which makes that method fast and convenient to animate with, whereas the other methods are still tedious and lengthy, even though they're better than the traditional approach.

There needs to be a script or tool that can set the smart bones automatically at the right locations on the timeline.

:D In fact, you could hypothetically create an empty/blank switch layer group, name the individual sub-layers after the phonemes (AI, E, L, FV, etc, MBP, O, U, WQ, rest), assign that empty switch layer to the audio file from the layer settings, and then use the keyframe data that Moho Pro generates for that switch layer on the timeline and translate it into bone angles. The empty switch layer already provides the locations where the keyframes are supposed to be, which in theory would make it much easier for the person writing the script to make it just as effective as switch layers while requiring much less work from the user.


If enough people point this out and request it in the community, I'm sure someone who knows how to write scripts could make a great one.
synthsin75
Posts: 9964
Joined: Mon Jan 14, 2008 11:20 pm
Location: Oklahoma

Re: Lip sync: Switch layers vs Smart Bones

Post by synthsin75 »

Maybe it's not the best look to volunteer others to do work for you. Maybe the people who need this should start a fund to finance the work.
MrMiracle77
Posts: 181
Joined: Mon Jun 24, 2019 2:30 am

Re: Lip sync: Switch layers vs Smart Bones

Post by MrMiracle77 »

I use smart bones for the upper/lower lips, upper/lower teeth, tongue, the corners of the mouth, and the rotation of the mouth, but then create 2-frame standard actions using those smart bones for common mouth positions. I just select and insert the appropriate standard action into the timeline, then advance the playhead until I reach the audio for the next mouth position. I can do one second of lip sync in about 8-12 seconds that way.

I could probably go even faster if there were keyboard shortcuts for selecting and inserting actions into the timeline.
- Dave

(As Your GM)
Phazor
Posts: 22
Joined: Tue Oct 06, 2020 9:48 pm

Re: Lip sync: Switch layers vs Smart Bones

Post by Phazor »

I don't know if I explained the lip syncing script I was referring to well enough, so I'll try to communicate it more clearly and show how simple this could be for anyone who knows how to write scripts.

I'm talking about an automatic smart bone lip syncing script that takes the automatically generated keyframes from an empty switch layer group and translates them into bone angles for specific mouth poses. The user would specify which bones are involved, their names, and the bone values that produce each mouth shape, and the script would apply those adjustments to the smart bones, creating keyframes at the appropriate locations on the timeline based on the empty switch layer group's keyframe data.

The switch group layer would be named "Phonemes", and all of the sub-layers within it would be named after the different phonemes (i.e. "AI", "E", "L", "FV", "etc", "O", "U", "WQ", and "rest").

I got the idea from another lip syncing script called "msLipSync". It has all the functionality for creating keyframes for specific bone/mouth poses, but you still have to place the keyframes manually in the correct spots on the timeline. However, if the script could somehow use the data from an empty switch layer group driven by Moho Pro's automatic lip syncing feature, that would tell it exactly where on the timeline the keyframes should go, making the process entirely automatic with no manual labor from the user.
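
To make the idea concrete, here's a rough sketch in plain Lua of just the translation step. It doesn't touch Moho's scripting API (actually reading the switch keys and writing the bone keyframes would still need that), and the bone names, dial values, and frame numbers below are invented examples, not real data:

Code: Select all

-- Hypothetical user setup: for each phoneme, the value each smart bone dial
-- should take. Bone names and values here are made up for illustration.
local poses = {
    rest = { JawOpen = 0.0, CornerL = 0.0,  CornerR = 0.0  },
    MBP  = { JawOpen = 0.0, CornerL = 0.1,  CornerR = 0.1  },
    AI   = { JawOpen = 0.8, CornerL = 0.2,  CornerR = 0.2  },
    O    = { JawOpen = 0.6, CornerL = -0.3, CornerR = -0.3 },
    -- ...the remaining phonemes (E, L, FV, etc, U, WQ) would be defined the same way
}

-- Keyframe data as it would be read from the empty "Phonemes" switch layer:
-- each key is a frame number plus the name of the active sub-layer (phoneme).
local switchKeys = {
    { frame = 0,  phoneme = "rest" },
    { frame = 12, phoneme = "MBP"  },
    { frame = 15, phoneme = "AI"   },
    { frame = 22, phoneme = "O"    },
}

-- The translation step: every switch key becomes one keyframe per smart bone.
local boneKeys = {}
for _, key in ipairs(switchKeys) do
    local pose = poses[key.phoneme]
    if pose then
        for boneName, value in pairs(pose) do
            table.insert(boneKeys, { frame = key.frame, bone = boneName, value = value })
        end
    end
end

-- In a real script, each entry would set that bone's dial at that frame.
for _, k in ipairs(boneKeys) do
    print(string.format("frame %d: set %s to %.2f", k.frame, k.bone, k.value))
end

The only Moho-specific pieces left would be reading the keys from the switch channel and setting each bone's dial at those frames, which is exactly the gap a script author would need to fill.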

It would make lip syncing with smart bones quick and easy for animators. It would solve the problem.

Here's a link to the msLipSync script so you can see how it works; the only thing that would need to change is adding a feature to translate the empty switch group's keyframes into those smart bone poses.
https://mohoscripts.com/script/msLipSync