Use Phoneme Data to Drive a 2.5D Bone Rig?

General Moho topics.

Moderators: Víctor Paredes, Belgarath, slowtiger

Monkey Possum
Posts: 33
Joined: Thu Jul 26, 2018 1:29 am
Location: Los Angeles

Use Phoneme Data to Drive a 2.5D Bone Rig?

Post by Monkey Possum »

I'm playing around with different ways to do lip sync. I have a pretty nice 2.5D head rig with lots of bones to control head turns and facial features, including the mouth. It's a lot like the one in this video:

https://www.youtube.com/watch?v=60rCuqXi53w

But I really want to automate lip sync with data from a tool like Papagayo. Papagayo is designed to work with switch layers. With interpolation on, and plenty of artistic decisions, some smooth lip sync can be done. But unless I'm wrong, you can't incorporate the switch-layer system into the 2.5D head rig. It seems to be one or the other.

First, am I right that it's basically either a switch-layer system or a 2.5D bone rig?

Second, my real question: is there a way to use phoneme data to drive the bones? I am imagining two scripts: (1) one that lets the user manipulate bones, name and save that configuration as a target pose, and build up an entire set of poses; and (2) one that converts frame-numbered phonemes (like Papagayo data) into keyframes driving the bone manipulations, according to the set created.
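The two-script idea can be sketched in plain code. This is a minimal illustration, not Moho's actual Lua scripting API; the bone names, angles, and data layout here are all made up for the example.

```python
# Script 1's hypothetical output: named target poses,
# each mapping bone name -> rotation angle (radians).
pose_set = {
    "rest": {"jaw": 0.00, "lip_upper": 0.00, "lip_lower": 0.00},
    "AI":   {"jaw": 0.45, "lip_upper": 0.10, "lip_lower": -0.20},
    "O":    {"jaw": 0.30, "lip_upper": 0.15, "lip_lower": -0.15},
}

# Script 2's hypothetical input: frame-numbered phonemes,
# as a Papagayo export would provide.
phoneme_track = [(1, "rest"), (5, "AI"), (9, "O"), (14, "rest")]

def phonemes_to_keyframes(track, poses):
    """Expand (frame, phoneme) pairs into per-bone keyframes."""
    keys = []  # (frame, bone, angle) triples
    for frame, phoneme in track:
        for bone, angle in poses[phoneme].items():
            keys.append((frame, bone, angle))
    return keys

keys = phonemes_to_keyframes(phoneme_track, pose_set)
```

In Moho itself, the last step would set bone-angle keyframes instead of building a list, but the mapping from phoneme timing to per-bone keys is the same.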

I can't be the first to think of this. Does it exist? Maybe? Is it impossible?

Thanks.
synthsin75
Posts: 9935
Joined: Mon Jan 14, 2008 11:20 pm
Location: Oklahoma

Re: Use Phoneme Data to Drive a 2.5D Bone Rig?

Post by synthsin75 »

Monkey Possum wrote:Is there a way to use phoneme data to drive the bones?
Ramon's script LipSyncWithActions, which Victor reposted here ( viewtopic.php?p=85214#p85214 ), lets you trigger actions from a Papagayo .dat file. So if you make a regular action for each of the smart-bone-posed mouth shapes and name them after the Papagayo phonemes, the script should add those actions to the timeline as the .dat file dictates. I haven't used it in forever, but I think you need to run the script from the same layer you created the actions on...the top or head bone layer.

This script basically does what you want. You just define the phonemes as regular actions.
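For anyone curious what the script is reading: as I understand it, a Papagayo/Moho switch export is a plain-text .dat file with a "MohoSwitch1" header followed by frame/phoneme pairs. A rough sketch of parsing it (illustrative Python, not part of the Lua script itself):

```python
def read_papagayo_dat(text):
    """Parse Papagayo .dat text into (frame, phoneme) pairs."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    # Skip the format header line if present.
    if lines and lines[0].startswith("MohoSwitch"):
        lines = lines[1:]
    track = []
    for ln in lines:
        frame, phoneme = ln.split(None, 1)
        track.append((int(frame), phoneme))
    return track

sample = """MohoSwitch1
1 rest
6 MBP
9 AI
15 rest
"""
track = read_papagayo_dat(sample)
```

Each phoneme name in the file is what gets matched against your action names, which is why the actions have to be named exactly after the Papagayo phonemes.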
ahmed111
Posts: 3
Joined: Fri Oct 05, 2018 11:26 am
Location: sydney

Re: Use Phoneme Data to Drive a 2.5D Bone Rig?

Post by ahmed111 »

Great thread
Monkey Possum
Posts: 33
Joined: Thu Jul 26, 2018 1:29 am
Location: Los Angeles

Re: Use Phoneme Data to Drive a 2.5D Bone Rig?

Post by Monkey Possum »

Wow, thanks Wes! And thanks Ramon! This script works right away. Here's a tip:

The script asks you to "Insert action as" either "reference" or "copy".

"Reference" keyframes will play the actions as per the keyframes on the action timelines.

"Copy" keyframes will copy the keyframes from the action timelines onto the main timeline.

"Copy" works like what I had in mind. Thanks again.