Jamie Hurcomb

Putting Facial Mocap and Key Frame Animation Together

As I presumed, putting both animation features together was not as easy as applying the facial mocap to my now-keyframed avatar and calling it a day. A lot of additional facial keying needed to be done in order to get the right facial animations working in harmony with the head/body movements I had keyed into my animation timeline. Juxtaposing my first attempt against my reference footage also showed me, for the first time, that some serious re-timing needed to be implemented. I wish iClone had a feature where you could bring in a reference footage window to play alongside your animation, but alas, I'm not sure many 3D animators work this way. Most of their creations are entirely novel, or they use mocap data as their reference (something I will be exploring later on, given that Ryerson University will be lending me a Rokoko SmartSuit).


For the sake of education, I'd like to be fully transparent about my many attempts at harmonizing my animation (body and facial mocap) with my reference footage, and some of my outputs were quite off. As a disclaimer, the goal of this blog was never to perfectly replicate photorealistic footage, as I'm not interested in complete and total synchrony between the two. There are aspects of the uncanny valley that I'm really fascinated by, and this is something I will continue to explore throughout this MA program. But I must reiterate: the goal here is not necessarily to AVOID the uncanny valley, or to say that 3D animation can be used as a seamless replacement for photorealism in the documentary landscape.


Attempt #1: Needed further facial animation keying (expressions), re-timing to match the photorealistic footage, and re-framing.

Attempt #2: Better timing, but still some animation jitters that needed to be smoothed out.


I must say that juxtaposing the animation against the photorealistic footage will always heighten the uncanny valley effect of a 3D animated likeness. Though recreating the video footage was indeed the point of this assignment, I prefer to watch the output as its own, standalone piece, without the comparison. In my final MRP, this is the way I'll be moving through my virtual production workflow: I don't intend to release the Zoom interview clips I'll be working from as the interviews themselves. However, for this exercise, I do see the benefit of outputting the two clips side by side.


Attempt # I lost count: The cleanest output, with smoother transition curves applied and more facial keyframe editing.
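
For anyone curious what those "transition curves" actually do under the hood, here's a rough sketch of the idea in Python. Instead of moving linearly between two keyframes (which reads as stiff and robotic), an ease-in/ease-out curve slows the motion at either end. To be clear, this is purely illustrative; the names and numbers below are my own and have nothing to do with iClone's internals.

```python
# Illustrative only: a cubic ease-in/ease-out "transition curve" between two
# keyframes, similar in spirit to the curves you can apply in iClone.

def ease_in_out(t: float) -> float:
    """Smoothstep easing: slow start, slow end (t in [0, 1])."""
    return t * t * (3.0 - 2.0 * t)

def interpolate(v0: float, v1: float, t: float) -> float:
    """Blend two keyframe values along the easing curve instead of a
    straight linear ramp."""
    return v0 + (v1 - v0) * ease_in_out(t)

# Hypothetical example: a head rotation keyed at 0 degrees on frame 0
# and 30 degrees on frame 10.
for frame in range(11):
    t = frame / 10.0
    print(f"frame {frame:2d}: {interpolate(0.0, 30.0, t):6.2f} deg")
```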


Overall, my first impressions of working with animation in iClone are that although this workflow is extremely tedious, time-consuming, and frustrating (no surprises there), if you're willing to put in the effort to constantly refine your keyframing, you can create some interesting outputs. And no, these outputs are by no means perfect, but as an introduction to this workflow, it was a valuable exercise. Over the next few months, I'll keep trying to find better and faster ways of integrating lipsync mocap (using the Liveface app) with iClone's manual animation features like keyframing and motion clips. It will also be interesting to put the Rokoko Motion Capture suit to the test and see whether that streamlined approach to motion animation can alleviate any of the issues I've run into so far (i.e. jittery, robotic animation).
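
On that note, the "smoothing" I keep mentioning boils down to filtering out frame-to-frame noise in a keyframed channel. Here's a minimal Python sketch of the simplest version of that idea, a moving average. iClone's curve editor (and Rokoko's own filtering) is more sophisticated than this, but the underlying principle is the same; again, this code is just for illustration.

```python
# Illustrative only: smoothing mocap jitter with a simple moving average.

def smooth(values: list[float], window: int = 5) -> list[float]:
    """Return a moving-average copy of a keyframed channel, averaging
    each frame with its neighbours to damp frame-to-frame noise."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# Hypothetical example: a jittery rotation channel, before and after.
jittery = [0.0, 2.1, 1.0, 3.2, 2.0, 4.5, 3.1, 5.0]
print(smooth(jittery))
```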


In the next series of posts, I'll be testing out more virtual production features like lighting, camera movements, and alternate rendering methods for smoother exports.
