Visual Effects and the Rise of the Machines

Visual-effects artists battle a host of challenges on each project that comes their way. As the onslaught of superhero films continues, vfx teams must find ways to combine the intricate, organic work of actors with the enhancements of CG and vfx without landing somewhere no one wants to go: the “uncanny valley,” where things look artificial, unbelievable or downright disturbing.

With unforgettable characters such as Gollum and the work of vfx house WETA setting a high-water mark in “The Lord of the Rings” and “Hobbit” films, vfx supervisors keep pushing to fuse the work of actors and animators into something new for audiences who may have already seen it all. That’s where machine learning, which teaches computer systems to make decisions from large data sets and is one of the biggest tech jumps in recent years, comes into play. Already used to create the character Thanos for “Avengers: Infinity War,” machine learning makes it possible to fold the tiniest movements of an actor’s face into the work visual-effects teams do to create anything and everything the latest fantasy/action/superhero story requires.

“Early on, Digital Domain said to me that if you’re not using machine learning, then you’re doing it wrong,” says Dan Deleeuw, vfx supervisor for Marvel Studios. “And that pretty quickly proved to be right because we knew that if Thanos didn’t work in our film, then the film wasn’t going to work the way we wanted.”

They were after a depth of detail that hadn’t been seen in previous films and they wanted to mine everything that Josh Brolin would bring to Thanos. So, they rethought how they would take the actor’s performance into CG.

“Without machine learning, it would be so complicated because of all the little points, all the little controls you’d have to have for each little quarter inch of skin, and then you’d have basically a million controls all over the face of the character,” says Kelly Port, vfx supervisor at Digital Domain. “It would be ridiculous and basically not usable because it would take up so much time to make and control each rig, but if you can capture all that detail, and you have an underlying, relatively simple rig underneath, then you can raise an eyebrow a little bit more or create a slight lip compression.”
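
The trade-off Port describes, a relatively simple rig whose handful of controls drive captured high-resolution detail, is close in spirit to a classic blendshape setup. Below is a minimal Python sketch of that idea only; the control names, vertex counts and random shape deltas are stand-ins for illustration, not anything from the actual Thanos rig.

```python
import numpy as np

N_VERTS = 5_000  # toy mesh size; production faces are far denser

rng = np.random.default_rng(0)
neutral_face = rng.normal(size=(N_VERTS, 3))  # resting face geometry

# A handful of animator-facing controls, each backed by a dense captured
# shape delta, instead of a million per-vertex controls on the rig itself.
blendshapes = {
    "brow_raise_L": rng.normal(scale=0.01, size=(N_VERTS, 3)),
    "brow_raise_R": rng.normal(scale=0.01, size=(N_VERTS, 3)),
    "lip_compress": rng.normal(scale=0.01, size=(N_VERTS, 3)),
}

def pose_face(controls):
    """Combine weighted shape deltas: nudging one slider raises an eyebrow
    'a little bit more' while the fine detail lives in the captured shapes."""
    face = neutral_face.copy()
    for name, weight in controls.items():
        face += weight * blendshapes[name]
    return face

posed = pose_face({"brow_raise_L": 0.6, "lip_compress": 0.2})
```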

Vfx house Digital Domain used its own secret sauce — proprietary machine learning software called Masquerade — for a more natural-looking capture of Brolin’s performance. The vfx team also used Medusa, a performance-capture system created by Disney Research in Zurich, to help create Thanos. And Direct Drive, another custom Digital Domain tool, was used to transfer motion-capture information to the CG character.

During filming, the team placed about 100-150 tracking markers on Brolin’s face and used two vertically positioned HD cameras to get a kind of low-res scan of Brolin. That information was then fed to a machine-learning algorithm that compared it against a library of high-res facial scans for reference, and from there the system decided on the best looks for Brolin’s face. Port and his team would make tweaks each time the algorithm gave them a particular look, and the algorithm would “learn” from all this information how to generate the most realistic face for Thanos.
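
In rough terms, that workflow matches a sparse, markered on-set capture against a library of dense facial scans and blends the closest expressions into a face that can drive the CG character. The short Python example below is a toy illustration of that general idea, not Digital Domain’s Masquerade; the scan counts, the nearest-neighbor blend and the scikit-learn calls are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

N_MARKERS = 150        # roughly the 100-150 facial tracking markers described
N_HIRES_VERTS = 5_000  # real high-res scans are far denser; kept small here

rng = np.random.default_rng(0)
# Library of high-res facial scans plus the marker positions recorded with them.
library_markers = rng.normal(size=(300, N_MARKERS * 3))
library_scans = rng.normal(size=(300, N_HIRES_VERTS * 3))

knn = NearestNeighbors(n_neighbors=4).fit(library_markers)

def solve_frame(onset_markers):
    """Map one low-res on-set frame (from the head-mounted HD cameras) to a
    dense face by blending the closest expressions in the scan library."""
    dist, idx = knn.kneighbors(onset_markers.reshape(1, -1))
    weights = 1.0 / (dist[0] + 1e-8)   # closer scans get more influence
    weights /= weights.sum()
    blended = (weights[:, None] * library_scans[idx[0]]).sum(axis=0)
    return blended.reshape(N_HIRES_VERTS, 3)

frame = rng.normal(size=(N_MARKERS * 3,))
dense_face = solve_frame(frame)
print(dense_face.shape)  # (5000, 3): a dense face to hand to the character rig
```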

The vfx team also kept the motion capture going even when Brolin was no longer doing an official “take” so that they could get additional information about how he moved and watch him while he experimented with the character in-between shooting. This gave them a better sense of how he moved his eyes and other parts of his face.

In the past, on films including “Beauty and the Beast,” high-res facial captures would be done separately when the actor wasn’t around other performers or on set, so the vfx team wasn’t able to get the same kind of spontaneous capture despite doing a higher-res capture of the face overall.

After capturing Brolin on set, the vfx team looked at the machine-generated version of his performance directly next to Brolin’s actual performance. Through careful examination, more adjustments were made, and the algorithm was then given more information it could use to “learn” Brolin’s face.
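
That review loop, predict a face, compare it with the real performance, correct it, and feed the corrections back in, can be sketched as a simple retraining cycle. The Python below is a toy version of the idea only; the linear model, the array sizes and the artist_review stand-in are assumptions for the sketch, not how Digital Domain’s tools actually work.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
N_IN, N_OUT = 450, 900  # toy sizes: marker features in, face-shape features out

# Initial training set, e.g. from seated high-res scanning sessions.
X = rng.normal(size=(200, N_IN))
Y = rng.normal(size=(200, N_OUT))
model = Ridge().fit(X, Y)

def artist_review(predicted):
    """Stand-in for the human step: artists compare the machine-generated face
    with the actor's real performance and hand back a corrected target."""
    return predicted + rng.normal(scale=0.01, size=predicted.shape)

# Each review pass feeds corrected examples back into the data, so the next
# fit "learns" a little more of the actor's face.
for shot in range(3):
    markers = rng.normal(size=(1, N_IN))
    predicted = model.predict(markers)
    corrected = artist_review(predicted)
    X = np.vstack([X, markers])
    Y = np.vstack([Y, corrected])
    model = Ridge().fit(X, Y)
```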

“Josh Brolin is a great collaborator,” says Deleeuw. “Thanos is definitely his performance and everything he brought to that role was incredible, and he was really interested in what we were doing and how it was going to take his performance into this new realm. After we saw the first test for Digital Domain, we knew it was going to be something that nobody has ever seen before. We were able to go back and show Josh, and he got this giant smile on his face. He recognized what he put into the performance he can actually see in the CG, and he said, ‘This is the first time I’ve seen what’s in my mind on the screen.’”

While machine learning was able to crush its close-ups in “Avengers: Infinity War,” it’s also being used to take another type of shot out of the dreaded uncanny valley: crowd simulations. When you take shots of a smaller group of people and then try to reproduce and randomize those shots to make it appear that a larger crowd is present, the eye can easily pick up movements that look artificial or fake, and even pick up patterns like the color of a shirt that seems to pop up every 20 people or so. Then the audience is suddenly aware they’re looking at 10 people who’ve been cloned to look like they’re a group of 10,000. Machine learning can make movements seem more natural and more believable, and give animators more time to tweak what they’ve done so it looks more real overall.
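
The repetition problem is easy to illustrate: build a crowd of thousands from a handful of captured performers and any attribute, like shirt color, recurs on a fixed cycle the eye can lock onto. The toy Python below contrasts naive cloning with per-agent variation; the attributes and numbers are made up for illustration, and in practice learned models vary the motion itself, not just the look.

```python
import random
from collections import Counter

random.seed(0)
SHIRT_COLORS = ["red", "blue", "green", "grey", "black"]

# Naive cloning: a crowd of 10,000 built from only 10 captured performers,
# so appearances repeat on a fixed 10-agent cycle that viewers can spot.
base_performers = [{"shirt": random.choice(SHIRT_COLORS)} for _ in range(10)]
cloned_crowd = [base_performers[i % 10] for i in range(10_000)]
print(Counter(agent["shirt"] for agent in cloned_crowd))  # big, regular repeats

# Per-agent variation (here just shirt color and a walk-cycle offset) breaks
# the repeating pattern so the crowd reads as thousands of individuals.
varied_crowd = [
    {"shirt": random.choice(SHIRT_COLORS), "cycle_offset": random.uniform(0.0, 1.0)}
    for _ in range(10_000)
]
```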

Since AI can learn from its previous passes at a take, vfx artists can “teach” it to make realistic faces or environments that are put together more quickly. Human feedback will still be an invaluable part of the process, but the algorithm will give animators and vfx artists more opportunity to experiment with possible looks for any given environment or character.

“Without a doubt, creating Thanos has been one of the most complex things we’ve done, and I don’t know if machine learning and artificial intelligence is going to revolutionize effects but it will change it,” says Port. “We’re at a point where there’s really nothing that can’t be done in visual effects, but you always have to look at the production time frame and machine learning makes it possible to do more within that time frame.”

Deleeuw agrees. “There’s a lot of time and energy and effort that just goes into getting to the point where you can actually get to the screen and look at something and comment on it,” he says. “And then you have to get to that creative point where you’re spending more time working on a shot, making it look real. These can be challenging shots that need some time to try out different solutions. With machine learning you get there faster so you can spend more time creating something.”
