Further demonstrating the power of artificial intelligence when it comes to photorealistically altering footage, researchers from Disney have revealed a new aging/de-aging tool that can make an actor look convincingly older or younger, without the need for weeks of complex and expensive visual effects work.
When watching a blockbuster movie like 2018's Ant-Man and the Wasp, most viewers can easily spot the work of the many visual effects studios that contribute to these films in flashy moments, such as when Ant-Man shrinks or grows to gigantic proportions. But it's often the subtler VFX work that's hardest to make photorealistic, like the shots featuring younger versions of actors Michelle Pfeiffer and Michael Douglas. To get results like those seen in the movie, talented artists either need to spend weeks erasing wrinkles and other telltale signs of age from an actor's face, or replace it entirely with a computer-generated double.
Visual effects are a powerful filmmaking tool, but there are plenty of reasons to find ways to make them easier to create: from lightening the load on already overworked and underpaid artists, to making the tools accessible to filmmakers who aren't working with immense Hollywood-sized budgets. Of course, even for major studios, there's a profit motive in being able to automate this kind of work, too.
That’s why companies like Disney invest in research to help advance the art of visual effects, but in recent years these researchers have also been exploring how artificial intelligence can simplify VFX work. Two years ago, Disney Research Studios developed AI-powered tools that could generate face swap videos with enough quality and resolution to be used for professional filmmaking (instead of as questionably low-res GIFs shared around the internet). This year, the researchers are demonstrating a new tool that leverages AI tricks to make actors look older or younger, minus the weeks of work usually needed to perfect those kinds of shots.
Using neural networks and machine learning to age or de-age a person has already been tried, and while the results are convincing enough when applied to still images, they haven't produced photorealistic results on moving video: temporal artifacts appear and disappear from frame to frame, and the person's appearance occasionally becomes unrecognizable as the altered video plays.
To make an age-altering AI tool that was ready for the demands of Hollywood and flexible enough to work on moving footage or shots where an actor isn’t always looking directly at the camera, Disney’s researchers, as detailed in a recently published paper, first created a database of thousands of randomly generated synthetic faces. Existing machine learning aging tools were then used to age and de-age these thousands of non-existent test subjects, and those results were then used to train a new neural network called FRAN (face re-aging network).
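That data-generation step can be pictured as a simple loop: synthesize a face, run it through an existing still-image re-aging model at several target ages, and keep the pairs as supervised training examples. A minimal illustrative sketch, not Disney's actual pipeline, with stand-in functions (`synthetic_face`, `reage`) in place of the real generators:

```python
import random

def synthetic_face(seed):
    # Stand-in for a random synthetic-face generator; returns a tiny
    # fake "image" as a list of pixel values. Purely illustrative.
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

def reage(face, target_age):
    # Stand-in for an existing still-image aging model: here we just
    # nudge pixel values by an age-dependent amount.
    return [min(1.0, p + 0.001 * target_age) for p in face]

# Build (input face, target age, re-aged face) triples that a
# FRAN-style network could then be trained on.
dataset = []
for seed in range(1000):
    face = synthetic_face(seed)
    for age in (20, 40, 60, 80):
        dataset.append((face, age, reage(face, age)))
```

Because every face is synthetic, the same "person" can be rendered at many ages, giving the network paired before/after examples that would be impossible to photograph.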
When FRAN is fed an input headshot, instead of generating an altered headshot, it predicts what parts of the face would be altered by age, such as the addition or removal of wrinkles, and those results are then layered over the original face as an extra channel of added visual information. This approach accurately preserves the performer's appearance and identity, even when their head is moving, when they look around, or when the lighting conditions in a shot change over time. It also allows the AI-generated changes to be adjusted and tweaked by an artist, which is an important part of VFX work: making the alterations blend perfectly back into a shot so the changes are invisible to an audience.
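In spirit, predicting a change layer rather than a whole new image amounts to compositing a per-pixel delta over the original frame, with a dial an artist can turn. A minimal sketch of that idea (not Disney's actual code; the `delta` array stands in for the network's output):

```python
import numpy as np

def apply_age_delta(frame, delta, strength=1.0):
    """Composite a predicted re-aging delta over the original frame.

    frame:    original image, float values in [0, 1], shape (H, W, 3)
    delta:    per-pixel change predicted by the network (e.g. added
              wrinkles), same shape as frame
    strength: artist-adjustable dial; 0.0 leaves the frame untouched,
              1.0 applies the full predicted change
    """
    return np.clip(frame + strength * delta, 0.0, 1.0)

# Hypothetical example: a flat mid-gray frame and a slight darkening
# delta, as a crude stand-in for deepened wrinkles.
frame = np.full((4, 4, 3), 0.5)
delta = np.full((4, 4, 3), -0.1)

untouched = apply_age_delta(frame, delta, strength=0.0)  # identical to frame
full_aged = apply_age_delta(frame, delta, strength=1.0)  # fully applied
```

Because the original pixels pass through unchanged wherever the delta is zero, the performer's identity survives intact, and the `strength` parameter illustrates why this design leaves room for an artist's final adjustments.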