Animal Logic’s Darin Grant on using technology to “empower greater creativity”

Captain Marvel | © Marvel Studios

Independent production studio Animal Logic has added visual effects sequences and AI-assisted animation to many recent Hollywood films. The company's Chief Technology Officer Darin Grant spoke to the Goethe-Institut about the company’s current work.

Jochen Gutsch

The Animal Logic story began in 1991, in the Sydney suburb of Crows Nest, where a small team of artists and technicians came together to form a company. The organisation now has its headquarters at Fox Studios in Sydney’s Moore Park, with a second studio in Vancouver, Canada, and a development office in Los Angeles.
 
Animal Logic’s Chief Technology Officer Darin Grant spoke to the Goethe-Institut about the company’s current work and the use of AI in animated films.
 
The list of high-profile projects Animal Logic has been involved in seems to grow on a daily basis. What are you working on at the moment?
 
Our Sydney and Vancouver studios are currently working on Super Pets, a feature-length animated film for Warner Bros., set for release in 2022. Our Sydney studio recently wrapped on the sequel to Peter Rabbit, which audiences will be able to see in early 2021.
 
As Chief Technology Officer, do you need to understand the nitty-gritty (data storage, rendering, writing code and designing algorithms), or is your role more about corporate relations and strategic leadership?
 
Both are equally important. In order to provide strategic leadership and establish the best external partnerships, I need to understand how our artists and support teams use technology day-to-day. To me, leadership means understanding the problems and enabling those best able to fix them. That combines setting a vision with clearing the path for our team to get there however I can.


Darin Grant | © Darin Grant

The production of animated movies is notoriously time-consuming and resource-intensive. What are the steps in the process that could be streamlined through machine learning and AI?
 
There are a couple of key areas where AI can help. Anything quantitative is an easy target. If machine learning can hit the same result as a computationally more complex solution, then it’s a place to focus. Things such as rendering (creating an image from a 3D scene) and simulation (determining the animation of water, clothing, fur, or even an explosion through mathematics rather than artistic interpretation) are areas that have already seen some significant breakthroughs thanks to machine learning.
 
The trickier area is where machine learning can improve qualitative work. If you can’t prove something is better, how can you train a machine that it is? In those cases, we look to machine-learning-assisted workflows. On the artistic journey to creating our content, there are still many time-consuming tasks that must be completed. Computers have already taken over some of this work, even without machine learning.
 
The “in-betweener” role in traditional animation, where a human fills in the blanks between key poses created by an animator, has been taken over by machines even without machine learning. Now is the time for us to focus on what those next-level tasks are that could assist, but not replace, our key talent. We want to find ways to use technology to empower greater creativity.
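The kind of in-betweening a computer can already do without machine learning can be sketched as simple linear interpolation between two key poses. This is an illustrative toy only: the pose format (a dict of joint angles) and the joint names are assumptions, and production in-betweening works on far richer rigs and curves.

```python
def inbetween(key_a, key_b, num_frames):
    """Generate num_frames poses linearly interpolated between two key poses.

    key_a, key_b: dicts mapping joint name -> angle (degrees).
    Returns the in-between poses only, excluding the key poses themselves.
    """
    frames = []
    for i in range(1, num_frames + 1):
        t = i / (num_frames + 1)  # interpolation parameter in (0, 1)
        frames.append({
            joint: (1 - t) * key_a[joint] + t * key_b[joint]
            for joint in key_a
        })
    return frames

# Two hand-set key poses, three machine-generated in-betweens.
pose_start = {"elbow": 0.0, "wrist": 10.0}
pose_end = {"elbow": 90.0, "wrist": 40.0}
for frame in inbetween(pose_start, pose_end, 3):
    print(frame)  # elbow passes through 22.5, 45.0, 67.5 degrees
```

The question that follows is essentially about widening the gap between `pose_start` and `pose_end`: the further apart the hand-set keys sit, the more of the motion is delegated to the machine.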
 
If machines can be trained to fill in those blanks and if they consistently improve, perhaps those key frames could be moved further and further away from each other, leaving more and more creative space to the computer programs. If we take this idea to the extreme, machines could eventually take over the entire process and create a complete film. Do you see this as a realistic scenario?
 

In some cases, but there is a balance. Ultimately, the reason you have an actor and an animator combining to create a character is the subtle nuance of performance, which is non-deterministic. For example, we could feed in all the facial animation for a character alongside the actor’s stems (vocal tracks) in order to train a model to produce facial animation from new vocal tracks.

While that may reduce the amount of facial animation that needs to be done by hand, you can bet that artistic interpretation would still be necessary in many scenes to get the right tone and nuance to convey the story. That’s why, in those cases, we look to machine-learning-assisted workflows that help artists eliminate work but never look to replace them.
 
Science fiction and comic artists have long fashioned worlds in which machines have evolved beyond human control. The 2019 movie ‘Captain Marvel’ features a so-called Supreme Intelligence, and Animal Logic was tasked with creating several sequences in the film. How can something as abstract as AI be represented on screen?

 
It’s as much of a challenge as representing human intelligence on screen. I was fortunate enough to have worked on the opening sequence of ‘Fight Club’, which involved a tour through the main character’s brain as neurons fired to reflect the dire situation he was in. I think creative liberties need to be taken in order to visualise the intent to the viewer, as the actual process would be pretty uninteresting to watch.
 
Some AI advocates envisage a world in which machines take on laborious, repetitive and dangerous tasks, freeing up humans to focus on creative and intuitive things. In your industry, is it possible to draw a line between these areas?
 
Yes. As mentioned, we want to focus on removing tasks from the artist’s workflow in order to allow a focus on creativity. The focus is on assisting workflows rather than replacing them. It’s surprising how much of the day is spent prepping for that bit of creative time that an artist was trained and hired to do! A guiding principle for us at Animal Logic is “artists doing art”, and machine learning looks set to allow us to break new ground in achieving that goal.

Animal Logic did the animation for ‘Happy Feet’, which won an Oscar in 2007 | © Warner Bros

We have reached a point where cinema audiences can hardly distinguish between real and computer-generated elements anymore. Is there a danger of audiences losing touch with reality?
 
No, I don’t believe photorealism is the progenitor to a loss of humanity or reality. People lose touch with reality today without realistic visuals in computer games, online chat, and other mediums. In fact, I believe putting humanistic faces onto more interactions could help to keep people closer in contact with humanity than before. I’m saying this in the middle of a global pandemic, stuck at home, and yet I’m able to see and connect to my co-workers all over the world on a daily basis. I can’t imagine how reality-blurring this experience would have been, had we not had video conferencing, and better visuals in media will continue that trend.
 
From 2006 until 2011 you worked for DreamWorks Animation. Since then, have you seen technological changes that significantly re-shaped the positions of the main players in the industry?
 
I think that the main players are still there today but there have been a couple of fundamental changes enabled by technology. Ultimately, the thing that distinguishes the main players of today from those in the future is whether or not they are investing in R&D in order to enable newer ways of working. As long as companies keep investing in innovation, they have the opportunity to remain at the forefront of the industry.
