Alan Wake and Quantum Break developer Remedy is working with tech giant Nvidia to create a streamlined motion capture and animation system.
The new technique uses a deep learning neural network — which runs on Nvidia’s eight-GPU DGX-1 server — to generate accurate 3D facial animations from videos of voice actors recording their lines.
As reported by Ars Technica, Remedy began the experimental process by feeding the network existing animation data so it had a baseline understanding of the desired final output.
Then, after supplying it with around five to 10 minutes of facial capture footage, the network was ready to begin producing animations of its own. Once suitably trained, the network is apparently also able to create new animations using nothing but audio input.
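The broad idea — learn a mapping from paired audio/face data, then drive the face from audio alone — can be illustrated with a deliberately simple sketch. Everything below is hypothetical: the feature dimensions, the "blendshape" targets, and the plain least-squares fit are stand-ins, not details of Remedy and Nvidia's actual deep network.

```python
import numpy as np

# Toy illustration of audio-driven facial animation (NOT Remedy/Nvidia's
# actual system): fit a mapping from per-frame audio features to facial
# blendshape weights, then animate new dialogue from audio alone.

rng = np.random.default_rng(0)

N_FRAMES = 500   # frames of paired "capture" data (stand-in for footage)
AUDIO_DIM = 13   # audio features per frame (e.g. MFCC-like, hypothetical)
FACE_DIM = 20    # blendshape weights per frame (hypothetical rig)

# Synthetic "capture session": pretend we recorded paired audio and face data.
true_W = rng.normal(size=(AUDIO_DIM, FACE_DIM))
audio = rng.normal(size=(N_FRAMES, AUDIO_DIM))
face = audio @ true_W + 0.01 * rng.normal(size=(N_FRAMES, FACE_DIM))

# "Training": least-squares fit of the audio -> face mapping.
W, *_ = np.linalg.lstsq(audio, face, rcond=None)

# "Inference": generate facial animation for a new line from audio only.
new_audio = rng.normal(size=(10, AUDIO_DIM))
animation = new_audio @ W  # predicted blendshape weights, one row per frame
print(animation.shape)     # one 20-dim pose per audio frame
```

In the real system a deep network replaces the linear fit and the training data comes from actual head-mounted capture, but the pipeline shape is the same: paired data in, a learned audio-to-face mapping out.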
Even at this stage, it looks like a surprisingly effective technique that has the potential to drastically speed up the animation process, giving artists more time to focus on other areas of production.