We adapted the First Order Motion Model to generate real-time avatars for Zoom and Skype.
Only a single image of a person is required to build their avatar.
Avatarify streams the generated avatar to a virtual camera. Since Zoom and Skype don't distinguish between virtual and ordinary webcams, you can use the Avatarify camera for conferencing.
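The virtual-camera idea can be sketched in a few lines. This is not Avatarify's actual pipeline: it is a minimal illustration using the pyvirtualcam library, with the neural avatar generation replaced by a placeholder frame (`make_placeholder_frame` is a hypothetical helper, not part of Avatarify).

```python
import numpy as np

def make_placeholder_frame(width=640, height=480):
    """Stand-in for a generated avatar frame: an RGB array of shape (H, W, 3)."""
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    frame[:, :, 1] = 128  # flat green, just so the feed is visibly live
    return frame

def stream_frames(num_frames=100, width=640, height=480, fps=30):
    """Push frames to a virtual camera; conferencing apps see it as a webcam."""
    import pyvirtualcam  # needs a virtual camera device (e.g. v4l2loopback on Linux)
    with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
        for _ in range(num_frames):
            frame = make_placeholder_frame(width, height)  # real app: model output
            cam.send(frame)
            cam.sleep_until_next_frame()  # pace the loop to the requested FPS
```

In the real project, the placeholder would be replaced by the frame produced by the motion model, so the conferencing app receives the animated avatar instead of the raw webcam feed.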
At the moment, Avatarify runs on GPU (33 FPS on a GTX 1080 Ti). Supported systems are Linux and macOS. CPU is supported but not yet optimized (~1 FPS on a MacBook Pro 2018).
The goal of this project is to make neural avatars widely accessible, so any contribution to the code is highly appreciated! We’ll be glad to answer any questions in the comments.
Demo video: http://youtu.be/Q7LFDT-FRzs
Fun video with Elon Musk: http://youtu.be/lONuXGNqLO0