3D Animate Hub: Convert your Photo to 3D Avatar
A picture generated by 3D Animate Hub
3D Animate Hub by Soft AI lets users upload or take a picture, converts it into a 3D character version of itself (an avatar) in one of several art styles, and then animates it.
Short-form video has seen a great rise across social networks such as TikTok, Instagram Reels, Facebook Stories, and WhatsApp Status. This has created a strong need for a solution that lets users explore their creative sides and bring more entertainment to their fans and followers. Currently, the existing pipeline and workflow for creating such high-quality pictures and videos requires a lot of resources, in both cost and hours.
Our system lets users create more interesting and entertaining videos that would otherwise require a high level of expertise and a long time to produce, and easily share the output to other apps such as WhatsApp and Instagram. By leveraging our Artificial Intelligence system, a user can achieve all this with the click of a button in under 30 seconds. The web application demo was built with Flutter web, using Visual Studio Code as the code editor for its versatility. Various Alibaba Cloud products were used for the back end, including storage, AI, database, and hosting.
Alibaba Cloud Products Used
A database was created to hold the users' login details, their likes and share history, their uploaded pictures, and their cartoon history.
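The data described above could be laid out along these lines. This is a hypothetical sketch in SQLite; the table and column names are assumptions for illustration, not the app's actual schema or database product.

```python
import sqlite3

# In-memory database for the sketch; the real app would use a managed
# Alibaba Cloud database instead.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id            INTEGER PRIMARY KEY,
    email         TEXT UNIQUE NOT NULL,
    password_hash TEXT NOT NULL           -- login details (never plain text)
);
CREATE TABLE uploads (
    id          INTEGER PRIMARY KEY,
    user_id     INTEGER REFERENCES users(id),
    picture_url TEXT NOT NULL,            -- original photo in object storage
    cartoon_url TEXT                      -- generated cartoon, filled in later
);
CREATE TABLE likes (
    user_id   INTEGER REFERENCES users(id),
    upload_id INTEGER REFERENCES uploads(id)
);
CREATE TABLE shares (
    user_id    INTEGER REFERENCES users(id),
    upload_id  INTEGER REFERENCES uploads(id),
    target_app TEXT                       -- e.g. "WhatsApp", "Instagram"
);
""")
conn.execute("INSERT INTO users (email, password_hash) VALUES (?, ?)",
             ("demo@example.com", "<hashed>"))
conn.commit()
```

Keeping likes and shares in their own tables makes the "likes and share history" queries straightforward joins against `uploads`.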
Object Storage Service (OSS) was used to store the users' uploaded pictures and the output cartoons; the motion transfer network also read its input from OSS and saved the output GIF there. The OSS API was used for uploading and downloading the content.
A model based on the First Order Motion Model by Aliaksandr Siarohin et al. was used for the video motion. A source image (in this case the cartooned image) and a driving video were provided for the motion transfer, and both were resized to 256x256 as required by the model.
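The 256x256 preprocessing can be illustrated with a minimal nearest-neighbour resize in NumPy. This is a simplified stand-in for the `skimage.transform.resize` call used in the model's demo code, shown here only to make the input-shape requirement concrete.

```python
import numpy as np

def resize_nearest(frame: np.ndarray, size: int = 256) -> np.ndarray:
    """Nearest-neighbour resize of an (H, W, C) frame to (size, size, C).
    A simplified stand-in for skimage.transform.resize."""
    h, w = frame.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return frame[rows][:, cols]

photo = np.zeros((384, 512, 3), dtype=np.uint8)  # dummy camera frame
print(resize_nearest(photo).shape)  # → (256, 256, 3)
```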
To load the PyTorch model for the motion transfer, a demo script from the GitHub repository was imported, and the generator and keypoint-detector checkpoints were loaded with the model configuration specified as vox-256.yaml. The model checkpoint was uploaded to the NAS using FileZilla because of its size.
With the generator and keypoint detector loaded, the animation was created using a function from the demo script that takes source_image, driving_video, generator, and kp_detector as arguments; after a successful run, the animation was done.
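The two steps above, loading the checkpoints and making the animation, can be outlined as follows using the `load_checkpoints` and `make_animation` functions from the first-order-model demo script. The file paths, the NAS mount point, and the assumption that the repository is on the Python path are placeholders; heavy imports are kept inside the function so the outline stays importable on its own.

```python
def animate(source_path: str, driving_path: str, out_path: str) -> None:
    """Cartooned image + driving video -> animated GIF, following the
    first-order-model demo script. All paths below are placeholder
    assumptions (checkpoint assumed to sit on the mounted NAS)."""
    import imageio
    import numpy as np
    from skimage.transform import resize
    from demo import load_checkpoints, make_animation  # from the model repo

    # Generator and keypoint detector, configured with vox-256.yaml.
    generator, kp_detector = load_checkpoints(
        config_path="config/vox-256.yaml",
        checkpoint_path="/mnt/nas/vox-cpk.pth.tar",  # uploaded via FileZilla
    )

    # Both inputs must be 256x256 floats in [0, 1] with 3 channels.
    source = resize(imageio.imread(source_path), (256, 256))[..., :3]
    driving = [resize(frame, (256, 256))[..., :3]
               for frame in imageio.get_reader(driving_path)]

    # Transfer the driving video's motion onto the source image.
    frames = make_animation(source, driving, generator, kp_detector,
                            relative=True)
    imageio.mimsave(out_path, [(255 * f).astype(np.uint8) for f in frames])
```

Wrapping the whole pipeline in one function keeps the worker simple: download the inputs from OSS, call `animate`, and upload the resulting GIF back.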
Web hosting and DNS services were needed to host the web app. We purchased a .ltd domain name through Alibaba Web Hosting, uploaded the demo, bound the domain, and used Alibaba Cloud DNS for domain name resolution.
About the Team
My name is Abdul-Hadi Hashim. I have always been curious about technology, starting from an early age when I apparently destroyed my dad's PC while "trying to see the people inside" and broke many toys trying to figure out how they worked, which ultimately paved my way into engineering.
I am a self-taught software developer who learned mostly through reading books and online resources. I have been a freelance developer for the past three years while being a full-time student. My curiosity has led me to work on all sorts of projects, ranging from IoT projects with Arduino to mobile games, VR games, web apps, and iOS and Android apps. I am also fluent in three languages: English, Arabic, and Hausa.