A group of researchers from Imperial College London has come up with a way to render 3D faces from a single picture with the help of artificial intelligence, VentureBeat reports.
Their paper made its way to the ongoing CVPR conference, which is being held in a virtual format from June 14 to June 19.
200 pictures for training an AI system
The technique, called AvatarMe, is described by its authors as the first method capable of producing high-quality, render-ready 3D busts from arbitrary images.
The researchers trained on a dataset of 200 pictures of people’s faces, acquired with ‘state-of-the-art’ facial capture methods.
The photographed individuals span a range of ages and other characteristics.
This dataset was used to train GANFIT, the AI model at the core of the experiment, which infers texture and shape information from a single image.
AvatarMe is not without flaws
AvatarMe avoided artifacts even across varied poses and occlusions, and it realistically reconstructed painted portraits and black-and-white photographs.
However, it is worth noting that AvatarMe’s results still degrade on input photographs of lower quality.
On top of that, the researchers did not include certain ethnicities in the study, which limits how universally the technique applies.