
(Pocket-lint) - Research involving machine learning and artificial intelligence has resulted in some mind-blowing experiments, like deepfake videos. Now, it's being applied to old photos and paintings so that they can appear animated for the first time.

Sure, the results aren't perfect, but it's a start, and an absolutely fascinating one. In a paper published by the Samsung AI Center, available on arXiv, you can learn about a new method that, in a nutshell, makes a target face do what a source face does. Using only a single image of a person's face, a video can be generated showing that face moving and speaking.


The Moscow-based researchers were able to take famous photographs of Einstein and Marilyn Monroe, and even paintings like the Mona Lisa, and make them appear lifelike: the animated faces show expressions, move, and speak like a real person. The models used in the research require a large amount of training data, and while the results are convincing, we'll admit they won't trick you as well as current deepfakes already do.

For instance, to create the Mona Lisa composite, the researchers used three different source videos, each of which produced different and sometimes odd results. Watch the video above for the technical details behind it all. You'll learn about something called a "Generative Adversarial Network", which the researchers used to pit two models against each other in pursuit of realism.
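To give a rough feel for the adversarial idea, here is a minimal, purely illustrative sketch (not Samsung's actual model, which works on face images): a tiny "generator" tries to mimic one-dimensional data, while a "discriminator" learns to tell real samples from generated ones, and the two are trained against each other. All names and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: x = gw * z + gb, with noise z ~ N(0, 1).
# It should learn to mimic "real" data drawn from N(3, 1).
gw, gb = 1.0, 0.0

# Discriminator: logistic regression, p(real) = sigmoid(dw * x + db)
dw, db = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(3.0, 1.0, batch)          # samples from the true data
    z = rng.normal(0.0, 1.0, batch)
    fake = gw * z + gb                           # generator's attempts

    # Discriminator update: push p(real) toward 1 on real data, 0 on fakes
    pr = sigmoid(dw * real + db)
    pf = sigmoid(dw * fake + db)
    grad_dw = -np.mean((1 - pr) * real) + np.mean(pf * fake)
    grad_db = -np.mean(1 - pr) + np.mean(pf)
    dw -= lr * grad_dw
    db -= lr * grad_db

    # Generator update: try to fool the discriminator
    # (non-saturating loss: push p(real | fake) toward 1)
    pf = sigmoid(dw * fake + db)
    grad_gw = -np.mean((1 - pf) * dw * z)        # chain rule through fake
    grad_gb = -np.mean((1 - pf) * dw)
    gw -= lr * grad_gw
    gb -= lr * grad_gb

# After training, generated samples should cluster near the real mean of 3
z = rng.normal(0.0, 1.0, 10000)
samples = gw * z + gb
print(f"generated mean ~ {samples.mean():.2f}")
```

The same push-and-pull happens in the researchers' system, only with deep networks generating face images instead of a line fitting numbers: the generator improves precisely because the discriminator keeps catching its mistakes.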

It's amazing to see their technology at work.

Writing by Maggie Tillman. Originally published on 23 May 2019.