The creation of fake videos is nothing new, and with the advent of systems known as "deepfakes" it has become simpler. Even with the help of artificial intelligence, however, a sizable number of photos of the subject to be "cloned" was needed to achieve a good result.
Today, research from Samsung may have sped up and simplified the process enormously.
The new model can generate a fake video of a subject from just ONE photo, with accuracy improving when more are used: the video accompanying the results of this research is impressive.
There is a corollary: the technology can also be applied to paintings! Seeing the Mona Lisa come to life and move is damn fascinating.
The other side of the coin? We know it well: it will become ever easier to create fake news and scams.
“These results push the boundaries of the technology even further,” notes Hany Farid, a researcher specializing in digital forensics. “They will lead to the production of visual content that is practically indistinguishable from the real thing.”