Duke University researchers have developed an artificial intelligence tool that can transform blurry, unrecognizable images of people's faces into convincing computer-generated portraits, with finer detail than ever before.
Methods prior to PULSE could sharpen a pixelated image to up to eight times its original resolution. Duke's team, however, devised a way to take a handful of pixels and create realistic-looking faces at up to 64 times the resolution, "imagining" features such as fine lines, eyelashes and wrinkles that were not in the original image.
No images with this resolution have ever been created before
Cynthia Rudin, computer scientist, Duke University
It is not an identikit
The PULSE system cannot be used to identify people, the researchers say: it will not turn a blurry, unrecognizable photo from a security camera into a crystal-clear image of a real person. Rather, it generates new faces that do not exist, yet look plausibly real.
The same technique could, in theory, take low-resolution photos of almost anything and create sharp, lifelike images, with applications ranging from medicine and microscopy to astronomy and satellite imagery, said co-author Sachit Menon, a double major in mathematics and computer science.
The researchers will present their method, called PULSE, from tomorrow through June 19 at the 2020 Computer Vision and Pattern Recognition (CVPR) conference.


Traditional approaches take a low-resolution image with blurry pixels and "guess" the extra pixels that are needed by trying to match them, on average, to corresponding pixels in high-resolution images the computer has seen before. Because of this averaging, textured areas of hair and skin that do not line up perfectly from pixel to pixel come out blurry and indistinct.
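To make that concrete, here is a minimal sketch (not code from the Duke paper; the numbers are purely illustrative) showing how the pixel-wise mean of several sharp, plausible texture patches ends up with far less contrast than any single one of them:

```python
# A minimal illustration of why averaging washes out texture: the pixel-wise
# mean of many sharp, slightly different patterns is smooth.
import numpy as np

rng = np.random.default_rng(0)

# Ten hypothetical high-resolution "hair/skin texture" patches that could all
# correspond to the same low-resolution pixels but differ in fine detail.
candidates = rng.normal(loc=0.5, scale=0.15, size=(10, 32, 32))

average = candidates.mean(axis=0)  # roughly what an averaging-based upscaler outputs
single = candidates[0]             # one genuinely sharp, plausible patch

print("texture contrast of one candidate:", single.std())   # about 0.15
print("texture contrast of the average:  ", average.std())  # much lower -> looks flat and blurry
```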
The Duke team has taken a different approach
Instead of taking a low-resolution image and slowly adding new detail, the system scours examples of AI-generated high-resolution faces (something AI has become very good at producing), searching for the ones that look as similar as possible to the input image when scaled down to the same size.
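Below is a minimal PyTorch sketch of that search idea, not the authors' actual code. It assumes a pretrained face generator G (something StyleGAN-like that maps a latent vector to a 1024x1024 image) is already available; the latent size, optimizer and step count are illustrative placeholders.

```python
# Sketch of "search the generator's outputs" super-resolution: optimize a latent
# vector so that the generated high-res face, once downscaled, matches the input.
import torch
import torch.nn.functional as F

def upsample_by_search(lr_image, G, latent_dim=512, steps=500, lr=0.1):
    """Find a latent code whose generated face, scaled down, matches lr_image.

    lr_image: tensor of shape (1, 3, H, W), e.g. a 16x16 input.
    G: assumed pretrained generator mapping (1, latent_dim) -> (1, 3, 1024, 1024).
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        hr_candidate = G(z)                                   # candidate high-res face
        downscaled = F.interpolate(hr_candidate,
                                   size=lr_image.shape[-2:],  # shrink back to e.g. 16x16
                                   mode="bicubic",
                                   align_corners=False)
        loss = F.mse_loss(downscaled, lr_image)               # does it match when shrunk?
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()                                      # sharp, plausible face
```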
The team used a machine learning tool called a GAN, or "generative adversarial network". I have discussed these in more depth in this article and in others on this site. A GAN is a pair of neural networks trained on the same photo dataset. One network generates human faces that mimic the ones it was trained on, while the other takes that output and decides whether it is convincing enough to be mistaken for a real photo. The first network gets better and better with experience, until the second can no longer tell the difference. The two compete with each other, in other words, and by competing they improve.
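Here is a minimal sketch of that adversarial game as a generic GAN training step, not the specific networks Duke used; the architectures and hyperparameters are toy placeholders.

```python
# Generic GAN training step: a generator learns to fool a discriminator,
# while the discriminator learns to tell real images from generated ones.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28             # toy sizes for illustration

generator = nn.Sequential(                      # turns random noise into a fake "image"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh())

discriminator = nn.Sequential(                  # guesses real (1) vs generated (0)
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch):
    batch = real_batch.size(0)
    fake_batch = generator(torch.randn(batch, latent_dim))

    # Discriminator: learn to label real photos 1 and generated ones 0.
    d_loss = (bce(discriminator(real_batch), torch.ones(batch, 1)) +
              bce(discriminator(fake_batch.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator call its output real.
    g_loss = bce(discriminator(fake_batch), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```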


PULSE can create realistic-looking images from noisy, poor-quality inputs. From a single blurred image of a face it can produce an essentially unlimited number of realistic possibilities, each of which looks subtly different.
Even given pixelated photos where the eyes and mouth are barely recognizable, "our algorithm can still do something with them, something traditional approaches can't do," says co-author Alex Damian, a mathematician at Duke.
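Continuing the illustrative upsample_by_search() sketch from earlier, getting several different plausible faces for one blurred input is just a matter of restarting the search from different random latent vectors (G and lr_image remain assumed placeholders):

```python
# Each restart draws a fresh random latent vector, so each run converges to a
# different high-resolution face that still downscales to the same blurry input.
variants = [upsample_by_search(lr_image, G) for _ in range(4)]
```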
PULSE, the power of "imagination"
The system can convert a blurry 16x16-pixel image into a 1024x1024-pixel image in seconds, adding more than a million pixels, comparable to HD resolution. Details such as pores, wrinkles and strands of hair, imperceptible in the low-resolution photos, become sharp and clear in the computer-generated versions.
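For the record, the arithmetic behind those figures (pure illustration):

```python
low_res = 16 * 16           # 256 pixels in the input
high_res = 1024 * 1024      # 1,048,576 pixels in the output
print(high_res - low_res)   # 1,048,320 pixels the system has to "imagine"
print(1024 // 16)           # a 64x increase along each side
```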
The researchers asked 40 people to rate 1,440 images generated by PULSE and five other scaling methods, scoring each on a scale of one to five. PULSE did best of all. What's more, it scored almost as high as high-quality photos of real people.
See the results for yourself at http://pulse.cs.duke.edu/.