In a press release, a bioengineering team at the University of California, Los Angeles (UCLA) has announced the development of a high-performance 3D bionic camera. Not only does it successfully mimic the compound "multiple vision" of flies and the natural sonar of bats, it also overcomes their inherent flaws.
The new camera replicates these extraordinary natural capabilities through computational image processing, with remarkable results: it can even recognize the size and shape of objects hidden behind corners or other obstacles.
3D cameras inspired by bats ...
The new device uses echolocation that works like that of bats: the animals emit high-frequency calls that bounce off the surrounding environment and return to their ears. By assessing how long the echo takes to arrive and how strong it is, they determine where things are, what lies in their path and whether potential prey is nearby.
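The distance estimate behind echolocation is simple time-of-flight arithmetic: the sound travels out and back, so the range is half the round-trip time multiplied by the speed of sound. A minimal sketch (the function name and the rounded speed-of-sound constant are illustrative, not taken from the study):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C


def echo_range(round_trip_s: float) -> float:
    """Distance to a reflector, given the round-trip echo time.

    The pulse travels to the target and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_SOUND * round_trip_s / 2.0


# An echo that returns after 10 ms puts the target about 1.7 m away.
print(echo_range(0.010))  # 1.715
```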
... And insects
Flies and other insects have compound "eyes" containing hundreds to tens of thousands of individual visual units. This means they can see things from many angles at the same time.
How 3D imaging inspired by bugs and bats works
“Seeing through and around different obstacles and distances was not easy at all,” says study leader Liang Gao, associate professor of bioengineering at the UCLA Samueli School of Engineering.
“To address this problem, we have developed a new computational imaging framework, which for the first time enables the acquisition of a wide and deep panoramic view with simple optics and a small array of sensors.”
The new technology is known as “Compact Light-field Photography”, or CLIP for short. Research suggests it can be used to “see” hidden objects. This capability is enhanced by LiDAR (Light Detection and Ranging), in which a laser scans the surrounding environment to build a three-dimensional map.
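A scanning LiDAR turns each laser return into a 3D point: the measured range (half the round-trip light travel time times the speed of light) plus the beam's scan angles give Cartesian coordinates. A toy sketch of that conversion, with hypothetical names chosen for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s


def lidar_point(round_trip_s: float, azimuth_rad: float, elevation_rad: float):
    """Convert one laser return into an (x, y, z) point.

    Range is half the round-trip light travel time times c;
    the scan angles place the point in Cartesian space.
    """
    r = C * round_trip_s / 2.0
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)


# A return whose round trip covers 20 m of path maps to a point 10 m
# straight ahead when both scan angles are zero.
print(lidar_point(2 * 10.0 / C, 0.0, 0.0))
```

Sweeping the two angles over a grid and collecting the resulting points is what builds up the "three-dimensional map" described above.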
The secret? Seven LiDAR cameras with CLIP technology
To be precise, the newly developed camera combines seven LiDAR cameras with CLIP: each captures a low-resolution image of the scene, the individual views are processed, and the combined scene is finally reconstructed as a high-resolution 3D image.
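The idea of merging several low-resolution views into one higher-resolution result can be shown with a 1D toy: if each "camera" samples the same signal at a slightly shifted offset, interleaving the samples recovers a denser grid. This is only a schematic analogy to the reconstruction described above, not the paper's actual algorithm:

```python
def interleave_views(views):
    """Merge shifted low-resolution samplings into one dense signal.

    views[k] holds every len(views)-th sample of the underlying
    signal, starting at offset k — a crude stand-in for cameras
    that each see the scene from a slightly different position.
    """
    n = len(views)
    dense = [0] * (n * len(views[0]))
    for offset, view in enumerate(views):
        for i, sample in enumerate(view):
            dense[i * n + offset] = sample
    return dense


signal = list(range(12))                  # the "scene"
cams = [signal[k::3] for k in range(3)]   # three shifted low-res views
print(interleave_views(cams))             # recovers 0..11
```

No single low-resolution view contains the whole signal, but together the shifted views do, which is the intuition behind combining many cheap sensors into one sharp image.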
“If you're covering one eye and looking at your laptop, and there's a cup of coffee hidden just behind it, you might not see it, because the laptop is blocking your view,” Gao explains. “But if you use both eyes, you'll notice that you get a better view of the object. That's roughly what's happening here, but now imagine seeing the cup with the compound eye of an insect. Many more views become possible.”
Or a thousand. It depends on what eyes you have.
Possible applications? Autonomous vehicles (for perceiving and avoiding obstacles) and medical imaging. The research was published in Nature Communications, and I link it here so you can take a look.