Bioengineers look to the animal kingdom to create super 3D bionic cameras

Reconstructed 3D images rendered in different perspectives for the letter “X”. Credit: Intelligent Optics Laboratory, Liang Gao/UCLA

A pair of UCLA bioengineers and a former postdoctoral researcher have developed a new class of bionic 3D camera systems that can mimic the multiview vision of flies and the natural sonar sensing of bats, resulting in multidimensional imaging with an extraordinary depth range that can also scan through blind spots.

Powered by computational image processing, the camera can decipher the size and shape of objects hidden around corners or behind other items. The technology could be incorporated into autonomous vehicles or medical imaging tools with sensing capabilities far beyond what is considered state of the art today. This research was published in Nature Communications.

In the dark, bats can visualize a dynamic picture of their surroundings by using a form of echolocation, or sonar. Their high-frequency squeaks bounce off their surroundings and are picked up by their ears. The minute differences in the time it takes for the echoes to return, and in the loudness of the sound, tell the nocturnal animals in real time where things are, what is in their way, and the proximity of potential prey.
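The timing cue described above is the classic time-of-flight relation: a round-trip echo delay maps to a distance. As a rough illustration only (not the team's code), the constant and function below are assumptions for the sketch:

```python
# Illustrative sketch of sonar-style ranging: distance from a
# round-trip echo delay. Not taken from the published work.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_distance(delay_s: float) -> float:
    """Distance to a reflector, given the round-trip echo delay in seconds.

    The sound travels out and back, so the one-way distance is half
    the total path: d = v * t / 2.
    """
    return SPEED_OF_SOUND * delay_s / 2.0

# A 10 ms round trip corresponds to a reflector about 1.7 m away.
print(round(echo_distance(0.010), 3))  # 1.715
```

The same halving-of-the-round-trip idea carries over to the laser ranging discussed later in the article, just with the speed of light in place of the speed of sound.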

Many insects have geometrically shaped compound eyes, in which each "eye" is composed of hundreds to tens of thousands of individual sight units, allowing the same thing to be seen from multiple lines of sight. For example, the bulbous compound eyes of flies give them nearly 360-degree vision, although their eyes have a fixed focal length, which makes it difficult for them to see anything far away, such as a fly swatter held at a distance in the air.

Inspired by these two natural phenomena found in flies and bats, the UCLA-led team set out to design a high-performance 3D camera system that combines their advantages while remedying nature's shortcomings.

3D imaging by occlusion using Compact Light-field Photography, or CLIP. Credit: Intelligent Optics Laboratory, Liang Gao/UCLA

“Although the idea itself has been tried, seeing across a range of distances and around occlusions has been a major hurdle,” said study leader Liang Gao, associate professor of bioengineering at the UCLA Samueli School of Engineering. “To address this issue, we developed a new computational imaging framework, which for the first time enables the acquisition of a wide and deep panoramic view with simple optics and a small sensor array.”

Called “Compact Light-field Photography,” or CLIP, the framework allows the camera system to “see” with an extended depth range and around objects. In experiments, the researchers demonstrated that their system can “see” hidden objects that go undetected by conventional 3D cameras.

The researchers also employ a type of LiDAR, or “Light Detection And Ranging,” in which a laser scans the surroundings to create a 3D map of the area.

Conventional LiDAR, without CLIP, would take a high-resolution snapshot of the scene but miss hidden objects, much like our human eyes would.

Using seven LiDAR cameras with CLIP, the array takes lower-resolution images of the scene, processes what each individual camera sees, and then reconstructs the combined scene as high-resolution 3D imagery. The researchers demonstrated that the camera system could image a complex 3D scene with several objects, all placed at different distances.

“If you cover one eye and look at your laptop, and there’s a coffee mug just slightly hidden behind it, you might not see it, because the laptop blocks your view,” explained Gao, who is also a member of the California NanoSystems Institute. “But if you use both eyes, you’ll notice you’ll get a better view of the object. That’s sort of what’s happening here, but now imagine seeing the mug with an insect’s compound eye. Now multiple views of it are possible.”
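Gao's two-eye analogy is the standard triangulation idea behind any multiview system: two viewpoints separated by a baseline see the same object at slightly shifted image positions, and that shift (disparity) encodes depth. As a hedged illustration only, using the textbook pinhole stereo relation rather than the CLIP reconstruction itself (the numbers below are made up):

```python
# Illustrative two-view depth sketch, not the CLIP algorithm.
# Pinhole stereo relation: depth = focal_length * baseline / disparity.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by two cameras a fixed baseline apart.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- separation between the two viewpoints, in meters
    disparity_px -- horizontal shift of the point between the two images
    """
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 6 cm baseline, 20 px disparity -> 2.1 m away
print(stereo_depth(700.0, 0.06, 20.0))  # 2.1
```

With many viewpoints instead of two, as in a compound eye or the seven-camera array, an object occluded in one view often remains visible in another, which is what lets the combined reconstruction fill in blind spots.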

According to Gao, CLIP helps the camera array make sense of what is hidden in a similar fashion. Combined with LiDAR, the system achieves the echolocation effect of bats, detecting a hidden object by the time it takes light to bounce back to the camera.
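The light-based ranging mentioned here follows the same round-trip timing rule as the bat's sonar, only with the speed of light, so the delays are nanoseconds rather than milliseconds. A minimal sketch of that relation (illustrative only, with invented timing values):

```python
# Illustrative LiDAR-style ranging sketch: same time-of-flight rule
# as sonar, but for a laser pulse. Not the published system's code.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_range(round_trip_s: float) -> float:
    """One-way distance to a reflector from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A 20 ns round trip puts the reflector about 3 m away.
print(round(lidar_range(20e-9), 3))  # 2.998
```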

The co-lead authors of the published research are UCLA bioengineering graduate student Yayao Ma, a member of Gao's Intelligent Optics Laboratory, and Xiaohua Feng, a former UCLA Samueli postdoctoral researcher in Gao's lab who is now a researcher at the Research Center for Humanoid Sensing at Zhejiang Laboratory in Hangzhou, China.


More information:
Xiaohua Feng et al, Compact light field photography towards versatile three-dimensional vision, Nature Communications (2022). DOI: 10.1038/s41467-022-31087-9

Provided by University of California, Los Angeles

Citation: Bug eyes and bat sonar: bioengineers look to the animal kingdom to create super 3D bionic cameras (2022, August 12) retrieved August 12, 2022 from https://phys.org/news/2022-08-bug-eyes-sonar-bioengineers-animal.html

This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.