Physics 14, 65

Using a single-pixel detector and a pulsed emitter, researchers can create a 3D image of a room from multiple echoes.

Self-reflection. This simulation shows an image of a person in a room, provided a camera can record all reflections from surfaces. The figure on the right in black is the direct image of the person, while the other figures are indirect (multi-bounce) images of the same person (see videos below).

A bat can reconstruct its surroundings by emitting a chirp and listening to the waves ricochet off nearby objects. A new system takes this echolocation to the next level by recording sound or light waves that reflect multiple times from walls and other objects in a room [1]. The technique uses a bare-bones setup consisting of a fixed emitter and a single-pixel detector. To extract a 3D image from the signal, the system relies on a machine-learning algorithm that is first trained with images of the room recorded by a conventional camera. The method could lead to unobtrusive monitoring systems for home or hospital use.

The concept of “photography” has expanded in recent years as researchers have shown that they can decipher seemingly random wave data to create an image. Examples include capturing light that ricochets around an obstacle (see Synopsis: Seeing Around Corners in Real Time) or collecting scattered light from an opaque material (see Focus: Reversing Light Scattering with a Handful of Photons). These methods usually require a multipixel camera or a light source that moves to scan the scene. Now Alex Turpin and his colleagues at the University of Glasgow, UK, have shown that simpler devices that record multiply bounced echoes can produce 3D images of a room.

Echo collector. This experimental-setup drawing shows the echo recorder (blue) in a room with a single rectangular block (yellow). The device’s emitter generates a radio or sound pulse, and some of the waves (red arrows) bounce off walls and other surfaces before returning to the device’s single-pixel detector. A time-of-flight camera (black) next to the device is used to train the system.

The method involves a fixed source that emits short pulses in all directions and a single-pixel detector. The team tested two separate setups: one using radio waves at GHz frequencies, the other using sound waves at kHz frequencies. At first glance, it seems impossible to recreate a 3D image from the detector, whose output indicates only the amount of wave energy hitting the sensor at each moment after a pulse. Many different room arrangements could produce the same series of echoes recorded by the detector. To overcome this so-called degeneracy, Turpin and his colleagues took a completely different approach: instead of using physical principles to determine the 3D structures that would generate the echoes, they used a neural network, a type of machine-learning system.
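
To make the degeneracy concrete, here is a minimal sketch (not the authors’ code) of what the single-pixel detector records: each multipath echo adds a delayed, attenuated spike to a one-dimensional time trace, and mirror-image room layouts with the same path lengths produce identical traces. The sampling rate, wave speed, and falloff model are illustrative assumptions.

```python
import numpy as np

C = 343.0    # speed of sound in air (m/s); a radio setup would use ~3e8 m/s
FS = 48_000  # detector sampling rate (Hz), an illustrative value

def echo_trace(path_lengths_m, duration_s=0.1):
    """Energy vs. time at the single-pixel detector after one emitted pulse.

    Each multipath echo contributes a delayed spike whose amplitude falls off
    with the square of the total distance travelled (a crude spreading model).
    """
    trace = np.zeros(int(duration_s * FS))
    for d in path_lengths_m:
        idx = int(d / C * FS)          # arrival time of this echo, in samples
        if idx < trace.size:
            trace[idx] += 1.0 / d**2   # hypothetical geometric falloff
    return trace

# Mirror-image layouts (object near the left wall vs. the right wall) can have
# exactly the same set of path lengths, so their traces are indistinguishable:
left_layout = echo_trace([2.0, 3.5, 6.1])
right_layout = echo_trace([2.0, 3.5, 6.1])
print(np.allclose(left_layout, right_layout))  # True: geometry is ambiguous
```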

The training began with the installation of a 3D (time-of-flight) camera next to the single-pixel detector. Every “true” image recorded by the camera was paired with an echo signal from the detector. The team repeated this pairing of images with echo signals thousands of times as people and objects moved around the room. After this training, which typically took ten minutes, the neural network had a “volumetric fingerprint” of the room, enabling it to instantly generate images from the echoes bouncing off any person or object in the room, Turpin says.
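
As a rough illustration of this training idea (the published network architecture and hyperparameters are not reproduced here), one could fit a small fully connected network that maps each recorded echo trace to the coarse depth image captured simultaneously by the time-of-flight camera. All sizes below are assumptions for illustration.

```python
import torch
from torch import nn

TRACE_LEN = 4800           # samples per echo trace (illustrative)
DEPTH_H, DEPTH_W = 32, 32  # coarse output resolution (illustrative)

# Simple regression network: 1D echo trace in, depth image out.
model = nn.Sequential(
    nn.Linear(TRACE_LEN, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, DEPTH_H * DEPTH_W),  # one depth value per output pixel
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(traces, depth_maps):
    """One optimization step on a batch of (echo trace, ToF depth map) pairs."""
    opt.zero_grad()
    pred = model(traces).view(-1, DEPTH_H, DEPTH_W)
    loss = loss_fn(pred, depth_maps)
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch with the right shapes, standing in for the thousands of pairs
# recorded while people and objects moved around the room.
print(train_step(torch.randn(16, TRACE_LEN), torch.rand(16, DEPTH_H, DEPTH_W)))
```

Once trained on a specific room in this way, the network turns each new echo trace into a depth image in a single forward pass, which is why reconstruction is essentially instantaneous.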

The team was curious how many bounces were needed to map a room with their system, so they ran simulations with different numbers of reflections. “The more paths you have, the better you can locate your object and extract information about the shape and orientation of the object,” says Turpin. However, the team found that collecting waves that ricocheted more than four times didn’t improve the image much, because the additional reflections carried little extra information.
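
A back-of-the-envelope model (not the team’s simulation) suggests why the returns diminish: if each bounce keeps a fraction R of the energy and a k-bounce path is roughly k times longer than a one-bounce path, the energy in high-order echoes drops off quickly. The reflectivity and path length below are hypothetical.

```python
import numpy as np

d0, R = 4.0, 0.6  # hypothetical: 4 m one-bounce path, 60% of energy kept per bounce

orders = np.arange(1, 9)                 # echo order = number of bounces
energy = R**orders / (orders * d0) ** 2  # crude per-order energy estimate
cumulative = np.cumsum(energy) / energy.sum()

for k, frac in zip(orders, cumulative):
    print(f"echoes with <= {k} bounces carry {frac:.1%} of the detected energy")
# With these numbers, the first four orders already account for ~99% of the
# energy, consistent with the finding that reflections beyond about four
# bounces add little extra information.
```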

Two views of a person moving in a room, sped up 2×, where the color represents depth (distance to the viewer). On the left is the reconstruction obtained from the high-frequency data collected by the single-pixel detector; on the right is video from a camera collecting time-of-flight information used to infer depth.

The echo-based images have very low resolution: you can tell when a person is in the room, but not who they are. The technology could be useful in a hospital, where you just want to know whether a person is in bed or moving around. The detectors would work in the dark without violating anyone’s privacy, Turpin says. Another advantage is that the technology does not require special instruments; even the high-frequency antenna in a mobile phone could work.

One disadvantage is that the neural network only works in the room in which it was trained. The team is currently exploring a system that could be trained in hundreds or thousands of different rooms and other building environments. Such a device could then operate, without additional training, in a room it has never seen.

Gordon Wetzstein, an expert in computational imaging at Stanford University, finds the new method exciting. He compares it to non-line-of-sight imaging, in which a hidden object is observed by bouncing laser light off a wall and capturing the light scattered back by the object. “This paper shows for the first time that higher-order bounces can be used reliably for this type of application, which vastly improves [the image quality] over existing results,” says Wetzstein. He envisions the technique helping autonomous vehicles, which use reflection-based radar or lidar to detect objects on the road.


–Michael Schirber

Michael Schirber is a Corresponding Editor for Physics based in Lyon, France.

References

  1. A. Turpin et al., “3D imaging from multipath temporal echoes,” Phys. Rev. Lett. 126, 174301 (2021).
