Seeing around Corners
A new algorithm extracts enough information from a dull shadow to recreate a hidden scene
By Liz Sheeley
Seeing through walls or around corners has long been dismissed as an unrealistic, super-human power, like mind-reading or invisibility. Knowing what is in a room without looking directly into it is the goal of an area of research called non-line-of-sight imaging. Until now, the techniques that have addressed this problem have not translated easily to the field. Associate Professor Vivek Goyal (ECE) and his team have developed a method of seeing around corners that uses just a simple digital camera and their custom-built algorithms. Their work has been published in Nature.
“Non-line-of-sight optical imaging techniques like this one could be very useful for search and rescue teams, emergency response and autonomous vehicles,” says Goyal. “If these techniques can be reasonably easy to deploy then they could be carried around by first responders.”
Goyal’s research has focused on using very small amounts of information, like a weak light signal, to extract much more information than seems possible. In this paper, Goyal, postdoctoral fellow John Murray-Bruce (ECE) and doctoral student Charles Saunders (ECE) built a way to extract a full-color 2D picture of a scene from a photograph of indistinct shadows, known as penumbrae, on a neighboring wall, as shown in the figure below.
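The idea behind this kind of penumbra imaging can be sketched as a linear inverse problem: each patch of the hidden scene contributes light to each wall pixel unless the occluder blocks that path, so the wall photo is a known linear mixing of the unknown scene. The toy example below is an illustrative simplification, not the team's actual algorithm; the scene size, occlusion pattern, and regularization weight are all invented for demonstration.

```python
import numpy as np

# Toy 1D sketch of penumbra-based scene reconstruction. A hidden "scene"
# of n patches lights a wall of m pixels; an occluder of known shape
# blocks some patch-to-pixel paths. The wall photograph is then
# b = A @ x + noise, and knowing A lets us invert the soft shadows.

rng = np.random.default_rng(0)
n, m = 8, 40                      # hidden-scene patches, wall pixels

# Visibility matrix A: entry (i, j) is 1 if scene patch j illuminates
# wall pixel i, 0 if the known occluder blocks that path. A synthetic
# staircase of blocked intervals stands in for real geometry.
A = np.ones((m, n))
for j in range(n):
    A[5 + 3 * j : 5 + 3 * j + 6, j] = 0.0

x_true = rng.uniform(0.0, 1.0, n)               # hidden scene brightness
b = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy wall photograph

# Tikhonov-regularized least squares: penumbrae vary gently across the
# wall, so A is poorly conditioned and a small ridge term stabilizes
# the inversion against camera noise.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(np.max(np.abs(x_hat - x_true)))           # small reconstruction error
```

A full-color reconstruction would simply repeat this inversion once per color channel of the photograph, since an ordinary camera already records red, green, and blue separately.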
Currently, their technique works only when an object of known shape is blocking the scene in question. But compared with other techniques, theirs imposes fewer situational requirements.
Murray-Bruce and Saunders say that most earlier methods use lasers to shine light onto a visible surface in the hidden area, and then use the light reflected back to reconstruct an image of that area. These are called time-of-flight methods. And although the groundwork has been laid to recreate a color image with these techniques, it hasn’t been done yet: the researchers would need colored lasers, and would have to shine each color separately, collect that information, and then reconstruct the image.
There is another non-line-of-sight technique that does not involve lasers, but it requires the occluding object to move within the scene. The new method requires no calibration, controlled lighting, time-of-flight detection, or scene motion; it takes under a second to snap the image, or multiple images, and about 20 minutes to recreate the scene in full color.
“Weak signals buried in a lot of noise are important and useful,” says Goyal. “There is a universality to shadows that can be exploited to extract a lot of important information, even in low-light scenarios.”
Although they have to know the shape of the occluding object to recreate the scene, they don’t need to know where it is; the algorithm can infer the object’s position while reconstructing the scene. And if there is motion within the scene while the photos of the penumbra are being snapped, they can recreate an even more accurate picture.
Goyal believes that his group’s method and time-of-flight methods are complementary. Future research will look at how to combine both sets of principles to further advance non-line-of-sight vision.