New Frontiers in Self-driving Cars


Lidar, used in most self-driving cars, models the surrounding world by creating 3D representations of the scene in view.
Photo by John D. SL/Shutterstock

One of the most promising developments born out of the 2005 DARPA Grand Challenge was the use of lidar technology in self-driving cars, also known as autonomous vehicles (AVs). Lidar is a remote sensing and ranging method that uses pulsed laser light to measure distances to objects. Lidar-based systems are a key component in most self-driving cars, enabling AVs to detect objects and their distances and thereby model the world around them. With major automotive manufacturers and big tech now in the race to develop driverless cars, the global lidar AV market has exploded, with analysts projecting a market value of $3.21 billion (USD) by 2027.

As AVs inch from experimental test drives toward “in the wild” deployment on public roads, the limitations of today’s multi-photon lidar systems are becoming evident. BU CISE research affiliates from the College of Engineering, led by Professor Vivek Goyal (ECE), have been advancing the capabilities of an emerging technology called single-photon lidar (SPL). SPL offers performance characteristics more conducive to the requirements of autonomous navigation, such as photon-efficient, eye-safe, long-distance imaging. With a 2014 paper published in Science, Goyal established the field of depth imaging from extremely low light levels (as little as one photon detected per pixel) using pulsed lasers and time-resolved photon detectors. Last year, ECE student Charles Saunders, former postdoc John Murray-Bruce, and Goyal published a paper in Nature describing a non-line-of-sight (NLOS) imaging technique. The technique uses an ordinary digital camera to capture color images, albeit with no depth information and with the requirement that the shape of a hidden occluding object be known in advance. Now Goyal, his students, and research collaborators have published a series of groundbreaking papers in Optics Express, Optica, and Nature Communications aimed at more practical implementations of SPL, bringing applications such as driverless cars closer to reality.

Professor Vivek Goyal (ECE)

“Single-photon lidar has been an exciting area of emphasis for my group,” says Prof. Goyal.  “Pure intellectual curiosity about how much information can be gleaned from a single detected photon inspired us to learn about the inner workings and limitations of lidar hardware.  It is enormously gratifying for that effort to lead to methods that provide substantial improvements in real-world settings.”

Modeling the World through Lidar

Lidar allows AVs to model the world around them by shining a pulsed laser on a target and measuring how long the reflected light takes to return to the sensor. These models, called “depth maps,” are like photographs; the difference is that each pixel not only represents a brightness but also specifies the distance between that part of the scene and the camera used to capture it. For self-driving cars, depth maps provide a 3D representation of the scene in view, allowing the vehicle to “see” how close and how far away objects are. Single-photon lidar is emerging as a preferred method for forming fast and precise depth maps. While conventional lidar requires hundreds or thousands of photon detections per pixel to form accurate 3D images, SPL systems have formed accurate images from as few as one photon detection per pixel. Still, most SPL systems have been limited to laboratory-scale conditions rather than the real-world settings in which they would be deployed.
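To make the time-of-flight arithmetic concrete, here is a minimal sketch (in Python, with hypothetical names and a hypothetical 100 ps timing bin) of how per-pixel photon detection times could be turned into a depth map: histogram the arrival times, take the most populated bin as the round-trip time, and convert to distance with depth = c·t/2. A real SPL pipeline also models background light and sensor noise; this only illustrates the basic conversion.

```python
import numpy as np

C = 2.998e8          # speed of light (m/s)
BIN_WIDTH = 100e-12  # hypothetical 100 ps timing bins

def depth_from_detections(detection_times):
    """Estimate per-pixel depth from photon detection times (in seconds).

    detection_times: list of 1-D arrays, one per pixel, each holding the
    round-trip time of every photon detected at that pixel. Here we simply
    take the mode of a coarse histogram as the round-trip time.
    """
    depths = np.zeros(len(detection_times))
    for i, times in enumerate(detection_times):
        if len(times) == 0:
            depths[i] = np.nan                    # no photons at this pixel
            continue
        # Histogram the arrival times and take the most populated bin.
        bins = np.arange(0, times.max() + BIN_WIDTH, BIN_WIDTH)
        counts, edges = np.histogram(times, bins=bins)
        t_round_trip = edges[np.argmax(counts)] + BIN_WIDTH / 2
        depths[i] = C * t_round_trip / 2          # one-way distance
    return depths

# Toy usage: two pixels, one surface at ~15 m and one at ~45 m.
rng = np.random.default_rng(0)
pixels = [
    2 * 15.0 / C + rng.normal(0, 50e-12, size=20),   # 20 detections
    2 * 45.0 / C + rng.normal(0, 50e-12, size=3),    # only 3 detections
]
print(depth_from_detections(pixels))  # approximately [15.0, 45.0]
```

With 100 ps bins, one bin corresponds to about 1.5 cm of one-way distance, which is why the timing resolution of the detector matters so much for depth precision.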

Improving Safety in Self-Driving Cars

Driverless cars operate in widely diverse environments and, as such, need to respond quickly and accurately to unexpected objects and challenging situations. Conventional photon detection time models can significantly limit data acquisition, producing distorted distance estimates and missed objects that can lead to unsafe driving scenarios. In Optica and in Optics Express, BU alumnus Joshua Rapp (Ph.D. ECE ’20) and Goyal describe two developments that expand conventional photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment.

“Forming depth maps both quickly and safely is a key challenge for autonomous vehicles but is a difficult problem to solve through hardware alone,” explains Rapp. “We really need to understand the factors affecting each photon detection and harness that information so that accurate depth measurements can be made in short timeframes and from very little light. Our goal with these two papers was to show that careful mathematical modeling and signal processing tailored to non-ideal hardware could lead to unconventional measurement strategies that do much better than standard approaches.”

Development 1: Mitigating Dead Times in High-Flux Photon Detection

From photon generation to detection: (a) the incident light intensity produces (b) the sequence of photons incident on the detector; (c) each incident photon generates a photoelectron with some probability; (d) photoelectrons cause an avalanche in the SPAD if they do not arrive during a detector dead period; (e) avalanches are registered as detection events if they do not occur during an electronics dead period.

In the Optica paper entitled “High-Flux Single-Photon Lidar,” Goyal and Rapp team up with former BU postdoc Yanting Ma and Draper Laboratory to address the dead-time effects of current instrumentation, which limit fast data acquisition. They describe how the physical electronics limitations of today’s detector technology can distort images and create invalid distance estimates when there is a mix of bright and dark objects. For example, a broken-down black car on the side of the road may not be detected by an AV if the field of view also includes a reflective road sign closer to the car. That’s because each time a photon is detected, the commonly used single-photon avalanche diode (SPAD) array shuts off to reset. The measurement settings that give unbiased estimates for the bright, reflective road sign despite these dead times also cause the darker object to barely be seen at all, which can result in hazardous driving scenarios.

To overcome this hardware limitation, the researchers establish a new way of modeling the absolute sequence of photon detection times as a Markov chain. Experimental results demonstrate that correctly compensating for dead times in a high-flux measurement can decrease data acquisition time by a factor of 100. By enabling more accurate and faster detection of objects and distances, this method could lead to safer real-time autonomous driving.
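The bias that dead times introduce is easy to reproduce in a toy simulation. The sketch below (with hypothetical timing parameters, and not the paper’s Markov-chain estimator) models a SPAD that goes blind for a fixed dead time after each detection; because the bright, nearby return almost always arrives first, the weaker return from the darker object is detected far less often than its reflectivity would suggest, which is exactly the distortion the Markov-chain model is designed to undo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical timing parameters (illustration only).
T_BRIGHT = 20e-9       # return from a bright, closer object (s)
T_DARK = 60e-9         # return from a darker, farther object (s)
DEAD_TIME = 80e-9      # SPAD dead time after each detection (s)
REP_PERIOD = 100e-9    # laser repetition period (s)

def simulate(n_pulses, p_bright=0.8, p_dark=0.05):
    """Toy SPAD model: the detector is blind for DEAD_TIME after every
    detection, so a bright early return can mask a weak later one."""
    detections = []
    dead_until = 0.0
    for k in range(n_pulses):
        t0 = k * REP_PERIOD
        for t_rel, p in ((T_BRIGHT, p_bright), (T_DARK, p_dark)):
            t = t0 + t_rel
            if t >= dead_until and rng.random() < p:
                detections.append(t_rel)
                dead_until = t + DEAD_TIME
                break          # blind for the rest of this pulse period
    return np.array(detections)

d = simulate(100_000)
frac_dark = np.mean(np.isclose(d, T_DARK))
# Without dead time we would expect roughly p_dark / (p_bright + p_dark),
# about 6%, of detections to come from the dark object; the dead time cuts
# that to roughly 1%, so the dark object nearly vanishes from the histogram.
print(f"fraction of detections from the dark object: {frac_dark:.1%}")
```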

Development 2: Enabling Accuracy Beyond the Timing Resolution of the Sensor

A diagram showing the conventional SPL setup modified for a subtractively dithered implementation. Varying delays are inserted by the digital delay generator between the laser and TCSPC (time-correlated single-photon counting) triggers to implement the dither. The amount of delay is controlled by the computer, which allows the delay to be subtracted from the photon detection times after acquisition.

Imagine officiating a 100-meter dash with Usain Bolt, but only having a stopwatch that records times in seconds. You would only be able to tell that Bolt ran the race somewhere between 9 and 10 seconds, but without any further precision. Now imagine that you had 10 stopwatches, each still with only 1-second resolution. However, by starting each stopwatch in sequence with a delay of one-tenth of a second between starts, you could cleverly combine the ten separate measurements of the race to get sub-second resolution.

Although the speed of light is much faster than the fastest human on Earth, timing photons with coarse clocks can follow the same principle. Today’s SPAD array detectors promise fast depth imaging but trade timing resolution for higher pixel counts, limiting the precision of distance estimates.

In the Optics Express paper entitled “Dithered depth imaging,” Goyal and Rapp, in collaboration with Draper Laboratory, address how to overcome this hardware limitation. They describe a subtractively dithered lidar implementation, which uses changing synchronization delays to shift the start times of the photon timing circuits. They then demonstrate that this new method, in conjunction with careful modeling of the laser pulse shape, outperforms methods based on conventional assumptions. The researchers propose that the resulting dithered-lidar architecture could be used to design SPAD array detectors that form precise depth estimates despite coarser timing systems.
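The general principle of subtractive dithering is easy to sketch. The toy example below (hypothetical values in Python, illustrating the principle rather than the paper’s hardware) adds a known, varying delay before a coarse quantizer, subtracts it afterward, and averages the results to recover an arrival time far finer than the clock’s step size.

```python
import numpy as np

rng = np.random.default_rng(2)

LSB = 1.0           # coarse timer step (arbitrary units, e.g. nanoseconds)
TRUE_TIME = 3.37    # true arrival time we would like to estimate

def coarse_timer(t):
    """Quantize an arrival time to the nearest step of the coarse clock."""
    return np.round(t / LSB) * LSB

n = 1000
# Known, deliberately varied delays (the dither), uniform over one clock step.
dither = rng.uniform(-LSB / 2, LSB / 2, size=n)

naive = coarse_timer(np.full(n, TRUE_TIME))            # every reading is 3.0
dithered = coarse_timer(TRUE_TIME + dither) - dither   # subtract the known delay

print("coarse clock only     :", naive.mean())      # 3.0, stuck at one step
print("subtractively dithered:", dithered.mean())   # ~3.37 after averaging
```

This is the stopwatch analogy above in code: the staggered start times play the role of the known delays that are removed after the measurement.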

Non-Line-of-Sight Imaging Edges to Reality

Simulations of confocal measurements for a planar, T-shaped object (a) highlight the resolution and field-of-view limitations of conventional NLOS methods. Laser illumination positions are shown as red dots throughout. Reconstructions using the f–k migration algorithm (b–d) show degradation as the aperture sizes and numbers of measurement points decrease. Hence, large-scale scenes (e) yield poor reconstructions (f), even from an aperture size of (1 m)². By taking advantage of existing vertical edges (g), ERTI is still able to accurately reconstruct large scenes despite a small aperture and few measurements.

SPL has the potential to form images of objects that are hidden from our direct field of view. This growing area of research, called non-line-of-sight (NLOS) imaging, holds tremendous potential for advancing the safety of driverless cars by anticipating obstacles beyond the capabilities of a human driver. While a number of NLOS imaging methods have been proposed, they have had limited practical applicability: they typically require scanning a laser over large sections of flat walls, which are not always available in real settings, and the reconstructed image tends to have a small field of view and to require a large number of measurements. Before this kind of autonomous navigation can become reality, methodologies that work beyond laboratory-scale experiments are required.

In the Nature Communications paper entitled “Seeing Around Corners with Edge-Resolved Transient Imaging,” Goyal, Rapp, Saunders, and Murray-Bruce team up with researchers from Draper Laboratory, the Massachusetts Institute of Technology, and Heriot-Watt University, Edinburgh, to develop a new paradigm for NLOS imaging using methods that could be more easily implemented in real-world settings. The researchers develop an SPL technique that can map out large-scale hidden rooms by pulsing a laser at a small region of the floor near a vertical edge, such as the bottom corner of a doorway. In experiments, the researchers demonstrated the ability to form 2.5-dimensional images of large hidden rooms, up to 3 meters in each dimension, with a 180-degree field of view. Impressively, only 45 measurement locations were required, far fewer than the thousands of points typically needed. The new work takes advantage of the fact that an occluding object, the vertical edge in this case, is visible, and it uses probabilistic modeling of photon detections to improve the computational methods. Listen to the researchers present this work here.
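The geometric intuition behind edge-resolved imaging can be sketched in a few lines. In the toy example below (a hypothetical scene and numbers, ignoring the time-resolved histograms and radiometric factors the actual method models), the vertical edge acts as an occluder: as the laser spot moves around the edge, each position exposes one more angular wedge of the hidden room, so successive measurements behave like a cumulative sum of the scene’s angular brightness, and differencing them recovers a wedge-by-wedge profile.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hidden scene: brightness in each of 45 angular wedges
# spanning the 180-degree field of view behind the vertical edge.
n_wedges = 45
scene = np.zeros(n_wedges)
scene[10:14] = 5.0    # a bright object part of the way around the room
scene[30:33] = 2.0    # a dimmer object farther around

# Each laser position near the edge "unmasks" one more wedge, so the i-th
# measurement integrates the light from wedges 0..i (plus a little noise).
measurements = np.cumsum(scene) + rng.normal(0, 0.1, size=n_wedges)

# Differencing successive measurements isolates each wedge's contribution,
# giving a coarse angular map of the hidden room from only 45 measurements.
reconstruction = np.diff(measurements, prepend=0.0)

print(np.round(reconstruction[8:16], 1))   # the bright object reappears
```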

The safety and success of self-driving cars depend on their ability to accurately map and respond to their surroundings in real time. This research demonstrates that SPL methods can meet those safety goals in practical, computationally efficient ways. While autonomous vehicles are the prime driver of lidar development, lidar advances are expected to benefit a growing number of applications, including robotic vision, environmental monitoring, and military reconnaissance.

Learn more about recent advances in signal processing techniques for AV lidar in an IEEE Signal Processing Magazine paper co-authored by Rapp and Goyal, entitled “Single-Photon Lidar for Autonomous Vehicles.”