Can you see me? Lidar tests establish benchmarks for range

01 March 2023
Jeremy Bos
In the second configuration, large, highly reflective signs, or “confusers,” are placed near test targets (slender gray objects). Photo credit: Peter Hallett

Like many optics and photonics technologies, lidar is an enabling technology. One of its most promising applications is in autonomous vehicles (AVs) and advanced driver assistance systems (ADAS). These systems rely on information from several sensors fused together in perception subsystems. State-of-the-art ADAS and AV systems may use cameras, lidars, radar, ultrasonic sensors, and a host of other sensors. Lidars are sought after in perception systems because the physics limit of their range detection accuracy can be on the order of centimeters, whereas radar systems, their closest replacement, generally have ambiguities of several tens of centimeters or more.

While lidars used in defense and science missions often operate near the physics limit, the same cannot be said of AV/ADAS units. Here, factors like cost, manufacturability, and packaging take precedence over performance. In negotiations with automakers, performance is generally a given; suppliers instead compete on piece price and these other considerations.

Because of these—often hidden—design trade-offs, users of these lidars are likely to see performance variations when changing from one supplier to another. It was with this in mind that a team of lidar researchers and experts began an effort in 2019 to benchmark the performance of lidars aimed at the AV/ADAS market. Automakers, government agencies, and consortia have conducted similar evaluations. However, in most cases, their results have not been made public, resulting in frustration for AV/ADAS engineers. The simple fact is that, no matter how well designed, lidars with similar advertised specifications often perform quite differently in the field.

Originally planned for April 2020 but paused by the COVID-19 pandemic, our first benchmarking activity was held in April 2022 at the SPIE Defense + Commercial Sensing (DCS) conference in Orlando. In this first year of testing, we focused only on evaluating range performance: specifically, maximum range, range accuracy, and precision with respect to a RIEGL VZ-400i survey-grade reference. Test targets were provided and calibrated by Labsphere to be 10 percent reflective with purely Lambertian reflection characteristics; they have no hot spots or specular reflection. The targets are small and narrow, intended to duplicate the minimum cross-section of a small child just learning to walk. They were placed at 5 m increments between 5 m and 50 m, and at 10 m increments between 50 m and 100 m. Five additional targets were placed between 120 m and 200 m. Testing was done in two configurations: one with and one without adjacent, highly reflective “confusers.”
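For concreteness, here is a minimal sketch of the target layout described above. The positions of the five far targets are an even-spacing assumption on our part; only the 120 m to 200 m span is stated.

```python
# Near targets: every 5 m from 5 m to 50 m, then every 10 m out to 100 m
near_m = list(range(5, 51, 5)) + list(range(60, 101, 10))

# Five additional far targets between 120 m and 200 m; even spacing is an
# assumption, as the exact positions are not specified in the article
far_m = [120, 140, 160, 180, 200]

target_ranges_m = near_m + far_m
print(target_ranges_m)  # [5, 10, ..., 50, 60, ..., 100, 120, 140, 160, 180, 200]
```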

We collected data from eight different lidars in this first round, and early results were presented on the industry stage at the conference. Over the summer, Michigan Tech graduate and undergraduate students at the university’s Robust Autonomous Systems Lab (RASL) worked to further process the data. These results were published in a special section on autonomous vehicles in Optical Engineering.

The results were interesting. First, despite claims of maximum ranges out to 100 m or 200 m, there were very few detections of targets beyond 50 m for any unit tested. On the other hand, precision (variation about the mean reported value) generally matched advertised values between 2 cm and 3 cm. We also observed that range error (the difference between our reference and the mean reported value) can be 10 cm or more for targets more than 25 m distant. However, the most significant finding is the role of the confusers in our second test configuration. When one of our test targets is located adjacent to a highly reflective confuser, the likelihood the intended target will be detected drops significantly, to between 24 percent and 65 percent, depending on the range and metric.
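To make these metrics concrete, the sketch below shows one way range error and precision, as defined above, might be computed from the ranges a unit reports for a single target. The function name and sample values are our own illustration, not the published scoring code.

```python
import numpy as np

def range_metrics(reported_ranges, reference_range):
    """Range error and precision for one target.

    reported_ranges: ranges (m) reported by the lidar under test
    reference_range: range (m) from the survey-grade reference
    """
    reported = np.asarray(reported_ranges, dtype=float)
    # Range error: difference between the mean reported value and the reference
    range_error = reported.mean() - reference_range
    # Precision: variation (sample standard deviation) about the mean reported value
    precision = reported.std(ddof=1)
    return range_error, precision

# Hypothetical example: a target surveyed at 25.00 m, with returns near 25.08 m
err, prec = range_metrics([25.06, 25.09, 25.11, 25.05, 25.10], 25.00)
print(f"range error: {err:+.3f} m, precision: {prec:.3f} m")
```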

Are these findings of concern? Not really. AV/ADAS perception system engineers rely on multiple sensors; there will always be some redundancy when safety is involved. Each sensor has advantages and disadvantages, and multiple modalities are needed to ensure a robust perception system. Beyond that, these engineers have likely done their own benchmarking and understand the performance of their sensors very well. Still, the differences highlight the need for standards for evaluating lidar performance. We feel our approach of using calibrated test targets and a reference system is a useful starting point. Controlled variations in the test environment, like our adjacent confusers, are no doubt another important element.

The lidar benchmarking activity will convene again at SPIE DCS in late April, and we are planning a subsequent event at DCS in 2024. Our aim for this second round of testing is to incorporate lessons learned and expand testing to as many as 30 units. New tests are also planned, for example, evaluating observed laser power with respect to eye-safety limits. Early plans call for interference testing, where a lidar is tasked with detecting a target surrounded by multiple, identical units—as might be the case, say, on an eight-lane freeway. Also in 2024, we hope to further iterate on our testing and add simulated rain and fog.

Notably, we aim to make the test data, as well as the scoring code, available to the public. Our goal is to provide as much transparency as possible while assuring anonymity for our participating lidar companies. Similarly, we do not aim to compare one lidar to another; the intent is to provide open benchmarking results and to motivate standards activities, not to rank units or manufacturers.

Automakers are already signing long-term purchase agreements with lidar manufacturers for use in AV/ADAS systems. As other applications in and out of the AV/ADAS space look to adopt these valuable sensors, the need for standards becomes even more evident.

We are grateful to SPIE for helping to organize this activity and hope that its output will be helpful to the community and enable further transparency. AV/ADAS systems exist for a reason, and we hope our work will result in better and safer autonomous systems.

The paper, along with test data and scoring code, can be downloaded from spie.org/OpenBenchmarkTest.

Jeremy Bos, PhD, PE, is an associate professor of electrical and computer engineering at Michigan Tech and leads RASL. He is organizing the lidar benchmarking tests with Paul McManamon, president of Exciting Technology, LLC, and a team of other lidar researchers and experts.

 
