Comparing flash lidar detector options

Three lidar receiver technologies are compared using the total laser energy required to perform a set of imaging tasks.
17 July 2017
Paul F. McManamon, Paul S. Banks, Jeffrey Beck, Dale G. Fried, Andrew S. Huntington and Edward A. Watson

Lidar (light detection and ranging) is an increasingly common surveying method based on pulsed laser light. It is used by the military and in many commercial applications, such as 3D mapping and navigation for autonomous cars and unmanned air vehicles. For these applications, sensitive lidar detectors are essential, but there are different types of lidar detection schemes, each with corresponding strengths and weaknesses. Here, we compare three lidar receiver technologies using the total laser energy required to perform a set of imaging tasks (a more detailed description is available elsewhere1). The tasks are combinations of two collection types (3D mapping from near and far), two scene types (foliated and unobscured), and three data products (geometry only, geometry plus 3-bit intensity, and geometry plus 6-bit intensity). The receiver technologies are based on indium gallium arsenide (InGaAs) Geiger mode avalanche photodiodes (GMAPDs) (see Figure 1), linear mode avalanche photodiodes (LMAPDs) in both InGaAs and mercury cadmium telluride (HgCdTe), and optical time-of-flight (OTOF) lidar using commercial 2D cameras. This last method combines rapid polarization rotation of the image with dual low-bandwidth cameras to generate a 3D image. We chose scenarios to highlight the strengths and weaknesses of the various lidars.
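To give a sense of how these energy requirements arise, the short calculation below sketches a simplified per-pixel photon budget using the standard lidar range equation for a Lambertian target that fills the beam. It is a minimal illustration rather than the detailed model used in our study, and all of the parameter values (pulse energy, range, aperture, efficiencies, array size) are assumptions chosen only for the example.

```python
"""
Rough flash-lidar photon-budget sketch (not the authors' exact model).

Estimates the mean number of signal photoelectrons a single detector
pixel collects per transmitted pulse, using the common monostatic lidar
range equation for a Lambertian target that fills the illumination beam.
All parameter values below are illustrative assumptions.
"""
import math

H = 6.626e-34      # Planck constant (J*s)
C = 3.0e8          # speed of light (m/s)

def photoelectrons_per_pulse(pulse_energy_j, wavelength_m, target_reflectance,
                             range_m, rx_aperture_diam_m, atm_transmission,
                             optics_efficiency, quantum_efficiency, n_pixels):
    """Mean photoelectrons per pixel per pulse for a beam-filling Lambertian target."""
    photon_energy = H * C / wavelength_m
    rx_area = math.pi * (rx_aperture_diam_m / 2) ** 2
    # Lambertian target: fraction of the scattered light captured by the aperture.
    geometric_capture = rx_area / (math.pi * range_m ** 2)
    collected_energy = (pulse_energy_j * target_reflectance * geometric_capture
                        * atm_transmission ** 2 * optics_efficiency)
    return collected_energy * quantum_efficiency / (photon_energy * n_pixels)

# Illustrative numbers only: 1 mJ pulse at 1550 nm, 10% reflectance,
# 10 km range, 10 cm aperture, shared across a 128 x 32 pixel flash array.
n_pe = photoelectrons_per_pulse(1e-3, 1.55e-6, 0.1, 10_000.0, 0.10,
                                0.9, 0.7, 0.8, 128 * 32)
print(f"~{n_pe:.2f} photoelectrons per pixel per pulse")
```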


Figure 1. Schematic illustration of a diffused-junction planar-geometry avalanche diode structure. This is the structure for one of our detector options, the Geiger mode avalanche photodiode (GMAPD). The electric field (E) profiles at right show that the peak field intensity is lower in the peripheral region of the diffused p-n junction than it is in the center of the device. SiNx: Silicon nitride. i-InP: Intrinsic (i.e., not intentionally doped either p- or n-type) indium phosphide. i-InGaAsP: Intrinsic indium gallium arsenide phosphide.

Table 1 summarizes the energy required for various imaging modalities. For the InGaAs LMAPDs, we actually considered two bandwidth settings, but in the table we list only the setting that required the lower energy. GMAPD cameras operate with a low probability of return (i.e., reflection) on a single pulse, but require multiple coincident returns from the same range. The GMAPD cameras do well with bare-earth 3D mapping and 3D imaging through trees. In grayscale situations, the GMAPD cameras use somewhat more energy. The advantages of the GMAPDs are the following: they are thermoelectrically (TE) cooled; they use low-energy-per-pulse, high-repetition-rate lasers, which are easier to obtain (because laser diodes are continuous-wave and because of the damage thresholds of fiber lasers); they can passively image in the near-IR (NIR); they have little noise, so their performance can be easily predicted; they are commercially available and moderately priced; and their readout circuits are very simple. One disadvantage of GMAPDs is that they have a dead time of 400 ns to 1 μs after an avalanche. The probability of avalanches must therefore be kept low, or the first return in a given detector blocks any later returns. Because of this blocking issue, high background (such as bright sunlight) can be a problem, requiring smaller apertures or increased resolution. But the innovative processing associated with using multiple samples per pixel in a megapixel array has largely mitigated this problem. A second disadvantage is that forming the image requires significant processing, owing both to the coincidence processing and to the removal of motion. Finally, the dynamic range must be narrow to ensure the right number of return photons.
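The short sketch below illustrates the coincidence-processing idea in simplified form: a Geiger-mode pixel reports at most one timestamp per pulse and is blind afterward, early background counts can block later signal returns, and a surface is accepted only when enough detections over many pulses fall in the same range bin. The probabilities, bin counts, and threshold are illustrative assumptions, not values from our analysis.

```python
"""
Minimal sketch of GMAPD coincidence processing (illustrative, not the
authors' algorithm).  A Geiger-mode pixel reports at most one timestamp
per laser pulse and is blind afterward (dead time), so the surface is
recovered by histogramming timestamps over many pulses and accepting a
range bin only when enough detections coincide.
"""
import random
from collections import Counter

random.seed(1)

N_PULSES = 200          # laser shots integrated per pixel (assumed)
N_BINS = 1000           # range bins in the range gate (assumed)
TRUE_BIN = 417          # bin containing the real surface (assumed)
P_SIGNAL = 0.15         # per-pulse probability the signal fires the pixel
P_NOISE = 0.30          # per-pulse probability a background/dark count occurs
COINCIDENCE_THRESHOLD = 10

histogram = Counter()
for _ in range(N_PULSES):
    # A noise count earlier in the gate arrives first and, because of dead
    # time, blocks the later signal return for this pulse.
    noise_bin = random.randrange(N_BINS) if random.random() < P_NOISE else None
    signal_fires = random.random() < P_SIGNAL
    if noise_bin is not None and (not signal_fires or noise_bin < TRUE_BIN):
        histogram[noise_bin] += 1        # pixel fired on the earlier noise count
    elif signal_fires:
        histogram[TRUE_BIN] += 1         # signal return detected

best_bin, count = histogram.most_common(1)[0]
if count >= COINCIDENCE_THRESHOLD:
    print(f"surface detected in bin {best_bin} with {count} coincident returns")
else:
    print("no coincident detection above threshold")
```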

Table 1. Summary of required energy for various scenarios and cameras. Grayscale calculations were not done in conjunction with foliage poke through, so there are blank portions in the table. ‘Bare earth’ refers to 3D mapping of an area with no foliage on it. ‘Foliage poke through’ refers to 3D mapping of an area under the trees. DAS: Detector angular subtense. mJ: Millijoules. InGaAs: Indium gallium arsenide. LMAPD: Linear mode avalanche photodiode. HgCdTe: Mercury cadmium telluride. OTOF: Optical time-of-flight lidar.
                 Large DAS                                Small DAS
                 Bare earth   3-bit gray    6-bit gray    Bare earth   3-bit gray    6-bit gray
                 (mJ)         scale (mJ)    scale (mJ)    (mJ)         scale (mJ)    scale (mJ)
GMAPD            0.154        25            1601          8.9          2833          181,300
HgCdTe LMAPD     0.175        12.7          814           9.9          1527          97,729
InGaAs LMAPD     0.54         12.7          814           56.8         1527          97,729
OTOF             2                          45            80                         3,800

                 Foliage poke through (mJ)
                 Large DAS                                Small DAS
GMAPD            0.164                                    8.9
HgCdTe LMAPD     0.28                                     27.6
InGaAs LMAPD     2.06                                     136

Like GMAPD cameras, InGaAs LMAPDs are TE-cooled, commercially available, and moderately priced. In addition, 3D images can be formed quickly, on a single pulse, and with simple processing. However, the gain of these cameras is relatively low (about 5 and up) because of excess noise and breakdown issues, and they require a complex readout integrated circuit (ROIC). Moreover, because the gain is relatively low, it is necessary to keep track of all noise sources. LMAPDs also require relatively high-energy-per-pulse lasers.

HgCdTe LMAPDs have k = 0 (i.e., the ratio of hole to electron impact-ionization coefficients is zero), meaning that only electrons impact-ionize during an avalanche. This allows very high gains to be achieved. These cameras are therefore very sensitive while retaining linear gain. Furthermore, they require very low energy for mapping in many scenarios. They have the additional advantage that 3D images can be formed quickly, on a single pulse, and with simple processing. They can image passively and actively from the visible through to the mid-IR, and a day/night passive imager can be inherently co-boresighted (i.e., aligned) with an active imager. However, these cameras are not commercially available and are currently more expensive than GMAPDs and InGaAs LMAPDs. HgCdTe cameras also need to be cooled to near 100 K, so they require a cigarette-pack-size cooler. Finally, they require a complex ROIC and high-energy-per-pulse lasers.
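The practical difference between the two LMAPD materials can be illustrated with the standard McIntyre excess-noise model, F(M) = kM + (1 - k)(2 - 1/M), where M is the avalanche gain: with k = 0 the excess noise stays below 2 at any gain, whereas a finite k makes the noise grow with gain and so limits the useful gain of InGaAs devices. The sketch below uses k = 0.2 for InGaAs purely as an assumed, representative value.

```python
"""
Illustrative comparison of APD excess noise using the standard McIntyre
model, F(M) = k*M + (1 - k)*(2 - 1/M), where M is the avalanche gain and
k is the hole-to-electron impact-ionization ratio.  The k values below
are representative assumptions, not measurements from the article.
"""

def excess_noise_factor(gain, k):
    """McIntyre excess noise factor for mean avalanche gain `gain`."""
    return k * gain + (1.0 - k) * (2.0 - 1.0 / gain)

for gain in (5, 10, 20, 100, 500):
    f_ingaas = excess_noise_factor(gain, k=0.2)   # assumed k for InGaAs
    f_hgcdte = excess_noise_factor(gain, k=0.0)   # k = 0 for HgCdTe e-APDs
    print(f"M = {gain:4d}:  F_InGaAs = {f_ingaas:6.1f}   F_HgCdTe = {f_hgcdte:4.2f}")
```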

Optical time-of-flight (OTOF) lidars, which use low-bandwidth cameras with a Pockels cell, have the advantage of using commercially available 2D cameras for flash 3D imagery. In the visible or NIR, it is possible to obtain huge-format cameras with tens of megapixels for hundreds of dollars, promising high performance at low cost. Even in the shortwave IR (SWIR) regime, a 1920 × 1080 pixel custom camera, as well as smaller cameras, can be had for as little as $25,000 from multiple vendors. These cameras are mature and offer low-noise, uncooled operation. This means that even though they have no gain, they can be relatively sensitive while providing high dynamic range. Their main disadvantage is that they require a Pockels cell, which adds cost. A secondary disadvantage is that the two cameras must be aligned carefully. OTOF cameras feature low energy use for 3D mapping with grayscale. That said, it is likely that the other sensing modalities will be able to adopt some of the noise-reduction techniques used in conjunction with grayscale imaging for the OTOF cameras.
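The sketch below shows one plausible way such a receiver converts the two camera images into range. It assumes the Pockels cell imposes a linear polarization ramp across the range gate, so that the per-pixel intensity ratio between the two cameras encodes arrival time; the gate start and depth are illustrative assumptions rather than parameters of any particular product.

```python
"""
Sketch of how an OTOF receiver can recover range from two low-bandwidth
cameras behind a Pockels cell (illustrative; assumes a linear modulation
ramp, which may differ from any specific commercial implementation).
The Pockels cell rotates the return light's polarization by an amount
that depends on arrival time, and a polarizing splitter divides the
light between the two cameras, so the intensity ratio encodes range.
"""
GATE_START_M = 900.0    # range at which the modulation ramp begins (assumed)
GATE_DEPTH_M = 150.0    # range swept by the ramp (assumed)

def range_from_intensities(i_cam1, i_cam2):
    """Per-pixel range estimate from the two camera intensities."""
    total = i_cam1 + i_cam2
    if total <= 0:
        return None                      # no return in this pixel
    fraction = i_cam1 / total            # 0..1 position within the ramp
    return GATE_START_M + fraction * GATE_DEPTH_M

# Example: a pixel splitting 30% / 70% between the cameras maps to a range
# 30% of the way through the gate.
print(range_from_intensities(0.3, 0.7))   # -> 945.0 m
```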

In summary, for high-range resolution, the current choice of lidar receiver technology is either a GMAPD array or possibly an OTOF imager. GMAPDs typically have an advantage over LMAPDs in terms of inherent timing precision when detecting isolated optical pulses. This is because the current pulses generated by breakdown of a GMAPD pixel are stronger than the current pulses emitted by an LMAPD pixel in response to weak signals. When using LMAPD pixels, timing jitter is significant if the APD's response barely exceeds the detection threshold. Range precision improves for stronger signal returns. Consequently, scenarios that prioritize the best range precision with the least transmitted energy tend to favor GMAPD detectors, whereas scenarios that require penetrating obscurants or collecting reflectance information in a single observation (for instance to ‘freeze’ a dynamic scene) tend to favor LMAPDs. In this article we have attempted to select scenarios that straddle these respective areas of strength and weakness, but these general characteristics should be borne in mind when considering specific applications.
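As a rough illustration of the threshold-detection point, a common rule of thumb puts the timing jitter of a leading-edge threshold detector near the detector rise time divided by the signal-to-noise ratio, so range precision improves roughly in proportion to SNR. The rise time assumed in the sketch below is illustrative.

```python
"""
Back-of-the-envelope range precision for threshold detection of an LMAPD
pulse, using the common rule of thumb sigma_t ~ t_rise / SNR and
sigma_R = c * sigma_t / 2.  Values are illustrative assumptions only.
"""
C = 3.0e8              # speed of light (m/s)
RISE_TIME_S = 2e-9     # assumed detector/amplifier rise time (2 ns)

def range_precision_m(snr, rise_time_s=RISE_TIME_S):
    """1-sigma range precision for a leading-edge threshold detector."""
    sigma_t = rise_time_s / snr
    return C * sigma_t / 2.0

for snr in (2, 5, 10, 50):
    print(f"SNR = {snr:3d}:  sigma_R ~ {range_precision_m(snr) * 100:.1f} cm")
```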


Paul F. McManamon
Exciting Technology LLC
Dayton, OH

Paul McManamon is the president of Exciting Technology and technical director of the University of Dayton Ladar and Optical Communications Institute. He chaired the laser radar study for the National Academy of Sciences and cochaired the optics and photonics study. He is a fellow of SPIE, IEEE, OSA, the Air Force Research Laboratory (AFRL), the Directed Energy Professional Society, the Military Sensing Symposium (MSS), and the American Institute of Aeronautics and Astronautics. He was the president of SPIE in 2006. Until May 2008, he was chief scientist of the AFRL Sensors Directorate. He received the Meritorious Presidential Rank Award in 2006.

Paul S. Banks
TetraVue
San Marcos, CA

Paul Banks is the founder and CEO of TetraVue, working to commercialize high-resolution 3D imaging for smart robotic vision. He received his PhD in applied physics from the University of California, Davis. His career includes work at Lawrence Livermore National Laboratory, and he was cofounder of a new Photonics Division at General Atomics. He has contributed in many areas of laser technology and applications, from ultrafast to directed energy.

Jeffrey Beck
DRS Network & Imaging Systems, LLC
Dallas, TX
Dale G. Fried
3DEO, Inc.
Dover, MA
Andrew S. Huntington
Voxtel Inc.
Beaverton, OR

Andrew Huntington has led SWIR detector development at Voxtel Inc. since 2004, specializing in APD design and application of APDs to scientific and military sensing. His work at Voxtel has included computational modeling of impact ionization statistics to engineer lower noise multipliers for InGaAs APDs, APD epitaxial layer design, APD wafer fabrication process design, and performance modeling of sensor systems based on APD photoreceivers and focal plane arrays.

Edward A. Watson
Vista Applied Optics, LLC
Dayton, OH

Edward Watson is a distinguished researcher of sensor technologies for the University of Dayton Research Institute. He is also chief executive of Vista Applied Optics, an optical consulting firm. He retired in 2012 from the Air Force Research Laboratory after 30 years. His research interests include lidar, optical phased array technology, and novel remote sensing, such as low-light-level imaging and speckle characterization. He is a fellow of OSA, SPIE, and MSS and is an AFRL Fellow.


References:
1. P. F. McManamon, P. S. Banks, J. D. Beck, D. G. Fried, A. S. Huntington, E. A. Watson, Comparison of flash lidar detector options, Opt. Eng. 56(3), p. 031223, 2017. doi:10.1117/1.OE.56.3.031223