Optics of Autonomous Vehicles

Optics and photonics technologies provide eyes for self-driving cars.

01 January 2017
Jeff Hecht

Cameras and lidars are playing key roles in the development of self-driving cars. Tesla Motors of Palo Alto, CA (USA), used a front-looking camera in the autopilot system it introduced in 2014, and the new version announced in October 2016 includes eight cameras mounted around the car.

Experimental self-driving cars being tested by Google and others also sport roof lidars for three-dimensional mapping.

The optical systems are part of a suite of sensors that work with an onboard computer to map the local environment and steer the vehicle through dynamic surroundings that contain traffic signals, pedestrians, other cars, tractor trailers, and even wild animals. The ultimate goal is a robotic system that drives better than error-prone humans, but most observers think that’s many years away.


BOLD BETS AT TESLA, GOOGLE

Autopilot was a bold technological step for the all-electric Tesla. Officially, its role is only to assist the driver, who is warned to pay attention to the road and keep hands on the steering wheel “at all times.” But even cautious Tesla drivers take their hands off the wheel briefly to show how well it steers itself on clean, well-marked streets.

Further upgrades to fully autonomous cars are in our future, but we aren’t there yet, as shown tragically by the fatal full-speed crash of a Tesla into a tractor trailer making a turn across a divided highway in Florida in May 2016.

“Neither the autopilot nor the driver noticed the white side of the tractor trailer against the brightly lit sky, so the brake was not applied,” Tesla wrote in a 30 June 2016 blog post. The car was headed east on a sunny afternoon, and the driver was not paying attention. The car’s one forward-looking camera missed the color difference between the truck and the sky, perhaps because it was monochrome.

Input from the car’s forward-looking radar was ignored because it could mistake overhead road signs for vehicles, causing false stops.

In October, Tesla announced new autopilot hardware designed “for full self-driving capability at a safety level substantially greater than that of a human driver.” It includes eight monochrome cameras. Three are in front: the main camera with a 150-meter range, a narrow-field camera with a 250-meter range, and a 120-degree fisheye camera with a 60-meter range.

Together with four cameras on the sides and one in the rear, they provide a 360-degree view, covering the “blind spots” of traditional cars. The system also includes a forward radar with a 160-meter range and side-looking ultrasonic sensors with an 8-meter range to monitor cars in adjacent lanes and to aid in parking.

Tesla also is adding a new computer with 40 times more processing power. New software will warn hands-off drivers to keep their hands on the wheel and will disable autopilot if they ignore repeated warnings. Further upgrades are planned to make the car fully self-driving.


TEST CARS GATHER 3D DATA

Google’s 58 self-driving cars are only test models, but they have driven more than two million miles in Austin, TX; Mountain View, CA; Phoenix, AZ; and Kirkland, WA. Their most obvious feature is a turret on the roof, housing a rotating lidar that records 3D data from its environment to help the car navigate the city and to build a 3D map to help guide other Google cars. The lidar’s short wavelength gives millimeter resolution, but each one cost Google tens of thousands of dollars. The cars also carry radars and cameras.

With speeds limited to 25 miles per hour on residential streets and 35 mph on major thoroughfares, Google’s self-driving cars are tortoises compared to the Tesla hares cruising at highway speeds. Google’s caution extends to having human operators sitting ready to take manual control if the car gets into a situation it can’t handle.

The car’s slow speed prevented injury when a Google car pulled into the path of a bus in Mountain View on 14 February 2016. It also helps Google cars collect more detailed mapping data than is possible at highway speed.

CHOICES IN CAMERAS FOR SELF-DRIVING CARS

“The radars and lidars and cameras are your primary sensor suite,” says Jim Rehg of the Georgia Institute of Technology (USA). Radars can see through fog, haze, and rain that can block the higher-resolution lidar. Lidars give 3D point maps but lack the far-field resolution of cameras. Cameras rely on ambient light, which varies widely with weather and the time of day. Headlights and street lights help, but developers also are working on low-light cameras.

Tesla uses monochrome cameras in its upgraded autopilot system to avoid overloading the car’s processor. But Rehg says “color offers tremendous advantages” in recognizing objects. It aids in semantic segmentation, a technique that labels every pixel according to the type of object it represents, such as street lamps, trees, and buildings. Those labels help build 3D maps of complex urban environments where foliage may hide parts of buildings and signs or parked cars may obscure pedestrians.
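To make the idea concrete, here is a toy sketch of what a semantic-segmentation output looks like: every pixel carries a class ID, which can then be summarized or used to mask regions of the scene. The label map and class names below are synthetic illustrations, not Tesla’s or Rehg’s software; a real system would produce the map with a trained network.

    # Toy illustration of semantic segmentation output: every pixel carries a class label.
    # The label map here is synthetic; a real system would produce it with a trained network.
    import numpy as np

    CLASSES = {0: "road", 1: "vegetation", 2: "building", 3: "pedestrian"}

    label_map = np.zeros((4, 6), dtype=int)   # a tiny "image" of per-pixel class IDs
    label_map[0, :] = 2                       # top row labeled building
    label_map[1, 2:] = 1                      # a patch of vegetation
    label_map[2, 4] = 3                       # a single pedestrian pixel

    for class_id, name in CLASSES.items():
        share = np.mean(label_map == class_id)
        print(f"{name:<10} {share:5.1%} of pixels")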

So color could help self-driving cars interpret their changing view as they drive through a neighborhood or when they see a bright white truck in front of a bright blue sky. Multispectral imaging could offer even more information, but that advantage has to be traded off against costs and processing requirements.

Processing time is an important issue, especially when self-driving cars are moving fast and need to react very quickly. “In that setting, you can’t afford to just process one pixel at a time. You need to be more selective,” Rehg says. The human retina is inherently selective because cones are packed most tightly in the fovea, so we instinctively turn our eyes toward things we want to study carefully. He notes that rally car drivers have learned to be very efficient in scanning scenes and understanding what’s important, so he’s testing artificial intelligence techniques in scale-model race cars.
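One way to picture that selectivity: spend full resolution only on a small region of interest and settle for a coarse, downsampled view everywhere else. The sketch below uses a random stand-in frame and an arbitrary hand-picked region; it is a generic illustration of the idea, not Rehg’s method, but it shows how sharply the pixel budget drops.

    # Sketch of "foveated" processing: cheap context from a downsampled frame,
    # full detail only inside a small region of interest (ROI).
    # The frame and ROI are synthetic; a real system would pick the ROI from
    # motion cues, maps, or a learned attention model.
    import numpy as np

    frame = np.random.rand(1080, 1920)   # stand-in for one camera frame

    context = frame[::8, ::8]            # coarse view of the entire scene
    roi = frame[400:600, 900:1200]       # full-resolution patch, e.g. near the horizon

    pixels_full = frame.size
    pixels_used = context.size + roi.size
    print(f"processed {pixels_used} of {pixels_full} pixels "
          f"({pixels_used / pixels_full:.1%})")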

Racing is a stringent test because driving fast requires looking farther ahead and reacting faster, and crashes are bound to happen when pushing the limit. Racing with model cars on an enclosed track makes sure no test drivers get hurt in the process. The faster you go, Rehg says, the more important cameras become. Radar and lidar both have resolution limits at long distances, but cameras with 4K chips offer much better resolution. His group is developing an open-source platform that others can use to explore the limits of autonomous cars.

ADVANCES IN LIDARS

“Many people believe lidars are necessary for driving automation, and lots of companies are working on them,” says Raj Rajkumar, director of the Technologies for Safe and Efficient Transportation Center at Carnegie Mellon University (USA). Their appeal comes from their high-resolution 3D maps of the local environment. But Rajkumar says a roof-mounted lidar scanning a full 360 degrees costs around $55,000, close to the price of a new Tesla Model S. Smaller lidars cost around $8,000, but they scan smaller areas or have lower resolution.

A traditional lidar scans a single laser beam across the field as it fires a series of pulses, measuring the distance to one point at a time. An array of multiple lasers and matching detectors can speed operation by collecting multiple data points simultaneously.
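The range measurement itself rests on simple time-of-flight arithmetic: the pulse travels out to the target and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch, using illustrative pulse timings rather than real sensor data:

    # Minimal time-of-flight range sketch (illustrative timings, not real sensor output).
    C = 299_792_458.0  # speed of light, m/s

    def range_from_round_trip(t_seconds):
        # The pulse travels out and back, so halve the round-trip distance.
        return C * t_seconds / 2.0

    # A pulse returning after about one microsecond corresponds to roughly 150 m.
    for t in (0.2e-6, 0.5e-6, 1.0e-6):
        print(f"round trip {t * 1e6:.1f} us -> range {range_from_round_trip(t):.1f} m")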

A solid-state hybrid design developed in 2007 by Velodyne Lidar spins an array of up to 64 lasers and a solid-state detector array around a 360-degree field, collecting up to 1.2 million points per field.

Google uses the Velodyne technology, which has dropped in price since its introduction.

Quanergy, a California startup, has developed an all-solid-state lidar that avoids mechanical scanning by steering the output beam with a phased-array transmitter. The company says it can map about a half million points per second.
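The steering principle is the textbook phased-array relation rather than anything specific to Quanergy’s design: for emitters spaced a pitch $d$ apart, operating at wavelength $\lambda$, a progressive phase step $\Delta\phi$ between neighboring emitters steers the beam to an angle $\theta$ given (under the usual idealizations) by

$$\sin\theta = \frac{\lambda\,\Delta\phi}{2\pi d},$$

so sweeping the phase electronically sweeps the beam with no moving parts.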

Rajkumar says that an emerging generation of solid-state “flash” lidars promises to be faster, cheaper, and more reliable for driverless cars because these devices require only one laser and use no mechanical components. They use diverging optics to spread the beam across an area and focus the returning light pulses onto a 2D detector array that measures return time as well as light intensity. In that way, a single laser flash can simultaneously map points across a wide field in 3D.
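As a rough illustration of how one flash yields a 3D frame, the sketch below converts a 2D array of per-pixel return times into XYZ points. It assumes a simple pinhole geometry and synthetic timing data; real flash lidars add calibration, intensity thresholding, and ambient-light rejection.

    # Sketch: turning a flash lidar's per-pixel return times into 3D points.
    import numpy as np

    C = 299_792_458.0            # speed of light, m/s
    H, W = 64, 80                # detector array size (illustrative)
    FOCAL_PX = 60.0              # pinhole focal length in pixel units (illustrative)

    # Synthetic per-pixel round-trip times: pretend every return comes from ~40 m away.
    round_trip = np.full((H, W), 2 * 40.0 / C)
    rng = C * round_trip / 2.0   # per-pixel range; one flash fills the whole frame

    # Per-pixel ray directions from the pinhole model.
    v, u = np.mgrid[0:H, 0:W]
    x = (u - W / 2) / FOCAL_PX
    y = (v - H / 2) / FOCAL_PX
    dirs = np.stack([x, y, np.ones_like(x)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    points = dirs * rng[..., None]    # (H, W, 3) point cloud in sensor coordinates
    print(points.reshape(-1, 3)[:3])  # a few example XYZ points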

NASA is investigating the technology for spacecraft landing on Mars and other planets. Several companies are developing flash lidars for self-driving cars, but they are not yet in mass production. Flash lidar would benefit from the economics of mass production, but multiple units would be needed on a car because each one sees only a limited field.

INTRODUCING AUTONOMOUS CARS INTO THE MARKET

Self-driving cars are hot. General Motors says it will introduce “Super Cruise” technology in its Cadillac CT6 model in 2017. Uber is testing its own autonomous cars in Pittsburgh, PA (USA). In October, Uber’s Otto subsidiary claimed the first commercial delivery by a self-driving truck, a modified Volvo 18-wheel tractor trailer that hauled more than 50,000 cans of beer over 120 miles of Colorado (USA) highway from Fort Collins to Colorado Springs, passing through Denver.

But don’t expect robocars to become routine yet.

“Lots of progress has been made in the past few years, but we still have quite a long way to go before we can fully automate the driving function,” Rajkumar says. He predicts gradual progress, with cars coming on the market in three years that can drive themselves in “geographically fenced regions” such as limited-access highways, where there are no pedestrians, bicycles, or other potential perils.

Autonomous driving in more general areas or over long distances will be a decade away, he says, but even then, it won’t be ready for chaotic traffic such as in Vietnam, where drivers don’t follow predictable rules.

And it’s anyone’s guess when self-driving cars might be ready for roads covered by snow or so poorly maintained that neither people nor robots can see the lines defining the lanes.

Jeff Hecht is a science and technology writer and author of Laser Pioneers and Beam: The Race to Make the Laser.







Nano-antennas suggested for driverless cars

Oryx Vision is developing a new kind of photonic sensor technology allowing autonomous vehicles to “see” better.

The Israeli startup, founded in 2009 by former Vishay Intertechnology executive David Ben Bassat, believes that its nano-antenna-based technology, which operates in the far-infrared spectrum, will outperform the various camera, radar, and lidar options implemented in today’s early autonomous and semiautonomous vehicles.

While companies like Quanergy say their “puck” lidar sensors have a range of up to 200 meters, Oryx argues that sunlight can still confuse optical sensors in driverless cars. Oryx’s solution is to use nano-antennas that operate in the far-IR region. The company says the approach can detect tiny objects from 150 meters away and works in darkness and extreme weather conditions.

The company says it has already demonstrated the technology and discussed its implementation with some car manufacturers as well as top-tier automotive suppliers.

Ben Bassat has filed various patents describing antenna elements operating in the terahertz spectrum.

Demand grows for optics in autos

Photonics systems based on visible, 3D, lidar, and night-vision cameras are exploding into the automotive space, according to Yole Développement, the French market research company.

In a November 2016 assessment of imaging technology for the sector, Yole predicts that revenue will grow at a compound annual growth rate of 20% between 2015 and 2021. In terms of total systems sales value, revenue is expected to hit US$7.3 billion (€6.8 billion) in 2021.
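As a consistency check on those two figures (assuming the 20% rate applies uniformly over the six-year span), the projected 2021 value implies a 2015 base of roughly

$$\text{2015 revenue} \approx \frac{\$7.3\,\text{billion}}{(1.20)^{6}} \approx \frac{\$7.3\,\text{billion}}{2.99} \approx \$2.4\,\text{billion}.$$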

Among the nine market segments identified in the report, Imaging Technologies for Automotive 2016, cameras designed for advanced driver assistance systems are the most important category, alone representing 51% of revenue by 2021.

Innovative lidar sensor wins CES award

A solid-state lidar that Quanergy Systems developed for autonomous cars has won the CES 2017 “Best of Innovation” Award in the vehicle intelligence category.

The laser-based technology features no moving parts and evaluates its surroundings by detecting the reflections of laser pulses fired out in all directions. The S3 lidar can crunch through half a million data points per second to generate a live view around the vehicle.

Quanergy will demonstrate the sensor at the 2017 US Consumer Electronics Show.

