Vehicle and System Autonomy: Views into the Future

A session from SPIE Defense + Commercial Sensing 2019 looks into the next horizons of autonomous systems.

17 April 2019
Daneet Steffens
A car from GM's Cruise Automation program negotiates its way through San Francisco traffic. Credit: GM Cruise

At the SPIE Defense + Commercial Sensing Symposium in Baltimore, the Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure conference was off to a fine start on Monday, 15 April. Conference Chair Michael C. Dudzik, of IQM Research Institute, introduced this relatively new conference as "a confluence of autonomy, sensors, cybersecurity, and processing itself, leading [to] greater degrees of autonomy for different vehicle systems."

In the keynote address of the "Future Trends for Autonomous Operations" session, the Department of Defense's Richard Linderman described a future battlespace, emphasizing various aspects of autonomy, cybersecurity, high-performance computing, and the ways in which technology is transforming the landscape: it means, he said, changing the type of battlespace we see ourselves fighting in, including inverting the ratio of humans to autonomous systems. With aerial drones and Global Hawks, for example, many people are currently required to examine the video feeds; that ratio might need to be flipped upside down, to one person overseeing 50 autonomous feeds. This, Linderman said, constitutes a "superhuman flow of information. This is what is shaping our planning." The big picture is about increasing systems capabilities and advancing production capabilities, driving lower costs and decreasing the "time to market," an area, Linderman added, "where we can learn from industry."

Modernizing priorities include hypersonics (weapons that can travel more than five times the speed of sound); fully networked command, control, and communication; directed energy; space; quantum science; machine learning; microelectronics; autonomy; cyber; and missile defense. Critically, much of the progression requires a combination of affordability, rapidity, capability, and trust. "Academia plays a role in the area of trust," Linderman said. "We have to grapple with the fact that we need to be ready to trade off some capability to gain more trust and/or affordability. Data is the new oil: if you're refining it to make your new product, you better make sure no one is adding any water to it. Do not let people muck with your data. Protect the sanctity of your data. It's a precious resource. Don't let anyone shake the confidence in your AI. Maintain your pedigree. Who wrote your software? Know the source." Commercial industry, he said, should be concerned about this too.

Linderman repeatedly emphasized the need for trusted and verifiable solutions in order to achieve an autonomous battlespace. What does that achievement look like? A battlespace in which hundreds of unmanned underwater, surface, and air vehicles (UxVs) deliver mass, resilience, flexibility, and robustness, and in which a multi-layer system achieves persistent wide-area intelligence, surveillance, and reconnaissance (ISR) at high resolution and delivers effects over a brigade area. Key elements include a ground fusion center, communications and sensor jammers, persistent reconnaissance, airborne resupply, denial of conventional assets, and GPS jamming, as well as high levels of flexibility, connectivity, and interoperability. "We are," Linderman said, "setting the bar high."

BAE Systems' Stephen M. Jameson opened his presentation by contrasting two very different situations. We are, he pointed out, in the process of moving from a well-understood environment, simple tasks, small numbers of unmanned systems, no adversary, nominal communications, and humans on the loop, to an unconstrained environment: missions with complex, interdependent tasks; multiple, heterogeneous unmanned systems; a need to adapt to adversary action; degraded and denied communications; and limited human interaction with high trust in autonomous technology - a "fire and forget" system.

Adaptive intelligent processing for battlespace autonomy incorporates estimation, control, and learning, "and all three must be driven by mission objectives." In addition, flexibility is critical in order to make the process of "observe; learn; decide; act" as swift, robust, and efficient as possible.
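
To make the shape of that "observe; learn; decide; act" loop concrete, here is a minimal Python sketch in which the estimation, learning, and decision steps are all parameterized by a shared mission objective. Every name and data structure in it is an illustrative placeholder invented for this article, not an API from any system described in the session.

```python
# Minimal sketch of an "observe; learn; decide; act" loop in which estimation,
# control, and learning are all driven by a shared mission objective.
# All names are illustrative placeholders, not a real system's API.
from dataclasses import dataclass, field


@dataclass
class MissionObjective:
    priority_region: str   # e.g., name of the area of interest
    risk_tolerance: float  # how aggressively the controller may act


@dataclass
class WorldEstimate:
    tracks: dict = field(default_factory=dict)  # fused target tracks, keyed by ID


def estimate(reports, prior, objective):
    """Fuse new sensor reports into the prior estimate (tracking / data fusion)."""
    updated = WorldEstimate(tracks={**prior.tracks})
    for report in reports:
        updated.tracks[report["id"]] = report["state"]
    return updated


def learn(history, objective):
    """Stand-in for pattern discovery and behavior characterization."""
    return {"observed_cycles": len(history)}


def decide(world, model, objective):
    """Stand-in for control: turn the estimate and learned model into taskings."""
    return [("task_sensor", track_id) for track_id in world.tracks]


def run_cycles(sensor_feed, objective, steps=3):
    world, history = WorldEstimate(), []
    for _ in range(steps):
        world = estimate(next(sensor_feed), world, objective)  # observe
        history.append(world)
        model = learn(history, objective)                      # learn
        for action in decide(world, model, objective):         # decide
            print("executing", action)                         # act
```

The structural point is the one Jameson made: the mission objective threads through every stage of the loop, not just the final decision.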

In a more detailed discussion, Jameson explored estimation in terms of tracking, sensor and data fusion, and correlation. Control, he said, ranges from sensor resource management to dynamic programming, stochastic optimization, and mission planning. It requires flexible team structures as well as varying autonomy, including recommendations to manned platforms; objectives to highly autonomous platforms; tasks to less autonomous platforms; actions to non-autonomous platforms; and peer-to-peer task reassignment. The ability to adapt to a rapidly changing communications environment is critical. And learning must incorporate pattern discovery, event alerts, and behavior characterization.
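
One way to picture that spectrum of varying autonomy is as a mapping from a platform's autonomy level to the granularity of the directive it receives. The sketch below is a hypothetical illustration of that idea only; the enum values and strings are invented for this article, not drawn from a BAE Systems interface.

```python
# Hypothetical sketch: the more autonomous the platform, the coarser the
# directive it receives (recommendation -> objective -> task -> action).
from enum import Enum, auto


class AutonomyLevel(Enum):
    MANNED = auto()            # humans operate it directly
    HIGHLY_AUTONOMOUS = auto()
    LESS_AUTONOMOUS = auto()
    NON_AUTONOMOUS = auto()    # remotely operated or scripted


DIRECTIVE_GRANULARITY = {
    AutonomyLevel.MANNED: "recommendation",        # crew decides whether to follow
    AutonomyLevel.HIGHLY_AUTONOMOUS: "objective",  # e.g., "maintain ISR over area X"
    AutonomyLevel.LESS_AUTONOMOUS: "task",         # e.g., "fly route A, image point B"
    AutonomyLevel.NON_AUTONOMOUS: "action",        # e.g., "turn to heading 090"
}


def directive_for(platform_level: AutonomyLevel, mission_goal: str) -> str:
    """Wrap a mission goal at the granularity the platform can accept."""
    return f"{DIRECTIVE_GRANULARITY[platform_level]}: {mission_goal}"


if __name__ == "__main__":
    print(directive_for(AutonomyLevel.HIGHLY_AUTONOMOUS, "persistent ISR over sector 4"))
    print(directive_for(AutonomyLevel.NON_AUTONOMOUS, "persistent ISR over sector 4"))
```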

We are moving, Jameson noted, from a centralized architecture to different levels of coordination that trade off optimality for the ability to operate in an environment of limited communications. The biggest autonomy technical challenges at the moment, according to Jameson? Infrastructure; human-machine teaming; scalable teaming; and validation and verification.
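
A toy example makes that optimality-versus-communications trade-off visible: a centralized planner with global knowledge can search for the best assignment of tasks to vehicles, while vehicles restricted to local information must settle for a greedy choice that may cost more. The cost numbers and function names below are invented purely for illustration.

```python
# Toy illustration of trading optimality for communications resilience:
# a centralized planner finds the best vehicle-task assignment, while a
# decentralized greedy scheme uses only locally available information.
from itertools import permutations

# COST[i][j]: cost for vehicle i to perform task j (illustrative numbers only)
COST = [
    [1.0, 2.0, 9.0],
    [1.5, 8.0, 9.0],
    [2.0, 9.0, 9.0],
]


def centralized_assignment(cost):
    """Exhaustive search over assignments: optimal, but needs global knowledge."""
    n = len(cost)
    best = min(permutations(range(n)), key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))


def decentralized_greedy(cost):
    """Each vehicle claims its cheapest unclaimed task; no global optimization."""
    claimed, assignment, total = set(), [], 0.0
    for row in cost:
        j = min((j for j in range(len(row)) if j not in claimed), key=lambda j: row[j])
        claimed.add(j)
        assignment.append(j)
        total += row[j]
    return assignment, total


print(centralized_assignment(COST))  # ([1, 0, 2], 12.5): globally optimal
print(decentralized_greedy(COST))    # ([0, 1, 2], 18.0): costlier, but no central node
```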

Moving toward the commercial, General Motors Innovation Program Manager Jeremy Salinger examined various pieces of the "automated driving puzzle": localization, vehicle motion control, perception, planning and decision making, legal and regulatory aspects, human factors, economic, social, and political factors, and verification and validation. There are also user and operational factors to take into account, such as ride-sharing, delivery, mass transit, parking, freeway driving, urban driving, and off-road driving. We need RADAR, cameras, LIDAR, and inexpensive computing power in place, Salinger said, in addition to some redundancies so that the car remains controllable.
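
The redundancy point can be made concrete with a small, hypothetical check: the vehicle stays in automated mode only while every driving function is still covered by at least one healthy sensing modality, and otherwise degrades to a minimal-risk maneuver such as slowing down and pulling over. The coverage table and function names below are assumptions made for illustration, not GM's architecture.

```python
# Hypothetical sketch of a sensor-redundancy check: stay in automated mode only
# if every required driving function is covered by at least one healthy modality.
# The coverage table is invented for illustration, not any vendor's design.
FUNCTION_COVERAGE = {
    "object_detection": {"camera", "radar", "lidar"},
    "lane_keeping": {"camera", "lidar"},
    "ranging_in_fog": {"radar"},
}


def drivable(healthy_sensors: set) -> bool:
    """True if every required function is covered by at least one healthy sensor."""
    return all(coverage & healthy_sensors for coverage in FUNCTION_COVERAGE.values())


def control_mode(healthy_sensors: set) -> str:
    if drivable(healthy_sensors):
        return "automated_driving"
    return "minimal_risk_maneuver"  # e.g., slow down and pull over


print(control_mode({"camera", "radar", "lidar"}))  # automated_driving
print(control_mode({"camera", "lidar"}))           # radar lost: fog ranging uncovered
print(control_mode({"radar"}))                     # camera and lidar lost
```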

In all, he raised more questions than he delivered answers - "What quality maps will be available, including timeliness and accuracy? How good does the simulation environment have to be? We need simulations that are realistic enough to replace most of the testing in the real world" - though he did share this nifty video of a car from GM's Cruise Automation program negotiating its way through San Francisco traffic.

Beyond issues around environmental and atmospheric conditions (think: clear/rain/snow/fog/blowing dust/leaves), road-surface conditions, geographic conditions - from freeways to rural dirt roads - and plain old parking, Salinger pointed to the challenges of adverse driving situations such as heavy snow ("How does a car drive if it can't see the road?"), a water-covered road ("How does the car manage uncertainty around water depth?"), loads protruding from the back of trucks, confusing road debris like a blown tire, or even just a complex traffic jam. "We need sensing capability and intelligence to deal with these situations," he said. "We need to be able to manage turbulent motion."

The requirements to achieve that goal include a balance between semantic reasoning and neural models; robust training of learning systems; verification and validation; addressing any legal/societal issues; and sensor-suite optimization: "These," said Salinger, "are the next horizons."
