The Sky’s the Limit for Autonomy and Artificial Intelligence

As AI and autonomous machines continue their upward development trajectories, so does the collaborative work of the Air Force Research Laboratory with industry, academia, and other government labs
09 January 2020
by Daneet Steffens
Ledé explaining how autonomy has evolved over time and how it will provide future benefits to Air Force operations. Credit: AFRL.

As the Commander’s Autonomy Technical Advisor at the US Air Force Research Laboratory (AFRL), Jean-Charles Ledé spends much of his time thinking about AI-driven autonomy. These intertwined technologies are in an escalating period of growth in both the military and commercial spheres, and part of Ledé's job is to monitor and leverage those developments. And while he acknowledges some competition between private industry and the Department of Defense (DoD) — when it comes to recruiting talent, for example — the more important picture for him is the level of collaboration that connects government and industry.

“A lot of foundational research has been traditionally funded by government,” he points out. “What Silicon Valley is now enjoying in terms of great capabilities in artificial intelligence was actually funded by the federal government as research as far back as the 1950s and ’60s. The primary question now is, how do we work more closely together in a field that is evolving so rapidly?”

A little imagination goes a long way, adds Ledé. “From a government perspective, we often think we have unique problems, especially in the DoD: there are not many companies out there whose job is to defend the nation. In that sense our problems are unique, but their solutions can also have close analogs to commercial capabilities. I would contend that the majority of our problems, when we look at them in a slightly different light, would be of interest to industry, and it’s important for us to keep that in mind.”

One example, he notes, is the automotive industry. Tolerance for mistakes is very low when you are working on weapons for use in potential conflicts; car manufacturers developing autonomous vehicles take a similarly low-tolerance approach. “They have pretty much as low a tolerance for risk due to AI malfunction as we do,” says Ledé.

Another overlap lies in the technology itself. After all, algorithms are agnostic to their use: the perception issues are the same whether you’re a self-driving tank trying to avoid a wall or an autonomous car trying to avoid the same one. “We don’t always get to drive on the road, of course, and sometimes we do need to run into that wall, so we have different issues from our colleagues who get to stay on the road, but the fundamentals are similar enough for that collaborative work to be done.”

A wider difference between military and industry lies with the data environment. “We have mission objectives,” says Ledé, “and very specific constraints. Then there’s the adversarial nature of our work — by definition, we are facing an adversary who is intent on seeing us fail. In this area, we have different needs and different requirements than Silicon Valley does. What they are doing with big data is really big data — we usually have nowhere near that in the Department of Defense.” Synthetic data, he says, may offer a viable solution. “Those solutions are of potential interest to not just the DoD but also to industry. Coming up with AI capabilities that don’t require as much data is certainly an area of common interest.”
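To make that shared interest concrete, here is a minimal sketch of one standard synthetic-data technique: stretching a small labeled dataset with perturbed copies of real samples. Everything in it (the synthesize function, the noise and shift parameters, the toy dataset) is hypothetical and purely illustrative, not anything Ledé or AFRL describes.

```python
# Minimal, illustrative sketch: expand a scarce dataset with synthetic variants.
# All names and parameter values here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(seed=0)

def synthesize(images: np.ndarray, labels: np.ndarray, copies: int = 5):
    """Create noisy, slightly shifted copies of each real sample."""
    synth_x, synth_y = [], []
    for img, lbl in zip(images, labels):
        for _ in range(copies):
            noisy = img + rng.normal(0.0, 0.05, size=img.shape)  # simulated sensor noise
            noisy = np.roll(noisy, rng.integers(-2, 3), axis=0)  # small vertical shift
            synth_x.append(np.clip(noisy, 0.0, 1.0))
            synth_y.append(lbl)
    return np.stack(synth_x), np.array(synth_y)

# A 100-sample "real" dataset becomes 600 training samples.
real_x = rng.random((100, 32, 32))
real_y = rng.integers(0, 2, size=100)
aug_x, aug_y = synthesize(real_x, real_y)
train_x = np.concatenate([real_x, aug_x])
train_y = np.concatenate([real_y, aug_y])
print(train_x.shape)  # (600, 32, 32)
```

Real programs would lean on physics-based simulators or generative models rather than simple perturbations, but the payoff is the same: more training signal from less collected data.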

Ledé and his group actively court that shared input, working with universities and businesses across the globe. Many algorithms can be developed on surrogate platforms, using surrogate programs and surrogate data. Ledé’s group tests them in an open environment and, once they ensure that everything is working, they can migrate them into their higher-security environment. “We look at this early research, fundamental research, and consider how it might be applied to military problems,” he says. “Universities are not going to be working directly on military technology, per se, but they are going to be developing capabilities that, through our own researchers, we will be able to convert into military-relevant capabilities.” To help develop Skyborg, a project that aims to team fighter pilots with drone “wingmen,” for example, the AFRL issued a request for information to industry early last year.
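That surrogate-then-migrate workflow resembles, in miniature, ordinary transfer learning: pretrain in the open on surrogate data, move only the resulting model artifact across the security boundary, then fine-tune on the scarcer mission data. The sketch below assumes nothing about AFRL’s actual tooling; the linear model, file name, and data are all stand-ins.

```python
# Hypothetical sketch of develop-on-surrogate, migrate, then fine-tune.
import numpy as np

def train(weights, x, y, lr=0.01, epochs=50):
    """Least-squares linear model fit by gradient descent (stand-in learner)."""
    for _ in range(epochs):
        grad = x.T @ (x @ weights - y) / len(y)
        weights = weights - lr * grad
    return weights

rng = np.random.default_rng(1)

# 1) Open environment: plentiful surrogate data, full tooling.
surrogate_x = rng.normal(size=(1000, 8))
surrogate_y = surrogate_x @ rng.normal(size=8)
w = train(np.zeros(8), surrogate_x, surrogate_y)
np.save("surrogate_weights.npy", w)  # the only artifact that crosses the boundary

# 2) Higher-security environment: scarce mission data, warm-started model.
w0 = np.load("surrogate_weights.npy")
mission_x = rng.normal(size=(50, 8))
mission_y = mission_x @ rng.normal(size=8)
w_final = train(w0, mission_x, mission_y, epochs=20)
```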

Another prime example of an open working environment is the Air Force Cognitive Engine (ACE), an AI development environment that the AFRL cultivates from both open sources and internal research. The objective of ACE is to enable anybody within the Air Force to use the most up-to-date AI tools to create new applications rapidly, in an environment that lets them advance their applications and concepts without having to reinvent the proverbial wheel. “How you ingest the data, how you manipulate it, how you learn, how you train your algorithm, how you deploy your algorithm in an operational environment — all of these tasks are very important, but frankly we don’t want people to have to worry about them. So ACE is really aimed at enabling a broad swath of personnel and people who are supporting us to develop AI projects more rapidly and more economically.”
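The public description of ACE is essentially a pipeline abstraction: ingest, transform, train, deploy, with the shared plumbing provided so application builders supply only their domain-specific pieces. A toy version of that idea, in hypothetical Python with no connection to the real ACE codebase, might look like this:

```python
# Toy pipeline abstraction: ingest -> transform -> train -> deploy.
# Illustrative only; this is not the ACE API.
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Pipeline:
    ingest: Callable[[], Any]              # pull raw data into the pipeline
    train: Callable[[Any], Any]            # fit an algorithm on prepared data
    transforms: List[Callable[[Any], Any]] = field(default_factory=list)
    deploy: Callable[[Any], None] = print  # push the result to an operational target

    def run(self) -> Any:
        data = self.ingest()
        for step in self.transforms:       # data manipulation stages
            data = step(data)
        model = self.train(data)
        self.deploy(model)
        return model

# Usage: an "application" plugs in only its domain-specific pieces.
pipe = Pipeline(
    ingest=lambda: [3.0, 1.0, 2.0],
    train=lambda xs: sum(xs) / len(xs),                 # trivial "model": the mean
    transforms=[lambda xs: [x / max(xs) for x in xs]],  # normalize to [0, 1]
)
pipe.run()
```

The design point is the last few lines: the user writes the ingest, transform, and training logic for their problem, while the orchestration they would otherwise reimplement comes for free.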

Ultimately, it’s about “scaling up,” a topic that will underpin Ledé's plenary at SPIE Defense + Commercial Sensing in April. “A lot of progress has been made in AI,” affirms Ledé. “I think we are in an era where we can start thinking about large-scale implementation, and that’s what we are looking at: how do we approach scaling up and operationalizing AI at scale? That’s really the focus of the group that I work with. Speaking at the SPIE defense conference will allow us to continue to develop broad ties with the attendees, discuss how we can work on mutually beneficial projects, and let them know how they can best contribute to the Air Force mission. And I do believe that there is a big opportunity for the Air Force to leverage recent work in AI and to greatly benefit from it.”
