Want to Reconfigure Images on a Chip?

DARPA has a strategy for you
16 March 2020
By Ford Burkhart
Field-proven: DARPA’s content-based mobile edge networking in action. Credit: DARPA

A top goal at DARPA these days is multifunction imaging: the ability to select part of an image, say a battlefield, and analyze that content with special software, on a chip.

"That's our desire," said Whitney Mason, a program manager at DARPA, the US Defense Advanced Research Projects Agency, at SPIE Photonics West in February.

"We want infrared imagers that can turn data into information at the chip level. This solves the power problem, the latency problem. Having smart cameras is the goal of all of this."

Mason, whose research areas include novel device structures, optics, and imaging electronics, was the keynote speaker at the Quantum Sensing session. Her focus was a DARPA software-reconfigurable imaging program called ReImagine. ReImagine aims to produce readout integrated circuits (ROICs) with "revolutionary capabilities," and to develop the underlying theory and algorithms to collect the most valuable information when a sensor is configured for a variety of measurements.

These ROICs will show that efficient computation within an ROIC can enable real-time analysis of much more complex scenes than traditional systems can handle. The system will deliver "more actionable information to the warfighter than has ever been possible from a single imaging sensor."

Most current imaging systems perform only a single set of measurements. To multifunction with reconfigurable sensors is "an exciting area ... one of the biggest things that this program is going to be able to do," Mason said.

"We will be able to take different types of measurements across an array. We will be able to change the measurement based on the scene content."

In the selected portion, the user could, say, employ a faster frame rate and a higher resolution for certain objects.
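The per-region idea can be sketched in a few lines of code. Everything below is a toy illustration, not DARPA's design: a tiled focal-plane array where each tile carries its own frame rate and resolution, and one tile is boosted when interesting content appears there.

```python
from dataclasses import dataclass

@dataclass
class TileConfig:
    frame_rate_hz: int
    resolution: tuple  # (rows, cols) read out per tile

class ReconfigurableSensor:
    """Toy model of a tiled sensor whose readout settings can
    differ per tile (hypothetical, for illustration only)."""
    def __init__(self, tiles_x, tiles_y, default):
        self.tiles = {(x, y): default
                      for x in range(tiles_x) for y in range(tiles_y)}

    def set_region(self, tile_xy, config):
        # Reconfigure a single tile without touching the others.
        self.tiles[tile_xy] = config

# Whole array starts at a modest 30 Hz, low resolution.
sensor = ReconfigurableSensor(4, 4, TileConfig(30, (64, 64)))
# An object of interest appears in tile (2, 1): boost only that tile.
sensor.set_region((2, 1), TileConfig(240, (256, 256)))
```

The point of the sketch is simply that the measurement is a per-region property, so the "faster frame rate and higher resolution" apply only where the scene content warrants them.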

"There are 8 billion cameras in the world," Mason said. "We need to be able to use AI-based embedded intelligence to help narrow down the content from the sensors to achieve the desired image."

She predicted the goal can be reached "in my lifetime," maybe even in ten years. Sony, she said, is already able to do processing at the focal plane array. "We'll see it," she added.

"We are trying to build a technology, not a product," Mason said. It's not a matter of just spotting movement in a scene. "Think about ferns moving on a pond, or bees moving on a pond. We need to pass on only information that is relevant."
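A crude way to see the relevance problem is simple frame differencing, which flags any motion at all, clutter included. The sketch below is not DARPA's algorithm; the thresholds and names are invented. It gates frames on how many pixels changed, where a real system would add learned models to separate clutter like wind-blown foliage from targets of interest.

```python
import numpy as np

def relevant_changes(prev, curr, diff_thresh=20, min_pixels=50):
    """Pass a frame onward only if enough pixels changed by a
    meaningful amount. Purely illustrative thresholds."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    return bool(changed.sum() >= min_pixels)

prev = np.zeros((100, 100), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] = 255  # a 100-pixel change: worth passing on
```

The gap between this sketch and a useful sensor is exactly Mason's point: pixel-level change detection cannot tell ripples from targets, so the intelligence has to move onto the chip.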

-Ford Burkhart is a science and technology writer based in the US. This article originally appeared in the 2020 Photonics West Show Daily.

Related SPIE content:

Photonic integrated circuits for Department of Defense-relevant chemical and biological sensing applications: state-of-the-art and future outlooks

Mosaic Warfare

DARPA Challenges Drive Dual-Use Innovation

