Gaylord Palms Resort & Convention Center
Orlando, Florida, United States
15 - 19 April 2018

Share your photonics research for UAS applications

Come see the latest optics and photonics for Unmanned Autonomous Systems at SPIE Defense + Commercial Sensing 2018

Hear the latest sensing, imaging, and photonics research for unmanned autonomous systems (UAS) applications at SPIE Defense + Commercial Sensing. Explore technologies that can enhance air, ground, and underwater UAS, including LiDAR, infrared, multispectral and hyperspectral imaging, and more.

Our topical tracks help you quickly locate potential items of interest in the 2018 Defense + Commercial Sensing program, such as sessions, papers, vendors, and courses. Explore the information below to see what may interest you. 

Hotel reservations
23 March 2018

Registration increases
30 March 2018

Register today

Attend the free expo

Learn more about the vendors at SPIE Defense + Commercial Sensing Expo 2018

The expo provides access to 400 exhibiting companies, industry sessions, a technology demonstration, and more.

Meet suppliers specifically offering UAS solutions

Attend industry sessions, including Lighting the Path Towards Autonomous Mobility
View all 2018 Industry Session topics 

Check out the LiDAR Demonstration
LiDAR is the core optical sensing technology powering autonomous vehicles, and high-performance data is required for safe self-driving. Stop by to see Luminar's breakthrough LiDAR perception capabilities enabling the autonomous future.

Learn more about the expo

Recommended course

SPIE Defense + Commercial Sensing offers over 30 half-day, full-day and 2-day courses that take place at the event. Take advantage of face-to-face instruction from some of the biggest names in industry and research.

2018 recommended UAS course
Introduction to LIDAR for Autonomous Vehicles (SC1232)

See our other courses on LIDAR, sensors, and more—you can't go wrong with our money-back guarantee!

View all courses

Conferences of potential interest

Of the 50 conferences at SPIE Defense + Commercial Sensing 2018, the 21 conferences below have been identified as containing content of particular interest to those seeking UAS material.

 • Unmanned Systems Technology
 • Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
 • Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping
 • Infrared Technology and Applications
 • Infrared Imaging Systems: Design, Analysis, Modeling, and Testing
 • Detection and Sensing of Mines, Explosive Objects, and Obscured Targets
 • Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Sensing
 • Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR
 • Laser Radar Technology and Applications
 • Laser Technology for Defense and Security
 • Sensors and Systems for Space Applications
 • Situation Awareness in Degraded Environments 2018
 • Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery
 • Geospatial Informatics Motion Imagery Analytics
 • Long-Range Imaging
 • Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation 2018
 • Next-Generation Analyst
 • Hyperspectral Imaging Sensors: Innovative Applications and Sensor Standards 2018
 • Thermosense: Thermal Infrared Applications
 • Energy Harvesting and Storage: Materials, Devices, and Applications
 • Next-Generation Spectroscopic Technologies

Review the program, including 82 papers on UAS

Below are papers that include significant technical content related to UAS applications. SPIE Defense + Commercial Sensing 2018 includes 1,900 papers, many of which may interest those working on UAS; the papers identified here contain specific content of particular relevance.

Papers
The 82 papers below are listed by conference and paper number.

Opto-acoustic intensity probes for seabed target tracking and detection
Paper 10628-18

Author(s):  Cameron A. Matthews, Naval Surface Warfare Ctr. Panama City Div. (United States), et al.
Conference 10628: Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXIII
Session 9: Synthetic Aperture Sonar (SAS) I

Vector acoustic sensors offer a compact means of measuring the kinematic properties of acoustic waves. The commercial sector has used intensity probes for in-air measurement for several decades, particularly in the automotive industry. The advent of piezoelectric technologies for undersea use allows acoustic sources to be registered with camera imagery in seabed sensing systems that can benefit the defense, fisheries, and oil industries. A fused vector-sensor and optical-camera feed is detailed for consideration, with data presented from a prototype demonstration in the coastal waters of Panama City, Florida.


Fractal analysis of seafloor textures for target detection in synthetic aperture sonar imagery
Paper 10628-20

Author(s):  Thomas Nabelek, Univ. of Missouri (United States), et al.
Conference 10628: Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXIII
Session 9: Synthetic Aperture Sonar (SAS) I


UAV-based LiDAR and gamma probe with real-time data processing and downlink for survey of nuclear disaster locations
Paper 10629-11

Author(s):  Martin Pfennigbauer, RIEGL Laser Measurement Systems GmbH (Austria), et al.
Conference 10629: Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Sensing XIX
Session 4: Radiological, Nuclear Sensing

A RIEGL VUX-1UAV laser scanner and a Hotzone Technologies gamma radiation probe mounted on a RiCOPTER UAV are employed for surveying nuclear disaster locations. Precise localization of the gamma radiation sources, and simulation of the corresponding radiation intensity patterns taking up-to-date topography into account, is achieved in real time. The results are displayed to the person in charge of the action forces in an intuitive, user-friendly way while the UAV is still in the air. We present results of extended field tests with live radiation sources to demonstrate real-time semi-autonomous flight path generation, data acquisition, and processing.


Regional sensing with an open-path dual comb spectroscopy and a UAS
Paper 10629-31

Author(s):  Ian Coddington, National Institute of Standards and Technology (United States), et al.
Conference 10629: Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Sensing XIX
Session 8: Vapor, Aerosol Detection

Dual-comb spectroscopy (DCS) is a rapidly evolving spectroscopic tool that exploits the advantages of frequency combs for precision spectroscopy. Here we show that this novel spectroscopy source can be employed for regional monitoring using an array of retroreflectors or an unmanned aerial system (UAS). Both fixed and UAS systems combine the high-precision, multi-species detection capabilities of open-path DCS with spatial scanning to enable spatial mapping of atmospheric gas concentrations. The DCS systems measure atmospheric absorption over long open-air paths with 0.007 cm-1 resolution from 1.57 to 1.66 µm, covering absorption bands of CO2, CH4, H2O, and their isotopologues.
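To illustrate the kind of retrieval that open-path absorption measurements rely on, below is a minimal Beer-Lambert inversion. The function name and all numbers are illustrative only and are not taken from the paper:

```python
import math

def path_averaged_density(i_received, i_reference, cross_section_cm2, path_cm):
    """Invert the Beer-Lambert law, I = I0 * exp(-sigma * N * L), for the
    path-averaged molecular number density N in molecules/cm^3."""
    absorbance = -math.log(i_received / i_reference)
    return absorbance / (cross_section_cm2 * path_cm)

# Illustrative numbers only (not from the paper): a 1 km open-air path and a
# nominal absorption cross-section for a single spectral line.
n_density = path_averaged_density(
    i_received=0.92,          # detected intensity, arbitrary units
    i_reference=1.00,         # intensity in the absence of absorption
    cross_section_cm2=1e-20,  # cm^2 per molecule
    path_cm=1e5,              # 1 km expressed in cm
)
```

In practice a DCS retrieval fits many absorption lines simultaneously across the comb's bandwidth, which is what enables multi-species detection from a single measurement.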


Mapping and reconnaissance imager, night-enhanced, for sensing of contaminants, oil, and unseen threats (MARINE SCOUT)
Paper 10631-17

Author(s):  Toomas H. Allik, Active EO Inc. (United States), et al.
Conference 10631: Ocean Sensing and Monitoring X
Session 4: Oil Spill Detection

MARINE SCOUT is a compact, lightweight, multi-spectral airborne sensor payload, compatible with the Puma and other UAS, for remote sensing of oil spills and measurement of oil thickness in marine environments. The payload includes filtered InGaAs NIR/SWIR and long-wavelength infrared (LWIR) cameras capable of detecting oil, rejecting vegetative clutter, and identifying thick crude oils. It compensates for smear due to image motion using a novel, enabling forward-motion-compensation technology, and it contains image orientation and alignment features. These capabilities, plus a ground station, support mapping requirements, producing high-fidelity, mosaicked, geo-rectified, multi-spectral image stacks and full-motion video for use by oil spill responders.


Mitigation of platform motion artifacts in laser imagery
Paper 10631-22

Author(s):  Derek M. Alley, Naval Air Systems Command (United States), et al.
Conference 10631: Ocean Sensing and Monitoring X
Session 6: Underwater Imaging

Laser-based systems have been developed to mitigate the difficulties encountered when performing optical imaging in turbid coastal and harbor waters. Since traditional approaches require that the laser and receiver be located on the same platform, the size, weight, and power requirements are incompatible with small, remotely operated and autonomous vehicles. Researchers at NAWCAD have developed a laser imaging sensor that uses a bistatic geometry, placing the laser and receiver on separate, smaller platforms. With smaller platforms, stability is typically sacrificed for maneuverability, introducing distortions into the laser imagery. This paper demonstrates how an on-board inertial measurement unit is used to detect vehicle movement and how that information is encoded into the laser imagery. The receiver uses this information to mitigate any distortions caused by vehicle movement. The process behind the image correction is explained and initial results are presented.


Sea-ice detection for autonomous underwater vehicles and oceanographic lagrangian platforms by continuous-wave laser polarimetry
Paper 10631-34

Author(s):  Jose Lagunas-Morales, Takuvik (Canada), et al.
Conference 10631: Ocean Sensing and Monitoring X
Session 8: Lidar Sensing I

AUVs provide new spatial and temporal scales for observational studies of the ocean. They offer a broad range of capabilities that reduce the need for research vessels. In the ice-covered Arctic Ocean, navigation is only possible during the summer period. Moreover, safe underwater navigation in icy waters depends on the capability to detect sea ice. We present a CW polarization LiDAR capable of detecting 3 cm of ice at a distance of 12 meters while providing a relative measurement of its thickness. This system is portable to AUVs such as Argo floats, sea gliders, and propeller-driven robots.


Characterization of the spectrofluorescence and reflectance properties of Arctic benthic algae as lidar targets
Paper 10631-35

Author(s):  Matthieu Huot, Takuvik Joint International Lab. (Canada), et al.
Conference 10631: Ocean Sensing and Monitoring X
Session 8: Lidar Sensing I

We consider the characteristics of macroalgal (kelp) targets of a LiDAR capable of assessing algal 3D morphology and quantifying algal biomass via fluorescence or differential absorption. Spectral absorption, fluorescence emission, fluorescence efficiency, and temporal fluorescence induction dynamics of Arctic algae can differ by class due to variation in photopigment complement. Surface reflectance characteristics of macroalgae can vary by morphology and structure. We present a characterization of the absorption and fluorescence excitation-emission spectra of Arctic macroalgal targets and verify their fluorescence efficiency and regular reflectance. Simulations using these optical characteristics guide us in optimizing LiDAR configuration and performance under various operating conditions.


Comparing fluorescent and differential absorption LiDAR techniques for detecting macroalgal biomass with applications to Arctic substrates
Paper 10631-37

Author(s):  Eric Rehm, Takuvik (Canada), et al.
Conference 10631: Ocean Sensing and Monitoring X
Session 9: Lidar Sensing II

The physical and biological properties of Arctic ice and coastal benthos remain poorly understood due to the difficulty of accessing these substrates in ice-covered waters. A LiDAR system deployed on an autonomous underwater vehicle (AUV) can interrogate these 3D surfaces for physical and biological properties simultaneously. Using our understanding of the absorption, inelastic scattering (fluorescent), and elastic scattering properties of photosynthetic micro- and macro-algae excited by lasers, we present results of in situ tank tests using a two-wavelength (473 nm, 532 nm) prototype to evaluate both fluorosensor and differential absorption (DIAL) approaches using reflectance standards and selected macroalgae as targets.


Coherent 24 GHz radar system for micro-Doppler studies
Paper 10633-17

Author(s):  Duncan A. Robertson, Univ. of St. Andrews (United Kingdom), et al.
Conference 10633: Radar Sensor Technology XXII
Session 4: Micro-Doppler Exploitation

We will discuss the hardware design of a coherent 24 GHz radar system developed at the University of St Andrews for obtaining micro-Doppler data. The system is based on the Analog Devices MMIC-Radar evaluation board, which comprises transmitter, receiver, and PLL-based fractional-N frequency synthesizer chips. Descriptions of other key components will also be provided. Three identical custom-made smooth-walled conical horn antennas were designed and built for this radar system and will be discussed in detail in the paper. It will be shown that the performance of these high-gain (24.5 dBi) antennas agrees extremely well with the design simulations. Finally, field trial results will be shown to validate the radar system performance.


Artificial intelligence (AI) and machine learning (ML) for future army applications
Paper 10635-6

Author(s):  John Fossaceca, U.S. Army Research Lab. (United States), et al.
Conference 10635: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX
Session 2: Operationalizing AI/ML-Infrastructure

In this talk we provide an overview of novel algorithms intended to address Army-specific needs for increased operational tempo and autonomy for ground robots in unexplored, dynamic, cluttered, contested, and sparse-data environments. We will survey some of ISD's recent research in online, non-parametric learning that quickly adapts to variable underlying distributions in sparse exemplar environments, a technique for unsupervised semantic scene labeling that continuously learns and adapts semantic models discovered within a data stream, and a method for finding chains of reasoning with incomplete information using semantic vectors. These research exemplars provide ways to overcome the brittleness of current machine learning techniques, making them applicable to Army-relevant scenarios.


2020: Faster than real time tactical ISR from the dismount, faster than real time strategic ISR to the dismount
Paper 10635-17

Author(s):  Richard M. Buchter, U.S. Army Research Lab. (United States), et al.
Conference 10635: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX
Session 5: Deep Learning and Data Analytics: Learning


Generative policy approach for dynamic collaboration in coalition environments
Paper 10635-30

Author(s):  Dinesh Verma, IBM Thomas J. Watson Research Ctr. (United States), et al.
Conference 10635: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX
Session 9: Coalition Operations and Interoperability

Coalition operations require autonomous vehicles and UAS to be supported by a computing infrastructure, yet coalitions lack sufficient manpower to configure and manage it. To function smoothly, this computing infrastructure needs to become autonomous, generating its own configuration and the policies needed for operation. One way to address this problem is to use generative policies, an approach in which the devices generate policies for their own operations, without human involvement, as the configuration of the system changes. In this paper, we describe a coalition scenario for a dynamic community of interest that requires the support of unattended, self-managing computing infrastructure with capabilities such as auto-scaling and moving-target defense, and we present an architecture based on generative policies that can seamlessly support the requirements of the scenario. A simulation implementation of the architecture, and lessons learned from that simulation, are presented.


Responding to unmanned aerial swarm saturation attacks with autonomous counter-swarms
Paper 10635-32

Author(s):  Michael Day, Georgia Tech Research Institute (United States), et al.
Conference 10635: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX
Session 10: Airborne ISR

We present a study of unmanned fixed-wing aircraft tactics wherein an aggressor swarm consisting of agents capable of seeking and destroying targets in the air and on the ground is engaged by a collaborative defensive swarm that employs wingmen tactics inspired by manned fighter doctrine. We use a simulator (SCRIMMAGE) capable of running hundreds of agents in Monte Carlo analysis with a standard fidelity flight dynamics engine. We then present a field test we performed in conjunction with the Naval Postgraduate School to test our tactics in games of up to 10 vs. 10 aircraft.


Real-time lidar from ScanEagle UAV
Paper 10635-33

Author(s):  Roy D. Nelson, Ball Aerospace & Technologies Corp. (United States), et al.
Conference 10635: Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX
Session 10: Airborne ISR

Ball Aerospace has recently repackaged its TotalSight Flash LIDAR system for operation on the Insitu ScanEagle. The system provides real-time, color, 3D georegistered imagery to the flight operator. This paper describes the LIDAR Assembly for ScanEagle (LASE), its operation, and collected 3D LIDAR imagery.


An integrated forward-view 2-axis MEMS scanner for ultrasmall 3D Lidar
Paper 10636-14

Author(s):  Dingkang Wang, Univ. of Florida (United States), et al.
Conference 10636: Laser Radar Technology and Applications XXIII
Session 4: Compact Laser Radar Systems

This paper reports the progress of an ultra-small 3D LiDAR based on integrated forward-view 2-axis MEMS scanner for applications in small autonomous drones. The entire LiDAR scanner is less than 30 mg, more than 10× weight reduction compared to the state-of-the-art LiDAR scanners. This is enabled by the integration of a pair of specially designed vertically-oriented 2-axis scanning MEMS mirrors on a silicon optical bench. With a tethered thin optical/electrical cable, this MEMS LiDAR achieves a field of view of 18° and a ranging accuracy of 5 cm.


Design and performance evaluation of a SWaP-optimized short-range fully fibered monostatic laser rangefinder in various climatic conditions
Paper 10637-27

Author(s):  Guillaume Canat, Keopsys SA (France), et al.
Conference 10637: Laser Technology for Defense and Security XIV
Session 8: Laser Systems, Laser Materials, and Applications III

The need for SWaP-optimized, cost-effective, and modular laser rangefinders with good range performance requires innovative developments. We report on an all-fibered monostatic LRF designed in a weight-reduced kit format, based on a fiber laser, with a measurement frequency of up to 40 Hz. The performance in different configurations, including eye-safety class configurations, is reported. The achieved performance with the 50 mm collimator is 33 dB at 5 Hz in class 1M. The evolution of the SNR, measurement precision, and false alarm rate under different climatic conditions is reported, detailing the impact of humidity, temperature, wind speed, and time of measurement.


A large-scale multi-modal event-based dataset for neuromorphic deep learning applications
Paper 10639-65

Author(s):  Jared Shamwell, U.S. Army Research Lab. (United States), et al.
Conference 10639: Micro- and Nanotechnology Sensors, Systems, and Applications X
Session 13: Deep Learning and Neuromorphic Sensing/Computing for Small Autonomous Systems

We discuss our efforts with event-based vision and describe our large-scale, heterogeneous robotic dataset, which adds to the growing number of event-based datasets currently publicly available. Our dataset comprises over 10 hours of runtime from a mobile robot equipped with two DAVIS240C cameras and an Astra depth camera randomly wandering in an indoor environment while two other independently moving robots wander in the same scene (VICON ground-truth pose is provided for all three robots). To our knowledge, this is the largest event-based dataset and the only one with ground-truthed, independently moving entities.


An artificial intelligence platform for prediction and decision making in natural disasters
Paper 10639-70

Author(s):  Shankar Sankararaman, One Concern, Inc. (United States), et al.
Conference 10639: Micro- and Nanotechnology Sensors, Systems, and Applications X
Session 14: Autonomous C4ISR Systems of the Future: Autonomous Decision-Making Approaches: Joint Session with Conferences DS116 and DS133

Natural disasters such as earthquakes, floods, and wildfires significantly affect human lives and infrastructure, and emergency management systems are typically short of resources when they strike. This paper presents One Concern's artificial intelligence platform, which supports simulation, prediction, and decision-making for different types of natural disasters and makes it possible to optimize available resources for an efficient response.

Using advanced machine learning techniques, sensing information, and satellite imagery, along with a variety of other data sources, the seismic response component of the platform predicts the impact of an earthquake on any given building in the vicinity. Data from past earthquakes in California was used to build a machine learning model that has been validated against several other datasets. The cities of San Francisco and Los Angeles, as well as the Bay Area communities of Woodside and Portola Valley, currently use the One Concern earthquake platform.

One Concern's methodology also applies to other disasters such as floods and wildfires, although the datasets and the modeling approach differ. For flooding, a systematic, judicious integration of physics-based modeling and machine learning is used to proactively predict flood levels in cities and streets, enabling areas that need to be evacuated to be identified well in advance. Wildfires are challenging because they are not only time-dependent but also interdisciplinary in nature: the spread of a fire depends on the surrounding atmospheric conditions, while those conditions simultaneously depend on the fire's intensity and other related characteristics.

While it may not be possible to precisely predict the occurrence of natural disasters, One Concern believes it is possible to proactively predict their impact and to build resilience into our societal infrastructure in order to save the maximum number of human lives.


Long-range visual detection of dynamic obstacles in full-size UAS approach to landing zone
Paper 10640-3

Author(s):  Lucas de la Garza, Near Earth Autonomy, Inc. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 1: Perception

An algorithm is presented for detecting moving ground vehicles at distances of 100–500 m and glide slopes below 10°, to aid in landing safely with large, autonomous rotorcraft. Platform attitude disturbance and the geometry of the scene cause naïve approaches based on background subtraction and optical flow to generate an unacceptably high number of false positives. The algorithm utilizes local features to build dense change detection maps, which are filtered and combined over time. The algorithm is evaluated over several real landing approaches and found to have 25/25 true positives and 1/30 false positives.
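The general idea of filtering per-frame change maps over time can be sketched as follows. This is not the authors' algorithm: the decay factor, threshold, and toy scene below are invented purely for illustration.

```python
import numpy as np

def accumulate_change(prev_score, frame_a, frame_b, decay=0.8, thresh=30):
    """Fold a crude per-frame change map into a temporally filtered score map:
    persistent change accumulates evidence, transient noise decays away."""
    diff = np.abs(frame_b.astype(np.int16) - frame_a.astype(np.int16))
    change = (diff > thresh).astype(np.float32)
    return decay * prev_score + (1.0 - decay) * change

# Toy scene: a bright 8x8 "vehicle" hops across a dim static background.
rng = np.random.default_rng(0)
score = np.zeros((64, 64), dtype=np.float32)
prev = rng.integers(0, 20, (64, 64)).astype(np.uint8)
for t in range(1, 6):
    cur = rng.integers(0, 20, (64, 64)).astype(np.uint8)
    cur[20:28, 10 + 8 * t:18 + 8 * t] = 255  # vehicle position at frame t
    score = accumulate_change(score, prev, cur)
    prev = cur
detections = score > 0.3  # pixels with recent, repeated change
```

A real airborne implementation must additionally stabilize the frames against platform attitude disturbance before differencing, which is precisely the difficulty the abstract highlights.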


Automated, near real-time inspection of imagery using commercial sUAS
Paper 10640-5

Author(s):  Benjamin Purman, Soar Technology, Inc. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 1: Perception

Commercial small Unmanned Aerial Systems (sUAS) have become popular for real-time inspection tasks due to their cost-effectiveness at covering large areas quickly. They can produce vast amounts of image data at high resolution, with little user involvement. However, manual review of this information can’t possibly keep pace with data collection rates. For time-sensitive applications, automated tools are required to locate objects of interest. These tools must perform at very low false alarm rates to avoid overwhelming the user. We approach real-time inspection as a semi-automated problem where a single user can provide limited feedback to guide object detection algorithms.


Automated data interpretation, tasking, and coordination of UAS imaging
Paper 10640-6

Author(s):  Evan M. Lally, TORC Robotics (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 1: Perception

The Air Force Civil Engineer Center (AFCEC) and TORC Robotics are developing a Rapid Airfield Damage Assessment System (RADAS) that uses simultaneous data streams from multiple sUAS and ground sensors for computer-aided condition assessment and planning of airfield repair. Operators, aided by intelligent algorithms, remotely monitor incoming data and use software tools to identify a Minimum Airfield Operating Surface (MAOS). Preliminary results demonstrate significantly reduced airfield assessment time, increased assessment accuracy, and removal of humans from danger during the inspection process.


DRESH: DRone EnSnaring mesH
Paper 10640-9

Author(s):  David Erickson, Defence Research and Development Canada, Suffield (Canada), et al.
Conference 10640: Unmanned Systems Technology XX
Session 2: Special Topics

This paper describes the preliminary findings of the development of a novel anti-drone obstacle, named DRESH (DRone EnSnaring mesH), designed to stop, foul, and trap unmanned aerial vehicles by targeting their motors. This work is part of the Defeat Autonomous Systems (DAS) program investigating counters to available unmanned vehicle technology. Preliminary results demonstrate trapped drones and ensnared and fouled motors, and suggest this obstacle is a new capability for denying areas to drone ingress and egress. The paper describes how anyone with inexpensive materials and tools bought from construction supply stores, armed with the knowledge described, can construct area-denial anti-drone obstacles that can trap light drones. Results indicate there exists a sweet spot in the known variables that justifies further investigation into obstacle dynamics, modelling and simulation, materials, and notional system concepts to deliver a defensive obstacle capability against drones.


blindBike: an assistive bike navigation system for low-vision persons
Paper 10640-10

Author(s):  Lynne L. Grewe, California State Univ., East Bay (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 2: Special Topics

blindBike is a system that uses multiple sensors, including a smartphone camera, gyroscope, GPS, and cadence sensor, to assist people with low vision in riding and navigating a bicycle. We propose that with the assistance of blindBike it may be possible for those with low vision to be mobile at a new level; blindBike can also assist those with normal vision. Through the use of today's smartphones, the blindBike app can affordably assist with navigation and road following. This work focuses on the road following and intersection assistance modules.


Robotics CTA Integrated Research Assessment 2017
Paper 10640-14

Author(s):  Arnon Hurwitz, U.S. Army Research Lab. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 3: Robotics CTA

The Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) is a program intended to change robots from tools that soldiers use into teammates alongside which soldiers can work. This requires the integration of fundamental and applied research in robotic perception, intelligence, manipulation, mobility, and human-robot interaction. In November of 2017, members of the RCTA evaluated the progress of research on perception, manipulation, and human-robot interaction. This evaluation included the assessment of progress on the perception of objects and terrain, and improvements to the user interface to the intelligence architecture. Additionally, improvements to the Robotic Manipulator (RoMan) platform's ability to detect and grasp an object were evaluated under a variety of conditions.


Modeling and traversal of pliable materials for wheeled robot navigation
Paper 10640-15

Author(s):  Camilo Ordonez, Florida State Univ. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 3: Robotics CTA

In order to fully exploit robot motion capabilities in complex environments, robots need to reason about obstacles in a non-binary fashion. In this paper, we focus on the modeling and characterization of pliable materials such as tall vegetation. These materials are of interest because they are pervasive in the real world, requiring the robotic vehicle to determine when to traverse or avoid them. This paper develops and experimentally verifies a template model for vegetation stems. In addition, it presents a methodology to generate predictions of the associated energetic cost incurred by a tracked mobile robot when traversing a vegetation patch of variable density.


When does a human replan? Exploring intent-based replanning in multi-objective path planning
Paper 10640-16

Author(s):  Meher T. Shaikh, Brigham Young Univ. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 3: Robotics CTA

For command and control tasks executed in dynamic environments, we explore specific triggers for when an operator might change to a better plan. These replanning triggers can occur: (a) at regular time intervals, (b) when the current robotic path deviates from human intent, (c) when a superior path from a different homotopy class is found, or (d) when the user voluntarily replans. This discussion has the potential to improve human supervisory control systems in which human intent is embodied as a tradeoff in a multi-objective decision problem.
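The four trigger classes named in the abstract can be expressed as a simple supervisory check. The field names, thresholds, and trigger ordering below are invented for illustration and are not from the paper:

```python
from dataclasses import dataclass

@dataclass
class ReplanState:
    last_replan_t: float
    intent_deviation: float  # gap between current path and inferred human intent
    best_known_cost: float   # cost of the plan currently being executed
    candidate_cost: float    # best cost found in a different homotopy class
    user_requested: bool     # operator explicitly asked to replan

def should_replan(s, t, period=5.0, deviation_tol=2.0, improvement_tol=0.1):
    """Return the first trigger that fires, or None:
    (a) periodic, (b) intent deviation, (c) superior homotopy class, (d) voluntary."""
    if t - s.last_replan_t >= period:
        return "periodic"
    if s.intent_deviation > deviation_tol:
        return "intent-deviation"
    if s.candidate_cost < (1.0 - improvement_tol) * s.best_known_cost:
        return "better-homotopy"
    if s.user_requested:
        return "voluntary"
    return None
```

The improvement tolerance in trigger (c) matters in practice: switching plans has a cognitive cost for the operator, so a candidate path should beat the current plan by a margin before a replan is suggested.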


Parallel approach to motion planning in uncertain environments
Paper 10640-17

Author(s):  Mario Harper, Florida State Univ. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 3: Robotics CTA

Real-world motion planning often suffers from the need to replan during execution of a trajectory. Replanning can be triggered when the robot fails to properly track the trajectory or when new sensory information invalidates the planned trajectory. Particularly in the presence of many occluded obstacles or in unstructured terrain, replanning is a frequent occurrence; methods that allow robots to replan efficiently extend operation time and can help ensure mission success. This paper presents a novel approach that learns and updates replanning for the A*-based path planning algorithm SBMPO. The approach parallelizes SBMPO and couples it with an anytime method to quickly search through multiple heuristic weights within the allotted replanning time. These weights are employed upon subsequent replanning triggers to speed up future computations and are updated to reflect new information.
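SBMPO's internals are not described in the abstract, but the generic anytime pattern it builds on, sweeping decreasing heuristic-inflation weights and keeping the best solution found so far, can be sketched on a grid. This is a simplification for illustration, not the authors' planner:

```python
import heapq

def weighted_astar(grid, start, goal, w):
    """A* with the heuristic inflated by w >= 1: faster, possibly suboptimal."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(w * h(start), 0, start)]
    g = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > g.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost + w * h((nr, nc)), new_cost, (nr, nc)))
    return None

def anytime_plan(grid, start, goal, weights=(3.0, 2.0, 1.5, 1.0)):
    """Sweep decreasing inflation weights, keeping the cheapest solution found;
    a real planner would budget wall-clock time per pass and stop when it runs out."""
    best = None
    for w in weights:
        cost = weighted_astar(grid, start, goal, w)
        if cost is not None and (best is None or cost < best):
            best = cost
    return best
```

High inflation weights return a feasible path almost immediately; later, lower-weight passes refine it toward optimality, which is what makes the scheme usable under a hard replanning deadline.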


Brain emotional learning-based intelligent path planning and coordination control of networked unmanned autonomous systems
Paper 10640-18

Author(s):  Hao Xu, Univ. of Nevada, Reno (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 4: Navigation

In this paper, intelligent path planning and coordination control of Networked Unmanned Autonomous Systems (NUAS) in dynamic environments is presented. A biologically-inspired approach based on a computational model of emotional learning in mammalian limbic system is employed. The methodology, known as Brain Emotional Learning (BEL), is implemented in this application for the first time. The multi-objective properties and learning capabilities added by BEL to the path planning and coordination co-design of Networked Unmanned Autonomous Systems (NUAS) are very useful, especially while dealing with challenges caused by dynamic environments with moving obstacles. Furthermore, the proposed method is very promising for implementation in real-time applications due to its low computational complexity. Numerical results of the BEL-based path planner and intelligent controller for NUAS demonstrate the effectiveness of the proposed approach.


Image-aided inertial navigation for an Octocopter
Paper 10640-20

Author(s):  Baheerathan Sivalingam, Norwegian Defence Research Establishment (Norway), et al.
Conference 10640: Unmanned Systems Technology XX
Session 4: Navigation

A typical unmanned aerial system combines an Inertial Navigation System (INS) and a Global Positioning System (GPS) for navigation. When the GPS signal is unavailable, the INS errors grow over time and eventually become unacceptable as a navigation solution. Here we investigate an image-aided inertial navigation system to cope with GPS failure. The system is based on tightly integrating inertial sensor data with the positions of image points that correspond to landmarks over an image sequence. The aim of this experiment is to study the challenges and the performance of the image-aided inertial navigation system in realistic Unmanned Aerial Vehicle flight with an Octocopter. The results show that the proposed system drastically reduces position drift compared with free-inertial navigation, demonstrating its ability to cope with GPS failure.
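Why free-inertial error grows without bound while periodic landmark fixes keep it bounded can be shown with a deliberately crude 1-D simulation (all numbers below are invented for illustration; this is not the paper's tightly coupled filter):

```python
import random

random.seed(1)
true_pos, ins_pos, aided_pos = 0.0, 0.0, 0.0
for step in range(200):
    v = 1.0                                  # true velocity
    v_meas = v + random.gauss(0.02, 0.05)    # inertial estimate with a small bias
    true_pos += v
    ins_pos += v_meas                        # free-inertial: bias integrates into drift
    aided_pos += v_meas
    if step % 10 == 9:                       # periodic landmark fix pulls the error back
        aided_pos += 0.8 * (true_pos - aided_pos)

ins_drift = abs(ins_pos - true_pos)
aided_drift = abs(aided_pos - true_pos)
```

The free-inertial drift accumulates at roughly the bias rate times elapsed time, while the aided estimate only ever accumulates error between fixes.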


Removing the bottleneck: Utilizing autonomy to manage multiple UAS sensors from inside a cockpit
Paper 10640-22

Author(s):  Thomas Alicia, U.S. Army (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 5: Collaborative Robotic Teams: Joint Session with conferences DS117 and DS133

Army researchers developed a system to evaluate autonomy and advanced interface design elements for controlling multiple unmanned aerial systems (UASs) from a manned helicopter cockpit. Managing multiple UAS video streams from a manned helicopter cockpit can lead to increased survivability and mission performance; however, this can also lead to a cognitive processing bottleneck. Removing this bottleneck requires implementation of autonomous behaviors and human-centered design principles. This research evaluates these concepts in the context of multi-UAS manned-unmanned teaming (MUM-T). The research demonstrated that a single crewmember can manage at least three UAS assets while executing complex MUM-T tactical missions.


Real-time inspection of 3D features using sUAS with low-cost sensor suites
Paper 10640-23

Author(s):  Benjamin Purman, Soar Technology, Inc. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 5: Collaborative Robotic Teams: Joint Session with conferences DS117 and DS133

Commercial Small Unmanned Aerial Systems (sUAS) have recently become very popular for real-time inspection tasks because they cover large areas quickly and cost-effectively. Within these tasks, many objects of interest are best characterized by their 3D geometry. This is particularly true given the false alarm rates associated with automated analysis of features that have irregular appearance but well-characterized geometry. However, sUAS and their low-cost sensors present challenges due to limitations in payload and sensor quality. We examine the effectiveness of multi-view stereo and commercial LIDAR in the domain of rapid airfield damage assessment.


Benchmarking a LIDAR obstacle perception system for aircraft autonomy
Paper 10640-24

Author(s):  Adam Stambler, Near Earth Autonomy, Inc. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session 5: Collaborative Robotic Teams: Joint Session with conferences DS117 and DS133

Any certification of aviation-grade autonomy will require benchmarking of the obstacle perception sub-system and its effect on UAV performance. This follows from the obstacle detection system's fundamental role in any UAV's ability to fly safely. In building its state-of-the-art LIDAR obstacle detection system, Near Earth has developed the benchmarking and evaluation techniques required to begin this validation process. Drawing on over 100 flights in 10 different locations, this paper examines the obstacle perception assumptions and tests how its capabilities impact fully autonomous helicopter performance.


Automatic voice control system for UAV-based accessories
Paper 10640-26

Author(s):  Filip Rezac, CESNET z.s.p.o. (Czech Republic), et al.
Conference 10640: Unmanned Systems Technology XX
Session PS1: Posters-Tuesday

This article describes a system for voice control of UAV accessories using a mobile device running the Android operating system. The theoretical part covers the architecture of automatic speech recognition systems and describes hidden Markov models and artificial neural networks as approaches to automatic speech recognition. The practical part describes converting speech commands into instructions for UAV control on Android, including testing and optimization of the whole system. Test results and conclusions are given in the final chapter of the article.
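The hidden Markov model component can be illustrated with a minimal Viterbi decoder over a toy two-word command vocabulary. The states, observation symbols, and probabilities below are hypothetical stand-ins, not values from the article:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for an observation sequence."""
    # Column of (best probability, best path) per state, per time step.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][ps][0] * trans_p[ps][s] * emit_p[s][o], V[-2][ps][1] + [s])
                for ps in states
            )
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

# Toy command recognizer: hidden "words", observed acoustic symbols.
states = ("up", "down")
start_p = {"up": 0.5, "down": 0.5}
trans_p = {"up": {"up": 0.7, "down": 0.3}, "down": {"up": 0.3, "down": 0.7}}
emit_p = {"up": {"a": 0.8, "b": 0.2}, "down": {"a": 0.2, "b": 0.8}}
decoded = viterbi(["a", "a", "b"], states, start_p, trans_p, emit_p)
```

A production recognizer replaces the toy symbols with acoustic feature vectors and the two states with phoneme-level models, but the decoding recursion is the same.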


Enabling intelligence with temporal world models
Paper 10640-27

Author(s):  Philip R. Osteen, U.S. Army Research Lab. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session PS1: Posters-Tuesday

A system for storing knowledge along with the ability to query it, which together we call a world model, provides a central source of relevant information to the various processes that constitute an autonomous agent. We outline the development of a world model that supports loadable ontologies to describe the world, and provides interfaces for processes to populate and query the system, including queries over time. The world model is implemented as a tuple store with structural subsumption query support. We evaluate its effectiveness on an embodied system, and show that it can answer questions that were not explicitly programmed ahead of time.


Safety design for military robots
Paper 10640-28

Author(s):  Jacqueline Walter, U.S. Army Tank Automotive Research, Development and Engineering Ctr. (United States), et al.
Conference 10640: Unmanned Systems Technology XX
Session PS1: Posters-Tuesday

The fielding of military robots requires an exploration into design methodologies that allow for the creation of robots that behave safely in interactions with humans. Current technology for fielding intelligent robotic systems falls far short of what is needed to ensure safe operation. This paper will discuss design methods, analysis, and standards for applying safety methods to military robots.


A robotic orbital emulator with lidar-based SLAM and AMCL for multiple entity pose estimation
Paper 10641-13

Author(s):  Dan Shen, Intelligent Fusion Technology, Inc. (United States), et al.
Conference 10641: Sensors and Systems for Space Applications XI
Session 3: Perception and Autonomy for Aerospace Applications

This paper revises and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can produce 3D satellite movements using capabilities generated from omni-wheeled robot and robotic arm motions. The planar motions are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball that travels along a rod (robotic arm) attached to each robot. Lidar-only measurements are used to estimate the pose information of the multiple robots. SLAM (simultaneous localization and mapping) runs on one robot to generate the map and compute that robot's pose. Based on the map maintained by the SLAM robot, the other robots run adaptive Monte Carlo localization (AMCL) to estimate their poses. A controller is designed to guide each robot along a given orbit, and controllability is analyzed using a feedback linearization method. Experiments show the convergence of AMCL and the orbit tracking performance.
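The Monte Carlo localization step that AMCL repeats — propagate particles with the motion model, weight them against a sensor measurement, resample — can be sketched in one dimension. The toy range-to-landmark sensor and all parameters below are invented for illustration, not the emulator's actual lidar model:

```python
import random

def mcl_step(particles, control, measurement, landmark, noise=0.5):
    """One Monte Carlo localization step on a 1-D track."""
    # Motion update: shift each particle by the control input plus jitter.
    moved = [p + control + random.gauss(0, 0.1) for p in particles]
    # Weight each particle by how well it explains the range measurement.
    weights = [max(1e-9, 1.0 / (1.0 + ((abs(landmark - p) - measurement) / noise) ** 2))
               for p in moved]
    # Resample particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
landmark, true_pos = 10.0, 2.0
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(15):
    true_pos += 0.2                                   # robot drives forward
    particles = mcl_step(particles, 0.2, landmark - true_pos, landmark)
estimate = sum(particles) / len(particles)            # converges near true_pos
```

After a few iterations the particle cloud collapses around the true position; real AMCL does the same in 2-D pose space against a prebuilt occupancy map.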


Finding common ground by unifying autonomy indices to understand needed capabilities
Paper 10641-15

Author(s):  Chad Cox, KEYW Corp. (United States), et al.
Conference 10641: Sensors and Systems for Space Applications XI
Session 3: Perception and Autonomy for Aerospace Applications

Autonomous artificial systems promise to reduce the workload of human analysts by replacing some or all cognitive functions with intelligent software. However, development is hindered by disagreement among researchers at very basic levels, including what is meant by autonomy and how to achieve it. A variety of autonomy measures are reviewed, highlighting their strengths and weaknesses; these measures also serve as a means of comparing and contrasting approaches. We contend that a properly structured set of measures is not only useful for these functions but also provides a philosophical and practical justification, outlines developmental steps, suggests schematic constraints, and implies requirements for tests. This paper alleviates confusion by synthesizing some of the more widely held viewpoints under the new approach.


Team-centric motion planning in unfamiliar environments
Paper 10642-22

Author(s):  Cory Hayes, U.S. Army Research Lab. (United States), et al.
Conference 10642: Situation Awareness in Degraded Environments 2018
Session 5: Systems and Processing II

Technological advances in artificial intelligence have created an opportunity for effective teaming between humans and robots. Reliable robot teammates could enable increased situational awareness and reduce the cognitive burden on their human counterparts. Robots must operate in ways that follow human expectations for effective teaming, whether operating near their human teammates or at a distance and out of sight. This ability would allow people to better anticipate robot behavior after issuing commands. In comparison to traditional human-agnostic and proximal human-aware path planning, our work addresses a relatively unexplored third area, team-centric motion planning: robots navigating remotely in an unfamiliar area and in a way that meets a teammate's expectations. In this paper, we discuss initial work towards encoding human intention to inform autonomous robot navigation.


Results from implementation of autonomous visual navigation with a commercial UAV
Paper 10642-27

Author(s):  Anthony Spears, Prioria Robotics, Inc. (United States), et al.
Conference 10642: Situation Awareness in Degraded Environments 2018
Session 7: GPS Denied Environments

Presented here are results from the integration of a visual navigation system on a commercial UAV platform in pursuit of safety in a communal airspace. The navigational sensors commonly leveraged in UAV autopilot systems include GPS receivers and inertial measurement units (IMUs), but these can be unreliable. Visual navigation can provide a robust augmentation to current navigation systems, especially in GPS-denied and experimental mission scenarios. The authors have integrated such a visual navigation system into our high-performance autopilot (built around an NVIDIA Jetson computing platform) to provide an embedded, real-time, failsafe mode of operation.


Relative visual localization (RVL) for UAV navigation
Paper 10642-28

Author(s):  Moulay A. Akhloufi, Univ. de Moncton (Canada), et al.
Conference 10642: Situation Awareness in Degraded Environments 2018
Session 7: GPS Denied Environments

Most of today’s UAVs make use of multi-sensor GNSS/IMU fusion for localization during navigation. Unfortunately, GNSS systems have been proven unreliable in multiple contexts, and radio communication systems are prone to outages and signal alteration. In this work, we propose the use of local visual information to perform relative localization in an unknown outdoor GNSS-denied environment. The algorithm uses keypoint features to extract salient points from a set of images pertaining to possible matches during navigation. ORB feature extraction gave the best compromise between time performance and match robustness. Experimental tests were conducted on outdoor videos captured using a quadcopter. The obtained results are promising and show the possibility of using visual data in a GNSS-denied environment to improve the robustness of UAV navigation.
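ORB produces binary descriptors that are compared by Hamming distance, so a brute-force matcher with a Lowe-style ratio test is only a few lines. The short toy bit strings below stand in for real 256-bit ORB descriptors:

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings stored as ints."""
    return bin(a ^ b).count("1")

def match(desc_a, desc_b, ratio=0.8):
    """Brute-force Hamming matcher with a nearest/second-nearest ratio test:
    a match is kept only when the best candidate is clearly better than the
    runner-up, which rejects ambiguous keypoints."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

frame1 = [0b10110010, 0b01101100, 0b11110000]
frame2 = [0b01101100, 0b11110001, 0b10110010]   # reordered, one bit flipped
```

Here `match(frame1, frame2)` recovers the permutation between the two frames despite the single flipped bit; relative localization then estimates camera motion from such correspondences.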


Safety enforcement for the verification of autonomous systems
Paper 10643-2

Author(s):  Dionisio de Niz, Carnegie Mellon Univ. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 1: Cyber and Software Security for Autonomous Operations

Verifying that the behavior of an autonomous system is safe is fundamental for safety-critical properties like preventing crashes. Unfortunately, exhaustive verification fails to scale to the size of real systems. Worse, algorithms whose runtime behavior cannot be determined at design time, such as machine learning, cannot be verified at design time at all. Fortunately, runtime assurance can be used: it adds components (known as enforcers) that determine whether the system's output is safe and replace it with a safe output when it is not. Then only the enforcers need to be verified.


Maintaining trusted platform in a cyber-contested environment
Paper 10643-6

Author(s):  David Hadcock, Alion Science and Technology Corp. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 1: Cyber and Software Security for Autonomous Operations

Maintaining the highest level of trust within each system device is necessary to counter increased cyber-attack surface in distributed environments. The goal is to provide and maintain a trusted embedded computing system while minimizing performance impact. Alion has developed a platform utilizing a heterogeneous system-on-chip that includes multiple processors, programmable logic, and memory allowing for hardware-based resilience technologies that extend or enhance traditional software techniques. Secure boot ensures trusted initial state. Hardware sandboxes and reference monitors limit information leakage and damage from rogue IP. Dynamic introspection detects anomalous conditions on-the-fly. Secure recovery can return compromised systems to a trusted state.


CNN-based thermal infrared person detection by domain adaptation
Paper 10643-8

Author(s):  Christian Herrmann, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 2: Object Sensing for Detection, Classification, and Autonomous Operations

Imaging sensors are vital for autonomous vehicles to understand their environment. While thermal infrared sensors promise improved robustness at night and in bad weather compared with standard RGB cameras, detecting objects in them is harder because image quality is worse. The currently impressive performance of deep-learning-based RGB object detectors drops significantly on low-quality data in a different spectral range. This work proposes the use of CNN-based object detectors pre-trained on RGB images. With appropriate preprocessing strategies, the IR data are transformed as closely as possible to the RGB domain, allowing the pre-trained RGB features to remain effective in the novel domain.


Improved video change detection for UAVs
Paper 10643-11

Author(s):  Thomas Müller, Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (Germany), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 2: Object Sensing for Detection, Classification, and Autonomous Operations

Unmanned aerial vehicles (UAVs) equipped with cameras are a valuable tool for surveillance and reconnaissance. Multicopters or fixed-wing UAVs patrol while video change detection identifies relevant changes between two patrols. In this way, for example, a convoy can be protected from improvised explosive devices (IEDs) by detecting deployment traces, or imminent danger can be recognized in the wake of disasters. As a solution, a video change detection system was realized recently; since then, several improvements have been made, which are described in this paper. First, a novel measure of color differences is explored in order to maximize the detection rate of relevant changes while minimizing clutter. Furthermore, it is examined to what extent shadow detections can be reduced when the sun's intensity and/or angle differs between the two compared patrols. The given results document the performance of the presented approach in different situations.


Enhanced pedestrian safety awareness at crosswalks via networked lidar, thermal imaging, and sensors
Paper 10643-13

Author(s):  Zachary A. Weingarten, Florida Polytechnic Univ. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 2: Object Sensing for Detection, Classification, and Autonomous Operations

The system makes use of thermal imaging, LiDAR, conventional imagers, and other sensors to distinguish between cars, people, animals, and other objects that may interact with a crosswalk at or near an intersection. A mesh network of these systems as nodes coordinates information, alerts, and interfaces to control the lights as well as to alert vehicles and people crossing at a crosswalk or an intersection. The data could also be used to enhance coordination with IoT or mobile devices, such as those integrated with autonomous vehicles and Intelligent Transportation System infrastructure, to predict how to handle a pedestrian interaction with a crosswalk or intersection. The overall goal is to enhance pedestrian safety at crosswalks and intersections.


A history and overview of mobility modeling for autonomous unmanned ground vehicles
Paper 10643-17

Author(s):  Phillip J. Durst, U.S. Army Engineer Research and Development Ctr. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 3: Networks and the IOT for Autonomous Systems I

Autonomous unmanned ground vehicles (UGVs) are beginning to play an ever-more critical role in military operations. However, as UGVs with increasing levels of autonomy are being fielded, tools for accurately predicting these UGVs’ performance and capabilities are lacking. The presented work will provide a review of the tools and methods developed thus far for modeling, simulating, and assessing autonomous mobility for UGVs. In light of this review, areas of current need will also be highlighted, and recommended steps forward will be proposed.


Mission critical decentralized resilient and intelligent control for networked heterogeneous unmanned autonomous systems
Paper 10643-19

Author(s):  Hao Xu, Univ. of Nevada, Reno (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 4: Networks and the IOT of Autonomous Systems II

By deeply integrating online fast reinforcement learning with real-time networked control, a novel mission-critical, decentralized, resilient, and intelligent control algorithm is developed for networked heterogeneous unmanned autonomous systems (UAS) in the presence of limited communication, system uncertainties, and harsh environments. Unlike traditional decentralized control and learning algorithms, the proposed design is a real-time applicable, optimal, and resilient solution that explicitly considers real-time mission completeness, the convergence speed of the learning algorithm, and the impacts of limited communication, system uncertainties, and harsh environments.


UAVs for wildland fires
Paper 10643-23

Author(s):  Moulay A. Akhloufi, Univ. de Moncton (Canada), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 5: Autonomous Operations, Artificial Intelligence, and Navigation I

In the last decade, research has been conducted to develop measurement solutions for forest fires based on image processing and computer vision. Significant progress has been achieved in developing such tools for fire propagation in controlled laboratory environments; however, these developments are not suitable for outdoor unstructured environments. Additionally, wildland fires cover large areas, which limits the use of vision-based ground systems. Unmanned Aerial Vehicles (UAVs) with cameras for remote sensing are promising as their performance/price ratio increases over time. They can provide a low-cost alternative for prevention, detection, propagation monitoring, and real-time support for firefighting. In this paper, we give an overview of past work on the use of UAVs in the context of wildland and forest fires, and propose a framework based on cooperative UAVs and UGVs for fire monitoring at a larger scale.


Probabilistic models for assured position, navigation and timing
Paper 10643-24

Author(s):  Andres Molina-Markham, The MITRE Corp. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 5: Autonomous Operations, Artificial Intelligence, and Navigation I

Position, navigation and timing (PNT) equipment produces position, velocity and time (PVT) estimates by combining measurements from multiple Global Navigation Satellite Systems and additional sensors. PVT estimates are computed using linear regression algorithms or Bayesian filters, which are susceptible to adversarial manipulation. We propose a novel approach to developing PVT estimators with trust assessments of inputs and outputs. We describe how these trust assessments enable us to define measures of PNT assurance, which PNT equipment translates into numeric assurance scores. Users can use these scores to reason about the probability that the system produces outputs with the expected accuracy, availability, and continuity.


Robust hierarchical reasoning over sensor data with the Soar cognitive architecture
Paper 10643-30

Author(s):  Timothy Saucer, Soar Technology, Inc. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 6: Autonomous Operations, Artificial Intelligence, and Navigation II

Sensor fusion remains a difficult task, particularly in noisy environments or when data are incomplete. The Soar cognitive architecture provides the ability to reason over these data sources and make decisions based on situational context. To feed the reasoning system, we have created a hierarchical system of analysis components operating at differing levels of complexity. Building up the reasoning system in this way provides robust behavior that is applicable across a variety of domains.


Optimizing cooperative cognitive search and rescue UAVs
Paper 10643-31

Author(s):  Mark D. Rahmes, Harris Corp. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 6: Autonomous Operations, Artificial Intelligence, and Navigation II

A need exists for a self-forming, self-organizing, cognitive, cooperative, automated unmanned aerial vehicle (UAV) network system to more efficiently perform UAV-based maritime search and rescue (SAR) operations. Although current search patterns (e.g., traditional “lawn mower” methods) are thorough, they result in too much time spent searching low-probability areas. This decreases the chances of a successful rescue and increases the risk of lost recovery opportunities (e.g., death due to hypothermia in the case of human search targets). Our goal is to optimize UAV-based SAR operations.
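The advantage of probability-guided search over a fixed sweep can be quantified with a toy expected-time-to-find calculation. The one-dimensional probability map below is invented for illustration; a real SAR planner would derive it from drift and last-known-position models:

```python
def expected_search_time(order, prob):
    """Expected number of cells visited before finding the target,
    given a visiting order and a per-cell probability map."""
    return sum((i + 1) * prob[c] for i, c in enumerate(order))

# Hypothetical strip of 6 search cells with a drift-model probability map.
prob = {0: 0.02, 1: 0.05, 2: 0.40, 3: 0.30, 4: 0.18, 5: 0.05}
lawnmower = [0, 1, 2, 3, 4, 5]                       # fixed sweep order
greedy = sorted(prob, key=prob.get, reverse=True)    # highest probability first

t_lawn = expected_search_time(lawnmower, prob)       # 3.72 cells on average
t_greedy = expected_search_time(greedy, prob)        # 2.11 cells on average
```

Visiting high-probability cells first shortens the expected time to detection, which is exactly the margin that matters for hypothermia-limited rescues.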


Power line-tree conflict detection and 3D mapping using aerial images taken from UAV
Paper 10643-32

Author(s):  Jun-ichiro Watanabe, Hitachi, Ltd. (Japan), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 7: Autonomous Operations, Artificial Intelligence, and Navigation III

For power distribution companies, power line inspection for a stable power supply has been an important but costly task. We explored methods for detecting and visualizing power line-tree conflicts using aerial images taken by drone. We show that a CNN (Convolutional Neural Network) is effective for recognizing power lines and trees in an image. Furthermore, by mapping the candidate regions onto a 3D model, in which the power line position can be estimated from the pole height even when the power line itself is difficult to reconstruct in 3D, the user can make the final decision on power line-tree conflicts by checking the depth and/or height relationships.


The state of solid-state 3D lidar for autonomous systems
Paper 10643-33

Author(s):  Frank Bertini, Velodyne LiDAR, Inc. (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session 7: Autonomous Operations, Artificial Intelligence, and Navigation III

Originally designed as the ideal robotic eye for the 2007 DARPA Urban Challenge, 3D LiDAR has quickly become the sensing modality of choice for autonomous vehicles and unmanned systems. Industry consensus predicts self-driving cars going mainstream in 2020; however, many challenges remain. This presentation details the state-of-the-art developments happening at Velodyne LiDAR which will lead to affordable, robust, and safe solid-state 3D LiDAR sensors for autonomous vehicles.


Autonomous power generation system for low-power applications as public lighting systems in Puerto Rico
Paper 10643-37

Author(s):  Miguel A. Goenaga-Jimenez, Univ. del Turabo (United States), et al.
Conference 10643: Autonomous Systems: Sensors, Vehicles, Security and the Internet of Everything
Session PS1: Posters-Tuesday

This work reports on the modeling of a coreless permanent magnet synchronous generator (PMSG) with axial magnetic flux for use on a vertical-axis wind turbine. The design incorporates the working principle of an electric generator, which contains a stator and a rotor. The stator (field winding) is composed of nine magnetic copper wire coils, and the rotor (armature winding) is composed of twenty-four rare-earth magnets, better known as neodymium magnets. No core is used in the stator, with the aim of both reducing iron loss and making the machine lighter; it also eases production. Arrangements have been made according to certain standards so that the permanent magnets and coils produce three-phase alternating current. The designed axial-flux generator will have a maximum voltage of approximately twelve volts per phase.


Autonomous systems for nuclear crisis response, consequence management and forensics
Paper 10644-79

Author(s):  Lance K. McLean, National Security Technologies, LLC (United States), et al.
Conference 10644: Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIV
Session PS1: Posters-Tuesday

Autonomous systems are potentially ideal candidates for the hazardous and time-consuming missions of probing unknown areas for nuclear contraband, measuring wide-area radiation contamination, and guiding evidence collection by nuclear forensics task forces. The agility afforded by small unmanned systems may provide distinct advantages in many circumstances, even though, as yet, no viable autonomous replacements exist for the larger manned aerial and automobile-mounted systems currently in use. We describe our work using unmanned aerial and ground systems that employ optical, electro-optical, and ionizing radiation sensors, together with the machine learning, optimization, and dimension reduction algorithms that allow them to work together.


Ground vehicle power line spectral sensing using GIS
Paper 10645-1

Author(s):  Mark W. Roberson, Goldfinch Sensor Technologies and Analytics LLC (United States), et al.
Conference 10645: Geospatial Informatics, and Motion Imagery Analytics VIII
Session 1: Geospatial Analytics I

Electrical power consumption is a measure of human activities. We collect magnetic field data from vehicles with time and location data in rural, interstate, and suburban areas with both overhead and buried power lines contributing to the signals. We analyze the data using ArcGIS to visualize the geospatial content and for both qualitative and quantitative comparison to imagery and data layers such as the zoning of the areas where the data was collected. We discuss the effects of the time-varying presence of other vehicles in modifying the detected signals as well as the changes in the spectral information over time.


Quadcopter sensing of magnetic and electric field with geospatial analytics
Paper 10645-3

Author(s):  Mark W. Roberson, Goldfinch Sensor Technologies and Analytics LLC (United States), et al.
Conference 10645: Geospatial Informatics, and Motion Imagery Analytics VIII
Session 1: Geospatial Analytics I

Monitoring an area over time by mapping of the field strengths presents detailed information of human activities. Transitioning from collect-and-view to real-time geospatial analytics over a continuous spatial coverage requires making more extensive use of moving sensors. Unmanned airborne systems (UAS) provide mobility in three dimensions, but also present self-sensing noise issues and severe weight constraints. We discuss our work with collection of multi-axis magnetic and electric field data from a quadcopter UAS. We discuss the results of our experiments in using the UAS to perform advanced processing on board in real time in order to initiate cross-cueing of sensors.


Targeted 3D modeling from UAV imagery
Paper 10645-13

Author(s):  Abe Martin, Brigham Young Univ. (United States), et al.
Conference 10645: Geospatial Informatics, and Motion Imagery Analytics VIII
Session 4: Geolocation and Registration

3D reconstruction of UAV targets from EO imagery yields useful information but can be time consuming and computationally expensive. View planning is a valuable tool for planning the optimal image set needed to reconstruct a scene, reducing processing time. This project demonstrates how view planning can be used to select a subset of images from a large existing image set in order to model specific vehicles or structures. Potential applications of the method include 3D target classification and geolocation, as well as on-board reconstruction. The algorithm reduces processing time for target models by up to a factor of 50.


Analysis of noise impact on distributed average consensus
Paper 10646-20

Author(s):  Boyuan Li, Univ. of Calgary (Canada), et al.
Conference 10646: Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII
Session 5: Information Fusion Methodologies and Applications III

This paper considers the problem of distributed average consensus in a noisy sensor network, in which noise causes bit errors and degrades the accuracy of the results. We use a bit-flipping model to capture the noise effect and show that it leads to biased results. We propose an unbiased average consensus algorithm for noisy networks with dynamic topologies. We analyze the convergence speed and the mean square error and show that the noise can be suppressed by our method. The proposed algorithm proves effective in network simulations both with and without perfect bit error rate information.
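The paper's unbiased algorithm is not reproduced here, but the baseline it improves on, iterative distributed averaging over a neighbor graph, can be sketched as follows (function name, step size, and the additive noise stand-in are illustrative; the paper models noise as bit flips):

```python
import numpy as np

def average_consensus(x0, neighbors, epsilon=0.1, iters=200,
                      flip_p=0.0, rng=None):
    """Standard distributed average consensus: each node repeatedly moves
    toward its neighbors' (possibly noise-corrupted) values.
    `neighbors[i]` lists the nodes adjacent to node i."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                received = x[j]
                if flip_p and rng.random() < flip_p:
                    received += rng.normal(0, 1)  # crude stand-in for a flipped bit
                x_new[i] += epsilon * (received - x[i])
        x = x_new
    return x

# Noise-free run on a 4-node path graph: every node converges to the mean.
x = average_consensus([1.0, 3.0, 5.0, 7.0],
                      neighbors=[[1], [0, 2], [1, 3], [2]])
```

With noisy links (`flip_p > 0`) this plain iteration accumulates bias, which is exactly the failure mode the paper's unbiased variant is designed to correct.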


A real-time object detection framework for aerial imagery using deep neural networks and synthetic training images
Paper 10646-39

Author(s):  Priya Narayanan, U.S. Army Research Lab. (United States), et al.
Conference 10646: Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII
Session 9: Signal and Image Processing, and Information Fusion Applications II

Real-time object recognition systems are critical for several UAV applications since they provide fundamental semantic information about the aerial scene. In this study, we describe the development of an embedded real-time object detection framework that performs preliminary aerial detections using YOLO. The images are streamed to a base station where a more advanced algorithm, ME-RCNN, provides enhanced analytics. Since annotated aerial imagery is hard to obtain, we use a combination of aerial data and air-to-ground synthetic images for training the neural network. This study quantifies the improvements from using the synthetic dataset and the efficacy of using ME-RCNN.


Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
Paper 10647-7

Author(s):  Youngsoo Kim, San José State Univ. (United States), et al.
Conference 10647: Algorithms for Synthetic Aperture Radar Imagery XXV
Session 1: Synthetic Data and Deep Learning

Deep Neural Networks (DNNs) have surpassed traditional feature-based detectors in classification accuracy for Automatic Target Recognition (ATR), and in some cases DNNs even surpass human accuracy. However, these DNN models are typically trained on GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. The framework applies promising DNN compression techniques, including pruning and weight quantization, while also targeting processor features common to modern low-power devices. Following this methodology produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
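Two of the compression techniques the abstract names, magnitude pruning and weight quantization, can be sketched in a few lines (the sparsity threshold and bit width below are illustrative, not the paper's settings):

```python
import numpy as np

def prune_and_quantize(weights, sparsity=0.5, bits=8):
    """Magnitude pruning followed by uniform weight quantization, applied
    to one layer's weight array.  Returns integer codes plus the scale
    needed to dequantize (approximate weight = code * scale)."""
    w = weights.copy()
    # Prune: zero out the smallest-magnitude fraction of weights.
    threshold = np.quantile(np.abs(w), sparsity)
    w[np.abs(w) < threshold] = 0.0
    # Quantize: map surviving weights onto a uniform signed integer grid.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1) or 1.0
    q = np.round(w / scale).astype(np.int8 if bits <= 8 else np.int32)
    return q, scale
```

The payoff matches the abstract's goals: the pruned zeros shrink the memory footprint, and 8-bit integer codes replace 32-bit floats, which is what lets the model exploit the integer arithmetic common on low-power processors.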


Recognizing objects in 3D data with distinctive self-similarity features
Paper 10648-12

Author(s):  Suya You, U.S. Army Research Lab. (United States), et al.
Conference 10648: Automatic Target Recognition XXVIII
Session 3: Advanced Algorithm in ATR II

This paper presents a technique for recognizing objects from 3D point clouds. The approach is based on the concept of self-similarity, which captures the internal geometric layout of local patterns at a level of abstraction. A new type of distinctive feature is developed to capture the geometric signatures embedded in 3D point clouds. The resulting 3D self-similarity feature representation is compact and view/scale-independent, and hence enables a highly efficient matching process. A complete feature-based ATR system is built on the 3D self-similarity features. Experiments demonstrate that the proposed method increases the robustness of recognition under different imaging conditions and modalities and is competitive with state-of-the-art methods.


Automated WAMI system calibration procedure based on multi-scale fusion and adaptive data association for geo-coding error correction
Paper 10649-19

Author(s):  Anastasiia Volkova, The Univ. of Sydney (Australia), et al.
Conference 10649: Pattern Recognition and Tracking XXIX
Session 4: Motion Sensing and Estimation Algorithms

Wide Area Motion Imagery (WAMI) systems used on surveillance aircraft may suffer from system calibration errors associated with frequent re-installation. These errors corrupt the mapping of tracked objects, ‘movers’, from the image frame into the world reference frame. In this study, a comprehensive system for evaluating the camera model parameters of a six-camera WAMI array has been developed. Automatic, unsupervised camera parameter estimation was achieved through ordinary least squares optimisation of the Euclidean distance between features robustly detected in each individual camera and features extracted from available satellite imagery. Adaptive data association using water outlines and/or road centrelines, and multi-scale data fusion, were developed to extend the algorithm to challenging datasets. As a result, the positional errors of the movers detected in the sequences were reduced to less than 10 m, which satisfies the operational requirements.
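The least-squares fit described can be illustrated with the simplest possible stand-in, an affine image-to-world mapping estimated from feature correspondences (the actual WAMI camera model is more complex; all names here are hypothetical):

```python
import numpy as np

def fit_camera_mapping(img_pts, sat_pts):
    """Ordinary least squares estimate of an affine image-to-world mapping:
    minimize the summed squared Euclidean distance between transformed image
    features and their satellite-derived counterparts."""
    n = len(img_pts)
    A = np.hstack([np.asarray(img_pts), np.ones((n, 1))])  # rows [x, y, 1]
    X, *_ = np.linalg.lstsq(A, np.asarray(sat_pts), rcond=None)
    return X  # 3x2 matrix: world = [x, y, 1] @ X

# With exact affine correspondences the transform is recovered to
# machine precision.
img = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
sat = img @ np.array([[2.0, 0.0], [0.0, 2.0]]) + np.array([10.0, 20.0])
X = fit_camera_mapping(img, sat)
residual = np.abs(np.hstack([img, np.ones((4, 1))]) @ X - sat).max()
```

In the paper's setting the "satellite" side of the correspondence comes from robustly matched water outlines and road centrelines rather than hand-picked points, but the optimization core is the same least-squares residual.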


Vehicle tracking in full motion video using the progressively expanded neural network (PENNet) tracker
Paper 10649-20

Author(s):  Vijayan K. Asari, Univ. of Dayton (United States), et al.
Conference 10649: Pattern Recognition and Tracking XXIX
Session 4: Motion Sensing and Estimation Algorithms

Object trackers for full-motion video (FMV) need to handle object occlusions (partial and short-term full), rotation, scaling, illumination changes, complex background variations, and perspective variations. Unlike traditional deep learning trackers that require extensive training time, the proposed Progressively Expanded Neural Network (PENNet) tracker utilizes a modified variant of the extreme learning machine, which incorporates polynomial expansion and state-preserving methodologies. This significantly reduces the training time for online training on the object. The proposed algorithm is evaluated on the DARPA Video Verification of Identity (VIVID) dataset, wherein the selected high-value targets (HVTs) are vehicles.


Decentralized decision-making for self-organizing collaborative robotic teams
Paper 10651-14

Author(s):  John Budenske, General Dynamics Mission Systems (United States), et al.
Conference 10651: Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation 2018
Session 3: C4ISR Networks

Decentralized decision-making algorithms are investigated as a means to allocate multiple dynamic tasks across a collaborative robotic team, within time-constraints, and while the team is already engaged in other tasks. Dynamic tasks exhibit constantly changing requirements and multiple unknowns (e.g., searching a building of unknown layout). The algorithms allow allocation and re-allocation of tasks even as the environment of tasks/robots continually changes. The approach utilizes weighted formulas that represent a robot’s ability to engage each of the identified tasks, and allocation is based on comparing formula results. The allocation process is improved via optimizing the weights based on deep-learning methods.
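The allocation step described, comparing weighted-formula scores of each robot's ability to engage each task, might be sketched as a greedy assignment (the names, score structure, and fixed weights below are illustrative; the abstract's approach learns the weights):

```python
def allocate_tasks(robots, tasks, weights, scores):
    """Greedy allocation sketch: each task goes to the robot with the
    highest weighted capability score.  `scores[r][t]` holds raw capability
    terms (e.g. proximity, battery, sensor fit) for robot r on task t;
    `weights` are the coefficients the abstract proposes to optimize."""
    assignment = {}
    for t in tasks:
        best = max(robots,
                   key=lambda r: sum(w * f
                                     for w, f in zip(weights, scores[r][t])))
        assignment[t] = best
    return assignment
```

Re-allocation as the environment changes amounts to re-running this comparison whenever a task's capability terms are updated, which is what makes the scheme workable for dynamic tasks.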


Cyber security and integrity self-awareness of mobile autonomous systems
Paper 10651-15

Author(s):  Lori Murray, General Dynamics Mission Systems (United States), et al.
Conference 10651: Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation 2018
Session 3: C4ISR Networks

Mobile autonomous systems (robots) have the potential vulnerability of moving into an environment where hardware and firmware are exposed to tampering. The OS and applications can be spoofed to compromise sensitive data. If the robot is part of a group, compromised authentication credentials could cascade into compromising the entire group. Individual robots need to determine whether they are compromised (Cyber-Integrity Self-Awareness); and groups need to determine whether individuals’ authentication credentials are compromised (Cyber-Integrity Group-Awareness). This presentation discusses cyber security for mobile autonomous systems, robot cyber resilience, and the extension of authenticated integrity to a group or swarm of collaborative robots.
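One conventional building block for this kind of integrity self-awareness is a measured-boot-style hash check of the firmware image; a toy sketch (real systems anchor the expected value in a TPM or secure element, and the presentation's actual mechanism is not specified here):

```python
import hashlib

def integrity_check(firmware_blob, expected_sha256):
    """Toy integrity self-check: compare a hash of the running firmware
    image against a known-good value.  A mismatch indicates tampering."""
    digest = hashlib.sha256(firmware_blob).hexdigest()
    return digest == expected_sha256
```

Group-awareness then reduces to each robot attesting such a measurement to its peers before its authentication credentials are trusted by the swarm.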


Integrated air and missile defence under spatial grasp technology
Paper 10651-18

Author(s):  Peter Sapaty, The National Academy of Sciences of Ukraine (Ukraine), et al.
Conference 10651: Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation 2018
Session 4: Autonomous C4ISR Systems of the Future: Autonomous Decision-Making Approaches: Joint Session with Conferences DS116 and DS133

A novel control technology for solving tasks in distributed spaces is briefed. Based on active scenarios that self-navigate and match distributed systems in a highly organized super-virus mode, it can establish global control over large systems of any nature. It can use scattered and dissimilar facilities in an integral and holistic way, allowing them to work together in a goal-driven supercomputer mode. Distributed air and missile defense systems pursuing multiple objects will be able to operate and self-recover after indiscriminate failures of any components. For example, they can effectively deal with many cruise missiles following evasive routes using cheap distributed sensor networks.


Advances in autonomous underwater vehicles and the move to network centric persistent subsea capabilities
Paper 10651-20

Author(s):  Thomas Altshuler, Teledyne Marine (United States), et al.
Conference 10651: Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation 2018
Session 5: Collaborative Robotic Teams: Joint Session with conferences DS117 and DS133


Resilient detection of multiple targets using a distributed algorithm with limited information sharing
Paper 10652-12

Author(s):  Jing Wang, Bradley Univ. (United States), et al.
Conference 10652: Disruptive Technologies in Information Sciences
Session 2: Advanced Networking


Integrating ground surveillance with aerial surveillance for enhanced amateur drone detection
Paper 10652-13

Author(s):  Houbing Song, Embry-Riddle Aeronautical Univ. (United States), et al.
Conference 10652: Disruptive Technologies in Information Sciences
Session 2: Advanced Networking

Drones are popular with an ever-growing community of amateurs, but their deployment poses several public safety (PS) threats to national institutions and assets. Most amateur drones (ADrs) communicate with a ground control station (GCS) through telemetry radio, which continues to transmit data even when the connection is broken. In this article, we propose a UAV-based IoT surveillance platform for detecting ADrs that are equipped with telemetry radio and operated by amateurs. Ground Surveillance Nodes (GSNs), equipped with radio receivers, determine the approximate position of an ADr that threatens PS, while an available surveillance drone (SDr) is dispatched under the direction of the GSNs to confirm the detection. The platform combines ground and aerial surveillance results, recognizing ADrs through both radio and image approaches, and has the potential to integrate with other advanced technologies for a complete ADr detection solution.
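The GSN localization step is not detailed in the abstract; a plausible stand-in is least-squares trilateration from range estimates at fixed nodes (the paper's actual method may differ, and all names are illustrative):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares trilateration: estimate a 2D position from ranges
    measured by fixed ground surveillance nodes.  Linearizes the circle
    equations by subtracting the first anchor's range equation."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (anchors[1:] ** 2).sum(1) - (anchors[0] ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

In practice the ranges would come from received telemetry-signal strength, so the output is only an approximate position, which is why the platform dispatches an SDr to confirm the detection visually.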


Accurate localization and tracking of amateur drone enabled by cooperating surveillance drones
Paper 10652-14

Author(s):  Houbing Song, Embry-Riddle Aeronautical Univ. (United States), et al.
Conference 10652: Disruptive Technologies in Information Sciences
Session 2: Advanced Networking

Unmanned aerial vehicles (UAVs), commonly known as drones, have the potential to enable a wide variety of beneficial applications in areas such as monitoring and inspection of physical infrastructure, smart emergency/disaster response, agriculture support, and observation and study of weather phenomena including severe storms, among others. However, the increasing deployment of amateur drones (ADrs) places the public safety at risk. A promising solution is to deploy surveillance drones (SDrs) for the detection, localization, tracking, jamming and hunting of ADrs. Accurate localization and tracking of ADrs is the key to the success of ADr surveillance. In this article, we propose a novel framework for accurate localization and tracking of ADrs enabled by cooperating SDrs. At the heart of the framework is a localization algorithm called cooperation coordinate separation interactive multiple model extended Kalman filter (CoCS-IMMEKF). This algorithm simplifies the set of multiple models an
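The CoCS-IMMEKF algorithm itself is not given here, but the single-model Kalman filter that IMM-EKF variants extend with multiple motion models can be sketched for a 1D constant-velocity track (all parameters are illustrative):

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1D position measurements --
    the basic building block that IMM-EKF schemes run in parallel, one per
    motion model, blending the results by model likelihood."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    x = np.array([zs[0], 0.0])              # initial state [pos, vel]
    P = np.eye(2)
    estimates = []
    for z in zs[1:]:
        x, P = F @ x, F @ P @ F.T + Q       # predict
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                   # Kalman gain
        x = x + (K * y).ravel()             # update state
        P = (np.eye(2) - K @ H) @ P         # update covariance
        estimates.append(float(x[0]))
    return estimates
```

An IMM extension would run several such filters with different F matrices (e.g. constant velocity vs. coordinated turn) and mix their estimates, which is the "multiple model" machinery the abstract's algorithm simplifies.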


Large-scale parallel simulations of distributed detection algorithms for collaborative autonomous sensor networks
Paper 10652-15

Author(s):  Anton Y. Yen, Lawrence Livermore National Lab. (United States), et al.
Conference 10652: Disruptive Technologies in Information Sciences
Session 2: Advanced Networking


Design-optimization and performances of multispectral (VIS-SWIR) photodetector and its array
Paper 10656-21

Author(s):  Jaydeep Dutta, Banpil Photonics, Inc. (United States), et al.
Conference 10656: Image Sensing Technologies: Materials, Devices, Systems, and Applications V
Session 5: Advanced Photodetectors and Focal Plane Array (FPA)

A novel broadband (VIS-SWIR) photodetector has been developed for focal plane arrays (FPAs) for military, security, and industrial imaging applications. The photodetector is based on InGaAs fabricated on an InP substrate, exhibiting high sensitivity and high quantum efficiency while remaining cost-effective. In order to realize a size, weight, power, and cost (SWaP-C) effective camera, the photodetector must have low dark current at high operating temperatures, which saves the power otherwise spent on cooling. This paper explains the photodetector structure, design simulations for optimizing its parameters, and the performance of the photodetector and its array. We investigate the device structure and the theory of the photodetector. Electrical and optical characteristics of the photodetectors will also be presented.


Novel high energy short-pulse laser diode source for 3D Flash LIDAR
Paper 10656-34

Author(s):  Andreas Kohl, Quantel Laser (France), et al.
Conference 10656: Image Sensing Technologies: Materials, Devices, Systems, and Applications V
Session 9: Advanced Imaging Technologies

A millijoule-class diode laser source with pulse width < 10 ns has been demonstrated. This laser source has a very small footprint of 4 cm x 5 cm and a very high electro-optical conversion efficiency of ~25%. It is therefore well suited for applications in compact, lightweight 3D flash LIDAR systems. A novel diode driver architecture was required to achieve this performance. The laser source operates between 800 nm and 1000 nm. At 1550 nm the pulse energies are lower than in the NIR but still much higher than for any other laser source of this size.


Advanced quantum cryptography: Where do we go from here?
Paper 10660-6

Author(s):  Paul G. Kwiat, Univ. of Illinois (United States), et al.
Conference 10660: Quantum Information Science, Sensing, and Computation X
Session 2: Quantum Cryptography and Quantum Key Distribution

A future quantum-secure network will require reconfigurable local nodes. Free-space platforms naturally lend themselves to reconfiguration, as nodes may be moved or reoriented to easily target new nodes. We are implementing a multi-copter drone-based quantum cryptography link, including fast, high-resolution optical stabilization; compact, independent sources; and lightweight single-photon detection. Having access to an agile, reconfigurable QKD networking system will enable quantum cryptography to reach applications prohibited by current approaches, such as temporary networks in seaborne, urban, or even battlefield situations. By using transmitters and receivers at higher altitudes, the deleterious effects of weather events such as fog and turbulence can be mitigated.


Implications of sensor inconsistencies and remote sensing error in the use of small unmanned aerial systems for generation of information products for agricultural management
Paper 10664-1

Author(s):  Mac McKee, Utah State Univ. (United States), et al.
Conference 10664: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III
Session 1: Collecting Reliable Image Data with UAVs

Small unmanned aerial systems (sUAS) are used with growing frequency as remote sensing devices for agriculture. These systems place limitations on the types and quality of the cameras that can be flown, which in turn limits the quality of the information that can be generated for the grower. This paper statistically examines how errors in sensor spectral response, orthorectification accuracy, and spatial resolution affect the estimation of information products of potential interest to growers, such as plant nutrition and precision fertilization. The paper relies on high-resolution data collected in 2016 over a commercial vineyard located near Lodi, California.


Quality assessment of radiometric calibration of UAV image mosaics
Paper 10664-3

Author(s):  Cody Bagnall, Texas A&M Univ. (United States), et al.
Conference 10664: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III
Session 1: Collecting Reliable Image Data with UAVs

UAV (unmanned aerial vehicle) based imaging produces vast amounts of data that could be used to improve the efficiency of agricultural inputs. One reason this ability has not yet been realized is that producing radiometrically calibrated UAV image mosaics is difficult. This paper presents an investigation of a field-based image-mosaic calibration procedure. A commercial off-the-shelf fixed-wing small UAV and a five-band multispectral sensor were used with multiple exposure settings. We evaluate the quality of the radiometric calibration procedure for UAV image mosaics by comparing them to high quality calibrated manned aircraft and satellite images collected on the same day at roughly the same time.
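The abstract's field-based calibration procedure is not spelled out; a common baseline for radiometric calibration of this kind is the empirical line method, fitting a per-band gain and offset from ground panels of known reflectance (sketched here under that assumption, with illustrative panel values):

```python
import numpy as np

def empirical_line(panel_dn, panel_reflectance):
    """Empirical line method: fit a per-band linear mapping from image
    digital numbers (DN) to surface reflectance using calibration panels
    of known reflectance placed in the field."""
    gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)
    return gain, offset

# Two panels (dark and bright) observed in one band of the mosaic.
gain, offset = empirical_line([20.0, 200.0], [0.03, 0.48])
# Apply to a whole mosaic band as: reflectance = gain * dn + offset
```

Quality assessment of the kind the paper performs then reduces to comparing the calibrated mosaic reflectance against the manned-aircraft and satellite values over common targets.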


Inter-comparison of thermal measurements using ground-based sensors, airborne thermal cameras, and eddy covariance radiometers
Paper 10664-12

Author(s):  Alfonso F. Torres-Rua, Utah State Univ. (United States), et al.
Conference 10664: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III
Session 3: Thermal and Hyperspectral Imaging from UAVs

The increasing use of on-ground sensors, UAV-borne thermal cameras, and eddy covariance radiometers for estimating agricultural parameters such as evapotranspiration implicitly relies on the assumption that the information produced by these sensors is interchangeable or compatible. This work presents a comparison between on-ground infrared radiometers (IRTs), the microbolometer thermal cameras used on UAVs, and the thermal radiometers used in eddy covariance towers, as part of the USDA Agricultural Research Service Grape Remote Sensing Atmospheric Profile and Evapotranspiration Experiment (GRAPEX) Program.


A low-cost method for collecting hyperspectral measurements from a small unmanned aircraft system
Paper 10664-15

Author(s):  Ali Hamidisepehr, Univ. of Kentucky (United States), et al.
Conference 10664: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III
Session 3: Thermal and Hyperspectral Imaging from UAVs

This study aimed to develop a spectral measurement platform for deployment on a sUAS for quantifying and delineating moisture zones within an agricultural landscape. A series of portable spectrometers covering ultraviolet (UV), visible (VIS), and near-infrared (NIR) wavelengths were instrumented using an embedded computer programmed to interface with the sUAS autopilot for autonomous data acquisition. A calibration routine was developed that scaled raw reflectance data by sensor integration time and ambient light energy. Results indicated the potential for mitigating the effect of ambient light when passively measuring reflectance on a portable spectral measurement system.
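The calibration routine described, scaling raw reflectance data by sensor integration time and ambient light energy, might look like the following sketch (function and variable names are illustrative):

```python
def calibrate_reflectance(raw_counts, integration_time_s, ambient_reference):
    """Sketch of the scaling described in the abstract: normalize raw
    spectrometer counts per band by integration time, then ratio against a
    simultaneous ambient (downwelling) reference to get relative
    reflectance that is insensitive to changing illumination."""
    rate = [c / integration_time_s for c in raw_counts]
    return [r / a if a else 0.0 for r, a in zip(rate, ambient_reference)]
```

Ratioing against a concurrently measured ambient reference is what allows passive reflectance measurements under variable sky conditions, the mitigation effect the study reports.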


Experimental approach to detect water stress in ornamental plants using UAV-imagery
Paper 10664-20

Author(s):  Ana de Castro, Instituto de Agricultura Sostenible (Spain), et al.
Conference 10664: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III
Session 4: Detecting Yield, Disease, and Water Stress from UAVs

Accurate, reliable, and timely crop water status measurements could improve irrigation efficiency and optimize water use in agriculture. Containerized ornamental crops provide a unique opportunity to apply UAV platforms due to the relatively small area of production, the diversity of plant species, and unbuffered growing media requiring continual inputs of water, making UAVs a timely alternative to on-ground data collection. This research evaluated the potential of UAV-based images to estimate the crop water status of multiple taxa. An algorithm based on the object-based image analysis (OBIA) paradigm was developed to accurately identify water-stressed and non-stressed plants.


A comparison of sustainable forest management metrics generated from unmanned and manned aerial systems
Paper 10664-26

Author(s):  Michael McClelland, Rochester Institute of Technology (United States), et al.
Conference 10664: Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III
Session 6: Innovative UAV Applications

Sign up for email updates about SPIE Defense + Commercial Sensing

Learn more about becoming an Exhibitor at the 2018 event in Orlando