Proceedings Volume 9468

Unmanned Systems Technology XVII

Robert E. Karlsen, Douglas W. Gage, Charles M. Shoemaker, et al.

Volume Details

Date Published: 9 June 2015
Contents: 6 Sessions, 23 Papers, 0 Presentations
Conference: SPIE Defense + Security 2015
Volume Number: 9468

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9468
  • Perception and Human Robot Interaction
  • Robotics CTA
  • Self-organizing, Collaborative Unmanned ISR Teams II: Joint session with conferences 9479 and 9468
  • Special Topics
  • Poster Session
Front Matter: Volume 9468
Front Matter: Volume 9468
This PDF file contains the front matter associated with SPIE Proceedings Volume 9468, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Perception and Human Robot Interaction
Multi-modal interaction for UAS control
Glenn Taylor, Ben Purman, Paul Schermerhorn, et al.
Unmanned aircraft systems (UASs) have seen a dramatic increase in military operations over the last two decades. The increased demand for their capabilities on the battlefield has resulted in quick fielding with user interfaces designed more with engineers in mind than UAS operators. UAS interfaces tend to support tele-operation with a joystick or complex, menu-driven interfaces with a steep learning curve. These approaches to control require constant attention to manage a single UAS, and require increased heads-down time spent searching for and clicking on the right menus to invoke commands. The time and attention these interfaces demand make it difficult to expand a single operator's span of control to multiple UASs or to the control of sensor systems. In this paper, we explore an alternative to the standard menu-based control interfaces. Our approach in this work was to first study how operators might want to task a UAS if they were not constrained by a typical menu interface. Based on this study, we developed a prototype multi-modal dialogue interface for more intuitive control of multiple unmanned aircraft and their sensor systems using speech and map-based gesture/sketch. The system we developed is a two-way interface that allows a user to draw on a map while speaking commands to the system, and which provides feedback to the user to ensure the user knows what the system is doing. When the system does not understand the user for some reason (for example, because speech recognition failed or because the user did not provide enough information), the system engages the user in a dialogue to gather the information needed to perform the command. With the help of UAS operators, we conducted a user study comparing the performance of our prototype system against a representative menu-based control interface in terms of usability, time on task, and mission effectiveness. This paper describes the study we conducted to gather data about how people might use a natural interface, the system itself, and the results of the user study.
Keywords: UAS control, natural interfaces, multi-modal interaction.
A novel method for full position and angular orientation measurement of moving objects
Angular orientation of an object such as a projectile, relative to the earth or to another object such as a mobile platform, continues to be an ongoing topic of interest for guidance and/or steering. Currently available sensors, which include inertial devices such as accelerometers and gyros; magnetometers; surface-mounted antennas; radars; GPS; and optical line-of-sight devices, do not provide an acceptable on-board solution for many applications, particularly for gun-fired munitions. We present a viable solution that combines open-aperture sensors having custom-designed radiation patterns with one or more amplitude-modulated, polarization-scanning reference sources. The sensor system thus offers a new approach to angle measurement, with several key advantages over traditional cross-polarization-based rotation sensors. Primarily, angular information is coded into a complex spatiotemporal pattern, which is insensitive to power fluctuations caused by environmental factors and makes the angle measurement independent of distance from the reference source. Triangulation, using multiple sources, may also be used for onboard position measurement. Both measurements are independent of GPS localization; are direct and relative to the established local referencing system; and are not subject to drift and/or error accumulation. Results of laboratory tests as well as field tests are presented.
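The triangulation step can be illustrated with a simple planar example: if the object measures its bearing to two reference sources at known positions, the two rays intersect at the object's position. A minimal sketch under that assumption (the actual system derives angles from coded polarization patterns; the function and names here are hypothetical):

```python
import math

def triangulate_2d(src_a, src_b, bearing_a, bearing_b):
    """Planar position fix from two reference sources at known positions.
    bearing_a/bearing_b: bearings (radians) measured from the object to
    each source. Returns (x, y), or None if the rays are near-parallel."""
    ua = (math.cos(bearing_a), math.sin(bearing_a))  # unit vector object -> source A
    ub = (math.cos(bearing_b), math.sin(bearing_b))
    # The object lies at p = A - r_a*ua = B - r_b*ub; solve for r_a (Cramer's rule).
    det = -ua[0] * ub[1] + ub[0] * ua[1]
    if abs(det) < 1e-9:
        return None
    dx, dy = src_a[0] - src_b[0], src_a[1] - src_b[1]
    r_a = (-dx * ub[1] + ub[0] * dy) / det
    return (src_a[0] - r_a * ua[0], src_a[1] - r_a * ua[1])

# Example: sources at (0, 0) and (10, 0); object actually at (5, 5).
print(triangulate_2d((0, 0), (10, 0), math.atan2(-5, -5), math.atan2(-5, 5)))
```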
Natural interaction for unmanned systems
Glenn Taylor, Ben Purman, Paul Schermerhorn, et al.
Military unmanned systems today are typically controlled by two methods: tele-operation or menu-based, search-and-click interfaces. Both approaches require the operator's constant vigilance: tele-operation requires constant input to drive the vehicle inch by inch; a menu-based interface requires eyes on the screen in order to search through alternatives and select the right menu item. In both cases, operators spend most of their time and attention driving and minding the unmanned systems rather than on their role as warfighters. With these approaches, the platform and interface become more of a burden than a benefit. The availability of inexpensive sensor systems in products such as the Microsoft Kinect™ or Nintendo Wii™ has resulted in new ways of interacting with computing systems, but new sensors alone are not enough. Developing useful and usable human-system interfaces requires understanding users and interaction in context: not just what new sensors afford in terms of interaction, but how users want to interact with these systems, for what purpose, and how sensors might enable those interactions. Additionally, the system needs to reliably make sense of the user's inputs in context, translate that interpretation into commands for the unmanned system, and give feedback to the user. In this paper, we describe an example natural interface for unmanned systems, called the Smart Interaction Device (SID), which enables natural two-way interaction with unmanned systems, including the use of speech, sketch, and gestures. We present a few example applications of SID to different types of unmanned systems and different kinds of interactions.
Fusion of lidar and radar for detection of partially obscured objects
Jim Hollinger, Brett Kutscher, Ryan Close
The capability to detect partially obscured objects is of interest to many communities, including ground vehicle robotics. The ability to find partially obscured objects can aid in automated navigation and planning algorithms used by robots. Two sensors often used for this task are Lidar and Radar. Lidar and Radar systems provide complementary data about the environment. Both are active sensing modalities and provide direct range measurements. However, they operate in very different portions of the radio frequency spectrum. By exploiting properties associated with the different frequency spectra, the sensors are able to compensate for each other’s shortcomings. This makes them excellent candidates for sensor processing and data fusion systems. The benefits associated with Lidar and Radar sensor fusion for a ground vehicle application, using economical variants of these sensors, are presented. Special consideration is given to detecting objects partially obscured by light to medium vegetation.
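One concrete way the two modalities can compensate for each other: lidar may report light vegetation as occupied while radar penetrates it, and a per-cell probabilistic fusion lets the more informative return dominate. The sketch below shows a standard log-odds fusion (a common technique offered for illustration, not necessarily the authors' method; a uniform 0.5 prior is assumed so the prior term drops out):

```python
import numpy as np

def fuse_detections(p_lidar, p_radar):
    """Fuse per-cell occupancy probabilities from two independent
    sensors via log-odds addition. Inputs: arrays of values in (0, 1)."""
    logit = lambda p: np.log(p / (1.0 - p))
    fused = logit(p_lidar) + logit(p_radar)      # independent evidence adds in log-odds
    return 1.0 / (1.0 + np.exp(-fused))          # back to probability

# Vegetation cell: lidar is ambiguous (clutter), radar sees a strong
# return from the partially obscured object behind it.
print(fuse_detections(np.array([0.55]), np.array([0.9])))   # ~0.92
```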
Robotics CTA
Motion compensation for structured light sensors
Debjani Biswas, Christoph Mertz
In order for structured light methods to work outdoors, the strong background from the sun needs to be suppressed. This can be done with bandpass filters, fast shutters, and background subtraction. In general, this last method requires the sensor system to be stationary during data collection. The contribution of this paper is a method to compensate for the motion if the system is moving. The key idea is to use video stabilization techniques that work even if the illuminator is switched on and off from one frame to another. We used OpenCV functions and modules to implement a robust and efficient method. We evaluated it under various conditions and tested it on a moving robot outdoors. We demonstrate that one can not only do 3D reconstruction under strong ambient light, but also observe optical properties of the objects in the environment.
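The stabilization idea can be sketched with standard OpenCV calls: match scene features between the illuminator-off and illuminator-on frames, warp one onto the other, and subtract so that mostly the projected pattern remains. Matching on scene texture tends to survive the illuminator toggling because descriptors are dominated by the background rather than the pattern. This is a minimal sketch, not the paper's pipeline, and a single homography is exact only for roughly planar scenes or rotation-dominant motion:

```python
import cv2
import numpy as np

def pattern_signal(frame_off, frame_on):
    """Warp the illuminator-off frame into the illuminator-on frame's
    coordinates, then subtract to suppress the ambient background."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(frame_off, None)
    k2, d2 = orb.detectAndCompute(frame_on, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust alignment
    h, w = frame_on.shape[:2]
    aligned = cv2.warpPerspective(frame_off, H, (w, h))
    return cv2.absdiff(frame_on, aligned)                  # pattern-dominated residual
```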
Active dictionary learning for image representation
Tong Wu, Anand D. Sarwate, Waheed U. Bajwa
Sparse representations of images in overcomplete bases (i.e., redundant dictionaries) have many applications in computer vision and image processing. Recent works have demonstrated improvements in image representations by learning a dictionary from training data instead of using a predefined one. But learning a sparsifying dictionary can be computationally expensive in the case of a massive training set. This paper proposes a new approach, termed active screening, to overcome this challenge. Active screening sequentially selects subsets of training samples using a simple heuristic and adds the selected samples to a "learning pool," which is then used to learn a newer dictionary for improved representation performance. The performance of the proposed active dictionary learning approach is evaluated through numerical experiments on real-world image data; the results of these experiments demonstrate the effectiveness of the proposed method.
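The active-screening loop can be sketched with scikit-learn's dictionary learner. The selection heuristic below (adding the currently worst-represented samples to the learning pool) is an illustrative assumption; the paper's specific heuristic is not reproduced here:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def active_screening(X, n_rounds=5, batch=500, n_atoms=128):
    """Illustrative active-screening loop: start from a random pool,
    then repeatedly add poorly represented samples and refit.
    X: (n_samples, n_features) array of vectorized image patches."""
    rng = np.random.default_rng(0)
    pool = X[rng.choice(len(X), size=batch, replace=False)]
    for _ in range(n_rounds):
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=5)
        dico.fit(pool)
        codes = dico.transform(X)                         # sparse codes for all samples
        err = np.linalg.norm(X - codes @ dico.components_, axis=1)
        pool = np.vstack([pool, X[np.argsort(err)[-batch:]]])  # grow the learning pool
    return dico
```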
RCTA capstone assessment
Craig Lennon, Barry Bodt, Marshal Childers, et al.
The Army Research Laboratory’s Robotics Collaborative Technology Alliance (RCTA) is a program intended to change robots from tools that soldiers use into teammates with which soldiers can work. This requires the integration of fundamental and applied research in perception, artificial intelligence, and human-robot interaction. In October of 2014, the RCTA assessed progress towards integrating this research. This assessment was designed to evaluate the robot's performance when it used new capabilities to perform selected aspects of a mission. The assessed capabilities included the ability of the robot to: navigate semantically outdoors with respect to structures and landmarks, identify doors in the facades of buildings, and identify and track persons emerging from those doors. We present details of the mission-based vignettes that constituted the assessment, and evaluations of the robot’s performance in these vignettes.
Semi-autonomous exploration of multi-floor buildings with a legged robot
Garrett J. Wenger, Aaron M. Johnson, Camillo J. Taylor, et al.
This paper presents preliminary results of a semi-autonomous building exploration behavior using the hexapedal robot RHex. Stairwells are used in virtually all multi-floor buildings, and so in order for a mobile robot to effectively explore, map, clear, monitor, or patrol such buildings, it must be able to ascend and descend stairwells. However, most conventional mobile robots based on a wheeled platform are unable to traverse stairwells, motivating the use of the more mobile legged machine. This semi-autonomous behavior uses a human driver to provide steering input to the robot, as would be the case in, e.g., a tele-operated building exploration mission. The gait selection and transitions between the walking and stair-climbing gaits are entirely autonomous. This implementation uses an RGBD camera for stair acquisition, which offers several advantages over a previously documented detector based on a laser range finder, including significantly reduced acquisition time. The sensor package used here also allows for considerable expansion of this behavior. For example, complete automation of the building exploration task, driven by a mapping algorithm and higher-level planner, is presently under development.
Toward agile control of a flexible-spine model for quadruped bounding
Katie Byl, Brian Satzinger, Tom Strizic, et al.
Legged systems should exploit non-steady gaits, both for improved recovery from unexpected perturbations and to enlarge the set of reachable states toward negotiating a range of known upcoming terrain obstacles. We present a 4-link planar, bounding quadruped model with compliance in its legs and spine and describe the design of an intuitive and effective low-level gait controller. We extend our previous work on meshing hybrid dynamic systems and demonstrate that our control strategy results in stable gaits with meshable, low-dimension step-to-step variability. This meshability is a first step toward enabling switching control, which can increase stability after perturbations compared with any single gait controller, and we describe how this framework can also be used to find the set of n-step reachable states. Finally, we propose new guidelines for quantifying "agility" for legged robots, providing a preliminary framework for quantifying and improving the performance of legged systems.
Self-organizing, Collaborative Unmanned ISR Teams II: Joint session with conferences 9479 and 9468
UAV field demonstration of social media enabled tactical data link
Christopher C. Olson, Da Xu, Sean R. Martin, et al.
This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.
Fuel cell powered small unmanned aerial systems (UASs) for extended endurance flights
Deryn Chu, R. Jiang, Z. Dunbar, et al.
Small unmanned aerial systems (UASs) have been used for military applications and have additional potential for commercial applications [1-4]. For the military, these systems provide valuable intelligence, surveillance, reconnaissance and target acquisition (ISRTA) capabilities for units at the infantry, battalion, and company levels. Small UASs are light-weight, man-portable, can be hand-launched, and are capable of carrying payloads. Currently, most small UASs are powered by lithium-ion or lithium-polymer batteries; however, flight endurance is usually limited to less than two hours, requiring frequent battery replacement. Long-endurance small UAS flights have been demonstrated through the implementation of a fuel cell system. For instance, a propane-fueled solid oxide fuel cell (SOFC) stack has been used to power a small UAS and shown to extend mission flight time. The research and development efforts presented here not only apply to small UASs, but also support the viability of extending mission operations for other unmanned systems applications.
Tactical 3D model generation using structure-from-motion on video from unmanned systems
Josh Harguess, Mark Bilinski, Kim B. Nguyen, et al.
Unmanned systems have been cited as one of the future enablers for all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated, from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (the original data format) and then constructs a 3D model (the new data medium) using structure-from-motion. The generated 3D model provides warfighters additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
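At the core of such a system is a structure-from-motion step. A minimal two-view version using standard OpenCV calls is sketched below; a full pipeline would chain many frames, filter matches, and bundle-adjust. K is the camera intrinsic matrix, assumed known:

```python
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    """Minimal two-view SfM step: match features, recover relative pose,
    and triangulate a sparse point cloud (up to an unknown scale)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                          # recovered second camera
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)       # homogeneous 4xN result
    return (X[:3] / X[3]).T                             # Nx3 3D points
```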
Aircraft path planning for optimal imaging using dynamic cost functions
Gordon Christie, Haseeb Chaudhry, Kevin Kochersberger
Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" in how these aircraft can best be utilized. The aircraft are becoming smaller and more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path for acquiring imagery for structure-from-motion (SfM) reconstructions and performing radiation surveys, allowing SfM reconstructions to be executed accurately and with minimal flight time. An assumption is made that 3D point cloud data are available prior to the flight. A discrete set of scan lines is proposed for the given area and scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach for image-based terrain mapping, which efficiently performs a 3D reconstruction of a large area without the use of GPS data.
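A simplified version of scan-line ordering can illustrate the idea: each candidate line carries a visibility score, and the planner greedily picks the line with the best score per unit of transit time. This toy sketch deliberately omits the aircraft-dynamics cost functions and obstacle trajectories the paper incorporates:

```python
import math

def order_scan_lines(lines, scores, start=(0.0, 0.0), speed=5.0):
    """Greedy ordering of candidate scan lines by visibility score per
    second of transit. lines: list of ((x0, y0), (x1, y1)) endpoints;
    scores: one visibility score per line."""
    pos, remaining, plan = start, set(range(len(lines))), []
    while remaining:
        best = max(remaining, key=lambda i:
                   scores[i] / (math.dist(pos, lines[i][0]) / speed + 1e-6))
        plan.append(best)
        pos = lines[best][1]      # fly the line; end at its far endpoint
        remaining.remove(best)
    return plan

# Three parallel scan lines over an area, the middle one most informative.
lines = [((0, 0), (100, 0)), ((0, 20), (100, 20)), ((0, 40), (100, 40))]
print(order_scan_lines(lines, scores=[1.0, 3.0, 1.5]))
```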
Cloud-based distributed control of unmanned systems
Kim B. Nguyen, Darren N. Powell, Charles Yetman, et al.
Enabling warfighters to efficiently and safely execute dangerous missions, unmanned systems have become an increasingly valuable component of modern warfare. The evolving use of unmanned systems leads to vast amounts of data collected from sensors placed on the remote vehicles. As a result, many command and control (C2) systems have been developed to provide the tools to perform one of two functions: controlling the unmanned vehicle, or analyzing and processing the sensory data it returns. These C2 systems are often disparate from one another, limiting the ability to optimally distribute data among different users. The Space and Naval Warfare Systems Center Pacific (SSC Pacific) seeks to address this technology gap through the UxV to the Cloud via Widgets project. The overarching intent of this three-year effort is to provide three major capabilities: 1) unmanned vehicle control using an open, service-oriented architecture; 2) data distribution utilizing cloud technologies; and 3) a collection of web-based tools enabling analysts to better view and process data. This paper focuses on how the UxV to the Cloud via Widgets system is designed and implemented by leveraging the following technologies: Data Distribution Service (DDS), Accumulo, Hadoop, and the Ozone Widget Framework (OWF).
Special Topics
Research for a multi-modal mobility and manipulation propulsion core
Harris Edge, Jason Collins
There are many challenges for robotics, many of which may be placed in the context of robots acting as teammates to Soldiers. In general, one may see a robotic teammate as an unmanned system that complements a Soldier's capabilities, may perform some of the duties of a Soldier, or may actually protect the Soldier. Much research needs to be performed before robots are physically capable of performing as teammates to Soldiers in dynamic environments where speed matters and in complex 3-D environments where navigation for today's robots is difficult. This research addresses a fundamental obstacle: how to safely and cost-effectively develop theory and controls for a new generation of robots that may operate at operations tempo (OPTEMPO) in dynamic, complex 3-D environments. This paper documents the design and fabrication of a research platform capable of demonstrating theory and control algorithms developed for highly dynamic robotic systems that may need to navigate and perform tasks in complex 3-D environments. The research platform has been designed to address challenging basic research in the areas of airborne manipulation; transition to and interaction with vertical surfaces; and exploration of constrained spaces such as urban environments (street level to rooftop), forests, and underground facilities. The platform will allow controls development and validation for a vehicle whose weight is at least partially supported by a propulsion system while it performs work on the environment and/or an object within the environment.
Automatic behavior sensing for a bomb-detecting dog
Hoa G. Nguyen, Adam Nans, Kurt Talke, et al.
Bomb-detecting dogs are trained to detect explosives through their sense of smell and often perform a specific behavior to indicate a possible bomb detection. This behavior is noticed by the dog handler, who confirms the probable explosives, determines the location, and forwards the information to an explosive ordnance disposal (EOD) team. To improve the speed and accuracy of this process and better integrate it with the EOD team's robotic explosive disposal operation, SPAWAR Systems Center Pacific has designed and prototyped an electronic dog collar that automatically tracks the dog's location and attitude, detects the indicative behavior, and records the data. To account for the differences between dogs, a 5-minute training routine can be executed before the mission to establish initial values for the k-means clustering algorithm that classifies a specific dog's behavior. The recorded data include the GPS location of the suspected bomb, the path the dog took to approach this location, and a video clip covering the detection event. The dog handler reviews and confirms the data before they are packaged and forwarded to the EOD team. The EOD team uses the video clip to better identify the type of bomb and for awareness of the surrounding environment before arriving at the scene. Before the robotic neutralization operation commences at the site, the location and path data (supplied in a format understandable by the next-generation EOD robots, the Advanced EOD Robotic System) can be loaded into the robotic controller to automatically guide the robot to the bomb site. This paper describes the project with emphasis on the dog-collar hardware, behavior-classification software, and feasibility testing.
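A behavior classifier along these lines could look like the sketch below: windowed statistics from the collar's motion sensing are clustered with k-means during the short calibration routine, and mission-time windows are assigned to the nearest cluster. The feature set, window size, cluster count, and placeholder data are illustrative assumptions, not details from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def make_features(accel, window=50):
    """Windowed statistics over a 3-axis accelerometer stream: per-axis
    mean and std plus magnitude mean/std for each window."""
    feats = []
    for i in range(0, len(accel) - window, window):
        w = accel[i:i + window]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0),
                                [mag.mean(), mag.std()]]))
    return np.array(feats)

# Placeholder data standing in for the 5-minute calibration routine, in
# which the handler cues normal movement and the trained indication behavior.
calibration_accel = np.random.randn(6000, 3)
model = KMeans(n_clusters=4, n_init=10).fit(make_features(calibration_accel))

# During the mission, windows falling in the cluster identified at
# calibration time as the indication behavior raise a detection event.
mission_accel = np.random.randn(6000, 3)
labels = model.predict(make_features(mission_accel))
```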
Trust-based learning and behaviors for convoy obstacle avoidance
Dariusz G. Mikulski, Robert E. Karlsen
In many multi-agent systems, robots within the same team are regarded as fully trustworthy for cooperative tasks. However, the assumption of trustworthiness is not always justified, which may not only increase the risk of mission failure, but also endanger the lives of friendly forces. In prior work, we addressed this issue by using RoboTrust to dynamically adjust to observed behaviors or recommendations in order to mitigate the risks of illegitimate behaviors. However, in those simulations, all members of the convoy had knowledge of the convoy goal. In this paper, only the lead vehicle has knowledge of the convoy goals, and the follow vehicles must infer trustworthiness strictly from lead vehicle performance. In addition, RoboTrust could previously only respond to observed performance and did not dynamically learn agent behavior. In this paper, we incorporate an adaptive, agent-specific bias into the RoboTrust algorithm that modifies its trust dynamics. This bias is learned incrementally from agent interactions, allowing good agents to benefit from faster trust growth and slower trust decay, and penalizing bad agents with slower trust growth and faster trust decay. We then integrate this new trust model into a trust-based controller for decentralized autonomous convoy operations. We evaluate its performance in an obstacle avoidance mission, where the convoy attempts to learn the best combinations of speed and following distance for an acceptable obstacle-avoidance probability.
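The adaptive-bias idea can be illustrated with a simple asymmetric trust update. This is a toy stand-in, not the published RoboTrust algorithm: the bias, learned from interaction history, speeds trust growth and slows decay for agents with good records, and does the opposite for bad ones:

```python
def update_trust(trust, outcome, bias, lr=0.05):
    """Toy trust update with a learned per-agent bias (illustrative).
    outcome: 1 = observed success, 0 = observed failure.
    bias in [-0.9, 0.9] shifts the growth/decay asymmetry."""
    grow = 0.10 * (1.0 + bias)       # trusted agents gain trust faster...
    decay = 0.20 * (1.0 - bias)      # ...and lose it more slowly
    trust = trust + grow * (1.0 - trust) if outcome else trust - decay * trust
    # Learn the bias incrementally from the interaction history.
    bias = max(-0.9, min(0.9, bias + lr * (1.0 if outcome else -1.0)))
    return min(1.0, max(0.0, trust)), bias

# A follower vehicle updates its model of the leader after each maneuver.
trust, bias = 0.5, 0.0
for outcome in [1, 1, 1, 0, 1]:
    trust, bias = update_trust(trust, outcome, bias)
print(round(trust, 3), round(bias, 3))
```

In a convoy setting, each follower would maintain such a (trust, bias) pair for the lead vehicle and gate how aggressively it imitates the leader's maneuvers on the current trust value.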
Cybersecurity for aerospace autonomous systems
High-profile breaches have occurred across numerous information systems. One area where attacks are particularly problematic is autonomous control systems. This paper considers the aerospace information system, focusing on elements that interact with autonomous control systems (e.g., onboard UAVs). It discusses the trust placed in the autonomous systems and their supporting systems (e.g., navigational aids) and how this trust can be validated. Approaches to remotely detecting UAV compromise, without relying on the onboard software (which may itself be compromised) as part of the process, are discussed. How different levels of autonomy (task-based, goal-based, mission-based) impact this remote characterization is also considered.
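One way to characterize a UAV remotely, without trusting its onboard software, is to cross-check self-reported telemetry against an independently sensed track (e.g., from ground radar). A minimal sketch of such a consistency monitor; the threshold and names are illustrative assumptions, not the paper's method:

```python
import math

def telemetry_consistent(reported, independent_track, max_dev_m=50.0):
    """Flag possible compromise when the vehicle's self-reported
    positions diverge from an independently sensed track.
    reported / independent_track: lists of (x, y) at matching times."""
    devs = [math.dist(r, t) for r, t in zip(reported, independent_track)]
    worst = max(devs)
    return worst <= max_dev_m, worst

ok, worst = telemetry_consistent([(0, 0), (10, 2)], [(0, 1), (80, 2)])
print(ok, worst)   # False 70.0: the reported path disagrees with the track
```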
Design and experimental validation of a simple controller for a multi-segment magnetic crawler robot
Leah Kelley, Saam Ostovari, Aaron B. Burmeister, et al.
A novel, multi-segmented magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative motion connect the front and rear driving modules, so the robot can twist and turn. This permits navigation over surface discontinuities while maintaining adhesion to the hull. During operation, the magnetic crawler receives forward and turning velocity commands from either a tele-operator or a high-level autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the development of a simple, low-level, leader-follower controller that permits the rear module to follow the front module. The kinematics and dynamics of the two-module magnetic crawler robot are described. The robot's geometry, kinematic constraints, and the user-commanded velocities are used to calculate the desired instantaneous center of rotation and the corresponding central-linkage angle necessary for the back module to follow the front module when turning. The commands to the rear driving motors are determined by applying PID control to the error between the desired and measured linkage angles. The controller is designed and tested in MATLAB Simulink. It is then implemented and tested on an early two-module magnetic crawler prototype robot. Results of the simulations and experimental validation of the controller design are presented.
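The control idea can be sketched compactly: compute the linkage angle that puts both modules on the commanded instantaneous center of rotation (ICR), then run PID on the angle error. The geometry below is a deliberately simplified approximation, and the gains and linkage length are illustrative assumptions rather than the paper's values:

```python
import math

class LinkageFollower:
    """PID on the central-linkage angle so the rear module follows the
    front module through turns (simplified kinematic sketch)."""
    def __init__(self, link_len, kp=4.0, ki=0.2, kd=0.5):
        self.L, self.kp, self.ki, self.kd = link_len, kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def desired_angle(self, v, omega):
        """Linkage angle placing both modules on one ICR, given the
        commanded speed v (m/s) and turn rate omega (rad/s)."""
        if abs(omega) < 1e-6:
            return 0.0                                 # straight: no articulation
        R = v / omega                                  # ICR radius
        if abs(R) < 1e-6:
            return math.copysign(math.pi / 2, omega)   # turning in place
        return math.atan(self.L / R)

    def step(self, angle_meas, v, omega, dt):
        """Return the steering correction for the rear driving motors."""
        err = self.desired_angle(v, omega) - angle_meas
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```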
Poster Session
Open Space Box: communication to support Big Data in orbit
Atif Farid Mohammad, Jeremy Straub
Communication to and from a small spacecraft can occur at an extremely slow baud rate, which means both sending and receiving any communication will take some time. Extract, transform, and load (ETL) tools designed to transmit and receive data need a flexible protocol. The Open Space Box model provides this base for smaller spacecraft, delivering data to users in a fashion that is pervasive within satellites as well as ground stations. It also autonomically distinguishes data streams and disseminates relevant information to the related end users. Streaming data can also be considered a source of Big Data: at a ground station, the received data can create a Big Data management problem. Messages are sent in batch mode, and communications are processed using MapReduce.
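The batch-plus-MapReduce idea can be sketched in miniature: downlinked batches are mapped to (stream, record) pairs and reduced per stream, so each end user receives only the streams relevant to them. This is an illustrative toy with hypothetical record fields, not the Open Space Box implementation:

```python
from collections import defaultdict

def map_phase(batch):
    """Tag each downlinked record with its data-stream identifier."""
    for record in batch:
        yield record["stream"], record["payload"]

def reduce_phase(mapped):
    """Group payloads by stream for dissemination to the end users
    subscribed to that stream."""
    streams = defaultdict(list)
    for stream, payload in mapped:
        streams[stream].append(payload)
    return dict(streams)

batch = [{"stream": "thermal", "payload": 271.4},
         {"stream": "imaging", "payload": "tile_042"},
         {"stream": "thermal", "payload": 272.0}]
print(reduce_phase(map_phase(batch)))
# {'thermal': [271.4, 272.0], 'imaging': ['tile_042']}
```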
System and mathematical modeling of quadrotor dynamics
Jacob M. Goodman, Jinho Kim, S. Andrew Gadsden, et al.
Unmanned aerial systems (UAS) are becoming increasingly visible in our daily lives, with operations ranging from search and rescue and monitoring of hazardous environments to the delivery of goods. One of the most popular UAS designs is the quad-rotor: typically a small device that relies on four propellers for lift and movement. Quad-rotors are inherently unstable and rely on advanced control methodologies to keep them operating safely and behaving in a predictable and desirable manner. The control of these devices can be enhanced and improved by making use of an accurate dynamic model. In this paper, we examine a simple quadrotor model and note some of the additional dynamic considerations that were left out. We then compare simulation results of the simple model with those of another, more comprehensive model.
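A "simple quadrotor model" of the kind the paper starts from can be written down compactly. The planar sketch below keeps only thrust, gravity, and one rotational axis, omitting the drag, gyroscopic, and motor dynamics a comprehensive model would add; all parameter values are illustrative assumptions:

```python
import numpy as np

# Planar quadrotor: state [y, z, phi, ydot, zdot, phidot]; inputs are
# total thrust T and rolling torque tau from differential rotor speeds.
m, g, Ixx, dt = 1.2, 9.81, 0.012, 0.002    # kg, m/s^2, kg*m^2, s (illustrative)

def deriv(s, T, tau):
    y, z, phi, yd, zd, phid = s
    return np.array([yd, zd, phid,
                     -T * np.sin(phi) / m,        # lateral acceleration
                     T * np.cos(phi) / m - g,     # vertical acceleration
                     tau / Ixx])                  # angular acceleration

def simulate(s0, controller, steps=5000):
    s, hist = np.asarray(s0, float), []
    for _ in range(steps):
        T, tau = controller(s)
        s = s + dt * deriv(s, T, tau)             # forward-Euler integration
        hist.append(s.copy())
    return np.array(hist)

# Hover from a small initial tilt: thrust cancels gravity, PD damps roll.
hist = simulate([0, 0, 0.1, 0, 0, 0],
                lambda s: (m * g / np.cos(s[2]), -0.2 * s[2] - 0.02 * s[5]))
print(hist[-1][2])   # roll angle settles toward zero
```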
Establishing a disruptive new capability for NASA to fly UAVs into hazardous conditions
Jay Ely, Truong Nguyen, Jennifer Wilson, et al.
A 2015 NASA Aeronautics Mission "Seedling" proposal is described for a Severe-Environment UAV (SE-UAV) that can perform in-situ measurements in hazardous atmospheric conditions such as lightning, volcanic ash, and radiation. Specifically, this paper describes the design of a proof-of-concept vehicle and measurement system that can survive lightning attachment during flight operations into thunderstorms. Elements from three NASA centers come together in the SE-UAV concept. 1) The NASA KSC Genesis UAV was developed in collaboration with the DARPA Nimbus program to measure the electric field and X-rays present within thunderstorms. 2) A novel NASA LaRC fiber-optic sensor uses Faraday-effect polarization rotation to measure total lightning electric current on an air vehicle fuselage. 3) NASA AFRC's state-of-the-art Fiber Optics and Systems Integration Laboratory is envisioned to transition the Faraday system to a compact, light-weight, all-fiber design. The SE-UAV will provide in-flight lightning electric-current return-stroke and recoil-leader data, and serve as a platform for the development of emerging sensors and new missions into hazardous environments. NASA's Aeronautics and Science Missions are interested in a capability to perform in-situ volcanic plume measurements and long-endurance UAV operations in various weather conditions. (Figure 1 shows an artist's concept of an SE-UAV flying near a volcano.) This paper concludes with an overview of the NASA Aeronautics Strategic Vision and Programs, and how an SE-UAV is envisioned to impact them. The SE-UAV concept leverages high-value legacy research products into a new capability for NASA to fly a pathfinder UAV into hazardous conditions, and is presented in the SPIE DSS venue to explore teaming, collaboration, and advocacy opportunities outside NASA.