Proceedings Volume 9084

Unmanned Systems Technology XVI

Robert E. Karlsen, Douglas W. Gage, Charles M. Shoemaker, et al.
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 24 June 2014
Contents: 6 Sessions, 36 Papers, 0 Presentations
Conference: SPIE Defense + Security 2014
Volume Number: 9084

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9084
  • Special Topics
  • RCTA
  • Mobility and Navigation
  • Perception
  • Poster Session I
Front Matter: Volume 9084
Front Matter: Volume 9084
This PDF file contains the front matter associated with SPIE Proceedings Volume 9084, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Special Topics
Neurobiomimetic constructs for intelligent unmanned systems and robotics
Jerome J. Braun, Danelle C. Shah, Marianne A. DeAngelus
This paper discusses a paradigm we refer to as neurobiomimetic, which involves emulations of aspects and processes of brain neuroanatomy and neurobiology. Neurobiomimetic constructs include rudimentary, down-scaled computational representations of brain regions, sub-regions, and synaptic connectivity. Many different instances of neurobiomimetic constructs are possible, depending on aspects such as the initial conditions of synaptic connectivity, the number of neuron elements in regions, and connectivity specifics; we refer to these instances as ‘animats’. While downscaled for computational feasibility, the animats are very large constructs; those implemented in this work contain over 47,000 neuron elements and over 720,000 synaptic connections. The paper outlines aspects of the implemented animats, a spatial memory and learning cognitive task, the virtual-reality environment constructed to study an animat performing that task, and a discussion of results. In a broad sense, we argue that the neurobiomimetic paradigm pursued in this work constitutes a particularly promising path to artificial cognition and intelligent unmanned systems. Biological brains readily cope with the challenges of real-life tasks that consistently prove beyond even the most sophisticated algorithmic approaches known. At the crossover of neuroscience, cognitive science, and computer science, paradigms such as the one pursued in this work aim to mimic the mechanisms of biological brains and, we argue, may lead to machines with abilities closer to those of biological species.
Intermittent communications modeling and simulation for autonomous unmanned maritime vehicles using an integrated APM and FSMC framework
Ayodeji Coker, Logan Straatemeier, Ted Rogers, et al.
In this work a framework is presented for addressing the issue of intermittent communications faced by autonomous unmanned maritime vehicles operating at sea. In particular, this work considers the subject of predictive atmospheric signal transmission over multi-path fading channels in maritime environments. A Finite State Markov Channel is used to represent a Nakagami-m modeled physical fading radio channel. The range of the received signal-to-noise ratio is partitioned into a finite number of intervals which represent application-specific communications states. The Advanced Propagation Model (APM), developed at the Space and Naval Warfare Systems Center San Diego, provides a characterization of the transmission channel in terms of evaporation duct induced signal propagation loss. APM uses a hybrid ray-optic and parabolic equations model which allows for the computation of electromagnetic (EM) wave propagation over various sea and/or terrain paths. These models which have been integrated in the proposed framework provide a strategic and mission planning aid for the operation of maritime unmanned vehicles at sea.
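As a rough illustration of the abstract's core construction — partitioning the received SNR range into a finite number of FSMC states under Nakagami-m fading — the sketch below computes the steady-state probability of each state. It relies only on the standard fact that instantaneous SNR under Nakagami-m fading is gamma-distributed (shape m, scale avg_snr/m); the function names, the restriction to integer m, and the example thresholds are illustrative, not from the paper.

```python
import math

def gamma_cdf(x, m, scale):
    """Regularized lower incomplete gamma CDF for integer shape m:
    P(m, z) = 1 - e^{-z} * sum_{i=0}^{m-1} z^i / i!  with z = x/scale."""
    if x <= 0:
        return 0.0
    z = x / scale
    s = sum(z ** i / math.factorial(i) for i in range(m))
    return 1.0 - math.exp(-z) * s

def fsmc_state_probs(thresholds, m, avg_snr):
    """Steady-state probability of each SNR interval (FSMC state)
    under Nakagami-m fading with integer m: instantaneous SNR is
    gamma-distributed with shape m and scale avg_snr / m."""
    scale = avg_snr / m
    edges = [0.0] + list(thresholds) + [float("inf")]
    probs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        hi_cdf = 1.0 if hi == float("inf") else gamma_cdf(hi, m, scale)
        probs.append(hi_cdf - gamma_cdf(lo, m, scale))
    return probs
```

With m = 1 this reduces to Rayleigh fading, where the probability of falling below the mean SNR is 1 - e^{-1}; the paper's actual thresholds would come from its application-specific communications states.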
Automating software design and configuration for a small spacecraft
The Open Prototype for Educational NanoSats (OPEN) is a framework for the development of low-cost spacecraft. It will allow users to build a 1-U (10 cm x 10 cm x 11 cm, 1.33 kg) CubeSat-class spacecraft with a parts budget of approximately $5,000. Work is underway to develop software to assist users in configuring the spacecraft and validating its compliance with integration and launch standards. Each prospective configuration requires a unique software configuration, combining pre-built modules for controlling base components, custom control software for custom developed and payload components and overall mission management and control software (which, itself will be a combination of standard components and mission specific control logic). This paper presents a system for automating standard component configuration and creating templates to facilitate the creation and integration of components that must be (or which the developer desires to be) custom-developed for the particular mission or spacecraft.
Modeling and simulation of an unmanned ground vehicle power system
John Broderick, Jack Hartner, Dawn M. Tilbury, et al.
Long-duration missions challenge ground robot systems with respect to energy storage and efficient conversion to power on demand. Ground robot systems can contain multiple power sources such as fuel cell, battery and/or ultra-capacitor. This paper presents a hybrid systems framework for collectively modeling the dynamics and switching between these different power components. The hybrid system allows modeling power source on/off switching and different regimes of operation, together with continuous parameters such as state of charge, temperature, and power output. We apply this modeling framework to a fuel cell/battery power system applicable to unmanned ground vehicles such as Packbot or TALON. A simulation comparison of different control strategies is presented. These strategies are compared based on maximizing energy efficiency and meeting thermal constraints.
Autonomous self-righting using recursive Bayesian estimation to determine unknown ground angles
Jason Collins, Chad Kessens
As robots are deployed to dynamic, uncertain environments, their ability to discern key aspects of their environment and recover from errors becomes paramount. In particular, tip-over events can potentially end or substantially disrupt mission performance and jeopardize asset recovery. To facilitate recovery from tip-over events (i.e. self-righting), the robot should be able to discern the ground angle on which it lies even when it is not in its preferred upright orientation. In this paper, we present a methodology for determining unknown ground angles using recursive Bayesian estimation. First, we briefly review our previous framework for autonomous self-righting, which we use to generate conformation space maps correlating stable robot configurations and orientations on various ground angles. Using these maps, we compare sensor orientation to predicted orientation for the robot configuration on all mapped ground angles. We then compute the best fit ground angle and assign it a confidence level based on filters such as predicted stability margin and measured rate of orientation change. We compare ground angle prediction error as a function of time using a variety of methods, and show a sensitivity analysis comparing accuracy as a function of the discretization of the ground angle dimension of the conformation space map. Finally, we demonstrate a physical robot’s ability to self-right on unknown ground using this methodology.
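The recursive Bayesian update the abstract describes — comparing the sensed orientation against the orientation predicted for each candidate ground angle — can be sketched as a discrete Bayes filter. Everything here is a simplification: the scalar orientation, the Gaussian likelihood, and all names are illustrative stand-ins for the paper's conformation-space-map machinery.

```python
import math

def bayes_update(prior, predicted, measured, sigma):
    """One recursive Bayesian update over a discretized set of candidate
    ground angles. prior[i] is the current belief in angle i, predicted[i]
    the robot orientation expected on angle i (from a conformation-space
    map), measured the sensed orientation, and sigma the sensor noise."""
    posterior = [p * math.exp(-0.5 * ((measured - q) / sigma) ** 2)
                 for p, q in zip(prior, predicted)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

Repeated over incoming orientation samples, the belief concentrates on the ground angle whose predicted orientation best matches the measurements; the paper additionally gates the estimate with filters such as predicted stability margin.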
Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal
David Erickson, Hervé Lacheray, Gilbert Lai, et al.
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.
Speech and gesture interfaces for squad-level human-robot teaming
Jonathan Harris, Daniel Barber
As the military increasingly adopts semi-autonomous unmanned systems for military operations, utilizing redundant and intuitive interfaces for communication between Soldiers and robots is vital to mission success. Currently, Soldiers use a common lexicon to verbally and visually communicate maneuvers between teammates. In order for robots to be seamlessly integrated within mixed-initiative teams, they must be able to understand this lexicon. Recent innovations in gaming platforms have led to advancements in speech and gesture recognition technologies, but the reliability of these technologies for enabling communication in human robot teaming is unclear. The purpose for the present study is to investigate the performance of Commercial-Off-The-Shelf (COTS) speech and gesture recognition tools in classifying a Squad Level Vocabulary (SLV) for a spatial navigation reconnaissance and surveillance task. The SLV for this study was based on findings from a survey conducted with Soldiers at Fort Benning, GA. The items of the survey focused on the communication between the Soldier and the robot, specifically in regards to verbally instructing them to execute reconnaissance and surveillance tasks. Resulting commands, identified from the survey, were then converted to equivalent arm and hand gestures, leveraging existing visual signals (e.g. U.S. Army Field Manual for Visual Signaling). A study was then run to test the ability of commercially available automated speech recognition technologies and a gesture recognition glove to classify these commands in a simulated intelligence, surveillance, and reconnaissance task. This paper presents classification accuracy of these devices for both speech and gesture modalities independently.
New generation of human machine interfaces for controlling UAV through depth-based gesture recognition
Tomás Mantecón, Carlos Roberto del Blanco, Fernando Jaureguizar, et al.
New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human-machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control; a tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures developed specifically to validate the proposed system.
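To make the LBP building block concrete: a classic 8-neighbour LBP code thresholds each neighbour of a pixel against the centre value and packs the results into a byte. The sketch below applies that idea to a 3x3 depth patch; it is a generic LBP, not the paper's spatio-temporal depth variant, and the neighbour ordering is an arbitrary choice.

```python
def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre of a 3x3
    depth patch: each neighbour at least as deep as the centre
    contributes a 1 bit. Histograms of such codes over a hand region
    could then feed a classifier such as an SVM."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```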
RCTA
Supporting task-oriented collaboration in human-robot teams using semantic-based path planning
Daqing Yi, Michael A. Goodrich
Improvements in robot autonomy are changing the human-robot interaction from low-level manipulation to high-level task-based collaboration. For a task-oriented collaboration, a human assigns sub-tasks to robot team members. In this paper, we consider task-oriented collaboration of humans and robots in a cordon and search problem. We focus on a path-planning framework with natural language input. By the semantic elements in a shared mental model, a natural language command can be converted into optimization objectives. We import multi-objective optimization to facilitate modeling the “adverb” elements in natural language commands. Finally, human interactions are involved in the optimization search process in order to guarantee that the found solution correctly reflects the human’s intent.
Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems
Scott Ososky, Tracy Sanders, Florian Jentsch, et al.
Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even less than 100% reliable systems can provide a significant benefit to humans, but this benefit will depend on a human operator’s ability to understand a robot’s behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design, for maintaining humans’ trust in and reliance on increasingly automated platforms. System transparency is described as the degree to which a system’s action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence humans’ impressions of robots, determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human–robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human–robot communication and human mental models regarding robots impact a human’s ability to recognize the actions or states of an automated system. Furthermore, we will discuss the implications of system transparency on other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.
An interdisciplinary taxonomy of social cues and signals in the service of engineering robotic social intelligence
Travis J. Wiltshire, Emilio J. Lobato, Jonathan Velez, et al.
Understanding intentions is a complex social-cognitive task for humans, let alone machines. In this paper we discuss how the developing field of Social Signal Processing, and assessing social cues to interpret social signals, may help to develop a foundation for robotic social intelligence. We describe a taxonomy to further R&D in HRI and facilitate natural interactions between humans and robots. This is based upon an interdisciplinary framework developed to integrate: (1) the sensors used for detecting social cues, (2) the parameters for differentiating and classifying differing levels of those cues, and (3) how sets of social cues indicate specific social signals. This is necessarily an iterative process, as technologies improve and social science researchers better understand the complex interactions of vast quantities of social cue combinations. As such, the goal of this paper is to advance a taxonomy of this nature to further stimulate interdisciplinary collaboration in the development of advanced social intelligence that mutually informs areas of robotic perception and intelligence.
Validation and verification of a high-fidelity computational model for a bounding robot's parallel actuated elastic spine
Jason L. Pusey, Jin-Hyeong Yoo
We document the design and preliminary numerical simulation study of a high fidelity model of Canid, a recently introduced bounding robot. Canid is a free-standing, power-autonomous quadrupedal machine constructed from standard commercially available electromechanical and structural elements, incorporating compliant C-shaped legs like those of the decade old RHex design, but departing from that standard (and, to the best of our knowledge, from any prior) robot platform in its parallel actuated elastic spine. We have used a commercial modeling package to develop a finite-element model of the actuated, cable-driven, rigid-plate-reinforced harness for the carbon-fiber spring that joins the robot’s fore- and hind-quarters. We compare a numerical model of this parallel actuated elastic spine with empirical data from preliminary physical experiments with the most important component of the spine assembly: the composite leaf spring. Specifically, we report our progress in tuning the mechanical properties of a standard modal approximation to a conventional compliant beam model whose boundary conditions represent constraints imposed by the actuated cable driven vertebral plates that comprise the active control affordance over the spine. We conclude with a brief look ahead at near-term future experiments that will compare predictions of this fitted composite spring model with data taken from the physical spine flexed in isolation from the actuated harness.
Temporally consistent segmentation of point clouds
Jason L. Owens, Philip R. Osteen, Kostas Daniilidis
We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.
Common world model for unmanned systems: Phase 2
Robert Michael S. Dean, Jean Oh, Jerry Vinokurov
The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using semantic and symbolic as well as metric information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines to address Symbol Grounding and Uncertainty. The Common World Model must understand how these objects relate to each other. It includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how entities in the environment behave which enable prediction of future world states. To manage complexity, we have adopted a phased implementation approach. Phase 1, published in these proceedings in 2013 [1], presented the approach for linking metric with symbolic information and interfaces for traditional planners and cognitive reasoning. Here we discuss the design of “Phase 2” of this world model, which extends the Phase 1 design API, data structures, and reviews the use of the Common World Model as part of a semantic navigation use case.
Integration and demonstration of MEMS-scanned LADAR for robotic navigation
Barry L. Stann, John F. Dammann Jr., Mark Del Giorno, et al.
LADAR is among the pre-eminent sensor modalities for autonomous vehicle navigation. Size, weight, power and cost constraints impose significant practical limitations on perception systems intended for small ground robots. In recent years, the Army Research Laboratory (ARL) developed a LADAR architecture based on a MEMS mirror scanner that fundamentally improves the trade-offs between these limitations and sensor capability. We describe how the characteristics of a highly developed prototype correspond to and satisfy the requirements of autonomous navigation and the experimental scenarios of the ARL Robotics Collaborative Technology Alliance (RCTA) program. In particular, the long maximum and short minimum range capability of the ARL MEMS LADAR makes it remarkably suitable for a wide variety of scenarios from building mapping to the manipulation of objects at close range, including dexterous manipulation with robotic arms. A prototype system was applied to a small (approximately 50 kg) unmanned robotic vehicle as the primary mobility perception sensor. We present the results of a field test where the perception information supplied by the LADAR system successfully accomplished the experimental objectives of an Integrated Research Assessment (IRA).
Head-orientation for a sidewinding snake robot using modal decomposition
E. A. Cappo, M. Travers, H. Choset
Biological snakes exhibit a natural ability to decouple locomotion from the perceptual task of aligning their heads in a particular direction. This same multi-tasking problem is nontrivial for snake robots. A snake robot can mimic the locomotion of their biological counterparts through use of analytic gait expressions, but to orient its head the robot must solve an inverse kinematics problem at every time step. In this work we use modal decomposition to modify a snake robot’s sidewinding gait to orient the head while locomoting. This avoids the problem of determining online inverse kinematic solutions which can be computationally costly. We use knowledge of the robot’s ground contact points to vary the number of joints used for head reorientation to minimally impact locomotion stability. We further show that the resulting expression can be used as a controller to improve real-time target tracking.
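The "analytic gait expressions" the abstract contrasts with inverse kinematics are commonly parameterized as phase-shifted sine waves on alternating joint axes. The sketch below shows one such sidewinding parameterization; the amplitudes, frequencies, and phase offset are arbitrary illustrative values, not the paper's tuned gait, and the modal-decomposition head-orientation machinery is not reproduced here.

```python
import math

def sidewinding_gait(n_joints, t, a_even=0.5, a_odd=0.5,
                     omega=1.0, nu=0.8, phase=math.pi / 4):
    """Analytic sidewinding gait: even (lateral) and odd (dorsal)
    joints follow sine waves in time (frequency omega) and along the
    body (spatial frequency nu), offset from each other by a fixed
    phase. Returns one commanded angle per joint at time t."""
    angles = []
    for n in range(n_joints):
        if n % 2 == 0:
            angles.append(a_even * math.sin(omega * t + nu * n))
        else:
            angles.append(a_odd * math.sin(omega * t + nu * n + phase))
    return angles
```

The paper's contribution is to perturb such a gait, via modal decomposition, so that the head joints track a desired orientation without recomputing inverse kinematics at every time step.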
Tip-over prevention through heuristic reactive behaviors for unmanned ground vehicles
Kurt Talke, Leah Kelley, Patrick Longhini, et al.
Skid-steer teleoperated robots are commonly used by military and civilian crews to perform high-risk, dangerous, and critical tasks such as bomb disposal. Their missions are often performed in unstructured environments with irregular terrain, such as inside collapsed buildings or on rough ground covered with a variety of media, such as sand, brush, mud, rocks, and debris. During such missions, it is often impractical if not impossible to send another robot or a human operator to right a toppled robot; as a consequence, a robot tip-over event usually results in mission failure. To make matters more complicated, such robots are often equipped with heavy payloads that raise their centers of mass and hence increase their instability. Should the robot be equipped with a manipulator arm or flippers, it may have a way to self-right. Most manipulator arms are not designed for self-righting, however, and are likely to be damaged during such procedures, which typically have a low success rate; robots without manipulator arms or flippers have no self-righting capability at all. Additionally, because of the on-board camera's frame of reference, the video feed may make the robot appear to be on level ground when it is actually on a slope nearing tip-over. Finally, robot operators are often so focused on the mission at hand that they are oblivious to their surroundings, much like a kid playing a video game. While this may not be an issue in the living room, it is not a good scenario to experience on the battlefield. Our research seeks to remove tip-over monitoring from the already large list of tasks an operator must perform. An autonomous tip-over prevention behavior for a mobile robot with a static payload has been developed, implemented, and experimentally validated on two different teleoperated robotic platforms. Suitable for use with both teleoperated and autonomous robots, the prevention behavior uses the previously validated force-angle stability measure to predict the likelihood of robot tip-over and trigger prevention behaviors. A unique heuristic approach to tip-over avoidance was investigated, wherein a set of evasive maneuvers that an expert teleoperator might take is activated when the tip-over-likelihood estimate passes a critical threshold. This control approach was validated on an iRobot Packbot as well as on a Segway RMP 440. The heuristic laws alerted operators to impending tip-over scenarios, giving them more time to correct the situation, and enabled recovery to be initiated automatically on the fly. This research shows promise in preventing dangerous scenarios that could damage a robot and/or compromise its mission, thus saving lives. It further provides a good foundation for follow-on development involving the expansion and integration of the prevention-control algorithms to include movable payloads, environment manipulation, 2D or 3D look-ahead laser sensing and mapping, and adaptive path planning.
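The threshold-trigger idea can be illustrated with a deliberately reduced 2-D stability margin: on a planar slope, the robot tips when the gravity vector through its center of mass leaves the support polygon. This toy model (planar reduction, function and parameter names all mine) is only a stand-in for the full force-angle stability measure the paper uses, which also accounts for dynamic forces.

```python
import math

def tipover_margin_deg(slope_deg, half_width_m, com_height_m):
    """Degrees of additional roll available before the gravity vector
    through the center of mass crosses the edge of the support polygon
    (2-D static approximation). A prevention behavior would trigger
    evasive maneuvers when this margin falls below a safety threshold."""
    critical = math.degrees(math.atan2(half_width_m, com_height_m))
    return critical - slope_deg
```

The model captures the abstract's key point about payloads: raising the center of mass shrinks the margin at any given slope.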
Mobility and Navigation
Object guided autonomous exploration for mobile robots in indoor environments
Carlos Nieto-Granda, Siddarth Choudhary, John G. Rogers III, et al.
Autonomous mobile robotic teams are increasingly used in the exploration of indoor environments. Accurately modeling the world around the robot and describing the robot's interaction with it greatly increases the robot's ability to act autonomously. This paper demonstrates the ability of autonomous robotic teams to find objects of interest. A novel feature of our approach is object discovery and its use to augment the mapping and navigation process. The generated map can then be decomposed into semantic regions while also considering the distance and line of sight to anchor points. The advantage of this approach is that the robot can return a dense map of the region around an object of interest. The robustness of this approach is demonstrated in indoor environments with multiple platforms, with the objective of discovering objects of interest.
Development and evaluation of the Stingray, an amphibious maritime interdiction operations unmanned ground vehicle
Hoa G. Nguyen, Robin Castelli
The U.S. Navy and Marine Corps conduct thousands of Maritime Interdiction Operations (MIOs) every year around the globe. Navy Visit, Board, Search, and Seizure (VBSS) teams regularly board suspect ships and perform search operations, often in hostile environments. There is a need for a small tactical robot that can be deployed ahead of the team to provide enhanced situational awareness in these boarding, breaching, and clearing operations. In 2011, the Space and Naval Warfare Systems Center Pacific conducted user evaluations on a number of small throwable robots and sensors, verified the requirements, and developed the key performance parameters (KPPs) for an MIO robot. Macro USA Corporation was then tasked to design and develop two prototype systems, each consisting of one control/display unit and two small amphibious Stingray robots. Technical challenges included the combination paddle wheel/shock-absorbing wheel, the tradeoff between impact resistance, size, and buoyancy, and achieving adequate traction on wet surfaces. This paper describes the technical design of these robots and the results of subsequent user evaluations by VBSS teams.
Micro air vehicle autonomous obstacle avoidance from stereo-vision
Roland Brockers, Yoshiaki Kuwata, Stephan Weiss, et al.
We introduce a new approach for on-board autonomous obstacle avoidance for micro air vehicles flying outdoors in close proximity to structure. Our approach uses inverse-range, polar-perspective stereo-disparity maps for obstacle detection and representation, and deploys a closed-loop RRT planner that considers flight dynamics for trajectory generation. While motion planning is executed in 3D space, we reduce collision checking to a fast z-buffer-like operation in disparity space, which allows for a significant speed-up compared to full 3D methods. Evaluations in simulation illustrate the robustness of our approach, while real-world flights under tree canopy demonstrate its potential.
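The disparity-space collision check rests on the standard stereo relation d = f·B / Z: nearer structure has larger disparity, so a per-pixel comparison replaces a 3D distance query. The sketch below shows only that core comparison (names and the exact inequality convention are illustrative simplifications; the paper's representation is inverse-range and polar-perspective, and it additionally inflates obstacles for vehicle size).

```python
def depth_to_disparity(z_m, focal_px, baseline_m):
    """Stereo relation d = f * B / Z linking metric depth to disparity
    (pixels), for focal length f in pixels and baseline B in meters."""
    return focal_px * baseline_m / z_m

def point_unsafe(point_disp, map_disp):
    """Z-buffer-like check: a trajectory point whose disparity is no
    larger than the map's stored disparity at that pixel lies at or
    behind observed structure, so the candidate trajectory is rejected."""
    return point_disp <= map_disp
```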
Assisted autonomy of articulated snake robots
Our lab has developed new capabilities for snake robots that allow them to successfully navigate networks of pipes. Recent developments in the control and state estimation of snake robots have enabled these capabilities. The development of a gait-based compliant controller enables us to develop more complex motions while at the same time simplifying the controls for the operator. Additionally, new state estimation techniques that exploit the robot's redundant sensing allow accurate estimation of the robot's orientation and kinematic configuration, even when significant amounts of sensor feedback are missing or corrupted.
Counter tunnel exploration, mapping, and localization with an unmanned ground vehicle
Jacoby Larson, Brian Okorn, Tracy Pastore, et al.
Covert, cross-border tunnels are a security vulnerability that enables people and contraband to illegally enter the United States. All of these tunnels to date have been constructed for the purpose of drug smuggling, but they may also be used to support terrorist activity. Past robotic tunnel exploration efforts have had limited success in aiding law enforcement to explore and map suspect cross-border tunnels. These efforts have made use of adapted explosive ordnance disposal (EOD) or pipe inspection robotic systems that are not ideally suited to the cross-border tunnel environment. The Counter Tunnel project was sponsored by the Office of the Secretary of Defense (OSD) Joint Ground Robotics Enterprise (JGRE) to develop a prototype robotic system for counter-tunnel operations, focusing on exploration, mapping, and characterization of tunnels. The purpose of this system is to provide a safe and effective solution for three-dimensional (3D) localization, mapping, and characterization of a tunnel environment. The system is composed of the robotic mobility platform, the mapping sensor payload, and the delivery apparatus. The system is able to deploy and retrieve the robotic mobility platform through a 20-cm-diameter borehole into the tunnel. This requirement posed many challenges in designing and packaging the sensor and robotic system to fit through this narrow opening and still perform the mission. This paper provides a short description of a few aspects of the Counter Tunnel system, such as mobility, perception, and localization, which were developed to meet the unique challenges of accessing, exploring, and mapping tunnel environments.
On the consistency analysis of A-SLAM for UAV navigation
Simultaneous Localization and Mapping (SLAM) is a good choice for UAV navigation when neither the UAV's position nor a map of the region is known. Because the kinematic equations of a UAV are nonlinear, the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are employed. In this study, EKF- and UKF-based A-SLAM concepts are discussed in detail through formulations and simulation results. The UAV kinematic model and the state-observation models for the EKF- and UKF-based A-SLAM methods are developed to analyze the filters' consistency. Analysis during landmark observation exhibits an inconsistency in the form of a jagged UAV trajectory. Unobservable subspaces and the Jacobian matrices used for linearization are found to be two major sources of the observed inconsistencies. The UKF performs better in terms of filter consistency since it does not rely on Jacobian-based linearization.
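As background for the EKF-versus-UKF comparison above (an editorial sketch, not the paper's code): the UKF avoids the Jacobian linearization that the abstract identifies as an inconsistency source by propagating sigma points through the nonlinear model directly. The unicycle kinematic model, parameter values, and variable names below are illustrative assumptions.

```python
import numpy as np

def f(state, v=1.0, omega=0.1, dt=0.1):
    """Illustrative planar UAV kinematic step (unicycle model)."""
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + omega * dt])

def unscented_transform(mean, cov, fn, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through fn with sigma points,
    instead of the Jacobian linearization an EKF would use."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # columns are sigma offsets
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    ys = np.array([fn(s) for s in sigmas])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

mean = np.array([0.0, 0.0, 0.5])
cov = np.diag([0.01, 0.01, 0.005])
ut_mean, ut_cov = unscented_transform(mean, cov, f)
```

The propagated covariance remains symmetric positive definite here, whereas an EKF would approximate it through the model's Jacobian evaluated at the mean.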
Perception
icon_mobile_dropdown
Infrared stereo calibration for unmanned ground vehicle navigation
The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
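The abstract above reports calibration quality via reprojection error. As a concrete illustration (a minimal sketch, not the authors' pipeline), the metric is the RMS pixel distance between detected pattern corners and the corners reprojected through the calibrated camera model; the zero-skew pinhole model, board geometry, and intrinsics below are hypothetical.

```python
import numpy as np

def project_points(X, K, R, t):
    """Project Nx3 world points through a zero-skew pinhole model (no distortion)."""
    Xc = X @ R.T + t                     # world -> camera frame
    uv = Xc[:, :2] / Xc[:, 2:3]          # perspective divide
    return uv * np.diag(K)[:2] + K[:2, 2]

def rms_reprojection_error(X, detected, K, R, t):
    """RMS pixel distance between projected and detected calibration corners."""
    err = project_points(X, K, R, t) - detected
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

# Hypothetical 6x5 board (3 cm squares) and intrinsics, to exercise the metric.
board = np.array([[i * 0.03, j * 0.03, 0.0] for i in range(6) for j in range(5)])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])
ideal = project_points(board, K, R, t)
```

Detected corners that match the projections exactly give zero error; a uniform half-pixel shift in both axes gives an RMS of sqrt(0.5) pixels.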
A robust method for online stereo camera self-calibration in unmanned vehicle system
Yu Zhao, Nobuhiro Chihara, Tao Guo, et al.
Self-calibration is a fundamental technology for estimating the relative pose of the cameras used for environment recognition in unmanned systems. We focus on the loss of recognition accuracy caused by platform vibration and pursue on-line self-calibration based on feature-point registration and robust estimation of the fundamental matrix. Three key factors need improvement. First, feature mismatching degrades the estimation accuracy of the relative pose. Second, conventional estimation methods cannot achieve both estimation speed and calibration accuracy at the same time. Third, some intrinsic system noises also contribute greatly to deviation of the estimation results. To improve calibration accuracy, estimation speed, and system robustness for practical implementation, we analyze the algorithms and make improvements to the stereo camera system to achieve on-line self-calibration. Based on epipolar geometry and 3D image parallax, two geometric constraints are proposed that confine the search for corresponding feature points to a small range, improving matching accuracy and search speed. Two conventional estimation algorithms are then analyzed and evaluated for estimation accuracy and robustness. Finally, a rigorous pose calculation method is proposed that accounts for the relative pose deviation of each separate part of the stereo camera system. Validation experiments were performed with the stereo camera mounted on a pan-tilt unit for accurate rotation control, and the evaluation shows that our proposed on-line self-calibration method is fast, highly accurate, and robust. Thus, as the main contribution, we propose methods that perform on-line self-calibration quickly and accurately, and we envision practical implementation on unmanned systems as well as other environment recognition systems.
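As context for the fundamental-matrix estimation discussed above (an editorial sketch, not the authors' robust method): a classic baseline is the normalized 8-point algorithm, which solves the epipolar constraint x2ᵀ F x1 = 0 linearly and enforces rank 2. The synthetic intrinsics and camera motion below are invented for the demonstration.

```python
import numpy as np

def _normalize(pts):
    """Hartley normalization: zero mean, mean distance sqrt(2) from the origin."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]], [0.0, s, -s * c[1]], [0.0, 0.0, 1.0]])
    return (pts - c) * s, T

def eight_point(x1, x2):
    """Fundamental matrix from >= 8 correspondences (normalized 8-point)."""
    n1, T1 = _normalize(x1)
    n2, T2 = _normalize(x2)
    A = np.column_stack([n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
                         n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
                         n1[:, 0], n1[:, 1], np.ones(len(n1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)           # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                     # undo normalization
    return F / np.linalg.norm(F)

# Synthetic two-view check with hypothetical intrinsics and motion.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30),
                     rng.uniform(3, 5, 30)])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)], [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.2, 0.0, 0.05])

def project(P, R, t):
    h = (P @ R.T + t) @ K.T
    return h[:, :2] / h[:, 2:3]

x1 = project(X, np.eye(3), np.zeros(3))
x2 = project(X, R, t)
F = eight_point(x1, x2)
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])
residuals = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))
```

With noiseless synthetic correspondences the algebraic epipolar residuals are essentially zero; robust estimators such as those the paper evaluates wrap a solver like this in outlier rejection.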
Investigating clutter reduction for unmanned systems applications using imaging polarimetry
Jonathan B. Hanks, Todd M. Aycock, David B. Chenault
The proliferation of unmanned systems in recent years has sparked increased interest in multiple areas of research for on-board image processing including autonomous navigation, surveillance, detection, and tracking to name a few. For these applications, techniques for reducing scene clutter provide an increased level of robustness for autonomous systems and reduced operator burden for tele-operated systems. Because imaging polarimetry frequently provides complementary information to the standard radiometric image, it is anticipated that this technology is well suited to provide a significant reduction in scene clutter. In this paper, the authors investigate the use of imaging polarimetry under a number of representative scenarios to assess the utility of this technology for unmanned system applications.
Absolute localization of ground robots by matching LiDAR and image data in dense forested environments
Marwan Hussein, Matthew Renner, Karl Iagnemma
A method for the autonomous geolocation of ground vehicles in forest environments is discussed. The method provides an estimate of the global horizontal position of a vehicle strictly based on finding a geometric match between a map of observed tree stems, scanned in 3D by Light Detection and Ranging (LiDAR) sensors onboard the vehicle, and another stem map generated from the structure of tree crowns analyzed from high-resolution aerial orthoimagery of the forest canopy. Extraction of stems from 3D data is achieved by using Support Vector Machine (SVM) classifiers and height-above-ground filters that separate ground points from vertical stem features. Identification of stems from overhead imagery is achieved by finding the centroids of tree crowns extracted using a watershed segmentation algorithm. Matching of the two maps is achieved by using a robust Iterative Closest Point (ICP) algorithm that determines the rotation and translation vectors to align the datasets. The alignment is used to calculate the absolute horizontal location of the vehicle. The method has been tested with real-world data and has been able to estimate vehicle geoposition with an average error of less than 2 m. It is noted that the algorithm's accuracy is currently limited by the accuracy and resolution of the aerial orthoimagery used. The method can be used in real time as a complement to the Global Positioning System (GPS) in areas where signal coverage is inadequate due to attenuation by the forest canopy, or due to intentionally denied access. The method has two significant properties: i) it does not require a priori knowledge of the area surrounding the robot, and ii) it uses the geometry of detected tree stems as the only input to determine horizontal geoposition.
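To make the map-matching step above concrete (a minimal sketch, not the paper's robust ICP variant): vanilla point-to-point ICP alternates nearest-neighbour matching with a closed-form Kabsch/Procrustes update for the rotation and translation. The 2D grid "stem maps" and transform below are invented for the demonstration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ~ src @ R.T + t (Kabsch / Procrustes)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T    # keep a proper rotation (det = +1)
    return R, cd - R @ cs

def icp_2d(src, dst, iters=20):
    """Vanilla point-to-point ICP between two 2D point sets."""
    cur = src.copy()
    R_tot, t_tot = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # Brute-force nearest neighbours; fine for small stem maps.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Hypothetical stem maps: a grid "LiDAR" map and its rotated/shifted "aerial" map.
src = np.array([[i, j] for i in range(0, 10, 2) for j in range(0, 10, 2)], float)
th = 0.03
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
t_true = np.array([0.3, -0.2])
dst = src @ R_true.T + t_true
R_est, t_est = icp_2d(src, dst)
```

The recovered rotation and translation align the onboard map to the aerial map; in the paper's setting that alignment, composed with the orthoimagery's georeference, yields the vehicle's absolute horizontal position.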
Occluded human recognition for a leader following system using 3D range and image data in forest environment
Kuk Cho, Muhammad Ilyas, Seung-Ho Baeg, et al.
This paper describes an occluded-target recognition and tracking method for a leader-following system that fuses 3D range and image data acquired from a 3D light detection and ranging (LIDAR) sensor and a color camera installed on an autonomous vehicle in a forest environment. During 3D data processing, distance-based clustering has an inherent problem with close encounters. In the tracking phase, we divide the object tracking process into three phases based on the occlusion scenario: a before-occlusion (BO) phase, a partial or full occlusion phase, and an after-occlusion (AO) phase. To improve data association performance, we use the camera's rich information to find correspondences among objects during these three phases. We solve the correspondence problem using the color features of human targets with the sum of squared differences (SSD) and the normalized cross-correlation (NCC), computed over windows derived from Harris corner detection. Experimental results for leader following on an autonomous vehicle equipped with LIDAR and a camera demonstrate improved data association in a multiple-object tracking system.
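For readers unfamiliar with the two similarity measures named above, here is a minimal numpy sketch (not the authors' implementation) of SSD and NCC over image patches; the toy patch is invented. NCC's invariance to gain and offset is what makes it useful for re-identifying the leader across lighting changes.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two patches (lower = more similar)."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    return float(((a - b) ** 2).sum())

def ncc(a, b):
    """Normalized cross-correlation in [-1, 1] (higher = more similar);
    invariant to gain and offset changes in brightness."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

patch = np.arange(16.0).reshape(4, 4)   # a toy 4x4 "image" patch
brighter = 2.0 * patch + 10.0           # same patch under a lighting change
```

SSD flags the brightened patch as dissimilar, while NCC still scores it as a perfect match, which is why the two measures complement each other in data association.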
Poster Session I
icon_mobile_dropdown
A practical approach to considering uncertainties in the creation of autonomous behaviors in unmanned surface vehicles
Zi Jing Bay, Chor Wei Yew, Kwok Wai Yue, et al.
Unmanned Surface Vehicles (USVs) have been proposed for use in several mission-critical operations in recent years. Although many autonomous behaviors have been developed by the research community, USV systems remain greatly limited in functionality due to imperfect perception information and uncertainties in platform control. This paper presents a practical approach to considering these issues during the creation of autonomous behaviors. The approach allows autonomous behaviors to account for imperfect perception information and uncertainties in control during planning, and has been demonstrated successfully on a 9-meter USV.
HALOS: fast, compact, autonomous adaptive optics for UAVs
We present an adaptive optics system which uses a multiplexed hologram to deconvolve the phase aberrations in an input beam. The wavefront characterization is extremely fast, as it is based on simple measurements of the intensity of focal spots and does not require any complex calculations. Furthermore, the system does not require a computer in the loop and is thus much cheaper, more compact, and more robust as well. A fully functional, closed-loop prototype incorporating a 32-element MEMS mirror has been constructed. The unit has a footprint no larger than a laptop but runs at bandwidths over an order of magnitude faster than comparable conventional systems occupying a significantly larger volume. Additionally, since the sensing is based on parallel, all-optical processing, the speed is independent of actuator number, running at the same bandwidth for one actuator as for a million.
Use of eternal flight unmanned aircraft in military operations
Unmanned Aerial Vehicles (UAVs) designed to use solar energy are gradually becoming more common and more attractive. Today, while fossil fuels are diminishing rapidly, these systems are very promising. Academic research is still being conducted to develop unmanned aerial systems that store energy during the day and use it at night. With such energy management, unmanned aerial systems with eternal flight, or very long loiter periods, become possible. A UAV that can fly for a very long time offers many advantages that cannot be obtained from conventional aircraft or satellites. Such systems can operate like fixed satellites at very low cost on missions that require continuous intelligence. With improved automation, these vehicles could be stationed over an operating area autonomously and grounded easily for maintenance or other necessities. In this article, a literature review is presented on the effect of solar-powered UAVs on the operating area, for use in surveillance and reconnaissance missions.
Roll angle measurement using a polarization scanning reference source
Onboard measurement of attitude, for example roll angle, of autonomous vehicles is critical to the execution of a successful mission. This paper describes a real-time technique which combines a polarization scanning reference source with a priori knowledge of the scanning pattern. Measurements in an anechoic chamber, as well as field tests in a busy parking lot, verify the efficacy of the technique for both line-of-sight and non-line-of-sight operation.
Controlling UCAVs by JTACs in CAS missions
A. Emre Kumaş
By means of evolving technology, the capabilities of Unmanned Aerial Vehicles (UAVs) are increasing rapidly. This development allows UAVs to be used in many different areas. One of these areas is the CAS (Close Air Support) mission. UAVs have several advantages compared to manned aircraft; however, there are also some problematic areas. Remotely controlling these vehicles from thousands of nautical miles away via satellite may lead to various problems in both ethical and tactical respects. Therefore, CAS missions require a good level of ALI (Air-Land Integration), high SA (Situational Awareness), and precision engagement. In fact, unlike other UAV operations, in CAS missions there is an aware friendly element in the target area: an airman called the JTAC (Joint Terminal Attack Controller). Unlike the JTAC, UAV operators are far away from the target area and rely on the limited FOV (Field of View) provided by the camera and some other sensor data. In this study, the target-area situational awareness of a UAV operator and of a JTAC are compared for a mission, such as CAS, that poses high risk to friendly ground forces and civilians. From this comparison, an answer is sought to the question of who should control the UCAV (Unmanned Combat Aerial Vehicle) and in which circumstances. A literature review covering UAVs and CAS is presented, and recent air operations are examined. Control of the UCAV by the JTAC is assessed by SWOT analysis, and the conclusion is reached that both control methods can be used in different situations within the framework of the ROE (Rules of Engagement).
Current and future possibilities of V2V and I2V technologies: an analysis directed toward Augmented Reality systems
Nowadays, it is very important to explore the qualitative characteristics of autonomous mobility systems in automobiles, especially disruptive technologies like Vehicle-to-Vehicle (V2V) and Infrastructure-to-Vehicle (I2V) communication, in order to understand how the next generation of automobiles will be developed. This research presents a general review of active safety in automobiles where V2V and I2V systems have been implemented, identifying the most realistic possibilities for V2V and I2V technology and analyzing current applications, systems under development, and some future conceptual proposals. Notably, combining V2V and I2V systems shows strong potential for increasing the driver's attention; therefore, a configuration coupling these two technologies with augmented reality systems for automobiles (Head-Up Display and Head-Down Display) is proposed. There is huge potential for implementing this kind of configuration once the normative framework and the roadmap for its development are widely established.
A 10 GHz polarization scanning reference source
Amplitude and phase control of two orthogonal linearly polarized RF waves provides a very versatile means of producing a time-varying linear-polarization scanning reference source. Dynamic control of the state of polarization of the radiated EM wave offers unique scan patterns, which lead to robust recovery of attitude angle information for various flying platforms, such as unmanned aerial vehicles. Data taken in an anechoic chamber confirm the efficacy of the technique.