ARIA: the Aerial Robotic Infrastructure Analyst

A project to develop new methods to rapidly model and analyze infrastructure using small, low-flying robots addresses many of the limitations of existing inspection processes.
09 June 2014
Daniel Huber

A recent review of the United States' aging bridges, dams, and other infrastructure found that roughly 25% of the nation's bridges need to be rehabilitated or replaced because they are either structurally deficient or functionally obsolete, and therefore require frequent inspections while they await repair or replacement.1 The 2007 collapse of the I-35W bridge over the Mississippi River, which killed 13 people, injured 145, and cost hundreds of millions of dollars to repair, underscores the consequences of inadequate inspections. Current inspection methods rely on expensive, specialized equipment and are labor-intensive and potentially dangerous. Results are recorded in lists and tables accompanied by hand-drawn sketches or markings on blueprints to indicate problems. Moreover, inspections are subjective, vary from inspector to inspector, and produce representations that are difficult to compare over time.

The Aerial Robotic Infrastructure Analyst (ARIA) project introduces a new concept in infrastructure inspection. Rather than putting inspectors in harm's way, ARIA uses a small, low-flying robot, coupled with 3D imaging and state-of-the-art planning, modeling, and analysis, to provide safe, efficient, and high-precision assessments of critical infrastructure.2

Micro air vehicles (MAVs) have recently begun to be used for infrastructure inspection, mostly relying on imagery or video. However, purely image-based approaches give a narrow ‘soda-straw’ view, and users can become disoriented in complex, unfamiliar environments.3 It can also be challenging to relate a close-up view of a structural element to the overall structure. The ARIA project takes infrastructure inspection to a new level. Rather than just observing, ARIA will actively construct a semantically rich 3D model of the structure (i.e., one composed of objects such as beams, columns, and braces) that will enable new methods of analysis and immersive interaction with the data. The ARIA platform is a custom-designed octo-rotor MAV equipped with a variety of sensors, including a lightweight single-line laser scanner, three video cameras, an inertial measurement unit, GPS (global positioning system), and wireless communication (see Figure 1).


Figure 1. The Aerial Robotic Infrastructure Analyst (ARIA) rapidly models critical infrastructure using small, low-flying robots. (Photo courtesy of Luke Yoder.)

Our vision of ARIA addresses many of the limitations of existing infrastructure inspection processes. Unlike ground-based systems or manual physical inspections that spot-check areas deemed important, ARIA can comprehensively cover all visible surfaces of a structure. The robot can safely fly to high locations and inspect the structure at close range, even over water or other hazards that ground-based sensors cannot reach. Because ARIA creates an integrated 3D model containing all measurements, inspectors can later revisit the data and recheck the results. Whereas evaluations from inspectors with different experience and training can be subjective and inconsistent, ARIA is designed to provide objective inspections that are stable over time. Repeated inspections can then be linked together to analyze the progression of deterioration. ARIA is organized around three core objectives: rapid infrastructure modeling and analysis; immersive, engineer-centered inspection and assessment; and a robotic inspection assistant.

Our first and most challenging objective is to rapidly create accurate 3D models of infrastructure, not just in terms of low-level geometry but also high-level, semantically rich models. Such models enable interactions and analysis techniques that are not possible with point clouds (sets of 3D points) alone. For example, if we can recognize the columns of a bridge, then the inspector can specify, “Give me a visual inspection of each of the columns,” rather than tediously marking each column manually.
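To make this concrete, the following minimal sketch shows how such a query might look against a component-based model. The class and function names are illustrative assumptions for exposition, not ARIA's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One structural element recovered from the point cloud."""
    kind: str                                    # e.g., 'column', 'beam', 'brace'
    points: list = field(default_factory=list)   # the 3D points on this element

@dataclass
class SemanticModel:
    """A structure represented as labeled components, not raw points."""
    components: list

    def select(self, kind):
        """Answer queries such as 'each of the columns' directly."""
        return [c for c in self.components if c.kind == kind]

# Usage: plan a close-range visual pass over every column.
model = SemanticModel(components=[
    Component('column', [(0, 0, 0), (0, 0, 3)]),
    Component('beam',   [(0, 0, 3), (5, 0, 3)]),
])
for column in model.select('column'):
    print('inspect column covering points:', column.points)
```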

The process consists of four key operations: controlling the robot and creating a 3D point cloud map (see Figure 2); transforming the point cloud data into a semantic, component-based model; visually analyzing the model to identify defects; and converting the semantic model into a finite element model (FEM) and simulating it for structural assessment. The result of the process will be an integrated infrastructure model (IIM) that links the robot's raw observations with derived results, including the component-based model, the structural analysis FEM, and inspection algorithm outputs.


Figure 2. Part of the Schenley Bridge modeled using the ARIA robot and the LOAM (Laser Odometry and Mapping) algorithm, developed by members of our team. (Photo courtesy of Ji Zhang.)
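
To show how these four operations could feed into an IIM, here is a minimal sketch of the pipeline. Each function is a stub standing in for a substantial subsystem; the names and the dictionary layout are assumptions made for illustration, not ARIA's design.

```python
# Stubs standing in for the four ARIA subsystems (illustrative only).
def map_structure(flight_plan):        # 1. fly the robot, build a 3D point cloud
    return []

def label_components(point_cloud):     # 2. point cloud -> beams, columns, braces
    return []

def detect_defects(semantic_model):    # 3. visual analysis of each component
    return []

def build_fem(semantic_model):         # 4. semantic model -> finite element model
    return {}

def simulate(fem):                     # structural assessment on the FEM
    return {}

def inspect_structure(flight_plan):
    """Run the pipeline and return an integrated infrastructure model (IIM)
    linking the raw observations with every derived result."""
    point_cloud = map_structure(flight_plan)
    semantic_model = label_components(point_cloud)
    fem = build_fem(semantic_model)
    return {
        'raw_points': point_cloud,
        'semantic_model': semantic_model,
        'defects': detect_defects(semantic_model),
        'fem': fem,
        'assessment': simulate(fem),
    }
```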

Each of these operations represents a significant technical challenge. The mapping operation requires integrating information with dynamic and unpredictable uncertainty. For example, as the robot flies under a bridge, GPS signal accuracy degrades as the satellites become occluded. Mapping must also overcome the sensors' limited range to create an accurate point cloud of large-scale structures. The semantic modeling process must function robustly in the face of noisy and often incomplete data caused by sensor limitations and occluded regions. Furthermore, the sheer variety of structural designs and styles makes purely bottom-up modeling particularly difficult. We previously developed methods for semantic modeling of indoor environments using context.4 We are now attacking the problem by incorporating top-down, knowledge-driven processes. Directly creating FEMs from point clouds is a difficult and, as yet, unsolved problem. We believe, however, that the problem becomes much easier if the point cloud is first transformed into a semantic model: given a 3D model consisting of connected beams, trusses, girders, columns, and the like, an FEM can be created comparatively easily.
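As an illustration of why the semantic model helps, the sketch below converts a list of labeled members into FEM nodes and elements simply by merging coincident endpoints; doing the same directly from an unlabeled point cloud would first require solving segmentation and connectivity. The function name and data layout are assumptions for exposition, not ARIA's implementation.

```python
import numpy as np

def semantic_model_to_fem(members):
    """members: (kind, start_xyz, end_xyz) tuples from the semantic model.
    Returns shared FEM node coordinates and element connectivity."""
    nodes, index = [], {}

    def node_id(xyz):
        key = tuple(np.round(xyz, 3))  # merge coincident endpoints into one node
        if key not in index:
            index[key] = len(nodes)
            nodes.append(key)
        return index[key]

    elements = [(node_id(a), node_id(b), kind) for kind, a, b in members]
    return np.array(nodes), elements

# Usage: two columns and a beam that meet at shared joints.
members = [('column', (0, 0, 0), (0, 0, 3)),
           ('column', (5, 0, 0), (5, 0, 3)),
           ('beam',   (0, 0, 3), (5, 0, 3))]
nodes, elements = semantic_model_to_fem(members)
print(len(nodes), 'nodes,', len(elements), 'elements')  # 4 nodes, 3 elements
```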

Infrastructure inspection is inherently a 3D process. However, current inspection methods distill 3D information into 2D plans or nonvisual representations, such as tables and checklists. This separation between the physical representation and the inspection results can lead to missed information and mistakes as the user mentally translates between modalities. If, instead, the inspection results are tied intimately to the physical model, we can benefit from the spatial relationships inherent in that model. The ARIA project will create an immersive visualization environment using the IIM as its foundation. We hypothesize that the immersive model will aid in interacting with the robotic inspection assistant, conducting virtual inspections off-line, and analyzing and tracking the deterioration of infrastructure over time.

While a fully autonomous system for infrastructure inspection may be possible in the long term, such an approach is likely to lead to user frustration and limited applicability in the near term. The ARIA robot is intended to work interactively with an inspector using planning and control algorithms that learn an inspector's preferences based on observations and then adapt their operation to meet those needs. Such algorithms will enable the robot to act as an assistant or apprentice to the inspector, adjusting its automation level appropriately to the situation.
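One simple way such adaptation could work, sketched below under our own assumptions rather than drawn from ARIA's planner, is to track how often the inspector accepts the robot's proposed actions for each component type and raise the automation level only once acceptance is consistently high.

```python
from collections import defaultdict

class AdaptiveAssistant:
    """Adjusts automation per component type from observed inspector
    feedback (an illustrative heuristic, not ARIA's algorithm)."""

    def __init__(self, threshold=0.7, min_observations=5):
        self.proposed = defaultdict(int)
        self.accepted = defaultdict(int)
        self.threshold = threshold
        self.min_observations = min_observations

    def record(self, kind, was_accepted):
        """Log whether the inspector accepted a proposed plan for 'kind'."""
        self.proposed[kind] += 1
        self.accepted[kind] += int(was_accepted)

    def autonomy_level(self, kind):
        """'auto' once the inspector usually accepts our plans for this
        component type; otherwise keep the inspector in the loop."""
        n = self.proposed[kind]
        if n < self.min_observations:
            return 'manual'  # too little evidence to infer a preference
        return 'auto' if self.accepted[kind] / n >= self.threshold else 'manual'
```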

The ability to rapidly create comprehensive, accurate, and semantically rich 3D models using MAVs has potential beyond the inspection domain, in applications such as modeling and analyzing disaster, construction, and historical sites. In the wake of the 2011 Fukushima accident in Japan, the robotics community was asked to help determine the state of the reactors. Unfortunately, destruction and debris limited the ground-based robots' ability to reach parts of the site that an MAV could have mapped easily. Perhaps the next time disaster strikes, the ARIA robot will be on the scene to assist. We are currently developing the ARIA algorithms using case studies of several bridges in the Pittsburgh and Boston areas, working in collaboration with inspectors from the Pennsylvania Department of Transportation.

This research is funded by the National Science Foundation under grant IIS-1328930. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.


Daniel Huber
Carnegie Mellon University
Pittsburgh, PA

Daniel Huber is a senior systems scientist at Carnegie Mellon University's Robotics Institute. His research focuses on 3D computer vision and its applications in the built environment.


References:
1. Report card for America's infrastructure, tech. rep., ASCE, Reston, Virginia, 2009.
2. Website of the Aerial Robotic Infrastructure Analyst project: http://aria.ri.cmu.edu Accessed 4 May 2014.
3. A. Kelly, N. Chan, H. Herman, D. Huber, R. Meyers, P. Rander, R. Warner, J. Ziglar, E. Capstick, Real-time photorealistic virtualized reality interface for remote mobile robot control, Int'l J. Robot. Res. 30(3), p. 384-404, 2011.
4. X. Xiong, A. Adan, B. Akinci, D. Huber, Automatic creation of semantically rich 3D building models from laser scanner data, Automat. Construct. 31, p. 325-337, 2013. doi:10.1016/j.autcon.2012.10.006