Proceedings Volume 10986

Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXV

Volume Details

Date Published: 26 July 2019
Contents: 13 Sessions, 58 Papers, 33 Presentations
Conference: SPIE Defense + Commercial Sensing 2019
Volume Number: 10986

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10986
  • Hyperspectral Imaging Standards
  • LWIR and MWIR Spectral Sensing
  • Classification and Dimensionality Reduction
  • Sensor Systems and Characterization
  • Chemical and Explosives Detection
  • Models and Mathematical Methodologies
  • Machine Learning in Spectral Sensing I
  • Applications of Spectral Sensing
  • Target and Change Detection
  • Machine Learning in Spectral Sensing II
  • Spectral Imaging
  • Poster Session
Front Matter: Volume 10986
This PDF file contains the front matter associated with SPIE Proceedings Volume 10986, including the Title Page, Copyright Information, Table of Contents, Author and Conference Committee lists.
Hyperspectral Imaging Standards
IEEE P4001: Progress towards a hyperspectral standard
Hyperspectral imaging is an innovative and exciting technology that holds incredible diagnostic, scientific and categorization power. Current industry innovation is a testament to the creative power and imagination of the diverse community seeking to optimize this technology. However, fundamental instrument performance is not consistently well characterized, well understood or well represented to suit distinct application endeavors or commercial market expectations. Establishing a common language, technical specification, testing criteria, task-specific recommendations and common data formats is essential to allowing this technology to achieve its true altruistic and economic market potential. In 2018 the IEEE P4001 working group was formed to facilitate consistent use of terminology, characterization methods and data structures. This talk is a progress report that informs the hyperspectral community of the status of the work to date and its interconnection with other standards, and outlines the roadmap.
LWIR and MWIR Spectral Sensing
LWIR change detection using robustified temperature emissivity separation and alpha residuals
In this paper, we consider change detection in the longwave infrared (LWIR) domain. Because thermal emission is the dominant radiation source in this domain, differences in temperature may appear as material changes and introduce false alarms in change imagery. Existing methods, such as temperature-emissivity separation and alpha residuals, attempt to extract temperature-independent LWIR spectral information. However, both methods remain susceptible to residual temperature effects which degrade change detection performance. Here, we develop temperature-robust versions of these algorithms that project the spectra into approximately temperature-invariant subspaces. The complete error covariance matrix for each method is also derived so that Mahalanobis distance may be used to quantify spectral differences in the temperature-invariant domain. Examples using synthetic and measured data demonstrate substantial performance improvement relative to the baseline algorithms.
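For readers unfamiliar with the alpha-residual transform named above, a minimal numpy sketch of the classical Wien-approximation form, together with a Mahalanobis change score in the transformed domain, might look as follows. This is an illustrative stand-in rather than the authors' robustified algorithm; the synthetic spectra and toy covariance are assumptions.
    import numpy as np

    def alpha_residuals(radiance, wavelengths_um):
        # Wien-approximation alpha residuals: lambda_i * ln(L_i) minus its band mean.
        # Under Wien's law the unknown temperature contributes a band-independent
        # offset (-c2/T), so subtracting the mean suppresses it.
        x = np.asarray(wavelengths_um) * np.log(np.asarray(radiance, dtype=float))
        return x - x.mean(axis=-1, keepdims=True)

    def mahalanobis_change(a1, a2, cov):
        # Mahalanobis distance between two transformed spectra, given an error covariance.
        d = a1 - a2
        return float(np.sqrt(d @ np.linalg.solve(cov, d)))

    # Toy check: same emissivity at two temperatures -> near-zero change score.
    c2 = 1.4388e4                                  # second radiation constant, um*K
    wl = np.linspace(8.0, 12.0, 64)                # LWIR bands, micrometres
    eps = np.random.uniform(0.90, 0.99, wl.size)   # arbitrary emissivity spectrum
    wien = lambda T: eps * wl**-5 * np.exp(-c2 / (wl * T))
    score = mahalanobis_change(alpha_residuals(wien(300.0), wl),
                               alpha_residuals(wien(310.0), wl),
                               1e-3 * np.eye(wl.size))
    print(score)   # ~0: the pure temperature change is suppressed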
Applications of spectral image quality equation for longwave infrared hyperspectral imagery
Hyperspectral imaging (HSI) technologies span the electro-optical and infrared domains. Longwave infrared (LWIR) HSI is particularly well suited for chemical and material identification in both day and night conditions due to the fact that longwave signals depend on thermal emission and material composition. However, exploitation performance is impacted by spectral data quality, which is driven by fundamental sensor noise characteristics, focal plane array health, spectral and radiometric calibration accuracy, and weather conditions. Previous algorithms have focused on quantifying spectral quality in the visible, near infrared, and shortwave infrared domains. More recently, we developed a spectral image quality equation (SIQE) based on Bayesian Information Criterion (BIC) for quantifying spectral quality of LWIR HSI data. Here, we further develop the algorithm to provide a more intuitive interpretation of the resulting BIC scores by transforming the scores into a metric that more closely resembles target detection scores. In addition to showing how SIQE is correlated with noise-equivalent spectral radiance, we illustrate several applications of SIQE, including the impact of atmospheric/environmental interferences and calibration errors. Our results reveal that SIQE is an effective metric for quantifying hyperspectral data quality, and thus, can be used for filtering data cubes prior to implementing exploitation algorithms.
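The SIQE formulation itself is not reproduced in the abstract; as a reminder of the generic criterion it builds on, a minimal sketch of the Bayesian Information Criterion with an illustrative Gaussian model comparison is shown below. The specific models and parameter counts here are assumptions for demonstration only.
    import numpy as np
    from scipy.stats import norm

    def bic(log_likelihood, k, n):
        # Generic Bayesian Information Criterion (lower is better):
        # k free model parameters fit to n samples.
        return k * np.log(n) - 2.0 * log_likelihood

    # Toy comparison of two Gaussian models of different complexity.
    x = np.random.normal(0.0, 1.0, size=500)
    ll_fixed_mean = norm(0.0, x.std()).logpdf(x).sum()        # k = 1 (sigma only)
    ll_free_mean = norm(x.mean(), x.std()).logpdf(x).sum()    # k = 2 (mean and sigma)
    print(bic(ll_fixed_mean, 1, x.size), bic(ll_free_mean, 2, x.size))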
Assessments of MODIS thermal emissive bands on-orbit calibration performance using Dome C observations
MODIS (MODerate Resolution Imaging Spectroradiometer) instruments are key contributors to the NASA’s Earth Observing System (EOS) Terra and Aqua missions. Launched in December 1999 and May 2002, Terra and Aqua MODIS have successfully operated for more than 19 and 17 years, respectively. MODIS has 36 spectral bands covering wavelengths from visible (VIS) to long-wave infrared (LWIR). Observations from both Terra and Aqua MODIS instruments have been used to generate a wide range of data products that have enabled many studies of the Earth’s atmosphere, land, and oceans. Sixteen of the 36 MODIS spectral bands, with wavelengths ranging from 3.7 μm to 14.4 μm, are referred to as the Thermal Emissive Bands (TEBs). Key calibration parameters that are related to detector gains of the MODIS TEBs are calibrated on a scan-by-scan basis using an on-board blackbody. As both sensors continue to operate beyond their specified design lifetime of 6 years, it has become increasingly important to constantly monitor and evaluate the on-orbit performance and calibration consistency of their long-term data records. In this study, we examine the long-term calibration stability of each sensor and the calibration consistency between two MODIS instruments using multiple daily (nadir) observations over Dome Concordia. The near-surface temperature measurements from an Automatic Weather Station (AWS) are used as a proxy reference to help determine the calibration stability and relative bias between the Terra and Aqua MODIS TEBs.
Observations on passive polarimetric imaging across multiple infrared wavebands
Christian L. Saludez, Jarrod P. Brown, Darrell B. Card, et al.
An experiment is conducted to observe painted aluminum panels across long-wave, mid-wave, and short-wave regions of the optical infrared spectrum with respect to time. Simultaneously, comprehensive meteorological information including solar intensity, temperature, humidity, and moisture is also recorded. The experiment is focused on 1) understanding the cause of signature variability of several objects in the scene in relation to numerous meteorological conditions, 2) observing the potential benefit offered in passive polarimetric sensing, and 3) identifying the strengths and limitations of each waveband for the encountered conditions. Metrics include intensity, polarization, and contrast from multiple wavebands measured across various solar conditions against painted panels and natural clutter. We present details of the experiment setup, analysis of imagery and meteorological data, and observations drawn from experimental results.
Classification and Dimensionality Reduction
Analysis of spectral data using spatial context
Spectral LiDAR analysis can be enabled by the use of spatial context, spatial structure, and prior information in the form of map data. LiDAR intensity imagery is analyzed here using an object-based approach which segments the data according to vector information obtained from OpenStreetMap and other vector map information. Polygons and features from the map vectors are used to establish regions of interest for analysis. This automates the training process for use of traditional statistical classifiers and machine learning algorithms. Map-derived objects can demonstrate multiple spectral components which must be resolved to define the primary object and its semantic label.
A comparison of adaptive and template matching techniques for radio-isotope identification
Emma J. Hague, Mark Kamuda, William P. Ford, et al.
We compare and contrast the effectiveness of a set of adaptive and non-adaptive algorithms for isotope identification based on gamma-ray spectra. One dimensional energy spectra are simulated for a variety of dwell-times and source to detector distances in order to reflect conditions typically encountered in radiological emergency response and environmental monitoring applications. We find that adaptive methods are more accurate and computationally efficient than non-adaptive in cases of operational interest.
Semi-supervised discriminant feature selection for hyperspectral imagery classification
Chunhua Dong, Masoud Naghedolfeizi, Dawit Aberra, et al.
Sparse representation classification (SRC) is being widely applied for target detection in hyperspectral images (HSI). However, due to the curse of dimensionality and redundant information in HSI, SRC methods fail to achieve high classification performance when a large number of spectral bands is used. Selecting a subset of predictive features in a high-dimensional space is a challenging problem for hyperspectral image classification. In this paper, we propose a novel discriminant feature selection (DFS) method for hyperspectral image classification in the eigenspace. Firstly, our proposed DFS method selects a subset of discriminant features by solving the combined spectral and spatial hypergraph Laplacian quadratic problem, which can preserve the intrinsic structure of the unlabeled pixels as well as both the inter-class and intra-class constraints defined on the labeled pixels in the projected low-dimensional eigenspace. Then, in order to further improve the classification performance of SRC, we exploit the well-known simultaneous orthogonal matching pursuit (SOMP) algorithm to obtain the sparse representation of the pixels by incorporating the inter-pixel correlation within the classical OMP, assuming that neighboring pixels usually consist of similar materials. Finally, the recovered sparse errors are directly used for determining the label of the pixels. The extracted discriminant features are compatible with established SRC methods and can significantly improve their performance for HSI classification. Experiments conducted on several hyperspectral data sets under different experimental settings show that our proposed method increases the classification accuracy and outperforms the state-of-the-art feature selection and classification methods.
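As background for the SOMP step described above, a minimal generic implementation (not the authors' DFS pipeline) could be sketched as follows; the dictionary, neighborhood size, and sparsity level in the toy usage are assumptions.
    import numpy as np

    def somp(D, Y, n_nonzero):
        # Minimal Simultaneous OMP: select dictionary atoms (columns of D)
        # shared by all signals in Y (one column per neighbouring pixel).
        residual = Y.copy()
        support = []
        for _ in range(n_nonzero):
            # atom with largest aggregate correlation across the pixel block
            corr = np.abs(D.T @ residual).sum(axis=1)
            corr[support] = -np.inf                 # do not reselect
            support.append(int(np.argmax(corr)))
            coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
            residual = Y - D[:, support] @ coeffs
        return support, coeffs

    # toy usage: 50 bands, 200 class exemplars, a 3x3 pixel neighbourhood
    D = np.random.randn(50, 200)
    D /= np.linalg.norm(D, axis=0)
    Y = D[:, [7, 42]] @ np.random.randn(2, 9) + 0.01 * np.random.randn(50, 9)
    print(somp(D, Y, 2)[0])   # expected to recover atoms 7 and 42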
Unsupervised hyperspectral band selection in the compressive sensing domain
Bernard Lampe, Adam Bekit, Charles Della Porta, et al.
Band selection (BS) algorithms are an effective means of reducing the high volume of redundant data produced by the hundreds of contiguous spectral bands of hyperspectral images (HSI). However, BS is a feature selection optimization problem and can be computationally intensive to solve. Compressive sensing (CS) is a new minimally lossy data reduction (DR) technique used to acquire sparse signals using global, incoherent, and random projections. This new sampling paradigm can be implemented directly in the sensor, acquiring undersampled, sparse images without further compression hardware. In addition, CS can be simulated as a DR technique after an HSI has been collected. This paper proposes a new combination of CS and BS using band clustering in the compressively sensed sample domain (CSSD). The new technique exploits the incoherent CS acquisition to develop BS via a CS transform utilizing inter-band similarity matrices and hierarchical clustering. It is shown that the CS principles of the restricted isometric property (RIP) and restricted conformal property (RCP) can be exploited in the novel algorithm coined compressive sensing band clustering (CSBC), which converges to the results computed using the original data space (ODS) given a sufficient compressive sensing sampling ratio (CSSR). The experimental results show the effectiveness of CSBC over traditional BS algorithms by saving significant computational space and time while maintaining accuracy.
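A rough sketch of the general idea, band clustering on randomly projected (compressed) samples, is given below. It is not the published CSBC algorithm; the projection ratio, correlation-based similarity, and average-linkage clustering are illustrative choices.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def cs_band_clusters(cube, n_select, ratio=0.05, seed=0):
        # Illustrative band clustering in a compressively sensed domain: each band
        # image is reduced by one shared random projection over the pixels, an
        # inter-band correlation matrix is built from the compressed samples, and
        # hierarchical clustering groups similar bands.
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands)                      # pixels x bands
        rng = np.random.default_rng(seed)
        m = max(1, int(ratio * X.shape[0]))
        Phi = rng.standard_normal((m, X.shape[0])) / np.sqrt(m)
        Z = Phi @ X                                      # compressed samples x bands
        C = np.corrcoef(Z, rowvar=False)                 # inter-band similarity
        dist = squareform(1.0 - C, checks=False)         # similarity -> distance
        labels = fcluster(linkage(dist, method="average"),
                          t=n_select, criterion="maxclust")
        selected = []
        for k in range(1, n_select + 1):
            members = np.where(labels == k)[0]
            if members.size == 0:
                continue
            centroid = Z[:, members].mean(axis=1, keepdims=True)
            selected.append(int(members[np.argmin(
                np.linalg.norm(Z[:, members] - centroid, axis=0))]))
        return sorted(selected)

    cube = np.random.rand(64, 64, 100)                   # toy 100-band image
    print(cs_band_clusters(cube, n_select=8))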
Sensor Systems and Characterization
Simplified measurement of point spread functions of hyperspectral cameras for assessment of spatial coregistration
Torbjørn Skauli, Hans Erling Torkildsen
In multi- and hyperspectral imaging, spatial coregistration of the point spread functions (PSFs) of all bands within each pixel is critical for the integrity of measured spectra. There is a need to define a standardized method for characterizing coregistration error. We propose a method that estimates PSFs from the product of line spread functions (LSFs) recorded in two orthogonal directions. Coregistration is then evaluated according to the PSF difference metric [T. Skauli, Opt. Expr. vol. 20, p. 918 (2012)]. Experimental results on two pushbroom hyperspectral cameras show good correspondence with measurements based on the full PSF, provided that LSF scan directions are aligned with axes of symmetry in the optics. Even a maximally unfavourable choice of scan directions gives meaningful estimates of coregistration error. The proposed method may have potential as a standard for coregistration characterization.
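A minimal sketch of the separable PSF estimate and a simple coregistration metric is given below; the half-integrated absolute difference used here is a simplified stand-in for the published PSF-difference metric, and the Gaussian LSFs are synthetic.
    import numpy as np

    def psf_from_lsfs(lsf_x, lsf_y):
        # Separable PSF estimate: outer product of line spread functions measured
        # along two orthogonal scan directions, normalised to unit sum.
        psf = np.outer(np.asarray(lsf_y), np.asarray(lsf_x))
        return psf / psf.sum()

    def coregistration_error(psf_a, psf_b):
        # Simplified PSF-difference score: half the integrated absolute difference
        # of two unit-normalised PSFs (0 = identical, 1 = fully disjoint).
        return 0.5 * np.abs(psf_a - psf_b).sum()

    # toy example: two bands with slightly shifted Gaussian LSFs
    x = np.linspace(-3, 3, 61)
    g = lambda mu: np.exp(-0.5 * (x - mu) ** 2)
    band1 = psf_from_lsfs(g(0.0), g(0.0))
    band2 = psf_from_lsfs(g(0.2), g(0.0))   # sub-pixel shift in one direction
    print(coregistration_error(band1, band2))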
Solar-induced fluorescence retrievals in the context of physiological, environmental, and hardware-based sources of uncertainty
The terrestrial biosphere is a crucial sink for anthropogenic emissions of carbon to the atmosphere, but is also the source of the largest uncertainties in estimated global carbon budgets. Numerous tower- and satellite-based platforms have recently been established to measure solar-induced fluorescence (SIF), which, as a proxy for photosynthesis, shows great promise for constraining global estimates of gross primary productivity. Nonetheless, published SIF retrievals span two orders of magnitude, illustrating an opportunity for improved characterization of the SIF signal in the context of instrument noise, detector calibrations and limitations, viewing geometry, and typical signal magnitude. In 2017, the Forested Optical Reference for Evaluating Sensor Technology (FOREST) site was established at the National Institute of Standards and Technology (NIST) as a test-bed for SIF instrument intercomparison and calibration methods development. Further, we empirically characterize the physiological and ecological meaning of SIF by directly linking to carbon exchange with an extensive suite of ground measurements. Following optimizations to our SIF spectrometer deployment, we find that deviations from ideal measurement conditions, including low light or intermittent cloud cover, introduce significant noise outside even dramatic physiological manipulations. It is critical that common standards are developed for SIF measurement systems to ensure validation of data quality and clear linkages to physiological and biophysical parameters. SIF is a promising technique to improve measurement and understanding of local to global trends in primary productivity, but data quality control is a key challenge to tackle with the rapid deployment of new sensors across the globe. This work is an initial evaluation of sensitivities of SIF signals to hardware and methodologies.
Stray light characterization in a high-resolution imaging spectrometer designed for solar-induced fluorescence
Loren P. Albert, K. C. Cushman, David W. Allen, et al.
New commercial-off-the-shelf imaging spectrometers promise the combination of high spatial and spectral resolution needed to retrieve solar induced fluorescence (SIF). Imaging at multiple wavelengths for individual plants and even individual leaves from low-altitude airborne or ground-based platforms has applications in agriculture and carbon-cycle science. Data from these instruments could provide insight into the status of the photosynthetic apparatus at scales of space and time not observable with tools based on gas exchange, and could support the calibration and validation activities of current and forthcoming space missions to quantify SIF. High-spectral resolution enables SIF retrieval from regions of strong telluric absorption by molecular oxygen, and also within numerous solar Fraunhofer lines in atmospheric windows not obscured by oxygen or water absorptions. Because the SIF signal can be < 5 % of background reflectance, rigorous instrument characterization and reduction of systematic error is necessary. Here we develop a spectral stray-light correction algorithm for a commercial off-the-shelf imaging spectrometer designed to quantify SIF. We use measurements from an optical parametric oscillator laser at 44 wavelengths to generate the spectral line-spread function and develop a spectral stray-light correction matrix using a novel exposure-bracketing method. The magnitude of spectral stray light in this instrument is small, but spectral stray light is detectable at all measured wavelengths. Examination of corrected line-spread functions indicates that the correction algorithm reduced spectral stray-light by 1 to 2 orders of magnitude.
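For orientation, the standard stray-light distribution-matrix formulation that such a correction typically follows can be sketched as below; the in-band half-width and the assumption of one interpolated LSF column per spectral pixel are illustrative choices, not the authors' exact exposure-bracketing procedure.
    import numpy as np

    def straylight_correction_matrix(lsf_columns, inband_halfwidth=3):
        # lsf_columns: n x n array whose j-th column is the (interpolated) line-spread
        # function for spectral pixel j, normalised by its in-band signal.
        # Zero the in-band region so only out-of-band leakage remains, then invert
        # A = I + D (the usual stray-light distribution-matrix formulation).
        D = np.array(lsf_columns, dtype=float)
        n = D.shape[0]
        for j in range(n):
            lo, hi = max(0, j - inband_halfwidth), min(n, j + inband_halfwidth + 1)
            D[lo:hi, j] = 0.0
        return np.linalg.inv(np.eye(n) + D)

    # Usage: corrected_spectrum = C @ measured_spectrum (spectra may be column-stacked).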
SAGE IV Pathfinder multi-spectral imaging spectrometer telescope paves the way for semi-custom CubeSat imaging missions
Alexander Cheff Halterman, Robert Damadeo, Charles Hill, et al.
The SAGE IV (Stratospheric Aerosol and Gas Experiment) Pathfinder looks towards ushering in the next generation of the SAGE family of instruments, leveraging solar occultation to retrieve vertical profiles of aerosols and gases in the stratosphere, providing high precision calibration data for other instruments. A development funded through the NASA Earth Science Technology Office (ESTO) Instrument Incubator Program (IIP), SAGE IV Pathfinder is designed to extend the data record from the SAGE III scanning grating spectrometer with a multispectral imaging approach. Solar disk imaging improves the data collected by providing: (1) absolute pointing information; (2) measurements of atmospheric refraction effects; and (3) measurements of solar disk anisotropy. This additional information relaxes traditionally tight constraints on attitude knowledge, stability, and pointing control, making a free-flying 6U CubeSat instrument feasible. Early estimates show this approach might reduce the cost of SAGE continuity missions by as much as 90%. A key benefit of the SAGE IV Pathfinder design to future missions is the versatility of the resultant telescope subsystem. The F/5.25 telescope achieved >90% encircled energy within a 30 μm/28 arcsecond pixel and a point source normalized irradiance transmittance (PSNIT) of <1E-4 at 0.5° outside of the field of view (FOV). The baseline design can be adapted to accommodate changes to layout, aperture, focal lengths, filters, and/or detectors in various CubeSat form factors. The telescope was designed to be thermally agnostic, with STOP analysis results indicating negligible performance variation as thermal gradients fluctuate on orbit. Once thermal validation of STOP analysis is completed, proven micron-level alignment, mounting, and analyses can then be leveraged for new high performance, semi-custom instruments, saving significant development cost for future science missions.
Extended characterization of multispectral resolving filter-on-chip snapshot-mosaic CMOS cameras
P.-G. Dittrich, M. Bichra, D. Stiehler, et al.
Multispectral resolving filter-on-chip snapshot-mosaic CMOS cameras are a convenient, reliable and affordable approach for the parallel acquisition of spatial and spectral information. The combination of pixel-arranged spectral filter matrices on CMOS sensors increases their integration density and system complexity by several times compared to standard RGB cameras. Due to the system design, these cameras have increased spectral crosstalk and specific dependencies on the angle of illumination. To ensure the comparability and reproducibility of the measured values, arrangements, methods and algorithms are developed and applied to characterize the capabilities of these cameras. It will be shown how to characterize these cameras in accordance with the EMVA1288 standard, and which methods, algorithms and additional measurement arrangements have been developed and applied to support a possible extension of the standard to cover spectral crosstalk and angular dependencies.
Chemical and Explosives Detection
Advances in active infrared spectroscopy for trace chemical detection
Kristin DeWitt
Current military and commercially fielded sensors for detection of trace materials on surfaces (explosives, narcotics, low volatility chemical warfare agents and other hazardous chemicals) require physical collection and analysis of the trace material. Although existing trace detection techniques are very sensitive, they all require unseen particles to be collected from the surface of the substrate being screened and then transferred to the analysis system. This collection process requires human input and close-proximity exposure to the screened article, which poses a safety threat from either IED detonation or hazardous chemical traces. It also introduces a fundamental limit on the screening speed, and limits the types of surfaces that can be successfully tested for trace materials to those with good “wipe-ability”. The ability to detect targets at significant standoff with a rapid, non-destructive screening technique that has sensitivity comparable to “sample and test” methods has been a “holy grail” for a number of years. Preliminary active infrared spectroscopy results from the Intelligence Advanced Research Project Activity (IARPA) Standoff ILuminator for Measuring Absorption and Reflectance Infrared Light Signatures (SILMARILS) program have shown a capability to detect trace explosives at levels comparable with Explosive Trace Detection (ETD) systems, and narcotics and chemical warfare agent simulants at similar levels. Especially interesting is the discovery that the standoff infrared technique still detects measurable explosive residue signal after solvent cleaning of surfaces, but with subtle changes in the target signature indicating a phase change from discrete particles to a thin film. Trace quantities of narcotics have been detected through plastic bags, and traces of explosives and other hazardous chemicals detected on clothing, on vehicle surfaces, on building materials, on packaging materials, and on pig skin, which is a close stand-in to human skin in terms of water, fat, and hemoglobin content. Also discussed will be test results for aerosol and fallout detection of chemical and biological simulants released in the Joint Ambient Breeze Tunnel (JABT) at Dugway Proving Grounds (DPG), as well as detection of trace surface residues at distances up to 25m.
Active LWIR hyperspectral imaging and algorithms for rapid standoff trace chemical identification
We are developing a cart-mounted platform for chemical threat detection and identification based on active LWIR imaging spectroscopy. Infrared backscatter imaging spectroscopy (IBIS) leverages IR quantum cascade lasers, tuned through signature absorption bands (6 - 11 μm) in the analytes while illuminating a surface area of interest. An IR focal plane array captures the time-dependent backscattering surface response. The image stream forms a hyperspectral image cube composed of spatial, spectral and temporal dimensions as feature vectors for detection and identification. Our current emphasis is on rapid screening. This manuscript also describes methods for simulating IBIS data and for training detection algorithms based on convolutional neural networks (CNN). We have previously demonstrated standoff trace detection at several meters indoors and in field tests, while operating the lasers below the eye-safe intensity limit (100 mW/cm2). Sensitivity to explosive traces as small as a single grain (~1 ng) has been demonstrated. Analytes tested include RDX, PETN, TNT, ammonium nitrate, caffeine and perchlorates on relevant glass, plastic, metal, and painted substrates.
Understanding polynomial distributed lag models: truncation lag implications for a mosquito-borne disease risk model in Brazil
Jessica Conrad, Amanda Ziemann, Randall Refeld, et al.
Using data for the states of Brazil, we construct a polynomial distributed lag model under different truncation lag criteria to predict reported dengue cases. Accurately predicting dengue cases provides the framework to develop forecasting models, which would provide public health professionals time to create targeted interventions for areas at high risk of dengue outbreaks. Others have shown that variables of interest such as temperature and vegetation can be used to predict dengue cases. These models did not detail how truncation lag criteria were chosen for their respective models when polynomial distributed lag was used. We explore current truncation lag selection methods used widely in the literature (marginal and minimized AIC) and determine which of these methods works best for our given data set. While minimized AIC truncation lag selection produced the best fit to our data, this method used substantially more data to inform its prediction compared to the marginal truncation lag selection method. Finally, the following variables were found to be significant predictors of dengue in this region: normalized difference vegetation index (NDVI), green-based normalized difference water index (NDWI), normalized burn ratio (NBR), and temperature. These best predictors were derived from multispectral remote sensing imagery as well as temperature data.
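A minimal sketch of an Almon-style polynomial distributed lag fit with AIC-based truncation lag comparison is shown below; the polynomial degree, toy data, and Gaussian-likelihood AIC are assumptions for illustration, not the study's model.
    import numpy as np

    def pdl_design(x, max_lag, degree):
        # Almon polynomial distributed lag design matrix: lagged copies of x are
        # collapsed onto a low-degree polynomial basis over the lag index.
        T = len(x)
        lags = np.column_stack([x[max_lag - l : T - l] for l in range(max_lag + 1)])
        basis = np.vander(np.arange(max_lag + 1), degree + 1, increasing=True)
        return lags @ basis          # (T - max_lag) x (degree + 1)

    def fit_ols_aic(X, y):
        # OLS fit with Gaussian log-likelihood AIC (constant term included).
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        n, k = len(y), X1.shape[1]
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        return beta, 2 * k - 2 * loglik

    # toy example: compare AIC across candidate truncation lags
    rng = np.random.default_rng(1)
    x = rng.standard_normal(300)
    true_betas = np.array([0.5, 0.3, 0.1])          # influence dies out after lag 2
    y = np.convolve(x, true_betas, mode="full")[: len(x)] + 0.1 * rng.standard_normal(300)
    for L in range(1, 9):
        _, aic = fit_ols_aic(pdl_design(x, L, degree=2), y[L:])
        print(L, round(aic, 1))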
Algorithm development with on-board and ground-based components for hyperspectral gas detection from small satellites
Los Alamos is currently working toward demonstrating Cubesat-based hyperspectral detection of gas-phase chemical plumes, a goal initially pursued in the internally funded Targeted Atmospheric Chemistry Observations from Space (TACOS) project, and now advancing toward space deployment with the NASA-funded Nanosat Atmospheric Chemistry Hyperspectral Observation System (NACHOS). This paper will present a general overview of these projects. Bandwidth considerations prevent full datacube downloads, and so processing algorithms include an on-board processing component to provide matched-filter and RX images for the gases of interest. The downlinked data will additionally include the full spectrum for a small sample of pixels, and one of the challenges for ground-based analysis will be to incorporate these different but incomplete "views" of the datacube into a more physical interpretation/analysis of the downlinked data.
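As an illustration of the kind of reduced on-board products mentioned (matched-filter and RX images), a minimal numpy sketch is given below; the regularization and toy cube are assumptions, and this is not the flight code.
    import numpy as np

    def mf_and_rx(cube, target):
        # Per-pixel spectral matched-filter and RX anomaly scores for a
        # hyperspectral cube (rows x cols x bands), the kind of reduced product
        # suited to a low-bandwidth downlink.
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mu = X.mean(axis=0)
        Xc = X - mu
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(bands)    # regularised
        w = np.linalg.solve(cov, target - mu)
        mf = (Xc @ w) / ((target - mu) @ w)                      # unit response at target
        rx = np.einsum("ij,ij->i", Xc @ np.linalg.inv(cov), Xc)  # squared Mahalanobis
        return mf.reshape(rows, cols), rx.reshape(rows, cols)

    # toy usage with a random cube and target spectrum
    cube = np.random.rand(50, 60, 30)
    target = np.random.rand(30)
    mf_img, rx_img = mf_and_rx(cube, target)
    print(mf_img.shape, rx_img.shape)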
Models and Mathematical Methodologies
A universal sensing model for compressed hyperspectral image analysis
C. J. Della Porta, Adam Bekit, Bernard Lampe, et al.
Hyperspectral imaging (HSI) systems have found success in a variety of applications and are continuing to grow into new applications, placing an emphasis on developing more affordable systems. Compressive sensing (CS) is an enabling technology for applications requiring low cost, size, weight, and power (SWAP) HSI sensors. A typical compressed sensing system includes both sparse sampling (encoding) and sparse recovery (decoding); however, recent work has investigated the design of algorithms capable of operating directly in the compressed domain and has shown great success. Many of these works are based on a random sampling mathematical framework that explicitly models both the sparse representation basis and the sampling basis. Such a model requires the selection of a sparsifying representation basis that is seldom proven to be optimal for hyperspectral images and is typically left as an open-ended question for future research. In this work, a brief review of the compressive sensing framework for hyperspectral pixel vectors is provided, and the concept of Universality is exploited to simplify the model, removing the need to specify the sparsifying basis entirely for CS applications where sparse recovery is not required. A simple experiment is constructed to demonstrate Universality in sparse reconstruction and to better illustrate the concept. The results of this experiment clearly show that, with a random sampling framework, knowledge of the sparsifying basis is only required during sparse recovery.
Parametric modeling of surface-distributed-scatterer ensembles for inverse analysis of diffuse-reflectance spectra
R. Furstenberg, A. Shabaev, C. A. Kendziora, et al.
This study examines the use of parametric models for inverse analysis of diffuse IR reflectance from particulate materials that are sparsely distributed upon a surface. Parametric models are applied for inverse analysis of simulated spectra, which are calculated using ensembles of reflectance spectra for non-interacting material particles on surfaces with specified dielectric response properties and particle-size distributions. Simulated reflectance spectra for individual particles upon surfaces, used for prototype inverse analysis, are calculated numerically using a model based on Mie scattering theory, which assumes spherical particles on surfaces. Parametric models of diffuse reflectance spectra provide encoding of dielectric response features for physical interpretation and convenient representation.
Machine Learning in Spectral Sensing I
Initial investigation into the effect of image degradation on the performance of a 3-category classifier using transfer learning and data augmentation
This paper documents an initial investigation into the effect of image degradation on the performance of transfer learning (TL) as the number of retrained layers is varied, using a well-documented, commonly-used, and well-performing deep learning classifier (VGG16). Degradations were performed on a publicly-available data set to simulate the effects of noise and varying optical resolution in electro-optical/infrared (EO/IR) imaging sensors. Performance measurements were gathered on TL performance on the base image set as well as modified image sets with different numbers of retrained layers, with and without data augmentation. It is shown that TL mitigates the impact of corrupted data and improves classifier performance with increased numbers of retrained layers. Data augmentation also improves performance. At the same time, the phenomenal performance of TL cannot overcome the lack of feature information in severely degraded images. This experiment provides a qualitative sense of when transfer learning cannot be expected to improve classification results.
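A generic transfer-learning recipe of the sort described, freezing all but the last few VGG16 layers and adding simple augmentation, might look like the following Keras sketch; the head architecture, learning rate, and augmentation choices are assumptions, not the paper's exact configuration.
    import tensorflow as tf

    def build_vgg16_transfer(n_classes=3, n_trainable=4, img_size=224):
        # VGG16 transfer-learning classifier: ImageNet weights, with only the
        # last n_trainable convolutional-base layers left trainable.
        base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                           input_shape=(img_size, img_size, 3))
        for layer in base.layers[:-n_trainable]:
            layer.trainable = False
        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        return model

    # simple data augmentation of the kind referenced above
    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1),
    ])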
Spatially regularized multiscale graph clustering for electron microscopy
Nathan Kapsin, James M. Murphy
We propose an unsupervised, multiscale learning method for the segmentation of electron microscopy (EM) imagery. Large EM images are first coarsely clustered using spectral graph analysis, thereby non-locally and non-linearly denoising the data. The resulting coarse-scale clusters are then considered as vertices of a new graph, which is analyzed to derive a clustering of the original image. The two-stage approach is multiscale and enjoys robustness to noise and outlier pixels. A quasilinear and parallelizable implementation is presented, allowing the proposed method to scale to images with billions of pixels. Strong empirical performance is observed compared to conventional unsupervised techniques.
Analysis of long-wave infrared hyperspectral classification performance across changing scene illumination
Nicholas M. Westing, Brett J. Borghetti, Kevin C. Gross, et al.
Hyperspectral sensors collect data across a wide range of the electromagnetic spectrum, encoding information about the materials comprising each pixel in the scene as well as atmospheric effects and illumination conditions. Changes in scene illumination and atmospheric conditions can strongly affect the observed spectra. In the long-wave infrared, temperature variations resulting from illumination changes produce widely varying at-aperture signals and create a complex material identification problem. Machine learning techniques can use the high-dimensional spectral data to classify a diverse set of materials with high accuracy. In this study, classification techniques are investigated for a long-wave hyperspectral imager. A scene consisting of 9 different materials is imaged over an entire day providing diversity in scene illumination and surface temperatures. A Support Vector Machine classifier, feedforward neural network, and one-dimensional convolutional neural network (1D-CNN) are compared to determine which method is most robust to changes in scene illumination. The 1D-CNN outperforms the other classification methods by a wide margin when presented with hyperspectral data cubes significantly different from the training data distribution. This analysis simulates real-world classifier use and validates the robustness of the 1D-CNN to changing illumination and material temperatures.
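A minimal 1D-CNN of the general kind compared here, operating on individual spectra, could be sketched as follows; the layer sizes, band count, and class count are illustrative assumptions rather than the authors' network.
    import tensorflow as tf

    def build_1d_cnn(n_bands, n_classes=9):
        # Minimal 1D-CNN material classifier operating on single at-aperture
        # spectra (one channel, n_bands samples).
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_bands, 1)),
            tf.keras.layers.Conv1D(32, 7, activation="relu", padding="same"),
            tf.keras.layers.MaxPooling1D(2),
            tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])

    model = build_1d_cnn(n_bands=256)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])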
Unraveling low abundance intimate mixtures with deep learning
The high-confidence detection and identification of very low abundance, subpixel quantities of solid materials in nonlinear/intimate mixtures are still significant challenges for hyperspectral imagery (HSI) data analysis. We compare the ability of a traditional, shallow neural network (NN), deep learning with a convolutional neural network (DL/CNN), and a support vector machine (SVM) to analyze spectral signatures of nonlinear mixtures. Traditional mainstay algorithms (e.g., spectral unmixing, the matched filter) are also applied. Using a benchtop shortwave infrared (SWIR) hyperspectral imager, we acquired several microscenes of intimate mixtures of sand and neodymium oxide (Nd2O3). A microscene is a hyperspectral image measured in a laboratory. Several hundred thousand labeled spectra are easily and rapidly generated in one HSI cube of a microscene. Individual Petri dishes of 0, 0.5, 1, 2, 3, 4, and 5 weight-percent (wt. %) Nd2O3 with a silicate sand comprise a suite of microscenes furnishing labeled spectra for analysis. The NN and the DL/CNN both have average validation accuracies of ≥ 98 % (for the low wt. % classes); the SVM yields similar performance. As wt. % Nd2O3 increases, accuracies decrease slightly—perhaps due to the dominance of the Nd2O3 signature in the mixtures, which causes an increasing difficulty in separation. For example, this could affect the 4 and 5 wt. % classes in which the Nd2O3 would be easily detected and identified with traditional, mainstay HSI algorithms. The fact that neural network methods can separate such low quantity classes (e.g., 0, 0.5, and 1 wt. %), though not unexpected, is encouraging and demonstrates the potential of NNs and DL/CNNs for such detailed HSI analysis.
Sheared multi-scale weight sharing for multi-spectral superresolution
Micah Goldblum, Liam Fowl, Wojciech Czaja
Deep learning approaches to single-image superresolution typically use convolutional neural networks. Convolutional layers introduce translation invariance to neural networks. However, other spatial invariants appear in imaging data. Two such invariances are scale invariance, similar features at multiple spatial scales, and shearing invariance. We investigate these invariances by using weight sharing between dilated and sheared convolutional kernels in the context of multi-spectral imaging data. Traditional pooling methods can extract features at coarse spatial levels. Our approach explores a finer range of scales. Additionally, our approach offers improved storage efficiency because dilated and sheared convolutions allow single trainable kernels to extract information at multiple spatial scales and shears without the costs of training and storing many filters, especially in multi-spectral imaging where data representations are complex.
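The multi-scale weight sharing idea can be illustrated with a single kernel applied at several dilation rates, as in the PyTorch sketch below; the sheared counterpart, which would additionally resample the kernel on a sheared grid, is omitted, and the dilation set and initialization are assumptions.
    import torch
    import torch.nn.functional as F

    class MultiScaleSharedConv(torch.nn.Module):
        # One trainable kernel applied at several dilation rates (spatial scales);
        # the per-scale responses are concatenated along the channel axis.
        def __init__(self, in_ch, out_ch, k=3, dilations=(1, 2, 4)):
            super().__init__()
            self.weight = torch.nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
            self.dilations = dilations

        def forward(self, x):
            outs = [F.conv2d(x, self.weight, padding=d, dilation=d)
                    for d in self.dilations]
            return torch.cat(outs, dim=1)

    x = torch.randn(1, 8, 64, 64)          # e.g. an 8-band multispectral patch
    y = MultiScaleSharedConv(8, 16)(x)
    print(y.shape)                          # torch.Size([1, 48, 64, 64])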
Applications of Spectral Sensing
Multispectral camera design and algorithms for python snake detection in the Florida Everglades
Gonzalo Vaca-Castano, Ronald Driggers, Orges Furxhi, et al.
The Burmese python has invaded the Florida Everglades, where the python population is estimated at around 150,000 and rapidly growing. Pythons were released as unwanted pets in South Florida and now they are an apex invasive species. As a result, the local fauna population has been largely decimated, and there is an increasing concern about python migration to northern latitudes. Working with a team interested in developing a python detection camera, we have taken hyperspectral and multispectral reflectivity measurements of Burmese pythons in the visible and near infrared bands (VisNIR). The results show that some VisNIR reflectivity bands can be used to automatically discriminate pythons in the wild. This paper discusses the results of our data collections and provides a camera design process that includes a band selection algorithm and pixel-level classification using machine learning. Additionally, we show a visual enhancement alternative that helps to identify pythons in realistic conditions.
Stellar background rendering for space situational awareness algorithm development
A key component of a night scene background on a clear moonless night is the stellar background. Celestial objects affected by atmospheric distortions and optical system noise become the primary contribution of clutter for detection and tracking algorithms, while at the same time providing a solid geolocation or time reference due to their highly predictable motion. Any detection algorithm that needs to operate on a clear night must take into account the stellar background and remove it via background subtraction methods. As with any scenario, the ability to develop detection algorithms depends on the availability of representative data to evaluate the difficulty of the task. Further, the acquisition of measured field data under arbitrary atmospheric conditions is difficult, if not impossible. For this reason, a radiometrically accurate simulation of the stellar background is a boon to algorithm developers. To aid in simulating the night sky, we have incorporated a star-field rendering model into the Georgia Tech Simulations Integrated Modeling System (GTSIMS). Rendering a radiometrically accurate star-field requires three major components: positioning the stars as a function of time and observer location, determining the in-band radiance of each star, and simulating the apparent size of each star. We present the models we have incorporated into GTSIMS and provide a representative sample of the images generated with the new model. We then demonstrate how the clutter in the neighborhood of a pixel changes when including a radiometrically accurate rendering of a star-field.
Hyperspectral nondestructive evaluation of early damage and degradation in metallic materials
In this work, we investigated a Hyperspectral NDE (HySpecNDE) technique for detection of early damage and material degradation in metallic structures and components. A hyperspectral imaging camera in the middle wavelength infrared (MWIR) was used to examine metallic specimens with damage and material degradation created from tensile testing. Hyperspectral data in the MWIR range at ambient and elevated temperatures was acquired and analyzed with a set of data analysis and image processing algorithms. By comparing the digital image correlation results during tensile testing and the hyperspectral analysis results, we demonstrated that the HySpecNDE technique reveals variations in emission intensity that correspond to early damage and material degradation in materials.
Hyperspectral pigment analysis of cultural heritage artifacts using the opaque form of Kubelka-Munk theory
Kubelka-Munk (K-M) theory has been successfully used to estimate pigment concentrations in the pigment mixtures of modern paintings in spectral imagery. In this study the single-constant K-M theory has been utilized for the classification of green pigments in the Selden Map of China, a navigational map of the South China Sea likely created in the early seventeenth century. Hyperspectral data of the map was collected at the Bodleian Library, University of Oxford, and can be used to estimate the pigment diversity, and spatial distribution, within the map. This work seeks to assess the utility of analyzing the data in the K/S space from Kubelka-Munk theory, as opposed to the traditional reflectance domain. We estimate the dimensionality of the data and extract endmembers in the reflectance domain. Then we perform linear unmixing to estimate abundances in the K/S space, and following Bai, et al. (2017), we perform a classification in the abundance space. Finally, due to the lack of ground truth labels, the classification accuracy was estimated by computing the mean spectrum of each class as the representative signature of that class, and calculating the root mean squared error with all the pixels in that class to create a spatial representation of the error. This highlights both the magnitude of, and any spatial pattern in, the errors, indicating if a particular pigment is not well modeled in this approach.
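A minimal sketch of the single-constant Kubelka-Munk transform and a non-negative unmixing in K/S space is given below; the endmember spectra and the NNLS solver are illustrative choices, not the study's processing chain.
    import numpy as np
    from scipy.optimize import nnls

    def km_ks(R, eps=1e-6):
        # Single-constant Kubelka-Munk transform of diffuse reflectance:
        # K/S = (1 - R)^2 / (2 R).
        R = np.clip(np.asarray(R, dtype=float), eps, 1.0)
        return (1.0 - R) ** 2 / (2.0 * R)

    def unmix_ks(pixel_R, endmember_R):
        # Non-negative abundance estimate in K/S space, where single-constant
        # K-M theory makes mixing approximately linear in pigment concentration.
        A = km_ks(endmember_R).T            # bands x endmembers
        b = km_ks(pixel_R)
        abundances, _ = nnls(A, b)
        return abundances

    # toy example: two pigments mixed 70/30 in K/S space
    E = np.vstack([np.linspace(0.2, 0.8, 50), np.linspace(0.7, 0.3, 50)])
    mix_ks = 0.7 * km_ks(E[0]) + 0.3 * km_ks(E[1])
    mix_R = 1 + mix_ks - np.sqrt(mix_ks ** 2 + 2 * mix_ks)   # invert K/S -> R
    print(unmix_ks(mix_R, E))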
Evaluation of target detection methods and the study of accuracy improvement toward the application to MDA with hyperspectral imaging
Takaaki Ito, Daiki Nakaya, Shin Satori, et al.
In recent years, Maritime Domain Awareness (MDA) has become important for national defense in Japan. Target detection using hyperspectral data is useful for MDA. In this study, we found that Correlation Matched Filter (CMF) has a better detection accuracy than Spectral Matched Filter (SMF), both of which are derived from Reed-Xiaoli Detector. CMF doesn't need to calculate the average value of the background spectrum, which is also advantageous in real-time processing. In addition, we could also show that it is possible to improve the detection accuracy by band selection in CMF. This increases the detection accuracy of foreign matter on the ocean.
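For reference, the distinction drawn between SMF and CMF can be sketched as follows (a generic formulation with illustrative diagonal loading, not the authors' implementation):
    import numpy as np

    def smf(X, s):
        # Spectral matched filter: background mean removed, covariance-whitened.
        mu = X.mean(axis=0)
        K = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        w = np.linalg.solve(K, s - mu)
        return (X - mu) @ w / ((s - mu) @ w)

    def cmf(X, s):
        # Correlation matched filter: uses the sample correlation (second-moment)
        # matrix R = X^T X / N, so no background mean is required.
        R = X.T @ X / X.shape[0] + 1e-6 * np.eye(X.shape[1])
        w = np.linalg.solve(R, s)
        return X @ w / (s @ w)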
Target and Change Detection
Multi-sensor anomalous change detection at scale
Combining multiple satellite remote sensing sources provides a far richer, more frequent view of the earth than that of any single source; the challenge is in distilling these petabytes of heterogeneous sensor imagery into meaningful characterizations of the imaged areas. To meet this challenge requires effective algorithms for combining heterogeneous data to identify subtle but important changes among the intrinsic data variation. The major obstacle to using heterogeneous satellite data to monitor anomalous changes across time is this: subtle but real changes on the ground can be overwhelmed by artifacts that are simply due to the change in modality. Here, we implement a joint-distribution framework for anomalous change detection that can effectively "normalize" for these changes in modality, and does not require any phenomenological resampling of the pixel signal. This flexibility enables the use of satellite imagery from different sensor platforms and modalities. We use multi-year construction of the Los Angeles Stadium at Hollywood Park (in Inglewood, CA) as our testbed, and exploit synthetic aperture radar (SAR) imagery from Sentinel-1 and multispectral imagery from both Sentinel-2 and Landsat 8. We explore results for anomalous change detection between Sentinel-2 and Landsat 8 over time, and also show results for anomalous change detection between Sentinel-1 SAR imagery and Sentinel-2 multispectral imagery.
Change detection using Landsat and Worldview images
This paper presents some preliminary results using Landsat and Worldview images for change detection. The studied area had some significant changes such as construction of buildings between May 2014 and October 2015. We investigated several simple, practical, and effective approaches to change detection. For Landsat images, we first performed pansharpening to enhance the resolution to 15 meters. We then performed a chronochrome covariance equalization between two images. The residual between the two equalized images was then analyzed using several simple algorithms such as direct subtraction and global Reed-Xiaoli (GRX) detector. Experimental results using actual Landsat images clearly demonstrated that the proposed methods are effective. For Worldview images, we used pansharpened images with only four bands for change detection. The performance of the aforementioned algorithms is comparable to that of a commercial package developed by Digital Globe.
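A compact sketch of the chronochrome equalization and a global RX score on the residual, as generally formulated, is shown below; the regularization terms are assumptions.
    import numpy as np

    def chronochrome_residual(X, Y, eps=1e-6):
        # Chronochrome prediction of time-2 pixels from time-1 pixels via
        # cross-covariance; the residual highlights changes.  X, Y are
        # (pixels x bands) arrays of co-registered images.
        mx, my = X.mean(axis=0), Y.mean(axis=0)
        Xc, Yc = X - mx, Y - my
        Cxx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])
        Cyx = Yc.T @ Xc / len(X)
        Y_pred = my + Xc @ np.linalg.solve(Cxx, Cyx.T)
        return Y - Y_pred

    def global_rx(E, eps=1e-6):
        # Global RX score on the residual: squared Mahalanobis distance.
        mu = E.mean(axis=0)
        Ec = E - mu
        C = Ec.T @ Ec / len(E) + eps * np.eye(E.shape[1])
        return np.einsum("ij,ij->i", Ec @ np.linalg.inv(C), Ec)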
Comparison of longwave infrared hyperspectral target detection methods
Nathan P. Wurst, Seung Hwan An, Joseph Meola
Numerous methods exist to perform hyperspectral target detection. Application of these algorithms often requires the data to be atmospherically corrected. Detection for longwave infrared data typically requires surface temperature estimates as well. This work compares the relative robustness of various target detection algorithms with respect to atmospheric compensation and target temperature uncertainty. Specifically, the adaptive coherence estimator and spectral matched filter will be compared with subspace detectors for various methods of atmospheric compensation and temperature-emissivity separation. Comparison is performed using both daytime and nighttime longwave infrared hyperspectral data collected at various altitudes for various target materials.
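For completeness, a generic global-statistics form of the adaptive coherence estimator referenced above can be sketched as follows (with an assumed diagonal-loading term):
    import numpy as np

    def ace(X, s):
        # Adaptive coherence/cosine estimator scores for pixel rows of X
        # against target spectrum s, using global background statistics.
        mu = X.mean(axis=0)
        G = np.linalg.inv(np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
        Xc, sc = X - mu, s - mu
        num = (Xc @ G @ sc) ** 2
        return num / ((sc @ G @ sc) * np.einsum("ij,ij->i", Xc @ G, Xc))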
Coupled atmospheric surface observations with surface aerosol particle counts for daytime sky radiance quantification
Scott Wolfmeyer, Grant Thomas, Steven Fiorino
For successful daytime imaging and detection, it is extremely important to be able to quantify the solar background noise of the sky in the ultraviolet (UV) to shortwave infrared (SWIR) range of the electromagnetic (EM) spectrum. Daytime sky radiance can be characterized in any direction at any time using the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. Atmospheric information needed for sky radiance characterizations includes the temperature, pressure, humidity, and aerosol concentration present throughout the atmospheric path. The real- or near-real-time atmospheric information can be obtained from satellites, radiosondes, surface weather stations, and particle counts, or diagnosed with numerical weather prediction (NWP). Furthermore, it is necessary to investigate how much real-time information is needed, how difficult it is to obtain, and whether or not some or all of the inputs need to be measured or predicted using NWP. In order to evaluate what is optimal in terms of ease of obtaining necessary data as well as accuracy and speed of the analysis, sky radiance measurements were made for a select number of non-winter days that were sunny and clear and offered varying atmospheric conditions. LEEDR sky radiances were calculated for the given days, times, and telescope look angles for each of the real-time observed surface observations and NWP data as well as all combinations of those inputs. Comparisons were made between the sky radiance measurements and the LEEDR radiance outputs to determine what combinations of real-time observed information produced the most accurate simulated characterizations.
Object detection and classification in aerial hyperspectral imagery using a multivariate hit-or-miss transform
High resolution aerial and satellite borne hyperspectral imagery provides a wealth of information about an imaged scene, allowing many earth observation applications to be investigated. Such applications include geological exploration, soil characterisation, land usage, and change monitoring, as well as military applications such as anomaly and target detection. While this sheer volume of data provides an invaluable resource, with it comes the curse of dimensionality and the necessity for smart processing techniques, as analysing this large quantity of data can be a lengthy and problematic task. In order to aid this analysis, dimensionality reduction techniques can be employed to simplify the task by reducing the volume of data and describing it (or most of it) in an alternate way. This work aims to apply this notion of dimensionality reduction based hyperspectral analysis to target detection using a multivariate Percentage Occupancy Hit or Miss Transform that detects objects based on their size, shape, and spectral properties. We also investigate the effects of noise and distortion and how incorporating these factors in the design of the necessary structuring elements allows for a more accurate representation of the desired targets and therefore a more accurate detection. We also compare our method with various other common target detection and anomaly detection techniques.
Machine Learning in Spectral Sensing II
Machine learning for better trace chemical detection
Kristin DeWitt
One of the biggest challenges facing standoff chemical detection is the ability to accurately predict spectral influences on chemical signatures that are caused by factors such as particle size, particle shape, deposition thickness, or mesoscale crystallinity and morphology. Currently, most chemical detection algorithms treat each substrate/sorbate combination as a separate library entry that must be empirically measured. Physics-based or semi-empirical models work fairly well for fitting spectra and qualitatively mapping features and trends, but are much less accurate for quantitative predictions, and are too computationally intensive for real-time development of field libraries for practical instruments. IARPA's MORGOTH'S CROWN prize challenge was a crowdsourced effort to encourage new approaches to infrared spectral modeling to quantitatively predict trace spectra on surfaces from bulk reflectance spectra. Challenge participants were given a training set which included reflectance spectra of sample coupons with trace chemical residues on them, including examples with different particle sizes, crystal structures, and mass loadings. Participants were then asked to generate an algorithm to predict what the spectra of different combinations of chemicals and substrates would look like. The results of the MORGOTH'S CROWN challenge showed that machine-learning based algorithms were better able to quantitatively predict new spectra than physics-based models. The article will describe the execution of the MORGOTH'S CROWN challenge, discuss the "bigger picture" of what the results mean, and consider where we go from here.
Blended learning for hyperspectral data
Ilya Kavalerov, Wojciech Czaja
We explore the spectral spatial representation capabilities of convolutional neural networks for the purpose of classification of hyperspectral images. We examine several types of neural networks, including a novel technique that blends the Fourier scattering transform with a convolutional neural network. This method is naturally suited for the representation of hyperspectral data because it decomposes signals into multi-frequency bands, removing small perturbations such as noise, while also having the capability of neural networks to learn a hierarchical representation. We test our proposed method on the standard Pavia University hyperspectral dataset and demonstrate a new training set sampling strategy that reveals the inherent spatial bias present in some purely neural network methods. The results indicate that our form of blended learning is more effective at representing spectral data and less prone to overfitting the artificial spatial bias in hyperspectral data.
An application of CNNs to time sequenced one dimensional data in radiation detection
Eric T. Moore, William P. Ford, Emma J. Hague, et al.
A Convolutional Neural Network architecture was used to classify various isotopes from time-sequenced gamma-ray spectra, a typical output of a radiation detection system of a type commonly fielded for security or environmental measurement purposes. A two-dimensional surface (waterfall plot) in time-energy space is interpreted as a monochromatic image and standard image-based CNN techniques are applied. This allows the time-sequenced aspects of features in the data to be discovered by the network, as opposed to standard algorithms which arbitrarily time-bin the data to satisfy the intuition of a human spectroscopist. The CNN architecture and the results of this novel application of image processing techniques to radiation data are presented along with a comparison to more conventional adaptive methods.
Optimizing deep learning model selection for angular feature extraction in satellite imagery
Poppy G. Immel, Meera A. Desai, Daniela I. Moody
Deep learning techniques have been leveraged in numerous applications and across different data modalities over the past few decades, more recently in the domain of remotely sensed imagery. Given the complexity and depth of Convolutional Neural Network (CNN) architectures, it is difficult to fully evaluate performance, optimize the hyperparameters, and provide robust solutions to a specific machine learning problem that can be easily extended to similar problems, e.g. via transfer learning. Ursa Space Systems Inc. (Ursa) develops novel machine learning approaches to build custom solutions and extract answers from Synthetic Aperture Radar (SAR) satellite data fused with other remote sensing datasets. One application is identifying the orientation with respect to true north of the inlet pipe, a common feature located on the top of a cylindrical oil storage tank. In this paper, we propose a two-phase approach for determining this orientation: first an optimized CNN is used to probabilistically determine a coarse orientation of the inlet pipe, followed by a maximum likelihood voting scheme to automatically extract the location of the angular feature within 7.5°. We present a systematic technique to determine the best deep learning CNN architecture for our specific problem under user-defined optimization and accuracy constraints, by optimizing model hyperparameters (number of layers, size of the input image, and dataset preprocessing) using manual and grid search approaches. The use of this systematic approach for hyperparameter optimization increases the accuracy of our angular feature extraction algorithm from 86% to 94% and can be extended to similar applications.
Spectral Imaging
Hyperspectral imaging microscopy with a tunable laser illumination source (Conference Presentation)
Ronald G. Resmini, David W. Allen, E. T. Slonecker
We report on a shortwave infrared (SWIR; 900 nanometers [nm] to 2500 nm) hyperspectral imaging microscope (HSIM) based on a tunable laser illumination source, a capability we assembled in 2013, applied in 2014, and discussed in a 2018 paper in SPIE's Journal of Applied Remote Sensing (JARS). The HSIM is a custom-built system based on monochromatic laser illumination and an imager. It is a framing system; sample translation or mirror scanning is not required. The laser used is a Q-switched Nd:YAG, 10 Hz, 850 mJ at 1064 nm, with an optical parametric oscillator tunable from 410 nm to 2550 nm with a linewidth <6.0 cm-1. The laser is projected through free space to a diffuser and then to the sample. The imager is an HgCdTe-based camera with 14-bit radiometric resolution and a spectral response from 900 nm to 2500 nm. Its detector array is 320 by 256 with a pixel pitch of 30 µm. The lens used is a 25 mm focal length, f/1.4 lens optimized for use in the near-infrared/SWIR. The sensor may be raised or lowered to vary the spatial resolution. A custom-written program was used for operation and data acquisition. The program controls the laser stepping sequentially through the wavelengths, triggers the camera, and collects a set of images at each wavelength. We discuss lessons learned in the HSIM's construction and operation as well as in data processing. Data of a polished granite slab are shown and are compared to HSI data acquired with other laboratory sensors.
High speed VNIR/SWIR HSI sensor for vegetation trait mapping
Julia R. Dupuis, S. Chase Buchanan, Stephanie Craig, et al.
A high-speed visible/near infrared, shortwave infrared (VNIR/SWIR) hyperspectral imaging (HSI) sensor for airborne, dynamic, spatially-resolved vegetation trait measurements in support of advanced terrestrial modeling is presented. The VNIR/SWIR-HSI sensor employs a digital micromirror device as an agile, programmable entrance slit into VNIR (0.5–1 μm) and SWIR (1.2–2.4 μm) grating spectrometer channels, each with a two-dimensional focal plane array. The sensor architecture, realized in a 13 lb package, is specifically tailored for deployment on a small rotary wing (hovering) unmanned aircraft system (UAS). The architecture breaks the interdependency between aircraft speed, frame rate, and spatial resolution characteristic of push-broom HSI systems. The approach enables imaging while hovering as well as flexible revisit and/or foveation over a region of interest without requiring cooperation by the UAS. Hyperspectral data cubes are acquired on a timescale of seconds, which alleviates the position accuracy requirements on the UAS's GPS-IMU. The sensor employs a simultaneous, boresighted visible context imager for pan sharpening and orthorectification. The data product is a 384 × 290 (spatial) × 340 (spectral) format calibrated, orthorectified spectral reflectivity data cube with a 26 × 20° field of view. The development, characterization, and a series of capability demonstrations of an advanced prototype VNIR/SWIR HSI sensor are presented. Capability demonstrations include ground-based testing as well as flight testing from a commercial rotary wing UAS with remote operation of the HSI sensor via a dedicated ground station.
Frequency analysis and optimization of the diffractive plenoptic camera
Two configurations of the Diffractive Plenoptic Camera (DPC), the DPC and the Intermediate Image DPC (IIDPC), had previously been built and their performances compared. The DPC couples a diffractive optic with a plenoptic camera design to provide snapshot spectral imaging capabilities, but it produces rendered images with low pixel count and low spatial resolution. The IIDPC, a modified setup of the DPC, was introduced as a system that could improve the spatial resolution. The IIDPC improved resolution over a narrow, centralized spectral range, while the DPC sustained resolution over a larger spectral range. Further study of both systems was needed to understand the limiting factor in their performance. Frequency analysis of both systems was carried out to determine the limiting component of each; in both systems, the limiting optic was determined to be the Microlens Array (MLA).
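For context on why a microlens array often ends up as the limiting optic, the incoherent diffraction-limited cutoff frequency of an optic scales as 1/(λ·F#). The short sketch below compares illustrative f-numbers for a main lens and an MLA; the numbers are assumptions for the example, not values from the paper.

    # Incoherent diffraction-limited cutoff frequency: f_c = 1 / (lambda * F#)
    # (cycles per mm when lambda is expressed in mm). Values are illustrative only.
    def cutoff_cycles_per_mm(wavelength_nm, f_number):
        wavelength_mm = wavelength_nm * 1e-6
        return 1.0 / (wavelength_mm * f_number)

    wavelength_nm = 550.0      # mid-visible design wavelength (assumed)
    main_lens_fnum = 4.0       # assumed main-optic f-number
    mla_fnum = 15.0            # microlens arrays are typically much slower

    print("main lens cutoff: %.0f cy/mm" % cutoff_cycles_per_mm(wavelength_nm, main_lens_fnum))
    print("MLA cutoff:       %.0f cy/mm" % cutoff_cycles_per_mm(wavelength_nm, mla_fnum))
    # The lower MLA cutoff is consistent with the MLA limiting system resolution.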
Assessment of residual fixed pattern noise on hyperspectral detection performance
Hyperspectral imaging sensors suffer from pixel-to-pixel response nonuniformity that manifests as fixed pattern noise (FPN) in collected data. FPN is typically removed by application of flat-field calibration procedures and nonuniformity correction algorithms. Despite application of these techniques, some amount of residual fixed pattern noise (RFPN) may persist in the data, negatively impacting target detection performance. In this paper we examine the conditions under which RFPN can impact detection performance using data collected in the SWIR across a range of target materials. We examine the application of scene-based nonuniformity correction (SBNUC) algorithms and assess their ability to remove RFPN. Moreover, we examine the effect of RFPN after application of these techniques to assess detection performance on a number of target materials that range in inherent separability from the background.
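For readers unfamiliar with the flat-field calibration step mentioned here, a minimal two-point nonuniformity correction looks like the sketch below. This is the generic textbook form, not the scene-based (SBNUC) algorithms evaluated in the paper; any per-pixel error it leaves behind is the residual fixed pattern noise (RFPN) the paper studies.

    import numpy as np

    def two_point_nuc(raw, dark, flat):
        """Classic two-point nonuniformity correction for a focal plane array.
        raw, dark, flat: 2-D frames (scene, shutter-closed dark, uniform flat field)."""
        gain = flat - dark
        gain = np.where(gain == 0, 1e-12, gain)        # guard dead pixels
        corrected = (raw - dark) / gain
        return corrected * np.mean(flat - dark)        # restore radiometric scale

    # Synthetic example with a column-wise gain ripple as the fixed "pattern".
    col_gain = 1.0 + 0.05 * np.sin(np.arange(128) / 4.0)
    gain_pattern = np.tile(col_gain, (128, 1))
    truth = np.full((128, 128), 1000.0)
    raw = truth * gain_pattern + 50.0                  # 50-count offset
    dark = np.full_like(truth, 50.0)
    flat = 800.0 * gain_pattern + 50.0
    print(np.std(two_point_nuc(raw, dark, flat)))      # ~0: pattern removed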
Optimized algorithm for processing hyperspectral push-broom data from multiple sources
Raik Illmann, Maik Rosenberger, Gunther Notni
Handling big measurement data is increasingly developing into a task with high requirements regarding efficiency and ensuring unaltered accuracy. Especially in the framework of spectral measurement, there is a rapidly increasing demand caused by rising availability of constantly improved sensor systems with better resolution and sensitivity. By using spectral imaging, an object can be measured including its spectral and spatial information. Respecting the claims on information given by a measurement result, for example from the field of quality assurance, the experimental setup that is used in this work should reach a spectral range from 220 nm up to 1700 nm. Because of technical limitations, more than one push-broom imaging system is necessary for measuring in such a wide spectral range. This paper deals about the evaluation and the further work concerning the techniques for matching spectral cubes acquired by different sources. It ties in with previous work, which laid down the fundamental ideas for handling those special kinds of big data sets. The developed algorithm is able to handle hyperspectral data from a multiple push-broom imaging system and the integrated calibration strategy ensures the correction with respect to geometric and chromatic aberrations. Further on, the experimental data will be compared to find the promising approach, depending on the case of application. A short survey of the analysis is also included and a simple idea for decreasing the effect of motion blur based on wavelet transformation was realized as well. The paper closes a chapter of investigations for merging spectral cubes, acquired by a multiple imaging prototype system with an efficient result.
Development of a pipeline for generating high resolution multispectral Mastcam images
The Mastcam multispectral imagers onboard the Mars rover Curiosity have been collecting data since 2012. There are two imagers: the left imager has a wide field of view but three times lower resolution than the right imager, which has a narrow field of view and higher resolution. Left and right images can be combined to generate stereo images; however, stereo images generated in the conventional way are limited to the resolution of the left imager. Ideally, it would be more interesting to science enthusiasts and rover operators if one could generate stereo images at the resolution of the right imager, as the resolution would be three times better. Recently, we have developed algorithms that fuse left and right images to create left images with the same resolution as the right. Consequently, high resolution stereo images can be generated, and disparity images can also be generated. In this paper, we summarize the development of a data processing pipeline that takes left and right Mastcam images from the Planetary Data System (PDS) archive, performs pansharpening to enhance the left images with help from the right images, generates high resolution stereo images and disparity maps, and saves the processed images back into the PDS archive. The details of the workflow are described, including the image alignment algorithm, the pansharpening algorithm, the stereo image formation algorithms, and the disparity map generation algorithms. Some demonstration examples are given as well.
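The pipeline's actual pansharpening algorithm is described in the authors' earlier work. Purely as a generic illustration of the idea of enhancing a low-resolution left frame with detail from a co-registered higher-resolution right frame, a simple high-pass-injection scheme might look like the sketch below (an assumption for the 3x resolution ratio, not the pipeline's method).

    import numpy as np
    from scipy.ndimage import zoom, gaussian_filter

    def highpass_injection_pansharpen(left_lowres, right_highres, ratio=3):
        """Upsample the low-resolution band and add back high-frequency detail
        taken from the co-registered high-resolution image (generic scheme)."""
        left_up = zoom(left_lowres, ratio, order=3)          # bicubic-style upsample
        left_up = left_up[:right_highres.shape[0], :right_highres.shape[1]]
        detail = right_highres - gaussian_filter(right_highres, sigma=ratio)
        return left_up + detail

    # Toy example: a 3x-degraded copy of a synthetic "right" image.
    rng = np.random.default_rng(1)
    right = gaussian_filter(rng.random((150, 150)), 2.0)
    left = right[::3, ::3]                                   # simulate the lower resolution
    sharp = highpass_injection_pansharpen(left, right)
    print(sharp.shape)                                       # (150, 150)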
Poster Session
Iterative constrained energy minimization convolutional neural network for hyperspectral image classification
In hyperspectral image classification, how to jointly exploit spectral and spatial information has received considerable interest lately, and many spectral-spatial classification approaches have been proposed. Unlike spectral-spatial classifiers developed from the traditional perspective, the iterative constrained energy minimization (ICEM) and iterative target-constrained interference-minimized classifier (ITCIMC) approaches are developed from a subpixel detection and mixed pixel classification point of view, and generally perform better than existing spectral-spatial approaches in terms of several measurements, such as accuracy rate and precision rate. Recently, convolutional neural networks (CNNs) have been successfully applied to visual imagery classification and have received great attention in hyperspectral image classification, due to the outstanding ability of CNNs to capture spatial information. This paper extends ICEM to an iterative constrained energy minimization convolutional neural network approach for hyperspectral image classification. To capture spatial information, a CNN is used instead of a Gaussian filter to generate a binary pixelwise classification map from the constrained energy minimization (CEM) detection results; the CNN classification map is fed back into the hyperspectral bands, and CEM detection is then reprocessed in an iterative manner. Since the CNN can reduce the precision rate, a background recovery procedure is designed to recover the background detection map from the CEM detection map and add it into the CEM result as a new detection map.
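The constrained energy minimization detector at the core of ICEM has the standard closed form w = R^-1 d / (d^T R^-1 d), where R is the sample correlation matrix and d the target signature. A minimal numpy version of that detector, with made-up data shapes and a planted target pixel, is sketched below; the iterative CNN feedback described in the abstract is not reproduced here.

    import numpy as np

    def cem_detector(cube, target):
        """Constrained energy minimization on an HSI cube.
        cube:   (rows, cols, bands) hyperspectral image
        target: (bands,) desired signature d
        Returns a (rows, cols) detection map y = w^T x."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands)                    # pixels as rows
        R = X.T @ X / X.shape[0]                       # sample correlation matrix
        Rinv_d = np.linalg.solve(R, target)
        w = Rinv_d / (target @ Rinv_d)
        return (X @ w).reshape(rows, cols)

    # Toy example: random cube with the target signature planted in one pixel.
    rng = np.random.default_rng(0)
    cube = rng.normal(1.0, 0.1, size=(50, 50, 20))
    d = np.linspace(0.5, 2.0, 20)
    cube[25, 25] = d
    det_map = cem_detector(cube, d)
    print(det_map[25, 25], det_map.mean())             # target pixel scores near 1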
Unsupervised iterative CEM-clustering based multiple Gaussian feature extraction for hyperspectral image classification
Recently, many spectral-spatial hyperspectral image classification techniques have been developed, such as the widely used EPF-based and composite kernel-based approaches. However, the performance of these types of spectral-spatial approaches generally depends on both the technique and the spatial feature information that guides it. To address this issue, an unsupervised subpixel-detection-based hyperspectral feature extraction approach for classification is proposed in this paper. The Harsanyi-Farrand-Chang (HFC) method is utilized to estimate the number of distinct features into which the hyperspectral image can be decomposed, and the simplex growing algorithm (SGA) is utilized to generate endmembers as the initial condition for K-means clustering. Subpixel detection maps are generated by constrained energy minimization (CEM) using the centroids of the K-means clusters. To capture spatial information, multiple Gaussian feature maps are generated by applying Gaussian spatial filters with different standard deviations to the CEM detection maps; PCA is used to reduce the dimension of the multiple Gaussian feature maps, which are fed back into the hyperspectral band images to reprocess K-means in an iterative manner. The proposed unsupervised approach is evaluated against supervised approaches such as iterative CEM (ICEM), EPF-based, and composite kernel-based methods, and the results show that classification performance is improved in most cases.
A novel image registration method based on geometrical outlier removal
The accuracy of the two sets of feature points is critical to remote sensing image registration based on feature matching. This paper proposes a novel image registration method based on geometrical outlier removal. The purpose of the algorithm is to eliminate most outliers while preserving as many inliers as possible. We formulate outlier elimination as an optimization model in which the geometric relationship of the feature points is the constraint, and we derive a simple closed-form solution with linear time and linear space complexity. The algorithm is divided into three key steps. First, the two remote sensing images are registered by the scale-invariant feature transform (SIFT) algorithm; this step generates the initial feature points. Then the mathematical model is built and the optimal solution is calculated from the initial feature points. Last, we compare the two most recent registration results based on the optimal solution and determine whether it is necessary to update the initial feature points and recalculate. The experimental results demonstrate the accuracy and robustness of the proposed algorithm.
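The first step described here (generating initial feature points with SIFT) is standard, and an OpenCV sketch of it is given below with Lowe's ratio test as a placeholder filter. The paper's geometrical outlier-removal model, which then prunes these matches, is not reproduced; the image filenames in the usage comment are hypothetical.

    import cv2
    import numpy as np

    def initial_sift_matches(img1, img2, ratio=0.75):
        """SIFT keypoints + brute-force matching with Lowe's ratio test.
        Returns two (N, 2) arrays of matched point coordinates."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in pairs if m.distance < ratio * n.distance]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
        return pts1, pts2

    # Usage (8-bit grayscale image chips; filenames are placeholders):
    # pts1, pts2 = initial_sift_matches(cv2.imread("ref.png", 0), cv2.imread("sen.png", 0))
    # ...the geometric outlier-removal model is then applied to pts1/pts2.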
An iterative SIFT based on intensity and spatial information for remote sensing image registration
Shuhan Chen, Xiaorun Li, Liaoying Zhao, et al.
Owing to significant geometric distortions and illumination differences, high-precision and robust matching for multisource remote sensing image registration poses a challenge. This paper presents a new approach to remote sensing image registration, called iterative scale-invariant feature transform (ISIFT) with rectification (ISIFTR). Unlike traditional or modified SIFT-based methods, ISIFTR includes rectification loops to obtain rectified parameters in an iterative manner. The SIFT-based registration result is updated by the rectification loops iteratively and terminated by an automatic stopping rule. ISIFTR works in three stages: the first stage captures consistent feature sets with maximum similarity, the second stage compares the registration parameters between two successive iterations for updating, and the third stage terminates the algorithm. The experimental results demonstrate that ISIFTR achieves better registration accuracy than SIFT without rectification. A comparison of the iteration curves based on four different similarity metrics illustrates that the RIRMI-based rectification obtains better results than the other similarity metrics.
Analysis of close-range hyperspectral images of vegetation communities in a high Arctic tundra ecosystem
Astrid D. Chacon, Miguel Velez-Reyes, Stephen M. Escarzaga, et al.
Close-range hyperspectral imaging is a valuable but often underutilized tool for rapid, non-destructive and automated assessment of vegetation functional dynamics in terms of both structure and physiology. During the 2017 summer growing season, several hyperspectral images were collected at close proximity over a variety of vegetation plots measuring approximately 1 m2, each consisting of a heterogeneous architecture of vascular and non-vascular plant species and spanning variable soil moisture gradients. These long-term ecological monitoring vegetation plots are associated with the International Tundra Experiment - Arctic Observing Network (ITEX-AON) in Utqiaġvik, Alaska (formerly known as Barrow). Over the past two decades, ITEX has aimed to understand how Arctic tundra is responding to warming both across plant communities and through time. Hyperspectral images were collected in the visible to near-infrared range (400-1000 nm) using a SOC710 VP hyperspectral imager from Surface Optics Corporation (SOC). Here we present initial results of analysis of these images using spectral unmixing techniques, which offer the potential to characterize and highlight the presence, structure, and vigor of these highly complex, heterogeneous tundra plant communities.
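As a pointer for readers new to spectral unmixing, the linear mixing step is often solved per pixel with nonnegative least squares. The sketch below uses two invented endmember spectra (not the Utqiaġvik data) to show that basic step with scipy.

    import numpy as np
    from scipy.optimize import nnls

    def unmix_nnls(pixels, endmembers):
        """Nonnegativity-constrained linear unmixing, pixel by pixel.
        pixels:     (n_pixels, n_bands) reflectance spectra
        endmembers: (n_endmembers, n_bands) endmember spectra
        Returns (n_pixels, n_endmembers) abundance estimates (not sum-to-one)."""
        E = endmembers.T                               # (bands, endmembers)
        return np.array([nnls(E, p)[0] for p in pixels])

    # Toy example: synthetic "vegetation" and "soil" spectra, mixed 70/30.
    bands = np.linspace(400, 1000, 61)
    veg = 0.05 + 0.5 / (1 + np.exp(-(bands - 720) / 15))   # red-edge-like curve
    soil = 0.1 + 0.0004 * (bands - 400)                     # slowly increasing
    mix = 0.7 * veg + 0.3 * soil
    print(unmix_nnls(mix[None, :], np.vstack([veg, soil])))  # ~[[0.7, 0.3]]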
Case-study analysis of apparent camouflage-pattern color using segment-weighted spectra
S. Ramsey, T. Mayo, C. Howells, et al.
Advanced camouflage patterns, which consist of highly detailed camouflage patterning, require additional methodologies for color evaluation with respect to realistic field conditions. A quantitative metric for evaluating camouflage patterns as viewed under realistic field conditions is "apparent color": the combination of all visible wavelengths (380-700 nm) of light reflected from large camouflage-pattern samples (≥1 m2) at a given standoff distance (25-100 ft). Camouflage patterns lose resolution with increasing standoff distance, and eventually all colors within the pattern combine and appear monotone (the "apparent color" of the camouflage pattern). This paper presents a case-study analysis of apparent camouflage-pattern color using segment-weighted reflectance spectra for the purpose of evaluating the apparent color of advanced camouflage patterns with respect to realistic field conditions. Simulation of apparent camouflage-pattern color using this methodology is based on decomposing the camouflage-pattern reflectance into the contributions of the pattern's component segments.
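One simple reading of segment weighting is an area-weighted average of the per-segment reflectance spectra. The sketch below illustrates that weighting with invented segment spectra and area fractions; the subsequent conversion to CIE color coordinates, and the paper's specific weighting scheme, are not reproduced.

    import numpy as np

    def segment_weighted_spectrum(segment_spectra, area_fractions):
        """Area-weighted 'apparent' reflectance of a camouflage pattern.
        segment_spectra: (n_segments, n_wavelengths) reflectance of each pattern color
        area_fractions:  (n_segments,) fraction of pattern area for each color."""
        w = np.asarray(area_fractions, dtype=float)
        w = w / w.sum()                                # normalize to sum to one
        return w @ np.asarray(segment_spectra)

    # Illustrative 3-color pattern over the visible range (380-700 nm).
    wavelengths = np.arange(380, 701, 10)
    dark_green = np.full(wavelengths.shape, 0.08)
    tan        = 0.15 + 0.001 * (wavelengths - 380)
    brown      = np.full(wavelengths.shape, 0.12)
    apparent = segment_weighted_spectrum([dark_green, tan, brown], [0.5, 0.3, 0.2])
    print(apparent[:3])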
Dried red chili peppers pungency assessment by visible and near infrared spectroscopy
Giuseppe Bonifazi, Riccardo Gasbarrone, Silvia Serranti
Chili peppers are widely used in many cuisines around the world to enhance the hotness of dishes. For this reason, a fast, reliable, non-destructive and non-invasive method to measure and control the hotness of red chili peppers would be quite useful. Visible - Near InfraRed Spectroscopy (Vis-NIRS) fits this purpose well. This work explores the use of a portable spectroradiometer to evaluate the spiciness of dried red chili peppers, along with other important properties such as moisture content and ash content. An ASD FieldSpec 4™ Standard-Res, able to acquire reflectance spectra on a "spot" basis in the 350-2500 nm region, was utilized to reach this goal. Different specimens of ground dried chili peppers (i.e. powder) and crushed dried chili peppers, with different characteristics, were analyzed. The collected spectra were correlated with the pungency of the powder and crushed samples, reported in Scoville Heat Units (SHU). To reach these goals, a chemometric approach aimed at setting up Partial Least Squares (PLS) regression models able to predict red chili pepper characteristics (i.e. ash content, moisture and SHU) was first applied; then a Partial Least Squares - Discriminant Analysis (PLS-DA) classification model was calibrated and validated using the reflectance spectra in order to recognize the pungency of the examined samples. The results are framed in a proximity sensing perspective and in an "on-line" food quality control logic.
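For orientation, fitting a PLS regression of pungency (SHU) on reflectance spectra follows the usual chemometric pattern shown in the sklearn sketch below. The spectra, the SHU link, and the choice of five latent variables are synthetic assumptions, not the authors' dataset or calibration.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for Vis-NIR reflectance spectra (350-2500 nm) and SHU values.
    rng = np.random.default_rng(42)
    n_samples, n_bands = 120, 2151
    spectra = rng.normal(0.4, 0.05, size=(n_samples, n_bands))
    shu = 5000 + 20000 * spectra[:, 1000] + rng.normal(0, 500, n_samples)  # fake link

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, shu, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=5)       # number of latent variables is assumed
    pls.fit(X_train, y_train)
    print("R^2 on held-out samples:", pls.score(X_test, y_test))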
Robust iterative estimation of material abundances based on spectral filters exploiting the SVD
Spectral unmixing aims to determine the relative amounts (so-called abundances) of raw materials (so-called endmembers) in hyperspectral images (HSI). Libraries of endmember spectra are often given. Since the linear mixing model assigns one spectrum to each raw material, endmember variability is not considered. Computationally costly algorithms exist to nevertheless derive precise abundances. In the method proposed in this work, we use only the pseudoinverse of the matrix of endmember spectra to estimate the abundances. As can be shown, this approach circumvents the necessity of acquiring an HSI and is less computationally costly. To become robust against model deviations, we iteratively estimate the abundances by modifying the matrix of endmember spectra used to derive the pseudoinverse. The values used to modify each endmember spectrum are derived using the singular value decomposition and the degree to which the physical constraints on the abundances are violated. Unlike existing algorithms, we account for endmember variability and simultaneously enforce the physical constraints. Evaluations of samples of material mixtures, such as mixtures of color powders and quartz sands, show that more accurate abundance estimates result. A physical interpretation of these estimates is possible in most cases.
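A baseline version of the pseudoinverse step, before any of the iterative robustification proposed in the paper, can be written directly with the SVD-based pseudoinverse. The sketch below uses invented endmember spectra and simply reports how often the unconstrained estimate violates the physical [0, 1] constraints that the proposed method exploits.

    import numpy as np

    def pinv_abundances(spectra, endmembers):
        """Unconstrained abundance estimate a = E^+ x via the SVD-based pseudoinverse.
        spectra:    (n_pixels, n_bands) measured spectra
        endmembers: (n_endmembers, n_bands) library spectra (one per raw material)."""
        E = endmembers.T                           # (bands, endmembers)
        E_pinv = np.linalg.pinv(E)                 # computed through the SVD
        return spectra @ E_pinv.T                  # (n_pixels, n_endmembers)

    # Toy mixture of three invented endmember spectra plus noise.
    rng = np.random.default_rng(3)
    endmembers = rng.random((3, 50))
    true_abund = np.array([0.6, 0.3, 0.1])
    x = true_abund @ endmembers + rng.normal(0, 0.01, 50)
    a = pinv_abundances(x[None, :], endmembers)
    print(a, "violations of [0,1]:", np.sum((a < 0) | (a > 1)))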
Feature extraction and scene classification for remote sensing image based on sparse representation
Youliang Guo, Junping Zhang, Shengwei Zhong
Sparse representation theory for classification is an active research area. Signals can potentially have a compact representation as a linear combination of atoms in an overcomplete dictionary. In this paper, a novel classification method for remote sensing images is proposed, which combines sparse-representation-based classification (SRC) with a K-nearest neighbor classifier. Based on the extracted multidimensional features, which are used to constitute an overcomplete dictionary, the image is expressed as the product of the dictionary and the sparse representation coefficients. The test image is then reconstructed by utilizing correlation and distance information between the image and each class simultaneously. Finally, each image is assigned a class label by minimizing the reconstruction error. The proposed method has also been extended to a kernelized variant to solve linearly inseparable problems. The experimental results show that the proposed method and its variant not only improve classification performance over SRC but also outperform typical classifiers, such as the support vector machine (SVM), especially when the number of training samples is limited.
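The core of sparse-representation-based classification is to code a test sample over a dictionary of training samples and assign the class whose atoms best reconstruct it. The minimal sketch below shows that core with orthogonal matching pursuit from sklearn; the paper's KNN fusion and kernelized variant are not reproduced, and the toy dictionary is invented.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(x, dictionary, labels, n_nonzero=5):
        """Sparse-representation-based classification (basic form).
        x:          (n_features,) test sample
        dictionary: (n_features, n_atoms), columns are training samples
        labels:     (n_atoms,) class label of each atom."""
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
        omp.fit(dictionary, x)
        coef = omp.coef_
        residuals = {}
        for c in np.unique(labels):
            coef_c = np.where(labels == c, coef, 0.0)        # keep class-c atoms only
            residuals[c] = np.linalg.norm(x - dictionary @ coef_c)
        return min(residuals, key=residuals.get)             # smallest residual wins

    # Toy example: two classes living in different coordinate subspaces.
    rng = np.random.default_rng(7)
    D0 = np.vstack([rng.random((20, 10)), np.zeros((20, 10))])   # class 0 atoms
    D1 = np.vstack([np.zeros((20, 10)), rng.random((20, 10))])   # class 1 atoms
    D = np.hstack([D0, D1])
    labels = np.array([0] * 10 + [1] * 10)
    test = D1[:, 0] + 0.01 * rng.normal(size=40)
    print(src_classify(test, D, labels))                          # expected: 1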
Hyperspectral anomaly detection algorithm based on non-negative sparsity score estimation
Zhenyuan Feng, Junping Zhang, Qiupeng Sun, et al.
Hyperspectral anomaly detection, as an important application of hyperspectral remote sensing, is widely used in mineral exploration, environmental monitoring, military reconnaissance, etc. The anomalies in hyperspectral image mainly refer to the spectral anomalies caused by the reflection or radiation of some special objects. They have the characteristics of small scales, no prior information, and low probability of occurrence. Such anomalies in the military, agriculture, geology and other fields often contain much important information and has great application value. The sparsity score estimation algorithm is a global algorithm with high stability but as well as high false alarm rate. In this paper, we propose a hyperspectral anomaly detection algorithm based on non-negative sparsity score estimation. Firstly, the initial dictionary is obtained by K-SVD algorithm. After the sparse representation model of hyperspectral image is generated by orthogonal matching pursuit algorithm, dictionary atoms and corresponding coefficients are updated through singular value decomposition of the error term, then the optimization is achieved by successive iteration. Secondly, the nonnegative constraint condition is introduced into the sparse representation model, and the non-negative sparse coefficient is solved by the non-negative sparse coding. Finally, the non-negative sparse coefficients are used to calculate the atom usage probability, so as to infer whether the corresponding image pixels are anomaly or not. The experiments conducted on hyperspectral images show that the proposed method is superior to some typical methods, which has a lower false alarm rate while remains the accuracy.
Tracking long-term stability of MODIS thermal emissive bands response versus scan-angle using Dome C observations
Since their launch in December 1999 and May 2002, Terra and Aqua MODIS (MODerate Resolution Imaging Spectroradiometer) have successfully operated for over 19 and 17 years, respectively. MODIS is a scanning radiometer that uses a two-sided scan mirror rotating at 20.3 rpm. MODIS data are collected in 36 spectral bands: 20 reflective solar bands (RSBs) and 16 thermal emissive bands (TEBs). Earth observations are made each scan over a wide scan-angle range of ±55° from the instrument nadir. MODIS TEBs are calibrated on-orbit on a scan-by-scan basis using a blackbody at a fixed scan-angle. For Terra MODIS, the current TEB Response Versus Scan-angle (RVS) of its scan mirror was derived using observations made during a deep-space pitch maneuver in early 2003, while the Aqua MODIS TEB RVS was characterized pre-launch. In this study, the RVS on-orbit stability for the MODIS TEBs (over mission lifetime) is evaluated using multiple daily Dome C observations over the entire range of angles of incidence (AOIs) using a 20 × 20 km region of interest (ROI) centered at 75.12° S, 123.39° E. In total, approximately 3000 individual granules per year per instrument are analyzed. Except for band 29, the estimated brightness temperature drift for every AOI and band is small for both MODIS instruments over their mission lifetimes. The large variability and noise in the Dome C data sets make it difficult to determine any small RVS changes that may have occurred, but the stability of the results does give confidence that the current RVS used in the calibration is sufficient.
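A much-simplified version of this kind of stability check is a per-AOI linear fit of brightness temperature against time. The sketch below uses synthetic values only to show the bookkeeping; it is not MODIS data, and the planted drift and noise levels are arbitrary.

    import numpy as np

    def bt_drift_per_aoi(times_yr, bt_series):
        """Linear drift (K per year) of brightness temperature for each AOI.
        times_yr:  (n_obs,) observation times in decimal years
        bt_series: (n_aoi, n_obs) brightness temperatures per AOI."""
        return np.array([np.polyfit(times_yr, bt, 1)[0] for bt in bt_series])

    # Synthetic example: 10 AOIs, 15 years of noisy but mostly stable observations.
    rng = np.random.default_rng(11)
    t = np.linspace(2003.0, 2018.0, 300)
    bt = 235.0 + rng.normal(0, 1.0, size=(10, t.size))     # illustrative baseline value
    bt[3] += 0.1 * (t - t[0])                              # plant a small drift in AOI 3
    print(np.round(bt_drift_per_aoi(t, bt), 3))            # K/yr; AOI 3 shows ~0.1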
Defect detection based on monogenic signal processing
Using an infrared image sequence, how can one make the inner structure of a sample more visible without human supervision or understanding of the context? This is well known to be a challenging task, in part because of the great number of external events and factors that can influence the acquisition. This paper introduces a solution to this question. The sequence of infrared images is processed using monogenic signal theory in order to extract the phase congruency. In one dimension, the Fourier transform of the analytic signal respects the Hermitian property thanks to the Hilbert transform; in two dimensions, this property is not respected and is only approximated in the analytic signal. Monogenic signal theory maintains the Hermitian symmetry by replacing the Hilbert transform with a Riesz transform. In other words, phase congruency can be described as a feature detection approach. Under the assumption that the symmetry or asymmetry of the phase represents the similarity of the features at one scale, the phase congruency represents how similar the phase values are across different scales. The proposed approach is invariant to image contrast, which makes it suitable for applications, and it can give valuable results even with very noisy sequences. The proposed approach has been evaluated using a referenced Carbon Fiber Reinforced Plastic sample.
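For reference, the Riesz transform that replaces the Hilbert transform in the monogenic framework can be written as a pair of frequency-domain filters, R1 = -i u/|w| and R2 = -i v/|w|. The sketch below computes the two Riesz components and the monogenic amplitude and local phase of a synthetic ridge pattern; it is a generic formulation, not the paper's full phase-congruency pipeline, which additionally uses multiple scales.

    import numpy as np

    def riesz_transform(img):
        """First-order Riesz transform of a 2-D image via the FFT.
        Frequency response: R1 = -i*u/|w|, R2 = -i*v/|w|, |w| = sqrt(u^2 + v^2).
        Returns the two spatial-domain Riesz components (real arrays)."""
        rows, cols = img.shape
        u = np.fft.fftfreq(cols)[None, :]
        v = np.fft.fftfreq(rows)[:, None]
        mag = np.sqrt(u**2 + v**2)
        mag[0, 0] = 1.0                         # avoid division by zero at DC
        F = np.fft.fft2(img)
        r1 = np.real(np.fft.ifft2(F * (-1j * u / mag)))
        r2 = np.real(np.fft.ifft2(F * (-1j * v / mag)))
        return r1, r2

    # Monogenic amplitude and local phase of a synthetic ridge pattern.
    y, x = np.mgrid[0:128, 0:128]
    img = np.cos(2 * np.pi * x / 16.0)
    r1, r2 = riesz_transform(img)
    amplitude = np.sqrt(img**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), img)
    print(amplitude.mean(), phase.min(), phase.max())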
Combining near-infrared and visible images from UAV cameras
This paper deals with the problem of obtaining the boundaries of objects in near-infrared images. It is difficult for a person to correlate information in IR images with the observed objects, so the task of combining information from all cameras to highlight stable features of the objects in the frame is relevant. The proposed algorithm implements step-by-step data processing including preprocessing, super-resolution, searching for object boundaries and base points in the different optical ranges, searching for correspondences between them, and transferring object boundaries from the stationary-system images to the mobile monitoring system images. As test data, we used video footage from the UAV we developed. A group of fixed optical cameras (visible spectrum) and near-infrared data are used as the optical system. The cameras used in our UAV have a low optical resolution (800×600 in the near-infrared spectrum, 1600×1200 in the visible range).