Proceedings Volume 9820

Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXVII



Volume Details

Date Published: 20 July 2016
Contents: 13 Sessions, 40 Papers, 0 Presentations
Conference: SPIE Defense + Security 2016
Volume Number: 9820

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9820
  • Modeling I
  • Modeling II
  • Modeling III
  • Modeling IV
  • Modeling V
  • Modeling VI
  • Modeling VII
  • Test I
  • Test II
  • Test III
  • Targets, Backgrounds, Atmospherics, and Simulation I
  • Targets, Backgrounds, Atmospherics, and Simulation II
Front Matter: Volume 9820
Front Matter: Volume 9820
This PDF file contains the front matter associated with SPIE Proceedings Volume 9820, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Modeling I
Investigating binocular summation in human vision using complementary fused external noise
Christopher L. Howell, Jeffrey T. Olson
The impact noise has on the processing of visual information at various stages within the human visual system (HVS) is still an open research area. To gain additional insight, twelve experiments were administered to human observers using sine wave targets to determine their contrast thresholds. A single frame of additive white Gaussian noise (AWGN) and its complement were used to investigate the effect of noise on the summation of visual information within the HVS. A standard contrast threshold experiment served as the baseline for comparisons. In the standard experiment, a range of sine wave targets are shown to the observers and their ability to detect the targets at varying contrast levels was recorded. The remaining experiments added some form of noise (noise image or its complement) and/or an additional sine wave target separated by one to three octaves from the test target. All of these experiments were tested using either a single monitor for viewing the targets or with a dual monitor presentation method for comparison. In the dual monitor experiments, a ninety degree mirror was used to direct each target to a different eye, allowing for the information to be fused binocularly. The experiments in this study present different approaches for delivering external noise to the HVS, and should allow for an improved understanding regarding how noise enters the HVS and what impact noise has on the processing of visual information.
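The complementary-noise construction described above can be sketched numerically. This is a minimal sketch under the assumption that the "complement" of a noise frame is its reflection about the mean display level; the mean level, noise amplitude, and image size are illustrative, not values from the study. The key property is that an ideally fused pair averages back to the uniform background:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_level = 0.5                      # assumed uniform mid-gray background

# single frame of additive white Gaussian noise on the background,
# clipped to the displayable range
noise = rng.normal(0.0, 0.1, size=(256, 256))
frame = np.clip(mean_level + noise, 0.0, 1.0)

# complement frame: reflected about the mean, so that the pair
# averages back to the uniform background when fused
complement = 2.0 * mean_level - frame

fused = 0.5 * (frame + complement)    # idealized binocular summation
```

Under this construction, perfect binocular summation would cancel the external noise entirely; any residual threshold elevation measured in the experiments then reflects where and how the noise actually enters the HVS.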
Method and tool for generating and managing image quality allocations through the design and development process
Performance models for infrared imaging systems require image quality parameters; optical design engineers need image quality design goals; systems engineers develop image quality allocations to test imaging systems against. It is a challenge to maintain consistency and traceability amongst the various expressions of image quality. We present a method and parametric tool for generating and managing expressions of image quality during the system modeling, requirements specification, design, and testing phases of an imaging system design and development project.
Modeling threshold detection and search for point and extended sources
This paper deals with three separate topics. 1) The Berek extended object threshold detection model is described, calibrated against a portion of Blackwell’s 1946 naked eye threshold detection data for extended objects against an unstructured background, and then the remainder of Blackwell’s data is used to verify and validate the model. A range equation is derived from Berek’s model which allows threshold detection range to be predicted for extended to point objects against an uncluttered background as a function of target size and adapting luminance levels. The range equation is then used to model threshold detection of stationary reflective and self-luminous targets against an uncluttered background. 2) There is uncertainty whether Travnikova’s search data for point source detection against an uncluttered background is described by Rayleigh or exponential distributions. A model which explains the Rayleigh distribution for barely perceptible objects and the exponential distribution for brighter objects is given. 3) A technique is presented which allows a specific observer’s target acquisition capability to be characterized. Then a model is presented which describes how individual target acquisition probability grows when a specific observer or combination of specific observers search for targets. Applications for the three topics are discussed.
Reflective band image generation in the night vision integrated performance model
The generation of accurate reflective band imagery is complicated by the intrinsic properties of the scene, target, and camera system. Unlike emissive systems, which can be represented in equivalent temperature given some basic assumptions about target characteristics, visible scenes depend highly on the illumination, reflectivity, and orientation of objects in the scene as well as the spectral characteristics of the imaging system. Once an image has been sampled spectrally, much of the information regarding these characteristics is lost. In order to provide reference scene characteristics to the image processing component, the visible image processor in the Night Vision Integrated Performance Model (NV-IPM) utilizes pristine hyperspectral data cubes. Using these pristine spectral scenes, the model is able to generate accurate representations of a scene for a given camera system. In this paper we discuss the development of the reflective band image simulation component and various methodologies for collecting or simulating the hyperspectral reference scenes.
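The core operation behind simulating a camera image from a pristine hyperspectral cube is band integration against the sensor's spectral response. A minimal numpy sketch (trapezoid-rule integration; the cube, wavelength grid, and boxcar response are toy assumptions, not NV-IPM internals):

```python
import numpy as np

def band_image(cube, wavelengths, response):
    """Integrate a hyperspectral radiance cube (H, W, L) against a
    camera spectral response sampled on the same wavelength grid,
    using the trapezoid rule along the spectral axis."""
    weighted = cube * response[np.newaxis, np.newaxis, :]
    dwl = np.diff(wavelengths)
    avg = 0.5 * (weighted[..., 1:] + weighted[..., :-1])
    return np.sum(avg * dwl, axis=2)

# toy example: spectrally flat scene, unit response over 0.4-0.7 um
wl = np.linspace(0.4, 0.7, 31)
cube = np.ones((4, 4, wl.size))
img = band_image(cube, wl, np.ones_like(wl))
```

This step is exactly where spectral information is lost: two materials with different spectra but equal band-integrated radiance become indistinguishable in the output image, which is why the model keeps the pristine cube as the reference.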
Modeling II
Measured system component development for the night vision integrated performance model (NV-IPM)
The Night Vision Integrated Performance Model (NV-IPM) introduced a variety of measured system components in version 1.6 of the model. These measured system components enable the characterization of systems based on lab measurements which treat the system as a ‘black-box.’ This encapsulation of individual component terms into higher level measurable quantities circumvents the need to develop costly, time-consuming measurement techniques for each individual input term. Each of the ‘black-box’ system components were developed based upon the minimum required system level measurements for a particular type of imaging system. The measured system hierarchy also includes components for cases where a very limited number of measurements are possible. We discuss the development of the measured system components, the transition of lab measurements into model inputs, and any assumptions inherent to this process.
Comparison of relative effectiveness of video with serial visual presentation for target reconnaissance from UASs
Frank E. Skirlo, Anthony J. Matthews, Melvin Friedman, et al.
Reconnaissance from unmanned aerial systems (UAS) is often done using video presentation. An alternate method is Serial Visual Presentation (SVP). In SVP, a static image remains in view until replaced by a new image at a rate equivalent to the live video. Mardell et al. have shown, in a forested environment, that a higher fraction of targets (people lost in the forest) are found with SVP than with video presentation. Here Mardell’s experiment is repeated for military targets in forested terrain at a fixed altitude. We too find a higher fraction of targets are found using SVP rather than video presentation. Typically it takes five seconds to cover a video field of view at 30 frames per second. This implies that, for scenes where the target is not moving, 150 video images have nearly identical information (from a reconnaissance point of view) as a single SVP image. This is highly significant since transmission bandwidth is a limiting factor for most UASs. Finding targets in video or in SVP is an arduous task. For that reason we also compare aided target detection performance (Aided SVP) and unaided target detection performance on SVP images.
Modeling demosaicing of color corrected cameras in the NV-IPM
A critical step in creating an image using a Bayer pattern sampled color camera is demosaicing, the process of combining the individual color channels using a post-processing algorithm to produce the final displayed image. The demosaicing process can introduce degradations which reduce the quality of the final image. These degradations must be accounted for in order to accurately predict the performance of color imaging systems. In this paper, we present analytical derivations of transfer functions to allow description of the effects of demosaicing on the overall system blur and noise. The effects of color balancing and the creation of the luminance channel image are also explored. The methods presented are validated through Monte Carlo simulations, which can also be utilized to determine the transfer functions of non-linear demosaicing methods. Together with this new treatment of demosaicing, the framework behind the color detector component in NV-IPM is discussed.
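Since the paper treats demosaicing as a linear filter with a transfer function, a concrete linear example helps. The sketch below implements textbook bilinear demosaicing for an assumed RGGB Bayer layout (a common choice, not necessarily the algorithms analyzed in the paper), using only numpy:

```python
import numpy as np

def conv2(img, k):
    """3x3 convolution with reflect padding (pure-numpy helper)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="reflect")
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer-sampled image (sketch)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # fills R/B sites
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # fills G sites

    return np.dstack([conv2(raw * r_mask, k_rb),
                      conv2(raw * g_mask, k_g),
                      conv2(raw * b_mask, k_rb)])
```

Because each channel is reconstructed by a fixed convolution kernel, its contribution to the system MTF is the Fourier transform of that kernel, which is the sense in which demosaicing enters a performance model as a transfer function.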
Comparing and contrasting 2D versus 1D performance modeling in NV-IPM v1.6
Jonathan G. Hixson, Brian P. Teaney
Version 1.6 of the Night Vision Integrated Performance Model (NV-IPM) introduced two-dimensional Modulation Transfer Function (MTF) and noise signals within the model architecture. These two-dimensional signals enable the model to more accurately treat systems with non-separable MTF components. These non-separable MTF components may be introduced by optical elements, electronic post-processing, or atmospheric effects. In this paper we discuss the differences between the new two-dimensional signal architecture and the one-dimensional separable representation used in earlier versions of the model and highlight some cases which demonstrate the utility of the two-dimensional signals.
Modeling III
Model development and system performance optimization for staring infrared search and track (IRST) sensors
The mission of an Infrared Search and Track (IRST) system is to detect and locate (sometimes called find and fix) enemy aircraft at significant ranges. Two extreme opposite examples of IRST applications are 1) long range offensive aircraft detection when electronic warfare equipment is jammed, compromised, or intentionally turned off, and 2) distributed aperture systems where enemy aircraft may be in the proximity of the host aircraft. Past IRST systems have been primarily long range offensive systems that were based on the LWIR second generation thermal imager. The new IRST systems are primarily based on staring infrared focal planes and sensors. In the same manner that FLIR92 did not work well in the design of staring infrared cameras (NVTherm was developed to address staring infrared sensor performance), current modeling techniques do not adequately describe the performance of a staring IRST sensor. There are no standard military IRST models (per AFRL and NAVAIR), and each program appears to perform its own modeling. For this reason, L-3 has decided to develop a corporate model, working with AFRL and NAVAIR, for the analysis, design, and evaluation of IRST concepts, programs, and solutions. This paper provides some of the first analyses in the L-3 IRST model development program for the optimization of staring IRST sensors.
Is there an optimum detector size for digital night vision goggles?
In previous studies maximum acquisition range was achieved when Fλ/d approached 2. There was no constraint on magnification or field-of-view. This suggested that the detector size should approach λ/2 when F = 1. Night vision goggles typically have a fixed FOV of 40 deg with unity magnification. Digital night vision goggle (DNVG) acquisition range is limited by the human visual system resolution of 0.291 mrad (20/20 vision). This suggests the maximum number of horizontal detectors should be about 2500 with a minimum pixel size of about 8 μm when F = 1 and aperture = 1 inch. Values change somewhat depending upon f-number and noise level. Ranges are provided for GaAs and InGaAs detectors under starlight conditions. The different spectral responses create minimum resolvable contrast (MRC) test issues.
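The pixel-count and pitch figures quoted above follow from simple geometry. A quick check, treating 20/20 acuity as a 0.291 mrad per-pixel angular limit and assuming an f/1, 1-inch aperture (the rounding to "about 2500" and "about 8 μm" is the abstract's, not an exact result):

```python
import math

fov_deg = 40.0            # fixed goggle field of view
eye_limit_mrad = 0.291    # 20/20 visual acuity, per-pixel angular limit
fnumber = 1.0
aperture_mm = 25.4        # 1-inch aperture

# horizontal detector count so one pixel maps to the eye's resolution limit
fov_mrad = math.radians(fov_deg) * 1000.0
n_detectors = fov_mrad / eye_limit_mrad   # ~2400 horizontal detectors

# pixel pitch matching that IFOV: pitch = focal length * IFOV
focal_mm = fnumber * aperture_mm          # f/1 with a 1-inch aperture
pitch_um = focal_mm * eye_limit_mrad      # mm * mrad = um -> ~7.4 um
```

So the display-limited design point is roughly 2400 horizontal detectors at a 7-8 μm pitch, consistent with the figures in the abstract.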
Efficient polarimetric BRDF transformations
In order to characterize a target, the basic information of interest is spectral content, polarization, and distance. Imaging spectropolarimetry is a powerful tool for obtaining the polarization state of a scene and discriminating manmade objects in a cluttered background. With respect to polarization, the measurements are often limited to the first three components of the Stokes vector, excluding circular polarization. The scene is therefore characterized in four directions of linear polarization: I0, I90, I45 and I135. An efficient polarimetric BRDF model defined in a local coordinate system has recently been published. The model will now be extended to a global coordinate system for linearly polarized radiation. This includes the first three elements of the Stokes vector. We provide examples for surfaces of intrinsically different types: surface scattering materials, bulk scattering materials, and clear coated surfaces.
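The four linear-analyzer intensities map to the first three Stokes components in the standard way. A sketch (scalar inputs here; per-pixel image arrays work identically):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """First three Stokes components from intensities measured behind
    linear analyzers at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # averaged total-intensity estimate
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 vs -45 preference
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / s0

# fully horizontally polarized light of unit intensity
s0, s1, s2 = linear_stokes(1.0, 0.5, 0.0, 0.5)
```

Circular polarization (S3) would require an additional measurement through a circular analyzer, which is exactly the component the abstract notes is usually omitted.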
A wavelet contrast metric for the targeting task performance metric
Target acquisition performance depends strongly on the contrast of the target. The Targeting Task Performance (TTP) metric, within the Night Vision Integrated Performance Model (NV-IPM), uses a combination of resolution, signal-to-noise ratio (SNR), and contrast to predict and model system performance. While the dependence on resolution and SNR are well defined and understood, defining a robust and versatile contrast metric for a wide variety of acquisition tasks is more difficult. In this correspondence, a wavelet contrast metric (WCM) is developed under the assumption that the human eye processes spatial differences in a manner similar to a wavelet transform. The amount of perceivable information, or useful wavelet coefficients, is used to predict the total viewable contrast to the human eye. The WCM is intended to better match the measured performance of the human vision system for high-contrast, low-contrast, and low-observable targets. After further validation, the new contrast metric can be incorporated using a modified TTP metric into the latest Army target acquisition software suite, the NV-IPM.
Modeling IV
Performance assessment of a single-pixel compressive sensing imaging system
Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process known as sparse reconstruction to recover the image as if a fully populated array that satisfies the Nyquist criterion were used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld object target set at range was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolution, and number of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques will be discussed.
Unified characterization of imaging sensors from VIS through LWIR
Modern reconnaissance strategies are based on gathering information using as many spectral bands as possible. Besides the well-known atmospheric windows at VIS, MWIR, and LWIR wavelengths suitable for long range observation, progress in detector technology has provided access also to the atmospheric window from 1.0 to 1.7 μm known as SWIR. Independent of the chosen spectral band, all applications strive to achieve the largest observation range possible. Thus, a concept for comparing sensors in different wavelength bands is desirable. Achievable ranges are influenced in part by the atmospheric conditions and in part by the capability of the imaging sensor; only the latter is under the control of the instrument manufacturer. In range simulation the contribution of the sensor can be efficiently characterized by using the MRC and the MRTD concept. The minimum resolvable contrast (MRC) as a function of spatial frequency is the decisive figure of merit for the VIS and SWIR. The minimum resolvable temperature difference (MRTD) as a function of spatial frequency is the same for MWIR and LWIR. All relevant sensor data are covered by MRC and MRTD, respectively, and thus can be introduced into range calculation by simply measuring the MRC or MRTD data curves. Based on measured MRC data, range calculations for three imaging sensors (VIS, NIR and SWIR) are presented for selected atmospheric conditions together with significant captured images.
Modeling V
NIR sensitivity analysis with the VANE
Justin T. Carrillo, Christopher T. Goodin, Alex E. Baylot
Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.
Foote's Law and its application to cameras
In modeling and characterizing a focal plane array (FPA) with a uniform source, estimating the irradiance on the FPA is inevitable. Many have developed needed formulas for the estimate. Those formulas mostly focus on one pixel of the FPA on the optical axis, ignoring all the other pixels. I use Foote’s law here to derive the formulas for all the pixels in a simple configuration where the FPA is directly exposed to the uniform source. I extend the formulas for two more configurations: the FPA enclosed with a baffle and the FPA housed with a lens. My results are compared with some existing formulas. They show differences, yet reach an agreement with some approximations. My formulas are useful for modeling and trade study for cameras, especially for the cameras with wide field of view.
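For orientation, two of the reference formulas such a derivation is checked against can be sketched directly: the exact on-axis irradiance from a uniform Lambertian disk source, and the cos⁴ falloff commonly used as a small-source approximation for off-axis pixels. This is a sketch of those reference results only, not the paper's full Foote's-law treatment of every pixel, baffle, or lens:

```python
import math

def onaxis_irradiance(L, r_source, d):
    """Irradiance at an on-axis point a distance d from a uniform
    Lambertian disk of radiance L and radius r_source:
    E = pi * L * r^2 / (r^2 + d^2)."""
    return math.pi * L * r_source ** 2 / (r_source ** 2 + d ** 2)

def cos4_falloff(e_onaxis, theta_rad):
    """Common small-source approximation for a pixel viewed at
    off-axis angle theta from the source center."""
    return e_onaxis * math.cos(theta_rad) ** 4
```

At d = 0 the disk fills the hemisphere and the formula reduces to E = πL, a useful sanity check; the differences the paper reports between exact and approximate formulas grow as the field of view widens and cos⁴ stops being adequate.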
Spatially resolved 3D noise
When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here both allows for defective pixel identification and implements the finite sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
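The 3D noise decomposition splits a (time, vertical, horizontal) data cube into directional components by sequential averaging. A minimal numpy sketch of the decomposition, without the finite-sampling correction discussed in the paper (the reference code for that is the cited Matlab implementation):

```python
import numpy as np

def noise_3d(cube):
    """Directional 3D-noise component std devs of a (T, V, H) cube."""
    u = cube - cube.mean()
    n_t = u.mean(axis=(1, 2))                 # frame-to-frame (temporal)
    n_v = u.mean(axis=(0, 2))                 # fixed row noise
    n_h = u.mean(axis=(0, 1))                 # fixed column noise
    n_vh = u.mean(axis=0) - n_v[:, None] - n_h[None, :]   # fixed spatial
    n_tv = u.mean(axis=2) - n_t[:, None] - n_v[None, :]   # temporal row
    n_th = u.mean(axis=1) - n_t[:, None] - n_h[None, :]   # temporal column
    n_tvh = (u - n_t[:, None, None] - n_v[None, :, None]
               - n_h[None, None, :] - n_vh[None, :, :]
               - n_tv[:, :, None] - n_th[:, None, :])     # random residual
    comps = dict(t=n_t, v=n_v, h=n_h, vh=n_vh, tv=n_tv, th=n_th, tvh=n_tvh)
    return {k: v.std() for k, v in comps.items()}

# pure fixed-pattern cube: all noise should land in the vh component
pattern = np.array([[1.0, -1.0], [-1.0, 1.0]])
sig = noise_3d(np.tile(pattern, (5, 1, 1)))
```

Running this decomposition on sub-cubes sliding across the image, as the paper proposes, turns the seven global numbers into spatially resolved noise maps.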
Modeling VI
Hyperhemispheric multifunction sensors for ground combat vehicles: concept evaluation using virtual prototyping
As the defense budget reduces and we are asked to do more with less (a major theme now for over 10 years), multifunction systems are becoming critical to the future of military EOIR systems. The design of multifunction (MF) sensors is not a well-developed or well-understood discipline. In this paper, we provide an example trade study of a hyperhemispheric multifunction system for a ground combat vehicle. In addition, we show how concept evaluation can be achieved using a virtual prototyping environment.
HIL range performance of notional hyperspectral imaging sensors
In the use of conventional broadband imaging systems, whether reflective or emissive, scene image contrasts are often so low that target discrimination is difficult or uncertain, and it is contrast that drives human-in-the-loop (HIL) sensor range performance. This situation can occur even when the spectral shapes of the target and background signatures (radiances) across the sensor waveband differ significantly from each other. The fundamental components of broadband image contrast are the spectral integrals of the target and background signatures, and this spectral integration can average away the spectral differences between scene objects. In many low broadband image contrast situations, hyperspectral imaging (HSI) can preserve a greater degree of the intrinsic scene spectral contrast for the display, and more display contrast means greater range performance by a trained observer. This paper documents a study using spectral radiometric signature modeling and the U.S. Army’s Night Vision Integrated Performance Model (NV-IPM) to show how waveband selection by a notional HSI sensor using spectral contrast optimization can significantly increase HIL sensor range performance over conventional broadband sensors.
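The band-selection argument can be illustrated with toy spectra: a broadband integral averages away a spectral feature that a well-chosen narrow band preserves. The Gaussian spectral features, wavelength grid, and candidate bands below are invented for illustration and are not the paper's signature models:

```python
import numpy as np

wl = np.linspace(8.0, 12.0, 81)                    # um, LWIR example grid
target = 1.0 + 0.3 * np.exp(-(wl - 9.5) ** 2)      # toy target radiance
background = 1.0 + 0.3 * np.exp(-(wl - 11.0) ** 2) # toy background radiance

def band_contrast(lo, hi):
    """Michelson-style contrast of band-averaged radiances."""
    m = (wl >= lo) & (wl <= hi)
    t, b = target[m].mean(), background[m].mean()
    return abs(t - b) / (t + b)

broad = band_contrast(8.0, 12.0)    # full-band integration
narrow = band_contrast(9.0, 10.0)   # band centered on the target feature
```

Because the two spectra here differ only in where their features sit, the broadband contrast nearly vanishes while the narrow band retains it, which is the mechanism by which spectral-contrast-optimized waveband selection buys range performance.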
Scenario-based analysis of binning in MWIR detectors for missile applications
High resolution imaging is an important aspect of imaging in missile applications, especially for automated target recognition and tracking. However, it is not without its drawbacks. For a similar detector size, an increase in resolution is only possible with a decrease in pixel pitch, which results in a smaller detection area per pixel and translates to shorter detection ranges. Binning is a relatively mature feature of silicon detectors used to obtain a better signal to noise ratio. In this study, a similar concept is proposed for MWIR detectors, with emphasis on security related properties such as the detection range and performance of an autonomous/semi-autonomous electro-optical system. For clarity, the analysis and simulations have been performed for a fixed sample with predefined optical and electrical properties for the noise and signal models.
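The binning trade can be sketched directly: summing an n x n block multiplies the signal by n² while uncorrelated shot noise grows only by n, so shot-noise-limited SNR improves by a factor of n (on-chip charge binning can do better in the read-noise-limited case, up to n²). A 2x2 digital-binning example with assumed illustrative counts:

```python
import numpy as np

def bin_pixels(img, n=2):
    """Sum n x n pixel blocks (digital binning sketch)."""
    h, w = (img.shape[0] // n) * n, (img.shape[1] // n) * n
    return img[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))

flat = np.full((4, 6), 100.0)    # uniform signal, 100 counts per pixel
binned = bin_pixels(flat, 2)     # 400 counts per 2x2 super-pixel

# shot-limited: signal x4, shot noise x sqrt(4) -> SNR x2
snr_gain_shot = np.sqrt(4)
```

The cost, of course, is the halved spatial sampling, which is exactly the resolution-versus-range trade the scenario analysis in the paper quantifies.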
Modeling VII
An imaging system detectivity metric using energy and power spectral densities
The purpose of this paper is to construct a robust modeling framework for imaging systems in order to predict the performance of detecting small targets such as Unmanned Aerial Vehicles (UAVs). The underlying principle is to track the flow of scene information and statistics, such as the energy spectra of the target and power spectra of the background, through any number of imaging components. This information is then used to calculate a detectivity metric. Each imaging component is treated as a single linear shift invariant (LSI) component with specified input and output parameters. A component based approach enables the inclusion of existing component-level models and makes it directly compatible with image modeling software such as the Night Vision Integrated Performance Model (NV-IPM). The modeling framework also includes a parallel implementation of Monte Carlo simulations designed to verify the analytic approach. However, the Monte Carlo simulations may also be used independently to accurately model nonlinear processes where the analytic approach fails, allowing for even greater extensibility. A simple trade study is conducted comparing the modeling framework to the simulation.
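The flow of spectra through a chain of LSI components reduces to multiplying component MTFs and filtering the input spectra by the squared transfer magnitude. A sketch of that bookkeeping; the Gaussian optics MTF, sinc detector MTF, and the target/background spectra are assumptions for illustration, not the paper's models:

```python
import numpy as np

f = np.linspace(0.0, 10.0, 256)       # spatial frequency axis (cyc/mrad)

def gaussian_mtf(f, f0):
    return np.exp(-(f / f0) ** 2)

# cascade of LSI components: overall transfer = product of component MTFs
h_optics = gaussian_mtf(f, 4.0)       # assumed optics blur
h_detector = np.sinc(f / 8.0)         # assumed detector footprint (zero at 8)
h_total = h_optics * h_detector

target_esd = np.exp(-f)               # assumed target energy spectral density
bg_psd = 1.0 / (1.0 + f ** 2)         # assumed background power spectral density

# after the imaging chain, both spectra are filtered by |H|^2
target_out = target_esd * h_total ** 2
bg_out = bg_psd * h_total ** 2
```

A detectivity metric then compares the integrated filtered target energy against the integrated filtered background-plus-sensor noise power, which is what makes the component-by-component formulation compatible with NV-IPM-style modeling.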
The analysis and rationale behind the upgrading of existing standard definition thermal imagers to high definition
With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes reduce and the demand and applications for High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD thermal imaging sensor provides a more cost effective and faster time-to-market option than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher performing HD thermal imaging sensor. Several commercially available and “soon to be” commercially available HD small pixel IR detector options are included as part of the analysis and are considered for this upgrade. The impact the proposed detectors have on the sensor’s overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures when compared to their SD predecessors. Therefore, as an additional constraint and as a design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.
Characterization and recognition of mixed emotional expressions in thermal face image
Priya Saha, Debotosh Bhattacharjee, Barin Kumar De, et al.
Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution on mixed thermal facial expressions in our created face database, where six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive emotion induced facial features and negative emotion induced facial features. The supraorbital region is useful for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.
Test I
Thermal system field performance predictions from laboratory and field measurements
Laboratory measurements on thermal imaging systems are critical to understanding their performance in a field environment. However, it is rarely a straightforward process to directly inject thermal measurements into thermal performance modeling software to acquire meaningful results. Some of the sources of discrepancies between laboratory and field measurements are sensor gain and level, dynamic range, sensor display and display brightness, and the environment where the sensor is operating. If measurements for the aforementioned parameters could be performed, a more accurate description of sensor performance in a particular environment is possible. This research will also include the procedure for turning both laboratory and field measurements into a system model.
Novel approach to characterize and compare the performance of night vision systems in representative illumination conditions
Nathalie Roy, Alexandre Vallières, Daniel St-Germain, et al.
A novel approach is used to characterize and compare the performance of night vision systems in conditions more representative of night operation in terms of spectral content. Its main advantage compared to standard testing methodologies is that it provides a fast and efficient way for untrained observers to compare night vision system performance under realistic illumination spectra. The testing methodology relies on a custom tumbling-E target and on a new LED-based illumination source that better emulates night sky spectral irradiances from deep overcast starlight to quarter-moon conditions. In this paper, we describe the setup and demonstrate that the novel approach can be an efficient method to characterize, among others, night vision goggle (NVG) performance with a small error on the photogenerated electrons compared to the STANAG 4351 procedure.
Noise measurement on thermal systems with narrow band
Thermal systems with a narrow spectral bandpass and mid-wave thermal imagers are useful for a variety of imaging applications. Additionally, the sensitivity for these classes of systems is increasing along with an increase in performance requirements when evaluated in a lab. Unfortunately, the uncertainty in the blackbody temperature along with the temporal instability of the blackbody could lead to uncontrolled laboratory environmental effects which could increase the measured noise. If the temporal uncertainty and accuracy of a particular blackbody are known, then confidence intervals can be adjusted for source accuracy and instability. Additionally, because thermal currents may be a large source of temporal noise in narrow band systems, a means to mitigate them is presented and results are discussed.
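Adjusting confidence intervals starts from the sampling distribution of a measured noise variance. A dependency-free sketch using the Wilson-Hilferty chi-square quantile approximation (scipy.stats.chi2.ppf would give exact quantiles); the normal-noise model and 5% level are assumptions:

```python
import math
from statistics import NormalDist

def chi2_ppf(p, k):
    """Wilson-Hilferty approximation to the chi-square quantile
    with k degrees of freedom."""
    z = NormalDist().inv_cdf(p)
    return k * (1.0 - 2.0 / (9.0 * k) + z * math.sqrt(2.0 / (9.0 * k))) ** 3

def variance_ci(s2, n, alpha=0.05):
    """Two-sided (1 - alpha) confidence interval for the true variance
    behind a sample variance s2 from n independent samples."""
    k = n - 1
    return (k * s2 / chi2_ppf(1.0 - alpha / 2.0, k),
            k * s2 / chi2_ppf(alpha / 2.0, k))

lo, hi = variance_ci(1.0, 100)   # e.g. unit sample variance, 100 frames
```

Inverting this relation for a target interval width gives the number of frames needed for a desired noise-measurement precision; any additional blackbody instability term would widen the interval further.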
Test II
Development of a high-definition IR LED scene projector
Dennis T. Norton Jr., Joe LaVeigne, Greg Franks, et al.
Next-generation Infrared Focal Plane Arrays (IRFPAs) are demonstrating ever-increasing frame rates, dynamic range, and format size, while moving to smaller-pitch arrays. These improvements in IRFPA performance and array format have challenged the IRFPA test community to accurately and reliably test them in a Hardware-In-the-Loop environment utilizing Infrared Scene Projector (IRSP) systems. Rapidly evolving IR seeker and sensor technology has, in some cases, surpassed the capabilities of existing IRSP technology. To meet the demands of future IRFPA testing, Santa Barbara Infrared Inc. is developing an Infrared Light Emitting Diode IRSP system. Design goals of the system include a peak radiance >2.0 W/cm²/sr within the 3.0-5.0 μm waveband, maximum frame rates >240 Hz, and >4 million pixels within a form factor supported by pixel pitches ≤32 μm. This paper provides an overview of our current phase of development, system design considerations, and future development work.
Display MTF measurements based on scanning and imaging technologies and its importance in the application space
Measuring the Modulation Transfer Function (MTF) of a display monitor is necessary for many applications, such as modeling end-to-end systems, conducting perception experiments, and performing targeting tasks in real-world scenarios. The MTF of a display defines its resolution properties and quantifies how well spatial frequencies are reproduced on the monitor. Many researchers have developed methods to measure display MTFs using either scanning or imaging devices. In this paper, we first present methods to measure display MTFs using two separate technologies and then discuss the impact of a display MTF on a system's performance. The two measurement technologies were scanning with a photometer and imaging with a CMOS-based camera. To estimate the true display MTF, measurements made with the photometer were corrected to back out the scanning-optics aperture. The developed methods were applied to measure the MTFs of two types of monitors: Cathode Ray Tube (CRT) and Liquid Crystal Display (LCD). The accuracy of the measured MTFs was validated by comparing MTFs measured with the two systems. The methods presented here are simple and can be easily implemented with either a Pritchard photometer or an imaging device. In addition, the impact of a display MTF on the end-to-end performance of a system was modeled using NV-IPM.
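The aperture correction described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it assumes a uniform (rectangular) scanning aperture whose MTF is the usual sinc function, and the aperture width and example display MTF are hypothetical:

```python
import numpy as np

def aperture_mtf(f, w):
    """MTF of a uniform scanning aperture of width w (f in cycles per the
    same length unit): |sin(pi*w*f)/(pi*w*f)|, with the f=0 limit equal to 1."""
    x = np.pi * w * f
    return np.abs(np.where(x == 0.0, 1.0, np.sin(x) / np.where(x == 0.0, 1.0, x)))

def back_out_aperture(f, measured_mtf, w):
    """Estimate the display MTF by dividing the measured MTF by the
    scanning-aperture MTF (valid below the aperture's first zero at f = 1/w)."""
    return measured_mtf / aperture_mtf(f, w)

# Hypothetical example: a 0.1 mm aperture, frequencies in cycles/mm.
f = np.linspace(0.01, 4.0, 50)
true_display = np.exp(-f / 2.0)                  # assumed display MTF
measured = true_display * aperture_mtf(f, 0.1)   # what the photometer records
recovered = back_out_aperture(f, measured, 0.1)  # aperture backed out
```

The division is only well conditioned at frequencies where the aperture MTF is far from zero, which is why a small aperture (first zero well beyond the display's passband) is preferred.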
Achieving ultra-high temperatures with a resistive emitter array
Tom Danielson, Greg Franks, Nicholas Holmes, et al.
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.
Automated and semi-automated field testing of night vision goggles
Stephen Scopatz, Dominic Paszkeicz, Brent Langsdorf
This paper discusses the development and results of a new field-portable test set for Gen 2 and Gen 3 night vision goggles that automates many of the tests supported by currently available NVG test products. The major innovation is the use of MTF testing with a knife-edge target. MTF testing is well established in the laboratory environment and well suited to replace the operator's interpretation of the USAF 1951 resolution chart. Results are presented that show the more consistent performance of the MTF approach compared with the known operator-to-operator variations when humans determine resolution. Other standard tests, such as infinity focus, spot defects, and distortion, are semi-automated and/or video-assisted. The presentation shows repeatability across test units and operators on the key tests and includes automatically generated examples of the report files for each test run on each goggle. All of these capabilities are provided in a package that matches the form factor of other products in use to test NVGs. A discussion of the user interface and ease of use of the system is included, as well as the improvement in test time for each goggle type.
Test III
Real-time simulation of thermal shadows with EMIT
Andreas Klein, Stefan Oberhofer, Peter Schätz, et al.
Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes for these missile systems need high-fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real time. EMIT is able to render radiance images in full 32-bit floating-point precision using state-of-the-art graphics cards and advanced shader programs. An important capability of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal balance calculation in four dimensions (3D geometry plus time, reaching up to several hours into the past). In this paper we present the novel real-time thermal simulation in EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude with the practical use of EMIT in a missile HWIL simulation.
Targets, Backgrounds, Atmospherics, and Simulation I
Simulation of whitecaps and their radiometric properties in the SWIR
A 3D simulation of the dynamic sea surface populated with whitecaps is presented. The simulation considers the dynamic evolution of whitecaps depending on wind speed and fetch, and is suitable for imaging simulations of maritime scenarios. The whitecap radiance calculation is done in the SWIR spectral band, considering wave hiding and shadowing, which occur especially at low viewing angles. Our computer simulation combines the 3D simulation of a maritime scene (open sea/clear sky) containing whitecaps with the simulation of light from a source (e.g. laser light) reflected at the sea surface. The basic sea surface geometry is modeled as a composition of smooth wind-driven gravity waves. Whitecap generation is deduced from the vertical acceleration of the sea surface, i.e. from the second moment of the wave power density spectrum. To predict the view of a camera, the sea surface radiance must be calculated for the specific waveband, with the emitted sea surface radiance and the specularly reflected sky radiance as components. The radiance of light specularly reflected at the wind-roughened sea surface without whitecaps is modeled with an analytical statistical sea surface BRDF (bidirectional reflectance distribution function); a specific whitecap BRDF is used that takes their shadowing function into account. The simulation model is suitable for pre-calculating the reflected radiance of a light source at near-horizontal incidence angles, where slope shadowing of waves has to be considered. The whitecap coverage determined from the simulated image sequences for different wind speeds is compared with whitecap coverage functions from the literature, and a SWIR image of the water surface of a lake populated with whitecaps is compared with the corresponding simulated image. Additionally, the impact of whitecaps on the radiation balance for a bistatic configuration of light source and receiver is calculated for different wind speeds.
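The acceleration-based whitecap criterion can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the classic breaking criterion that a surface patch whitecaps when its downward vertical acceleration exceeds a fixed fraction of g (the 0.5 threshold and the example wave are hypothetical choices):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def whitecap_mask(eta, dt, threshold=0.5):
    """Flag samples of a surface-elevation time series eta(t) [m] whose
    downward vertical acceleration exceeds threshold*G. The acceleration
    is estimated as a second finite difference of eta."""
    accel = np.gradient(np.gradient(eta, dt), dt)
    return accel < -threshold * G

# Hypothetical example: one gravity wave, amplitude 1 m, omega = 3.5 rad/s,
# so the crest acceleration a*omega^2 = 12.25 m/s^2 exceeds G/2.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
eta = 1.0 * np.cos(3.5 * t)
mask = whitecap_mask(eta, dt)  # True near wave crests only
```

In a full 3D surface simulation the same test would be applied per facet, with the threshold tuned so that the resulting coverage matches empirical whitecap-coverage functions of wind speed.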
Investigation of the dynamic thermal infrared signatures of a calibration target instrumented with a network of 1-wire temperature sensors
Gareth D. Lewis, Patrick Merken
In this paper, we describe the temperature and thermal variations of a painted geometrical target (CUBI) fitted with a network of internally mounted 1-wire temperature sensors. The sensors, which were calibrated in a temperature-controlled oven, were recorded every 20 seconds over a period from May to December 2015, amounting to an archive of approximately 180 days of nearly uninterrupted data. Two meteorological stations collocated with the CUBI on a rooftop test site recorded the relevant environmental parameters every few minutes. Here we analyze the data for a single day, 2 October 2015, for which a wavelet analysis highlights the contribution of different temporal fluctuations to the total signature. We selected this specific day because it represented simple environmental conditions and because images from a 3-5 μm (MWIR) thermal imager were also recorded. Finally, we demonstrate that a wavelet decomposition of the temperature signature is a useful method to characterize dynamic temperature changes, and perhaps a way to verify prediction models across fluctuation scales.
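A wavelet decomposition of this kind can be sketched with an orthonormal Haar transform. This is an illustrative stand-in, not the analysis actually used in the paper, and the synthetic temperature series below is hypothetical:

```python
import numpy as np

def haar_decompose(x, levels):
    """Multilevel orthonormal Haar wavelet decomposition of a 1-D signal
    (length divisible by 2**levels). Returns the coarse approximation and
    a list of detail-coefficient arrays, finest scale first."""
    approx = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)  # low-pass (trend)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)  # high-pass (fluctuation)
        details.append(d)
        approx = a
    return approx, details

# Hypothetical example: a slow diurnal trend plus a fast fluctuation.
t = np.arange(256)
temps = 20.0 + 5.0 * np.sin(2 * np.pi * t / 256) + 0.5 * np.sin(2 * np.pi * t / 8)
approx, details = haar_decompose(temps, 4)
```

The per-scale detail energies `[np.sum(d**2) for d in details]` separate fast and slow temperature fluctuations, which is the kind of scale-by-scale characterization the abstract describes.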
Image based performance analysis of thermal imagers
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras in order to enhance the display presentation of the captured scene or of specific scene details. Usually the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task, or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial. A thermal camera equipped with a turbulence mitigation capability is an example of such a closed system. Fraunhofer IOSB has started to build a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence to the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, for example, the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on the selection of test scenes, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.
Targets, Backgrounds, Atmospherics, and Simulation II
Application of cooled IR focal plane arrays in thermographic cameras
B. Vollheim, M. Gaertner, G. Dammass, et al.
The use of cooled IR Focal Plane Array detectors in thermographic (radiometric) thermal imaging cameras leads to special demands on these detectors, which are discussed in this paper. For a radiometric calibration of wide temperature measuring ranges from -40 up to 2,000 °C, a linear and time-stable response of the photodiode array has to be ensured for low as well as high radiation intensities. The maximum detectable photon flux is limited by the allowed shift of the photodiode's bias, which should remain in the linear part of the photodiode's I(V) curve even for the highest photocurrent. In practice, this limits the highest measurable object temperature earlier than the minimum possible integration time does. Higher temperature measuring ranges are realized by means of neutral or spectral filters. Defense and security applications normally present images at the ambient temperature with small hot spots, whereas radiometric thermal imagers used for thermography often view larger objects with a high temperature contrast to the background; this should not generate artifacts in the image, such as pixel patterns or stripes. Further issues concern the clock regime and sub-frame capabilities of the Read-Out Circuit and the frame-rate dependency of the signal. We also briefly describe the demands on the lens design for thermal imaging cameras when using cooled IR Focal Plane Array detectors with large apertures.
3D flare particle model for ShipIR/NTCS
A key component in any soft-kill response to an incoming guided missile is the flare/chaff decoy used to distract or seduce the seeker homing system away from the naval platform. This paper describes a new 3D flare particle model in the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR), which provides independent control over the size and radial distribution of the flare signature. The 3D particles of each flare sub-munition are modelled stochastically and rendered using OpenGL z-buffering, 2D projection, and alpha-blending to produce a unique and time-varying signature. A sensitivity analysis on each input parameter provides the data and methods needed to synthesize a model from an IR measurement of a decoy. The new model also eliminates artifacts and deficiencies of our previous model that prevented reliable tracks from the adaptive track-gate algorithm already presented by Ramaswamy and Vaitekunas (2015). A sequence of scenarios is used to test and demonstrate the new flare model during a missile engagement.
Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling
Kimberly E. Kolb, Hee-sue S. Choi, Balvinder Kaur, et al.
The US Army's Communications-Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field-testing component of sensor performance evaluation, which is expensive, resource-intensive, time-consuming, and limited to the available target(s) and the atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging and Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost and time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been applied to the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which algorithm performance agrees between simulated and field-collected imagery is the first step in validating the simulated-imagery procedure.
Multi-spectral synthetic image generation for ground vehicle identification training
Christopher M. May, Neil A. Pinto, Jeffrey S. Sanders
There is a ubiquitous and never-ending need in the US armed forces for training materials that provide the warfighter with the skills needed to differentiate between friendly and enemy forces on the battlefield. The current state of the art in battlefield identification training is the Recognition of Combat Vehicles (ROC-V) tool created and maintained by the Communications-Electronics Research, Development and Engineering Center Night Vision and Electronic Sensors Directorate (CERDEC NVESD). The ROC-V training package utilizes measured visual and thermal imagery to train soldiers on the critical visual and thermal cues needed to accurately identify modern military vehicles and combatants. This paper presents an approach to augmenting the existing ROC-V imagery database with synthetically generated multi-spectral imagery that will allow NVESD to provide improved training imagery at significantly lower cost.
New technologies for HWIL testing of WFOV, large-format FPA sensor systems
Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced the current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high resolution MWIR Camera deployed in some UAV reconnaissance systems features 16MP resolution at 60Hz, while the current upper limit of IR emitter arrays is ~1MP, and single-channel dual-link DVI throughput of COTs graphics cards is limited to 2560x1580 pixels at 60Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large format FPAs, including the size and spatial detail required for very large area terrains, and multi - channel low-latency synchronization to achieve the required bandwidth. In this paper, the author’s team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, where digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity physics-based IR scene simulator in conjunction with a novel image composition hardware unit, to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.