Proceedings Volume 7300

Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XX


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 April 2009
Contents: 10 Sessions, 36 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2009
Volume Number: 7300

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7300
  • Modeling I
  • Modeling II
  • Modeling III
  • Modeling IV
  • Targets, Backgrounds, and Atmospherics I
  • Targets, Backgrounds, and Atmospherics II
  • Systems and Testing I
  • Systems and Testing II
  • Poster Session
Front Matter: Volume 7300
Front Matter: Volume 7300
This PDF file contains the front matter associated with SPIE Proceedings Volume 7300, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Modeling I
Optical characteristics of small surface targets, measured in the False Bay, South Africa; June 2007
Arie N. de Jong, Piet B. W. Schwering, Peter J. Fritz, et al.
During the False Bay trial (June 2007), the performance of a set of three optical sensors was tested against several small surface targets in a coastal area: a hyperspectral camera, a camera with a rotating polarization filter and a high resolution camera. One objective was the validation of a contrast and clutter model for the visual spectral band in this type of scenario. Another issue was to test the benefit of using a polarization filter and a hyperspectral unit in front of standard TV cameras. Finally, the loss in identification capability of a high resolution camera at long range due to atmospheric blur was investigated. Data were collected on targets in the near-sun direction at ranges up to seven kilometers, in all cases for down-looking angles (targets below the horizon). Environmental parameters such as solar irradiance and wind speed were measured as input for the contrast and clutter models. Spatial, spectral and temporal effects of the target contrast and of the background clutter behaviour in the visual spectral band were determined as a function of range and compared with model predictions. Samples of data and predictions are presented in this paper. The spatial and temporal target characteristics are of key importance for the development of algorithms for target detection, classification and tracking. Finally, rules of thumb, based on the measurements and model predictions, on the detection and identification range performance of specific optical sensors against small surface targets in a maritime environment are presented.
Range performance impact of noise for thermal system modeling
This paper presents a comparison of the predictions of NVThermIP to human perception experiment results in the presence of large amounts of noise, where the signal-to-noise ratio is around 1. First, the calculations used in the NVESD imager performance models that deal with sensor noise are described, outlining a few errors that appear in the NVThermIP code. A perception experiment is designed to test the range performance predictions of NVThermIP with varying amounts of noise and varying frame rates. NVThermIP is found to overestimate the impact of noise, leading to pessimistic range performance predictions for noisy systems. The perception experiment results are used to find a best-fit value of the constant α used to relate system noise to eye noise in the NVESD models. The perception results are also fit to an alternate eye model that handles frame rates below 30 Hz and smoothly approaches an accurate prediction of performance in the presence of static noise. The predictions using the fit data show significantly less error than the predictions from the current model.
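A minimal sketch of the kind of calibration step described above: fitting the noise coupling constant α by minimizing the RMS error between model-predicted and measured probabilities. The grid-search approach, the toy model, and the example numbers are assumptions for illustration, not the NVESD code or the paper's data.

```python
import numpy as np

def fit_alpha(model_pid, ranges, measured_pid, alphas=np.linspace(0.0, 0.5, 501)):
    """Grid-search the alpha that best matches measured probabilities (RMS error)."""
    errors = [np.sqrt(np.mean((model_pid(a, ranges) - measured_pid) ** 2)) for a in alphas]
    return alphas[int(np.argmin(errors))]

# Toy stand-in for a model prediction P(ID) vs. range (km); purely illustrative.
def toy_model(alpha, r):
    return np.exp(-(1.0 + 5.0 * alpha) * np.asarray(r) / 4.0)

ranges = np.array([1.0, 2.0, 3.0, 4.0])          # notional ranges
measured = np.array([0.78, 0.55, 0.38, 0.25])    # notional perception results
print(fit_alpha(toy_model, ranges, measured))
```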
Passive IR sensor performance analysis using Mathcad modeling
This paper presents an end-to-end physics-based performance model for a passive infrared (IR) sensor using the Mathcad® spreadsheet. The model calculates both the temporal and spatial noise of a staring focal plane array (FPA) IR sensor and the signal-to-noise ratio (SNR) of the sensor against different targets at different ranges (with atmospheric effects, both turbulence and extinction, considered). Finally, the probability of detection (Pd) based on the SNR results against these targets is also discussed. The model allows the user to easily define basic sensor parameters such as spectral band, detector FPA format and size, field of view (FOV), optics F/#, etc. In addition, target and environmental parameters are also considered in the analyses. This performance model allows the user to determine whether a particular IR sensor design would meet the requirements of its operational specifications, and helps the user refine the various parameters of the IR sensor at the early design stage.
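As a small worked example of the final step described above (not the paper's Mathcad worksheet), the conversion of SNR to probability of detection can be written for Gaussian noise with a fixed detection threshold; the threshold value is an assumption for illustration.

```python
import math

def probability_of_detection(snr, threshold_snr=2.5):
    """Pd for a point target in Gaussian noise with an assumed fixed threshold."""
    return 0.5 * math.erfc((threshold_snr - snr) / math.sqrt(2.0))

for snr in (1.0, 2.5, 5.0):
    print(f"SNR = {snr:.1f} -> Pd = {probability_of_detection(snr):.2f}")
```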
Visual acuity and contrast sensitivity with compressed motion video
Piet Bijl, Sjoerd C. de Vries
Video of Visual Acuity (VA) and Contrast Sensitivity (CS) test charts in a complex background was recorded using a CCD camera mounted on a computer-controlled tripod and fed into real-time MPEG2 compression/decompression equipment. The test charts were based on the Triangle Orientation Discrimination (TOD) test method and contained triangle test patterns of different sizes and contrasts in four possible orientations. In a perception experiment, VA and CS thresholds at the 75% correct level were obtained for three camera velocities (0, 1.0 and 2.0 deg/s, or 0, 4.1 and 8.1 pix/frame) and four compression rates (no compression, 4 Mb/s, 2 Mb/s and 1 Mb/s). VA is shown to be rather robust to any combination of motion and compression. CS, however, decreases dramatically when motion is combined with high compression ratios. The data suggest that with the MPEG2 algorithm the emphasis is on the preservation of image detail at the cost of contrast loss.
Modeling II
Perception testing: a key component in modeling and simulation at NVESD
Tana Maurer, Oanh Nguyen, Jim Thomas, et al.
The U.S. Army's Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division is responsible for developing and enhancing electro-optic/infrared sensor performance models that are used in wargames and for sensor trade studies. Predicting how well a sensor performs a military task depends on both the physics of the sensor and how well observers perform specific tasks while using that sensor. An example of such a task could be to search and detect targets of military interest. Another task could be to identify a target as a threat or non-threat. A typical sensor development program involves analyses and trade-offs among a number of variables such as field of view, resolution, range, compression techniques, etc. Observer performance results, obtained in the NVESD perception lab, provide essential information to bridge the gap between the physics of a system and the humans using that system. This information is then used to develop and validate models, to conduct design trade-off studies and to generate insights into the development of new systems for soldiers in surveillance, urban combat, and all types of military activities. Computer scientists and engineers in the perception lab design tests and process both real and simulated imagery in order to isolate the effect or design being studied. Then, in accordance with an approved protocol for human subjects research, experiments are administered to the desired number of observers. Results are tabulated and analyzed. The primary focus of this paper is to describe current capabilities of the NVESD perception lab regarding computer-based observer performance testing of sensor imagery, what types of experiments have been completed and plans for the future.
Empirical modeling and results of NIR clutter for tactical missile warning
Joel B. Montgomery, Christine T. Montgomery, Richard B. Sanderson, et al.
A tactical airborne multicolor missile warning testbed was developed as part of an Air Force Research Laboratory (AFRL) initiative focusing on the development of sensors operating in the near infrared, where commercially available silicon detectors can be used. The presentation will detail the new background and clutter data collections from ground and flight operations and their results. It will outline the statistical analysis in both the detection and guard bands to provide a basis for evaluating sensor performance against missile and hostile fire threats. A general stochastic model for the NIR clutter will be presented and its validity compared against flight data.
Modeling of video compression effects on target acquisition performance
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation in task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for the NVESD target acquisition performance model suite.
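A sketch of the residual-noise step described above, under stated assumptions: the uncompressed frame is blurred with an estimated equivalent Gaussian kernel and the compressed frame is subtracted; the stack of such differences is treated as 3-D spatio-temporal noise. The separable convolution below is a simple stand-in, not the NVESD implementation, and the blur width sigma is assumed to be supplied by the SSIM-based fit.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    radius = radius or max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def equivalent_blur(frame, sigma):
    """Separable Gaussian blur approximating the fitted equivalent MTF."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, frame)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def compression_noise_cube(uncompressed_frames, compressed_frames, sigma):
    """3-D noise: blurred reference frames minus the corresponding compressed frames."""
    return np.stack([equivalent_blur(u, sigma) - c
                     for u, c in zip(uncompressed_frames, compressed_frames)])
```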
Modeling III
Super-resolution for flash LADAR data
Flash laser detection and ranging (LADAR) systems are increasingly used in robotics applications for autonomous navigation and obstacle avoidance. Their compact size, high frame rate, wide field of view, and low cost are key advantages over traditional scanning LADAR devices. However, these benefits are achieved at the cost of spatial resolution. Super-resolution enhancement can be applied to improve the resolution of flash LADAR devices, making them ideal for small robotics applications. Previous work by Rosenbush et al. applied the super-resolution algorithm of Vandewalle et al. to flash LADAR data, and observed quantitative improvement in image quality in terms of number of edges detected. This study uses the super-resolution algorithm of Young et al. to enhance the resolution of range data acquired with a SwissRanger SR-3000 flash LADAR camera. To improve the accuracy of sub-pixel shift estimation, a wavelet preprocessing stage was developed and applied to flash LADAR imagery. The authors used the triangle orientation discrimination (TOD) methodology for a subjective evaluation of the performance improvement (measured in terms of probability of target discrimination and subject response times) achieved with super-resolution. Super-resolution of flash LADAR imagery resulted in superior probabilities of target discrimination at all investigated ranges while reducing subject response times.
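For context, shift estimation between low-resolution frames is a core ingredient of such super-resolution pipelines. The sketch below uses generic phase correlation to estimate an integer-pixel shift; it is a common stand-in and is not the Young et al. algorithm or the wavelet preprocessing stage described above (sub-pixel refinement is omitted).

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel shift of img relative to ref from the phase-correlation peak."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(ref.shape, dtype=float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]   # wrap to signed shifts
    return peak
```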
Sensor performance as a function of sampling (d) and optical blur (Fλ)
Detector sampling and optical blur are two major factors affecting Target Acquisition (TA) performance with modern EO and IR systems. In order to quantify their relative significance, we simulated five realistic LWIR and MWIR sensors from very under-sampled (detector pitch d >> diffraction blur Fλ) to well-sampled (Fλ >> d). Next, we measured their TOD (Triangle Orientation Discrimination) sensor performance curve. The results show a region that is clearly detector-limited, a region that is clearly diffraction-limited, and a transition area. For a high contrast target, the threshold size T_FPA on the sensor focal plane can mathematically be described with a simple linear expression: T_FPA = 1.5·d·w(d/Fλ) + 0.95·Fλ·w(Fλ/d), w being a steep weighting function between 0 and 1. Next, tactical vehicle identification range predictions with the TOD TA model and the TTP (Targeting Task Performance) model were compared to measured ranges with human observers. The TOD excellently predicts performance for both well-sampled and under-sampled sensors. While earlier TTP versions (2001, 2005) showed a pronounced difference in the relative weight of sampling and blur to range, the predictions with the newest (2008) TTP version, which considers in-band aliasing, are remarkably close to the TOD. In conclusion, the TOD methodology now provides a solid laboratory sensor performance test, a Monte Carlo simulation model to assess performance from sensor physics, a Target Acquisition range prediction model and a simple analytical expression to quickly predict sensor performance as a function of sampling and blur. The TTP approaches the TOD with respect to field performance prediction.
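A worked form of the threshold-size expression quoted above, T_FPA = 1.5·d·w(d/Fλ) + 0.95·Fλ·w(Fλ/d). The abstract describes w only as a steep weighting function between 0 and 1, so the logistic form and its steepness used here are assumptions for illustration.

```python
import numpy as np

def w(x, steepness=8.0):
    """Assumed steep 0-to-1 weighting function of the ratio x (centered at x = 1)."""
    return 1.0 / (1.0 + np.exp(-steepness * (x - 1.0)))

def threshold_size_fpa(d, f_lambda):
    """High-contrast threshold size on the focal plane (same units as d and F*lambda)."""
    return 1.5 * d * w(d / f_lambda) + 0.95 * f_lambda * w(f_lambda / d)

print(threshold_size_fpa(d=25.0, f_lambda=10.0))   # under-sampled example (microns)
print(threshold_size_fpa(d=10.0, f_lambda=25.0))   # well-sampled example
```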
Validating model predictions of MRT measurements on LWIR imaging systems
Stephen D. Burks, Kenneth Garner, Stephen Miller, et al.
The predicted Minimum Resolvable Temperature (MRT) values from five MRT models are compared to the measured MRT values for eighteen long-wave thermal imaging systems. The most accurate model, which is based upon the output of NVTherm IP, has an advantage over the other candidate models because it accounts for performance degradations due to blur and bar sampling. Models based upon the FLIR 92 model tended to predict overly optimistic values for all frequencies. The earliest MRT models for staring arrays did not incorporate advanced eye effects and had the tendency to provide pessimistic estimates as the frequency approached the Nyquist limit.
Comparison of perception results with a proposed model for detection of a stationary target from a moving platform
A model has been developed that predicts the probability of detection as a function of time for a sensor on a moving platform looking for a stationary object. The proposed model takes as input P (calculated from NVThermIP), expresses it as a function of time using the known sensor-target range and outputs detection probability as a function of time. The proposed search model has one calibration factor that is determined from the mean time to detect the target. Simulated imagery was generated that models a vehicle moving with constant speed along a straight road with varied vegetation on both sides and occasional debris on the road and on the shoulder. Alongside, and occasionally on the road, triangular and square shapes are visible with a contrast similar to that of the background but with a different texture. These serve as targets to be detected. In perception tests, the ability of observers to detect the simulated targets was measured and excellent agreement was observed between modeled and measured results.
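A hedged sketch of the kind of time-dependent search model described above: the classical exponential form P(t) = P∞·(1 − exp(−t/τ)) is assumed here, with the single calibration factor τ tied to the mean time to detect; the paper's exact formulation may differ, and P∞ would come from NVThermIP as a function of the time-varying range.

```python
import numpy as np

def detection_probability(t, p_inf, tau):
    """Probability of detection after search time t (seconds), assumed exponential law."""
    return p_inf * (1.0 - np.exp(-np.asarray(t, dtype=float) / tau))

# Illustrative values only; p_inf and tau must be supplied by the model and calibration.
print(detection_probability([1.0, 5.0, 20.0], p_inf=0.8, tau=6.0))
```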
Modeling IV
Performance evaluation of image enhancement techniques on a digital image-intensifier
Recently, new techniques for night-vision cameras have been developed. Digital image-intensifiers are becoming available on the market, and so-called EMCCD cameras have been developed, which may even be able to record color information about the scene. However, in low-light situations all night-vision imagery becomes noisy. In this paper we evaluate the performance of image enhancement techniques for one type of noisy night imagery, namely a digital image-intensifier. The image enhancement techniques tested are noise reduction, super-resolution reconstruction and local adaptive contrast enhancement (LACE). The results show that image enhancement techniques improve the usability of image-intensifiers in low-light conditions. The largest improvement is found for super-resolution reconstruction applied to the smallest objects, indicating that part of the improvement is obtained by resolution enhancement. Applying LACE does not change the performance, indicating that in this setup LACE performs equally to the automatic gain control of the image-intensifier.
Limitations of contrast enhancement for infrared target identification
Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content. Automatic contrast enhancement techniques do not always achieve this improvement. In some cases, the contrast can increase to a level of target saturation. This paper assesses the range-performance effects of contrast enhancement for target identification as a function of image saturation. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an un-cooled LWIR camera. The experiments compare the identification performance of observers viewing contrast enhancement processed images at various levels of saturation. Contrast enhancement is modeled in the U.S. Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of specific feature saturation or enhancement. The measured results follow the predicted performance based on the target task difficulty metric used in NVThermIP for the non-saturated cases. The saturated images reduce the information contained in the target and performance suffers. The model treats the contrast of the target as uniform over spatial frequency. As the contrast is enhanced, the model assumes that the contrast is enhanced uniformly over the spatial frequencies. After saturation, the spatial cues that differentiate one tank from another are located in a limited band of spatial frequencies. A frequency dependent treatment of target contrast is needed to predict performance of over-processed images.
Multispectral EO/IR sensor model for evaluating UV, visible, SWIR, MWIR and LWIR system performance
Ashok K. Sood, Robert Richwine, Yash R. Puri, et al.
Next-generation EO/IR sensors using nanostructures are being developed for a variety of defense applications. In addition, large-area IRFPAs are being developed on low-cost substrates. In this paper, we discuss the capabilities of an EO/IR sensor model that provides a robust means for comparing the performance of infrared FPAs and sensors operating in the visible and infrared spectral bands that coincide with the atmospheric windows: UV, Visible-NIR (0.4-1.8 μm), SWIR (2.0-2.5 μm), MWIR (3-5 μm), and LWIR (8-14 μm). The model predicts sensor performance and also functions as an assessment tool for single-color and multi-color imaging. The detector model can also characterize ZnO, Si, SiGe, InGaAs, InSb, HgCdTe and nanostructure-based sensors. The model can place a specific FPA into an optical system and evaluate system performance (NEI, NETD, MRTD, and SNR). This model has been used as a tool for predicting the performance of state-of-the-art detector arrays and nanostructure arrays under development. Results of the analysis can be presented for various targets for each of the focal plane technologies for a variety of missions.
Identification of ground targets from airborne platforms
The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) sensor performance models predict the ability of soldiers to perform a specified military discrimination task using an EO/IR sensor system. Increasingly, EO/IR systems are being used on manned and unmanned aircraft for surveillance and target acquisition tasks. In response to this emerging requirement, the NVESD Modeling and Simulation Division has been tasked to compare target identification performance between ground-to-ground and air-to-ground platforms for both the IR and visible spectra for a set of wheeled utility vehicles. To measure performance, several forced-choice experiments were designed and administered and the results analyzed. This paper describes these experiments and reports the results as well as the NVTherm model calibration factors derived for the infrared imagery.
Targets, Backgrounds, and Atmospherics I
Validation of the thermal code of RadTherm-IR, IR-Workbench, and F-TOM
System assessment by image simulation requires synthetic scenarios that can be viewed by the device to be simulated. In addition to physical modeling of the camera, a reliable modeling of scene elements is necessary. Software products for modeling of target data in the IR should be capable of (i) predicting surface temperatures of scene elements over a long period of time and (ii) computing sensor views of the scenario. For such applications, FGAN-FOM acquired the software products RadTherm-IR (ThermoAnalytics Inc., Calumet, USA) and IR-Workbench (OKTAL-SE, Toulouse, France). Inspection of the accuracy of simulation results by validation is necessary before using these products for applications. In the first step of validation, the performance of both "thermal solvers" was determined through comparison of the computed diurnal surface temperatures of a simple object with the corresponding values from measurements. CUBI is a rather simple geometric object with well known material parameters which makes it suitable for testing and validating object models in IR. It was used in this study as a test body. Comparison of calculated and measured surface temperature values will be presented, together with the results from the FGAN-FOM thermal object code F-TOM. In the second validation step, radiances of the simulated sensor views computed by RadTherm-IR and IR-Workbench will be compared with radiances retrieved from the recorded sensor images taken by the sensor that was simulated. Strengths and weaknesses of the models RadTherm-IR, IR-Workbench and F-TOM will be discussed.
The coupling of MATISSE and the SE-WORKBENCH: a new solution for simulating efficiently the atmospheric radiative transfer and the sea surface radiation
Thierry Cathala, Nicolas Douchin, Jean Latger, et al.
The SE-WORKBENCH workshop, also called CHORALE (French acronym for "simulated Optronic Acoustic Radar battlefield"), is used by the French DGA (MoD) and several other defense organizations and companies around the world to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multispectral 3D scenes that may contain several types of target, and then to generate the physical signal received by a sensor, typically an IR sensor. The SE-WORKBENCH can be used either as a collection of software modules through dedicated GUIs or as an API made of a large number of specialized toolkits. The SE-WORKBENCH is made of several functional blocks: one for geometrically and physically modeling the terrain and the targets, one for building the simulation scenario, and one for rendering the synthetic environment, both in real and non-real time. Among the modules that the modeling block is composed of, SE-ATMOSPHERE is used to simulate the atmospheric conditions of a synthetic environment and then to integrate the impact of these conditions on a scene. This software product generates a physical atmosphere exploitable by the SE-WORKBENCH tools generating spectral images. It relies on several external radiative transfer models, such as MODTRAN V4.2 in the current version. MATISSE [4,5] is a background scene generator developed for the computation of natural background spectral radiance images and useful atmospheric radiative quantities (radiance and transmission along a line of sight, local illumination, solar irradiance ...). Backgrounds include atmosphere, low and high altitude clouds, sea and land. A particular characteristic of the code is its ability to take into account atmospheric spatial variability (temperatures, mixing ratio, etc.) along each line of sight. An Application Programming Interface (API) is included to facilitate its use in conjunction with external codes. MATISSE is currently considered as a new external radiative transfer model to be integrated in SE-ATMOSPHERE as a complement to MODTRAN. Compared to the latter, which is used as a whole, MATISSE can be used step by step and modularly as an API: this avoids pre-computing the large atmospheric parameter tables that are currently required with MODTRAN. The use of MATISSE will also enable a real coupling between the ray tracing process of the SE-WORKBENCH and the radiative transfer model of MATISSE. This will improve the link between a general atmospheric model and a specific 3D terrain. The paper demonstrates the advantages for the SE-WORKBENCH of using MATISSE as a new atmospheric code, and also for computing the radiative properties of the sea surface.
MATISSE-v1.5 and MATISSE-v2.0: new developments and comparison with MIRAMER measurements
Pierre Simoneau, Karine Caillault, Sandrine Fauqueux, et al.
MATISSE is a background scene generator developed for the computation of natural background spectral radiance images and useful atmospheric radiative quantities (radiance and transmission along a line of sight, local illumination, solar irradiance ...). The spectral bandwidth ranges from 0.4 to 14 μm. Natural backgrounds include atmosphere (taking into account spatial variability), low and high altitude clouds, sea and land. The current version, MATISSE-v1.5, can be run on SUN and IBM workstations as well as on PCs under Windows and Linux. A graphical user interface developed in Java is also provided. MATISSE-v2.0 retains all the MATISSE-v1.5 functionalities and includes a new sea surface radiance model depending on wind speed, wind direction and the fetch value. The release of this new version is planned for April 2009. This paper gives a description of MATISSE-v1.5 and MATISSE-v2.0 and shows preliminary comparisons between generated images and images measured during the MIRAMER campaign, which was held in May 2008 in the Mediterranean Sea.
Targets, Backgrounds, and Atmospherics II
Measurement and analysis of optical surface properties for input to ShipIR
David A. Vaitekunas, Jim Jafolla, Paul McKenna, et al.
A new standard for the measurement and analysis of optical surface properties for input to the ShipIR model (Vaitekunas, 2002) is developed and tested using paint specimens taken from the unclassified Canadian research vessel CFAV Quest. The theory and equations used to convert the in-lab surface property measurements into ShipIR model input parameters are described. The resultant data consist of two thermal model input parameters, solar absorptivity (αs) and thermal emissivity (εT), and a series of in-band surface properties: the nominal emissivity (ε), nominal specular reflectance (ρs), angular lobe-width (e) and grazing-angle (b) parameters. Original sample measurements from 2004 are supplemented with new hemispherical directional reflectance (HDR) and bi-directional reflectance distribution function (BRDF) measurements from 2008 to track the changes in the paint specimens and expand the analysis to include additional input parameters to ShipIR. A more rigorous treatment of the BRDF model relates the HDR and BRDF measurements to a single surface roughness parameter (σ).
CART III: improved camouflage assessment using moving target indication
In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built for the camouflage assessment of objects in image sequences (see contributions to SPIE 2007 and SPIE 2008 [1], [2]). It works with visual-optical, infrared and SAR image sequences. The system comprises a semi-automatic annotation functionality for marking target objects (ground truth generation), including propagation of those markings over the image sequence for static as well as moving scene objects, where the recording camera may be static or moving. The marked image regions are evaluated by applying user-defined feature extractors, which can easily be defined and integrated into the system via a generic software interface. This article presents further systematic enhancements made in the past year and particularly addresses the detection of moving vehicles by the latest image exploitation methods for objective camouflage assessment in these cases. As a main topic, the loop was closed between the two natural opposites of reconnaissance and camouflage, realized by incorporating ATD (Automatic Target Detection) algorithms into the computer-aided camouflage assessment. Since object (and sensor) movement is an important feature for many applications, different image-based MTI (Moving Target Indication) algorithms were included in the CART system, which rely on changes in the image plane from one image to the next (after camera movements are automatically compensated). Additionally, the MTI outputs over time are combined in a particular way which we call the "snail track" algorithm. The results show that their output provides a valuable measure of the conspicuity of moving objects and is therefore an ideal component in the camouflage assessment. It is shown that image-based MTI improvements lead to improvements in the camouflage assessment process.
A structure-based image similarity measure using homogeneity regions
Comparing two similar images is often needed to evaluate the effectiveness of an image processing algorithm, but there is no single widely used objective measure. In many papers, the mean squared error (MSE) or peak signal-to-noise ratio (PSNR) is used. These measures rely entirely on pixel intensities. Though they are well understood and easy to implement, they do not correlate well with perceived image quality. This paper presents an image quality metric that analyzes image structure rather than relying entirely on pixel intensities. It extracts image structure with the use of a recursive quadtree decomposition. A similarity comparison function based on contrast, luminance, and structure is presented.
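An illustrative sketch of the general approach, not the authors' metric: a recursive quadtree split driven by a homogeneity (variance) test defines regions, and each region is then compared using an SSIM-style luminance/contrast/structure term. The variance threshold, minimum block size and stabilizing constants are assumptions.

```python
import numpy as np

def quadtree_leaves(img, x=0, y=0, var_thresh=25.0, min_size=8):
    """Recursively split img into homogeneous blocks; return (row, col, height, width)."""
    h, w = img.shape
    if h <= min_size or w <= min_size or img.var() <= var_thresh:
        return [(y, x, h, w)]
    h2, w2 = h // 2, w // 2
    leaves = []
    for dy, dx in ((0, 0), (0, w2), (h2, 0), (h2, w2)):
        sub = img[dy:dy + (h2 if dy == 0 else h - h2), dx:dx + (w2 if dx == 0 else w - w2)]
        leaves += quadtree_leaves(sub, x + dx, y + dy, var_thresh, min_size)
    return leaves

def region_similarity(a, b, c1=6.5, c2=58.5):
    """SSIM-style luminance/contrast/structure comparison of two regions."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb, cov = a.var(), b.var(), ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def structure_similarity(img1, img2):
    """Average region similarity over a quadtree built from the first image."""
    leaves = quadtree_leaves(img1.astype(float))
    return np.mean([region_similarity(img1[y:y+h, x:x+w].astype(float),
                                      img2[y:y+h, x:x+w].astype(float))
                    for (y, x, h, w) in leaves])
```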
Signal modeling of turbulence-distorted imagery
S. Susan Young, Ronald G. Driggers, Keith Krapels, et al.
Understanding turbulence effects on wave propagation and imaging systems has been an active research area for more than 50 years. Conventional atmospheric optics methods use statistical models to analyze image degradation effects that are caused by turbulence. In this paper, we intend to understand atmospheric turbulence effects using a deterministic signal processing and imaging theory point of view and modeling. The model simulates the formed imagery by a lens by tracing the optical rays from the target through a band of turbulence. We examine the nature of the turbulence-degraded image, and identify its characteristics as the parameters of the band of turbulence, e.g., its width, angle, and index of refraction, are varied. Image degradation effects due to turbulence, such as image blurring and image dancing, are revealed by this signal modeling. We show that in fact these phenomena can be related not only to phase errors in the frequency domain of the image but also a 2D modulation effect in the image spectrum. Results with simulated and realistic data are provided.
Systems and Testing I
Comparison of emissivity evaluation methods for infrared sources
Stephen D. Scopatz, Jason A. Mazzetta, John E. Sgheiza, et al.
This paper starts with a back-to-basics review of the definition of blackbody emissivity, how it is measured and how it is specified. Infrared source vendors provide emissivity specifications for their blackbodies and source plates, but there is fine print associated with their declarations. While there is an industry agreement concerning the definition of emissivity, the data sheets for blackbodies and source plates are not consistent in how they base their claims. Generally, there are two types of emissivity specifications published in data sheets: one based on the design properties of the source and a thermometric calibration, and another based on an equivalent radiometric calibrated emissivity. The paper details how source properties including geometry, surface treatment, and coatings are characterized and result in an emissivity value by design. The other approach is that the emissivity can be claimed to be essentially 100% when measured directly with a radiometer. An argument is presented showing that the more the optical parameters of the unit under test and the radiometer diverge, the less useful an equivalent radiometric emissivity claim becomes. Also discussed is under what test conditions the absolute emissivity does not matter. Finally, suggestions on how to achieve the clearest comparative emissivity specifications are presented.
Improving MTF measurements of under-sampled optical systems
The modulation transfer function (MTF) of optical systems is often derived by taking the Fourier transform (FT) of a measured line spread function. Recently, methods of performing Fourier transforms that are common in infrared spectroscopy have been applied to MTF calculations. Proper apodization and phase correction have been shown to improve MTF calculations in optical systems. In this paper these methods as well as another filtering algorithm based on phase are applied to under-sampled optical systems. Results, both with and without the additional processing are presented and the differences are discussed.
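A minimal sketch of the basic computation discussed above: the MTF as the modulus of the Fourier transform of a measured line spread function, with a simple Hann apodization window applied first. The phase-correction and phase-based filtering refinements described in the paper are not reproduced here, and the Hann window is an assumed example of apodization.

```python
import numpy as np

def mtf_from_lsf(lsf, sample_pitch):
    """Return spatial frequencies (cycles per unit of sample_pitch) and normalized MTF."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()                       # remove baseline offset
    window = np.hanning(lsf.size)               # apodization to suppress truncation ripple
    spectrum = np.abs(np.fft.rfft(lsf * window))
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)
    return freqs, spectrum / spectrum[0]
```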
Infrared search and track and imaging system: testing in the laboratory and during flight
PIRATE (Passive Infra Red Airborne Tracking Equipment), a multi-mode infrared sensor, has been developed and manufactured by the Eurofirst consortium for the Eurofighter 2000 aircraft. The consortium is led by SELEX Galileo (Italy), the contract awardee, and includes THALES UK and TECNOBIT from Spain. Testing of the unit has been performed both in the laboratory and in flight. In the laboratory, sophisticated measurement techniques have been employed to verify optical parameters such as the MTF, the NETD and the MRTD in the different fields of view, as well as the detection, tracking and identification functions. In addition, a flight test campaign has started to verify the performance when operating in real scenarios, with cooperative targets and targets of opportunity, across different weather conditions such as clear sky, but also misty and cloudy weather, haze and hail, generally not ideal for operation at infrared wavelengths, during both tracking and imaging.
MWIR and LWIR wavefront sensing with quadri-wave lateral shearing interferometry
Sabrina Velghe, Djamel Brahmi, William Boucher, et al.
We present the application of Quadri-Wave Lateral Shearing Interferometry (QWLSI), a wavefront sensing technique, to characterize optical beams at infrared wavelengths from 2 to 16 μm with a single instrument. This technique can be used to quantify the quality of optical systems (such as thermal infrared lenses) by measuring their aberrations. It can also evaluate laser sources in the infrared range, such as gas lasers (HeNe or CO2 lasers), infrared Optical Parametric Oscillator laser sources or Quantum Cascade Laser sources. In all cases, QWLSI offers the crucial advantage that it yields an analyzed wavefront without the use of a reference arm and the consequent time-consuming alignment. In this paper, we first present the single interferometer, which can be used with wavelengths between 2 and 16 μm, thereby covering the MWIR (λ between 3 and 5 μm) and LWIR (λ between 8 and 14 μm) ranges. We then present the characterization of two gas lasers with this instrument: an infrared HeNe laser (λ = 3.39 μm) and a CO2 laser (λ = 10.6 μm). We finally show the experimental analysis of an infrared lens at two different wavelengths, one in the MWIR range (λ = 3.39 μm) and the other in the LWIR range (λ = 10.6 μm).
A new fast infrared imaging spectroradiometer
Louis Moreau, Claude Roy, Christian Vallières, et al.
ABB Bomem is expanding its line of infrared remote sensing products with the addition of a new imaging spectroradiometer. This hyperspectral instrument is based on the proven MR FTIR spectroradiometers. This field instrument, called the MR-i, is an imaging Fourier Transform spectroradiometer. It generates spectral data cubes in the MWIR and LWIR. It is designed to be sufficiently fast to acquire the spectral signatures of rapid events. The design is modular. The two output ports of the instrument can be populated with different combinations of detectors (imaging or not). For instance to measure over a broad spectral range, one output port can be equipped with a LWIR camera while the other port is equipped with a MWIR camera. No dichroics are used to split the bands, hence enhancing the sensitivity. Both ports can be equipped with cameras serving the same spectral range but set at different sensitivity levels in order to increase the measurement dynamic range and avoid saturation of bright parts of the scene while simultaneously obtaining good measurement of the faintest parts of the scene. Various telescope options are available for the input port. This is a presentation of the current state of the development.
Systems and Testing II
Controllable time dependent and dual band emission infrared source to test missile warning systems in-flight: system characterization
Dario Cabib, Larry Davidzon, Amir Gil
Proliferation and technological progress of Mid Wave Infrared (MWIR) sensors for Missile Warning Systems (MWS)1,2 and the increased sophistication of countermeasures require more demanding in-flight testing. Spectral discrimination is being introduced for higher specificity and lower false alarm rates. As a result, testing such spectrally more capable systems requires a more spectrally capable stimulator. In a previous paper3 we described a system we developed to test missile warning systems mounted on an aircraft. The system is placed in the field and projects a time-dependent infrared beam towards the flying aircraft, simulating the infrared emittance of an approaching missile in the 3 to 5 micron spectral range as sensed by an MWS. It can also be used as a trainer for pilots to practice their reaction to being targeted. We have now developed a new system based on the above concept that allows the user to synchronously produce time profiles in two different infrared bands independently within the 3 to 5 micron range (3.5 to 4 and 4.5 to 4.8 μm). This new dual color system (the DCIRTS) can be used stationary or mounted on a vehicle while traveling, for even more realistic simulation. In this paper we describe the DCIRTS and its capabilities. The system design was presented in a previous paper (reference 4); now, after assembly and preliminary testing, we show the actual system performance and its most important physical characteristics.
MKV carrier vehicle sensor calibration
Joseph Tansock, Scott Hansen, Jason Williams, et al.
The Multiple Kill Vehicle (MKV) system, which is being developed by the US Missile Defense Agency (MDA), is a midcourse payload that includes a carrier vehicle and a number of small kill vehicles. During the mission, the carrier vehicle dispenses the kill vehicles to address a complex threat environment and directs each kill vehicle toward the intercept point for its assigned threat object. As part of the long range carrier vehicle sensor development strategy, MDA and project leaders have developed a pathfinder sensor and are in the process of developing two subsequent demonstration sensors to provide proof of concept and to demonstrate technology. To increase the probability of successful development of the sensor system, detailed calibration measurements have been included as part of the sensor development. A detailed sensor calibration can provide a thorough understanding of sensor operation and performance, verifying that the sensor can meet the mission requirements. This approach to instrument knowledge will help ensure the program success and reduce cost and schedule risks. The Space Dynamics Laboratory at Utah State University (SDL) completed a calibration test campaign for the pathfinder sensor in April 2008. Similar calibration efforts are planned in 2009 for the two demonstration sensors. This paper provides an overview of calibration benefits, requirements, approach, facility, measurements, and preliminary results of the pathfinder calibration.
Large-area blackbody emissivity variation with observation angle
Calibration of wide-angle (100°+ field of view) long wave infrared cameras with commercially available large-area blackbody calibration targets poses problems. Typically the emissivity of blackbody sources is specified on axis and up to angles of approximately 20°. For wide-angle camera calibration the emissivity needs to be known out to 60° or greater. Presented is a technique that uses the known on-axis emissivity for the blackbody and changes in radiance with angle to determine the angle-dependent emissivity. Four commercial blackbodies with various surface structures were measured. The emissivity was found to be significantly angle dependent beyond 30°, dropping to 0.95 or less by 60°.
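A sketch of the ratio technique described above, under simplifying assumptions (same blackbody temperature for all measurements, reflected background contribution neglected): the radiance measured at angle θ, divided by the on-axis radiance and scaled by the known on-axis emissivity, gives an estimate of ε(θ). The proportional relation used here is an assumption for illustration, not the paper's exact reduction.

```python
import numpy as np

def angular_emissivity(radiance_theta, radiance_on_axis, emissivity_on_axis):
    """Estimate emissivity at each off-axis angle from measured radiances."""
    return emissivity_on_axis * np.asarray(radiance_theta, dtype=float) / radiance_on_axis

angles = np.array([0, 20, 40, 60])              # degrees (illustrative)
L = np.array([1.000, 0.998, 0.985, 0.955])      # notional relative radiances
print(angular_emissivity(L, L[0], emissivity_on_axis=0.97))
```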
A study of the radiometric calibration of spectral bands in the mid-wave infrared (MWIR) spectral range 1.5-5 µm
Thomas Svensson, Ingmar Renhorn, Patrik Broberg
Radiometric calibrations of sensor data are routinely performed using one or more radiation sources at different radiance levels. Calibration of spectral bands in the thermal infrared region (> 2.0 μm) is needed due to the bias drift which is a characteristic of the detector technology (e.g. MCT, InSb). To maintain the accuracy during an extended measurement the calibration has to be frequently repeated. The complexity of the radiometric calibration increases even more when a) the number of spectral bands to calibrate increases, b) the spectral range of the sensors to calibrate increases, c) the radiation level of the scene or object under interest (like hot spots) increases. If the accuracy in the calibration is to be maintained, all these factors will both increase the time needed to perform the calibration and the number and/or complexity of the radiation sources needed. Either or both may be impractical to handle during a field trial. In this paper we have studied the radiometric calibration of spectral bands in the mid wave infrared region (MWIR, 1.5 - 5.5 μm), with the focus on hot spots. The model and methodology proposed are however general and can be applied on an arbitrary set of sensor data collected in the 0.4 - 12 μm spectral region. Data was obtained from a cooled multi-band sensor based on an MCT detector. The study also includes the development of a SWIR source practical for field trials.
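For reference, a generic two-point radiometric calibration sketch (not the authors' proposed model or methodology): per-pixel gain and offset are solved from measurements of two blackbody sources at known radiances and then applied to scene data; because of the bias drift mentioned above, such fits would need to be repeated frequently in practice.

```python
import numpy as np

def two_point_calibration(counts_low, counts_high, radiance_low, radiance_high):
    """Per-pixel gain (counts per radiance unit) and offset (counts) from two sources."""
    gain = (counts_high - counts_low) / (radiance_high - radiance_low)
    offset = counts_low - gain * radiance_low
    return gain, offset

def counts_to_radiance(counts, gain, offset):
    """Apply the calibration to raw counts."""
    return (counts - offset) / gain
```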
A new passive polarimetric imaging system collecting polarization signatures in the visible and infrared bands
Electro-optical imaging systems are frequently employed during surveillance operations and search and rescue missions to detect various targets of interest in both the civilian and military communities. By incorporating the polarization of light as supplementary information to such electro-optical imaging systems, it may be possible to increase the target discrimination performance considering that man-made objects are known to depolarize light in different manners than natural backgrounds. Consequently, many passive Stokes-vector imagers have been developed over the years. These sensors generally operate using one single spectral band at a time, which limits considerably the polarization information collected across a scene over a predefined specific spectral range. In order to improve the understanding of the phenomena that arise in polarimetric signatures of man-made targets, a new passive polarimetric imaging system was developed at Defence Research and Development Canada - Valcartier to collect polarization signatures over an extended spectral coverage. The Visible Infrared Passive Spectral Polarimetric Imager for Contrast Enhancement (VIP SPICE) operates four broad-band cameras concomitantly in the visible (VIS), the shortwave infrared (SWIR), the midwave infrared (MWIR), and the longwave infrared (LWIR) bands. The sensor is made of four synchronously-rotating polarizers mounted in front of each of the four cameras. Polarimetric signatures of man-made objects were acquired at various polarization angles in the four spectral bands. Preliminary results demonstrate the utility of the sensor to collect significant polarimetric signatures to discriminate man-made objects from their background.
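As background to the rotating-polarizer measurement described above, the standard reduction of four intensity images (polarizer at 0°, 45°, 90° and 135°) to the linear Stokes parameters is sketched below; this is the textbook computation, not necessarily the processing used for the VIP SPICE sensor, and the circular component S3 is not measured with linear polarizers.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters, degree and angle of linear polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)              # angle of polarization (radians)
    return s0, s1, s2, dolp, aop
```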
Poster Session
Experimental method for observation prediction based on the decision matrix through day/night equipments in NIR and LWIR spectral ranges
The paper presents an evaluation methodology and the results of laboratory experiments performed to determine the target detection probability as a function of target contrast and observer age. The main goal was to establish a model for the optimal configuration of a day/night viewing device, so that its visibility limits under poor viewing conditions during day and night can be estimated. The method is based on Bayes' theorem, and the authors used the technique of estimating true-positive and true-negative rates that is also used in the evaluation of medical images. The laboratory setup included an uncooled 8-12 μm thermal camera, a CCD and an ICU camera, a USAF pattern, and a set of chemical compositions that produce aerosols of different concentrations. It was shown that the detection probability decreases with age, differentiated by the contrast between the target and the background; the diagram of the probability variation and the analytical relationships that approximate it are presented in terms of contrast and aerosol concentration.
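A small sketch of the decision-matrix bookkeeping implied above: true-positive and true-negative rates (sensitivity and specificity) computed from observer responses, which can then feed a Bayes-based estimate of detection probability. The counts below are placeholders, not the experiment's data.

```python
def rates(tp, fn, tn, fp):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

print(rates(tp=42, fn=8, tn=37, fp=13))   # illustrative counts only
```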
Evaluation of the different configurations of infrared-type gimbaled cameras in the sense of blur
In guided munition applications, the primary requirement is to detect the intended target correctly and then to track it until the termination of the engagement. However, the especially high angular rates of a munition carrying an infrared (IR) camera cause the target image on the detector of the camera to blur. This, in turn, results in losing the correct target information and even in missing the target. Therefore, the blur problem must be handled carefully in the design process of an IR-type camera. In this study, the blur problem of an IR-type gimbaled camera operating on a guided munition is dealt with, and the net field of view of the camera is determined for its different configurations. In the calculations, the roll rate of the munition is taken into consideration because it is much greater than its counterparts in the yaw and pitch directions. Afterwards, the roll rate limit causing no blur is obtained and ways to avoid this severe condition are proposed.
Novel image fusion quality metrics based on sensor models and image statistics
This paper presents progress in image fusion modeling. One fusion quality metric based on the Targeting Task Performance (TTP) metric and another based on entropy are presented. A human perception test was performed with fused imagery to determine the effectiveness of the metrics in predicting image fusion quality. Both fusion metrics first establish which of two source images is ideal in a particular spatial frequency pass band. The fused output of a given algorithm is then measured against this ideal in each pass band. The entropy-based fusion quality metric (E-FQM) uses statistical information (entropy) from the images, while the Targeting Task Performance fusion quality metric (TTP-FQM) utilizes the TTP metric value in each spatial frequency band. This TTP metric value is the measure of available excess contrast determined by the Contrast Threshold Function (CTF) of the source system and the target contrast. The paper also proposes an image fusion algorithm that chooses source image contributions using a quality measure similar to the TTP-FQM. To test the effectiveness of the TTP-FQM and E-FQM in predicting human image quality preferences, SWIR and LWIR imagery of tanks was fused using four different algorithms. A paired comparison test was performed with both source and fused imagery as stimuli. Eleven observers were asked to select which image enabled them to better identify the target. Over the ensemble of test images, the experiment showed that both the TTP-FQM and E-FQM were capable of identifying the fusion algorithms most and least preferred by human observers. Analysis also showed that the TTP-FQM and E-FQM identify human image preferences better than existing fusion quality metrics such as the Weighted Fusion Quality Index and Mutual Information.
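An illustrative sketch of a band-wise, entropy-based comparison in the spirit of the E-FQM described above (the actual metric definition is not reproduced here): per spatial-frequency band, the source image with higher entropy is taken as the ideal, and the fused band is scored against it. The histogram binning and the specific score formula are assumptions.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def band_score(src_a_band, src_b_band, fused_band):
    """Score the fused band against whichever source band has higher entropy."""
    ideal = src_a_band if entropy(src_a_band) >= entropy(src_b_band) else src_b_band
    e_ideal = entropy(ideal)
    return 1.0 - abs(entropy(fused_band) - e_ideal) / max(e_ideal, 1e-9)
```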