- Front Matter: Volume 6941
- Modeling I
- Modeling II
- Modeling III
- Modeling IV
- Modeling V
- Atmospheric Effects
- Systems and Testing I
- Systems and Testing II
- Poster Session
Front Matter: Volume 6941
Front Matter: Volume 6941
This PDF file contains the front matter associated with SPIE Proceedings Volume 6941, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Modeling I
What causes sampling artifacts?
The sampling process creates an infinite number of new frequencies that were not present in the original scene. For an under-sampled system (which is characteristic of nearly all imaging systems), the replicated frequencies can overlap scene frequencies. An optical anti-alias filter can eliminate the overlapping but does not prevent frequency replication. Frequencies above the Nyquist frequency are eliminated by an ideal reconstruction filter. As overlapping increases or with less than ideal reconstruction, the resultant image is distorted. The phases associated with the replicated frequencies violate linear-shift-invariant system requirements. As a result, movement of the scene with respect to the detector array creates ambiguity in edge locations further distorting imagery. While sampling theory suggests that sharp cutoff filters are required, these filters will create ringing (Gibbs phenomenon) in the image. Replicated spectra that appear in the image are called the spurious response. Out-of-band spurious response (above Nyquist frequency) looks very similar to the input but with phase variation. The phase errors interfere with target recognition and identification. In-band spurious response (frequencies less than the Nyquist frequency) appears somewhat like noise. At this juncture it is not clear how this "noise" interferes with recognition and identification tasks. It may only affect detection. Since the sampling process replicates frequencies, it is possible to extract information with reconstruction band-pass filters whose center frequency is above the Nyquist frequency.
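The fold-over of replicated frequencies into the baseband can be illustrated with a short sketch (the frequencies and units below are illustrative, not taken from the paper):

```python
def aliased_frequency(f, fs):
    """Fold a scene frequency f into the sampled baseband [0, fs/2].

    Sampling at rate fs replicates the spectrum at k*fs +/- f; in an
    under-sampled system, any f above Nyquist (fs/2) reappears as a
    spurious in-band frequency.
    """
    f = abs(f) % fs                   # replicas repeat with period fs
    return fs - f if f > fs / 2 else f

# A 7 cy/mrad scene frequency sampled at 10 samples/mrad (Nyquist = 5)
# aliases down to a spurious 3 cy/mrad in-band response.
print(aliased_frequency(7.0, 10.0))
```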
Modeling II
Modeling the benefit of color in target acquisition: characterizing color vision
This paper describes experiments to examine the role of color contrast in target identification and signal detection. Current models quantify the image quality of the achromatic visual channel. However, a chromatic map is also created in the visual cortex. The chromatic map has lower spatial resolution than the achromatic channel. The chromatic and achromatic channels are somehow combined in the visual cortex to provide a full-resolution, colored scene.
Adding color to the current target acquisition model requires answering two primary questions. First, does color contribute to feature discrimination? Second, what are the spatial and sensitivity characteristics of color perception?
Specifically, we describe experiments to determine the contribution of color to object identification.
High resolution images obtained with uncooled microbolometer
This study presents experimental results of a resolution enhancement algorithm applied to real-world imagery without any microscan optomechanical element. The HR (High Resolution) software developed by Lightnics was used together with an uncooled microbolometer array from Ulis with a low thermal time constant. The algorithm exploits the relative motion between camera and object to produce information redundancy across a set of captured images, from which a single increased-resolution image is derived. Enhanced images of a bar target show better MTF and MRTD curves after HR processing than before. Lower spatial and temporal noise was also obtained as a further benefit of the algorithm.
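The Lightnics HR algorithm itself is not given in the abstract, but the underlying idea of combining sub-pixel-shifted frames into one higher-resolution image can be sketched as a naive shift-and-add (the function name and the convention of shifts given in high-resolution pixels are assumptions for illustration):

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive shift-and-add super-resolution sketch.

    Each low-resolution frame is assumed displaced by a known shift,
    expressed in high-resolution pixels (e.g. (0, 1) = half a
    low-resolution pixel for factor=2). Samples are accumulated onto
    the finer grid and averaged where frames overlap.
    """
    h, w = frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hr)
    for frame, (dy, dx) in zip(frames, shifts):
        hr[dy::factor, dx::factor] += frame
        count[dy::factor, dx::factor] += 1
    count[count == 0] = 1             # leave unvisited HR pixels at zero
    return hr / count
```

Four frames shifted over a half-pixel grid fully populate a 2x grid; real algorithms must also estimate the shifts and deblur.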
Effects of video compression on target acquisition performance
The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and
multi-spectral capabilities. To mitigate this problem, still and moving imagery can be compressed, often yielding a
greater than 100-fold decrease in required bandwidth. Compression, however, is generally not error-free, and the
generated artifacts can adversely affect task performance.
The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment
of various compression techniques on static imagery for tank identification. In this paper, we expand this initial
assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank
identification performance. We perform a series of controlled human perception tests using three dynamic simulated
scenarios: target moving/sensor static, target static/sensor static, sensor tracking the target. Results of this study will
quantify the effect of video compression on target identification and provide a framework to evaluate video compression
on future sensor systems.
Small craft ID criteria (N50/V50) for short wave infrared sensors in maritime security
The need for Anti-Terrorism and Force Protection (AT/FP), for both shore and sea platform protection, has resulted in a
need for imager design and evaluation tools which can predict field performance against maritime asymmetric threats.
In the design of tactical imaging systems for target acquisition, a discrimination criterion is required for successful
sensor realization. It characterizes the difficulty of the task being performed by the observer and varies for different
target sets. This criterion is used both in the assessment of existing infrared sensors and in the design of new
conceptual sensors.
In this experiment, we collected 8 small craft signatures (military and civilian) in the short wave infrared (SWIR) band
during the day. These signatures were processed to determine the targets' characteristic dimension and contrast. They
were also processed to bandlimit the signature's spatial information content (simulating longer range) and a perception
experiment was performed to determine the task difficulty (N50 and V50). The results are presented in this paper and can
be used for maritime security imaging sensor design and evaluation.
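For reference, N50 (or V50) task-difficulty values of the kind measured here are typically applied through the standard NVESD target transfer probability function, a sketch of which follows (generic form, not code from the paper):

```python
def p_task(N, N50):
    """Target transfer probability: chance of performing a discrimination
    task given N resolvable cycles on target, where N50 cycles yield a
    50% probability. E is the commonly used empirical exponent."""
    E = 2.7 + 0.7 * (N / N50)
    r = (N / N50) ** E
    return r / (1.0 + r)

# By construction, N = N50 gives probability 0.5; more cycles on
# target raise the probability toward 1.
```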
An improved image scene registration using wavelets
Subpixel scene registration is useful for certain image processing applications; for example, image frame integration
can use frame registration, and the scene shifts do not have to be integer pixel shifts. In this paper, we present an
image registration approach based on wavelet decomposition and the Fitts correlation algorithm. The original Fitts
algorithm is ideal for small-scale translations; a successful image-based tracker using Fitts correlation for position
measurement requires additional modifications to the original algorithm to enable it to operate beyond small-scale
translations.
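The Fitts estimator itself is not reproduced in the abstract; a minimal 1-D illustration of sub-pixel shift estimation (cross-correlation peak with parabolic refinement, a generic stand-in rather than the Fitts gradient-based method) might look like:

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the shift of b relative to a (b ~ a shifted right by d).

    The circular cross-correlation peak gives the integer shift; a
    parabola fitted through the peak and its two neighbours refines it
    to sub-pixel accuracy.
    """
    n = len(a)
    corr = np.real(np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))))
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    denom = y0 - 2.0 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    shift = k + frac
    return shift - n if shift > n / 2 else shift
```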
Modeling III
Monotonic correlation analysis of image quality measures for image fusion
The next generation of night vision goggles will fuse image intensified and long wave infra-red to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality measures to automatically determine the best image fusion algorithm for a particular task. This work will introduce a novel monotonic correlation coefficient to investigate how well possible image quality features correlate to actual human performance, which is measured by a perception study. The paper will demonstrate how monotonic correlation can identify worthy features that could be overlooked by the traditional Pearson correlation.
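The paper's novel monotonic coefficient is not given in the abstract; the classical monotonic-association measure it generalizes, rank (Spearman) correlation, already illustrates why monotone-but-nonlinear relationships can defeat Pearson correlation (pure-Python sketch, ties ignored for brevity):

```python
def spearman(x, y):
    """Rank (Spearman) correlation: Pearson correlation of the ranks.

    Any strictly monotone relationship scores +/-1 even when the raw
    Pearson correlation is below 1 in magnitude.
    """
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, i in enumerate(order):
            r[i] = float(pos)
        return r

    def pearson(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = sum((a - mu) ** 2 for a in u) ** 0.5
        sv = sum((b - mv) ** 2 for b in v) ** 0.5
        return cov / (su * sv)

    return pearson(rank(x), rank(y))

# y = x**3 is monotone but nonlinear: the rank correlation is exactly 1,
# while Pearson on the raw values would fall below 1.
```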
Human target identification and automated shape based target recognition algorithms using target silhouette
Human target identification performance based on target silhouettes is measured and compared to that of complete targets. The target silhouette identification performance of automated region-based and contour-based shape identification algorithms is also compared. The region-based algorithms of interest are the Zernike Moment Descriptor (ZMD), Geometric Moment Descriptor (GMD), and Grid Descriptor (GD), while the contour-based algorithms considered are the Fourier Descriptor (FD), Multiscale Fourier Descriptor (MFD), and Curvature Scale Space Descriptor (CS). The results from the human perception experiments indicate that at high levels of degradation, human identification of targets based on silhouettes is better than that of complete targets. The shape recognition algorithm comparison shows that GD performs best, very closely followed by ZMD. In general, region-based shape algorithms perform better than contour-based shape algorithms.
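The idea behind the region-based descriptors can be sketched with translation- and scale-normalized geometric moments of a binary silhouette (the actual GMD/ZMD feature sets are richer; the function and moment orders here are illustrative):

```python
import numpy as np

def moment_descriptor(mask, orders=((2, 0), (0, 2), (1, 1))):
    """Normalized central moments of a binary silhouette.

    Centering on the silhouette centroid removes translation; dividing
    by m00**gamma removes scale, leaving a compact shape feature vector.
    """
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                      # silhouette area
    xbar, ybar = xs.mean(), ys.mean()  # centroid
    feats = []
    for p, q in orders:
        mu = np.sum((xs - xbar) ** p * (ys - ybar) ** q)
        gamma = (p + q) / 2 + 1
        feats.append(mu / m00 ** gamma)
    return feats
```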
Target acquisition performance: effects of target aspect angle, dynamic imaging, and signal processing
In an extensive Target Acquisition (TA) performance study, we recorded static and dynamic imagery of a set of military and civilian two-handheld objects at a range of distances and aspect angles with an under-sampled uncooled thermal imager. Next, we applied signal processing techniques including DSR (Dynamic Super Resolution) and LACE (Local Adaptive Contrast Enhancement) to the imagery. In a perception experiment, we determined identification (ID) and threat/non-threat discrimination performance as a function of target range for a variety of conditions. The experiment was performed to validate and extend current TA models. In addition, range predictions were performed with two TA models: the TOD model and NVThermIP. The results of the study are: i) target orientation has a strong effect on performance, ii) the effect of target orientation is well predicted by the two TA models, iii) absolute identification range is close to the range predicted by the two models using the recommended criteria for two-handheld objects, iv) contrary to expectations based on earlier studies, there was no positive effect of sensor motion on performance, v) the benefit of DSR was smaller than expected on the basis of the model predictions, and vi) performance with LACE was similar to performance on an image optimized manually, indicating that LACE can be used to optimize the contrast automatically. The relatively poor results with motion and DSR are probably due to motion smear induced by a higher camera speed than used in earlier studies. Camera motion magnitude and smear are not yet implemented in TA models.
Infrared sensor modeling for discrimination of ground-based human activity
In an initial effort to better understand how motion in human activities influences sensor performance, Night Vision and Electronic Sensors Directorate (NVESD) developed a perception experiment that tests an observer's ability to identify an activity in static and dynamic scenes. Current sensor models such as NVTherm were calibrated using static imagery of military vehicles but, given the current battlefield environment, the focus has shifted more towards discriminating human activities. In these activities, motion plays an important role but this role is not well quantified by the model. This study looks at twelve hostile and non-hostile activities that may be performed on an urban roadside such as digging a hole, raking, surveillance with binoculars, and holding several weapons. The forced choice experiment presents the activities in both static and dynamic scenes so that the effect of adding motion can be evaluated. The results are analyzed and attempts are made at relating observer performance to various static and dynamic metrics and ultimately developing a calibration for the sensor model.
Modular target acquisition model and visualization tool
We developed a software framework for image-based simulation models in the chain: scene-atmosphere-sensor-image enhancement-display-human observer. The goal is to visualize the steps and to quantify (Target Acquisition) task performance. The framework is set up in such a way that modules of different producers can be combined, once they comply with a standardized interface. At the moment the shell runs with three modules, required to calculate TA performance based on the TOD (Triangle Orientation Discrimination) method. Applications of the shell can be found in the areas of sensor design, maintenance, TA model development, tactical decision aids and R&D. Model developers are invited to add their module to the framework.
Modeling IV
Effect of image bit depth on target acquisition modeling
The impact of bit depth on human in the loop recognition and identification performance is of particular importance
when considering trade-offs between resolution and bandwidth of sensor systems. This paper presents the
results from two perception studies designed to measure the effects of quantization and finite bit depth on target
acquisition performance. The results in this paper allow for the inclusion of limited bit depth and quantization
as an additional noise term in NVESD sensor performance models.
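The quantization-as-noise idea can be checked numerically: uniform quantization with step Δ introduces an error whose RMS is close to Δ/√12, which is the form such a noise term typically takes (a generic sketch, not the NVESD model code):

```python
import numpy as np

def quantize(img, bits, full_scale=1.0):
    """Quantize values in [0, full_scale] to 2**bits uniform levels;
    returns the quantized values and the quantization step."""
    step = full_scale / (2 ** bits - 1)
    return np.round(img / step) * step, step

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, 100_000)
quantized, step = quantize(img, 6)
rms_error = np.sqrt(np.mean((quantized - img) ** 2))
# For a 6-bit quantizer the measured RMS error sits near the
# theoretical step / sqrt(12) of uniform quantization noise.
```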
Human activity discrimination for maritime application
The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is investigating how
motion affects the target acquisition model (NVThermIP) sensor performance estimates. This paper looks specifically at
estimating sensor performance for the task of discriminating human activities on watercraft, and was sponsored by the
Office of Naval Research (ONR). Traditionally, sensor models were calibrated using still images. While that approach
is sufficient for static targets, video allows one to use motion cues to aid in discerning the type of human activity more
quickly and accurately. This, in turn, will affect estimated sensor performance and these effects are measured in order to
calibrate current target acquisition models for this task. The study employed an eleven alternative forced choice
(11AFC) human perception experiment to measure the task difficulty of discriminating unique human activities on
watercraft. A mid-wave infrared camera was used to collect video at night. A description of the construction of this
experiment is given, including: the data collection, image processing, perception testing and how contrast was defined
for video. These results are applicable to evaluate sensor field performance for Anti-Terrorism and Force Protection
(AT/FP) tasks for the U.S. Navy.
The application of Compressive Sensing technique on a stationary surveillance camera system
Compressive Sensing (CS) is a recently emerged signal processing method. It shows that when a signal is sparse in a
certain basis, it can be recovered from a small number of random measurements made on it. In this work, we investigate
the possibility of utilizing CS to sample the video stream acquired by a fixed surveillance camera in order to reduce the
amount of data transmitted. For every 15 consecutive video frames, we select the first frame in the video stream as the
reference frame. Then for each following frame, we compute the difference between this frame and its preceding frame,
resulting in a difference frame, which can be represented by a small number of measurement samples. By only
transmitting these samples, we greatly reduce the amount of transmitted data. The original video stream can still be
effectively recovered. In our simulations, the SPGL1 method is used to recover the original frame. Two different
measurement schemes, random measurement and 2D Fourier transform, are used. The resulting Peak Signal-to-Noise
Ratio (PSNR) ranges from 28.0 dB to 50.9 dB, depending on the measurement method and the number of measurements
used, indicating good recovery quality. Besides a good compression rate, the CS technique is robust to noise and easily
encrypted, which makes it a good candidate for signal processing in communication.
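The encoder side of this scheme can be sketched as follows (the l1 recovery step with SPGL1 is omitted; the function name, frame sizes, and Gaussian measurement matrix are illustrative assumptions):

```python
import numpy as np

def cs_encode(frames, m, seed=0):
    """Encoder-side sketch: send the reference frame in full, then only
    m random measurements of each sparse frame-to-frame difference."""
    rng = np.random.default_rng(seed)
    n = frames[0].size
    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
    packets = [frames[0]]              # reference frame sent in full
    prev = frames[0]
    for f in frames[1:]:
        d = (f - prev).ravel()         # sparse difference frame
        packets.append(phi @ d)        # m << n measurements
        prev = f
    return packets
```

A decoder would recover each difference frame from its m measurements with an l1 solver and add it back to the running reconstruction.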
Optical and radiometry analysis for a passive infrared sparse sensor detection system
The optical performance of an infrared sparse sensor detector system is modeled. Such a system, due to its low cost, uses single element, spherical, off-the-shelf optical components that may produce poor quality off-axis images. Since sensors will not populate the entire focal plane, it is necessary to evaluate how the optics will affect sensor placement. This analysis will take into account target location, optical system aberrations, and wavelength, in an effort to determine the proper placement of the sparsely populated sensors.
Thermal signatures of personnel in urban environments
Georgia Tech has initiated a research program into the detection of covert personnel in traditionally difficult environments (e.g., urban, caves). This program focuses on a detailed phenomenological analysis of human physiology and signatures with the subsequent identification and characterization of potential observables--particularly in the context of urban environments. For this current effort, several electro-optical sensing modalities have been evaluated for use as a component in an unattended sensor suite designed to detect personnel. These modalities include active sensors (e.g., vibrometry) and passive sensors (e.g., multi-spectral, thermal). Within the urban environment, illumination conditions can vary widely and change dynamically during the course of a day. This paper will discuss those issues and present a computational approach to computing the radiative exchange environment and the corresponding thermal signatures of personnel in these environments. Consideration will also be given to the impact of these variations on thermal signatures of clutter objects.
Modeling the effects of contrast enhancement on target acquisition performance
Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared
imagers by increasing the contrast between the target and the scene content through better utilization of the available
gray levels, either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms
for target identification with well contrasted vehicles. Human perception experiments were performed to determine field
performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target
set using an un-cooled LWIR camera. The experiments compare the identification performance of observers viewing
linearly scaled images and various contrast enhancement processed images. Contrast enhancement is modeled in the US
Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts
improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To
account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was
calculated and added to the model. The measured results are compared with the predicted performance based on the
target task difficulty metric used in NVThermIP.
The impact of spatio-temporal focal plane array nonuniformity noise on target search and identification performance
Recent developments in infrared focal plane array technology have led to the wide use of staring sensors in many tactical
scenarios. With all of its advancements, however, several noise sources remain present to degrade imagery and impede
performance. Fixed pattern noise, arising from detector nonuniformities in focal plane arrays, is a noise source that can
severely limit infrared imaging system performance. In addition, temporal noise, arising from frame to frame
nonuniformities, can further hinder the observer from perceiving the target within the tactical scene and performing a
target acquisition task.
In this paper, we present a new method of simulating realistic spatial and temporal noise effects, derived from focal
plane array statistics, on infrared imagery, and study their effect on the tasks of search and identification of a tank
vehicle. In addition, we assess the utility of bad pixel substitution as a possible correction algorithm to mitigate these
effects. First, tank images are processed with varying levels of fixed pattern and temporal noise distributions with
differing percentage of highly noisy detectors lying outside the operability specification. Then, a series of controlled
human perception experiments are performed using trained observers tasked to identify and search for tank targets,
respectively, through the combinations of noise. Our results show promise for a relaxation of the operability
specification in focal plane array development without severe degradation in task performance.
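The core of the simulation idea, fixed pattern noise drawn once per detector and temporal noise redrawn every frame, can be sketched as follows (illustrative only; the paper derives its noise distributions from measured focal plane statistics):

```python
import numpy as np

def add_fpa_noise(frames, sigma_fp, sigma_t, seed=0):
    """Add spatio-temporal focal-plane-array noise to an image sequence.

    Fixed pattern noise is a single offset map frozen across all frames
    (detector nonuniformity); temporal noise is redrawn per frame.
    """
    rng = np.random.default_rng(seed)
    fixed = rng.normal(0.0, sigma_fp, frames[0].shape)  # frozen per pixel
    return [f + fixed + rng.normal(0.0, sigma_t, f.shape) for f in frames]
```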
Modeling V
Calculating incoherent diffraction MTF
The incoherent diffraction MTF plays an increasingly important role in the range performance of imaging systems as the wavelength increases and the optical aperture decreases. Accordingly, all NVESD imager models have equations that describe the incoherent diffraction MTF of a circular entrance pupil. NVThermIP, a program which models thermal imager range performance, has built-in equations that analytically model the incoherent diffraction MTF of a circular entrance pupil and can accept a table describing the MTF of other apertures. These tables can be calculated using CODE V, which numerically computes the incoherent diffraction MTF in the vertical or horizontal direction for an arbitrary aperture. However, we are not aware of any program that takes as input a description of the entrance pupil and analytically outputs equations that describe the incoherent diffraction MTF. This work explores the effectiveness of Mathematica for that task: Mathematica is used to analytically and numerically calculate the incoherent diffraction MTF for a variety of apertures, and the results are compared with CODE V calculations.
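For the unobscured circular pupil case mentioned above, the closed-form incoherent diffraction MTF is the classic pupil-overlap expression, shown here as a small numerical sketch:

```python
import math

def diffraction_mtf(xi, xi_cutoff):
    """Incoherent diffraction MTF of an unobscured circular pupil.

    xi_cutoff = D / (lambda * f) in the same spatial-frequency units
    as xi; the MTF is the normalized overlap area of two shifted pupils.
    """
    u = xi / xi_cutoff
    if u >= 1.0:
        return 0.0                    # no response beyond optical cutoff
    return (2.0 / math.pi) * (math.acos(u) - u * math.sqrt(1.0 - u * u))
```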
Target identification performance of superresolution versus dither
This paper presents the results of a performance comparison between superresolution reconstruction and dither, also
known as microscan. Dither and superresolution are methods to improve the performance of spatially undersampled
systems by reducing aliasing and increasing sampling. The performance measured is the probability of identification
versus range for a set of tracked, armored military vehicles. The performance improvements of dither and
superresolution are compared to the performance of the base system with no additional processing. Field data was
collected for all types of processing using the same basic sensor. This allows the performance to be compared without
comparing different sensors. The performance of the various methods is compared experimentally using human
perception tests. The perception test results are compared to modeled predictions of the range performance. The
measured and modeled performance of all the methods agrees well.
Development principles for evolving performance models
Systems Engineers utilize Technical Performance Models to predict, track and ultimately support requirements verification for sensors. These models evolve during the phases of sensor development to serve a variety of purposes. The model captures the performance of the systems under development from electromagnetic energy incident at the instrument aperture to instrument SNR and the associated noise equivalent quantities of interest required by our customers. In this paper we present a methodology where these models have been a key element of technical success, highlighting the efficiency of the approach.
Effect of image magnification on target acquisition performance
The current US Army target acquisition models have a dependence on magnification. This is due in part to the
structure of the observer Contrast Threshold Function (CTF) used in the model. Given the shape of the CTF,
both over-magnification and under-magnification can dramatically impact modeled performance. This paper
presents the results from two different perception studies, one using degraded imagery and the other using field
imagery. The results presented demonstrate the correlation between observer performance and model prediction
and provide guidance for accurately representing system performance in under- and over-magnified cases.
MWIR persistent surveillance performance for human and vehicle backtracking as a function of ground sample distance and revisit rate
Real MWIR Persistent Surveillance (PS) data was taken with a single human walking from a known point to different tents in the PS sensor field of view. The spatial resolution (ground sample distance) and revisit rate were varied from 0.5 to 2 meters and from 1/8 to 4 Hz, respectively. A perception experiment was conducted where the observer was tasked to track the human to the terminal (end of route) tent. The probability of track is provided as a function of ground sample distance and revisit rate. These results can help determine PS design requirements for tracking and back-tracking humans on the ground. This paper begins with a summary of two previous simulation experiments: one for human tracking and one for vehicle tracking.
Airborne tracking resolution requirements for urban vehicles
This paper details the development, experimentation, collected data and the results of research designed to gain an understanding of the temporal and spatial image collection guidelines for tracking urban vehicles. More specifically, a quantitative understanding of the relationship between human observer performance and the spatial and temporal resolution is sought. Performance is measured as a function of the number of video frames per second, imager spatial resolution and the ability of the observer to accurately determine the destination of a moving vehicle target. The research is restricted to data and imagery collected from altitudes typical of modern low to mid altitude persistent surveillance platforms using a wide field of view. The ability of the human observer to perform an unaided track of the vehicle was determined by their completion of carefully designed perception experiments. In these experiments, the observers were presented with simulated imagery from Night Vision's EOSim urban terrain simulator. The details of the simulated targets and backgrounds, the design of the experiments and their associated results are included in this treatment.
Atmospheric Effects
Analysis of image distortions by atmospheric turbulence and computer simulation of turbulence effects
The development and implementation of a computer model to simulate the impact of atmospheric turbulence on image
quality is presented. The model is based on first- and second-order statistics of atmospheric turbulence. Necessary
simulation parameters were derived from data collected by Germany during the NATO-RTG40 White Sands Missile
Range field trials of November 2005. The data set consists of image sequences recorded with a high-speed TV camera.
Parameter values were derived by analyzing image sequences recorded at weak and strong turbulence conditions. The
procedures used to analyze the images and to extract simulation parameters are presented.
The FGAN-FOM computer model for turbulence simulation uses static images without turbulence as input and produces
image sequences that are degraded by the specified turbulence. Imagers with high frame rates can be simulated.
Examples are presented.
In order to further assess the accuracy of this heuristics-based and near real-time simulation of turbulence-degraded
image sequences, some test cases are compared against the results obtained by a full, physically based simulation of
radiation propagation through the turbulent atmosphere. Of special interest are the angle-of-arrival fluctuation statistics,
scintillation, and consequences for long exposure resolution.
Perception range prediction for IR pilot sight
The increasing use of IR pilot sight in helicopters calls for a reliable prediction of perception ranges for a variety of
objects, especially those needed for orientation and those posing a potential hazard, like power poles, masts, isolated
trees etc. Since the visibility of objects in the IR depends mainly on the temperature differences between those objects
and a given background and only marginally on illumination, range prediction techniques used for the visual range or
light-amplified vision are only of very limited use. While range predictions based on the Johnson criterion do offer some
insight into expected ranges, the inherently nominal nature of distance estimates thus obtained hampers their use for an
actual field-deployable pre-flight consulting procedure. In order to overcome those limitations, long-term simultaneous
measurements of relevant objects and background temperatures and weather data were carried out and used for
temperature prediction from prevalent weather conditions. Together with a perception model derived from extensive
observer experiments based on synthetic images of the UH Tiger Pilot Sight Unit we developed a perception range
prediction package which is currently being evaluated by the weather service of the Bundeswehr. We will present results from
the observer experiments together with the derived perception models. These are then compared to actual perception
ranges as obtained from flight experiments.
Dew, dust, and wind influencing thermal signatures of objects
Dew and dust layers on the surface of an object may significantly affect its thermal state and IR signature. Dew formation
begins when the object surface temperature falls below atmospheric dew point temperature. Due to the latent heat
released by the water accumulated on the surface, the temperature drop stagnates and the object appears warmer than it
would be without dew formation. An attempt was made to modify RadThermIR software to account for dew effects. A
simple plate model and the more elaborate CUBI thermal modeling benchmarking object were used to study the extent to
which dew may change thermal object signatures. A dust layer on an object surface may affect its optical properties and
may act as additional thermal insulation when it is thick enough. Both effects influence the temperature and IR signature
of the object. Parametric calculations by RadThermIR were performed for various dust thicknesses and optical properties.
This data was used in an object/background contrast analysis. The obtained dust/dew layer results will be used in the
planning of the next CUBI experiment in natural desert environments. In addition, CUBI data from another geographic
location was used for studying different wind models resulting in some interesting conclusions concerning the applicability
of the wind model used in RadThermIR.
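The onset condition described above, surface temperature falling below the atmospheric dew point, is easy to evaluate with the standard Magnus approximation (a generic meteorological formula, not code from RadThermIR; the function names are illustrative):

```python
import math

def dew_point_c(temp_c, rh):
    """Magnus approximation for dew point (deg C) from air temperature
    (deg C) and relative humidity (0-1)."""
    a, b = 17.62, 243.12              # Magnus coefficients over water
    gamma = math.log(rh) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def dew_forms(surface_temp_c, air_temp_c, rh):
    """Dew formation begins once the surface is colder than the dew point."""
    return surface_temp_c < dew_point_c(air_temp_c, rh)
```

At 20 deg C air temperature and 50% relative humidity the dew point is near 9 deg C, so a surface radiatively cooled below that threshold would begin to accumulate water.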
CACAMO - computer-aided camouflage assessment of moving objects
In order to facilitate systematic, computer aided improvements of camouflage and concealment assessment methods,
the software system CART (Camouflage Assessment in Real-Time) was built up for the camouflage assessment
of objects in image sequences. Since camouflage success is directly correlated with the detection range of
target objects, the system supports the evaluation of image sequences with moving cameras. The main features
of CART comprise a semi-automatic annotation functionality for marking target objects (ground truth generation)
including a propagation of those markings over the image sequence, as well as a real-time evaluation of
the marked image regions by applying individually selected feature extractors. The system works with visual-optical,
infrared, and SAR image data, which can be used separately or simultaneously. The software is designed
as a generic integration platform, which can be extended to further sensors, measurements, feature extractors,
methods, and tasks.
Besides supporting moving cameras, it is also important to handle moving objects in the scene
(CACAMO - Computer Aided Camouflage Assessment of Moving Objects). Since moving objects are more
likely to be discovered than stationary ones, the state of movement is clearly a significant factor when designing
camouflage methods and should explicitly be incorporated into the assessment process. For this, the software
provides auto-annotation tools, as well as a specific movement measurement component in order to capture the
conspicuity depending on different moving states. The auto-annotation assistance for moving objects is done
with the aid of tracking algorithms, incorporating color information, optical flow and change detection using
Kalman and particle filters. The challenge is to handle partially or fully camouflaged objects, a circumstance that
naturally hinders computer vision algorithms.
Systems and Testing I
Modulation transfer function measurement on QWIP focal plane array
Modulation Transfer Function (MTF) is an important quantitative measure of the imaging quality of an infrared camera system. It provides a quantitative description of how the system transfers contrast from object to image space: the higher the MTF values as a function of spatial frequency, the better the reconstruction of fine detail in the object space. MTF is therefore a good performance metric for infrared camera systems. However, few comprehensive MTF results have been reported for QWIP Focal Plane Arrays (FPAs). The oversampled tilted knife-edge technique is the most common method of measuring MTF, so we have developed a method to construct the edge function from the image of a tilted knife-edge. This construction is reversible, since the tilted knife-edge image can be reconstructed from the edge function. The knife-edge technique, however, requires knowledge of the lens MTF, which is removed from the total system MTF to derive the detector and electronics MTF. To validate the knife-edge technique, interferometric and small-spot-illumination MTF measurements were conducted. In the interferometric method, the QWIP FPA was placed at the pupil plane, so diffraction limitations were avoided; the interferogram also provides a high-resolution plot of the spectral response of the detector. The spot illumination mimics the point spread function; since it requires re-imaging of a point source by lenses, the lens MTF again needs to be divided out. The objective of this paper is to report an extensive MTF characterization of a large-format QWIP FPA.
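The tilted knife-edge technique described above can be sketched in a few lines: each row crosses the edge at a slightly different subpixel phase, so binning pixels by their distance to the fitted edge yields an oversampled edge spread function (ESF), whose derivative and Fourier transform give the MTF. The following Python sketch is illustrative (function name, window, and binning details are assumptions, not the paper's algorithm) and assumes a near-vertical dark-to-bright edge:

```python
import numpy as np

def slanted_edge_mtf(image, oversample=4, window=16):
    """Sketch of the oversampled tilted knife-edge technique.

    Assumes a near-vertical dark-to-bright edge tilted a few degrees, so
    each row samples the edge at a different subpixel phase.
    """
    rows, cols = image.shape
    image = image.astype(float)
    x = np.arange(cols)
    # Locate the edge in each row at subpixel precision via the centroid
    # of the absolute row derivative.
    edge_pos = np.empty(rows)
    for r in range(rows):
        d = np.abs(np.diff(image[r]))
        edge_pos[r] = np.sum(x[:-1] * d) / np.sum(d)
    # Fit the edge as a straight line, then bin every pixel by its
    # horizontal distance to that line to build the oversampled ESF.
    slope, intercept = np.polyfit(np.arange(rows), edge_pos, 1)
    dist = x[None, :] - (slope * np.arange(rows)[:, None] + intercept)
    mask = np.abs(dist) <= window          # keep a region around the edge
    bins = np.round(dist[mask] * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins)
    esf = np.bincount(bins, weights=image[mask]) / np.maximum(counts, 1)
    # LSF is the derivative of the ESF; MTF is the normalized magnitude
    # of its Fourier transform.
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)   # cycles/pixel
    return freq, mtf
```

Because the ESF is oversampled, the returned frequency axis extends beyond the detector Nyquist frequency, which is what makes this technique suitable for undersampled systems.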
Comparison of Fourier transform methods for calculating MTF
Fourier transform methods common in infrared spectroscopy were applied to the problem of calculating the modulation
transfer function (MTF) from a system's measured line spread function (LSF). Algorithms, including apodization and
phase correction, are discussed in their application to remove unwanted noise from the higher frequency portion of the
MTF curve. In general, these methods were found to significantly improve the calculated MTF. Apodization reduces
the proportion of noise by discarding areas of the LSF where there is no appreciable signal. Phase correction
significantly reduces the rectification of noise that occurs when the MTF is calculated by taking the power spectrum of
the complex optical transfer function (OTF).
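The apodization and phase-correction ideas can be illustrated with a short sketch (the function name and the Hann window choice are assumptions, not the authors' exact algorithms): apodization windows the LSF around its peak to discard noise-only tails, and phase correction shifts the LSF peak to the origin so the real part of the OTF can be used instead of its modulus, avoiding the rectification of noise:

```python
import numpy as np

def mtf_from_lsf(lsf, apodize=True, phase_correct=True):
    """Illustrative MTF calculation from a measured line spread function
    with optional apodization and phase correction."""
    lsf = np.asarray(lsf, dtype=float)
    n = lsf.size
    center = int(np.argmax(lsf))
    if apodize:
        # Hann window centered on the LSF peak: discards regions of the
        # tails where there is no appreciable signal, only noise.
        w = np.zeros(n)
        half = n // 4
        lo, hi = max(0, center - half), min(n, center + half)
        w[lo:hi] = np.hanning(hi - lo)
        lsf = lsf * w
    # Circularly shift the peak to sample 0 so the OTF is (nearly) real;
    # taking the real part then avoids rectifying zero-mean noise into
    # the modulus of the complex OTF.
    lsf = np.roll(lsf, -center)
    otf = np.fft.rfft(lsf)
    mtf = otf.real if phase_correct else np.abs(otf)
    return mtf / mtf[0]
```

For a noiseless symmetric LSF the phase-corrected and modulus results coincide; the benefit appears in the noisy high-frequency tail, where the modulus rectifies noise upward while the real part averages toward zero.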
Edge response revisited
The edge response is one of many techniques used to calculate the MTF (Modulation Transfer Function) of an imaging system. The MTF can be used to calculate the MRTD (Minimum Resolvable Temperature Difference) function for thermal imagers or the MRC (Minimum Resolvable Contrast) function for visible imaging systems. Most of the conventional techniques used to measure the MRTD or MRC functions are time consuming and can be influenced by the subjectivity of the operator. Conventional MRTD measurements for more than 300 systems are compared with the MRTD function derived using the edge response technique.
Infrared lens characterization using common undersampled systems
This paper expands on the research presented in 'An Advance in Infrared Lens Characterization: Measurement of the
Lens MTF Using Common Undersampled IR Systems.' This update provides empirical data demonstrating the test
system's performance through experimental modulation transfer function and encircled energy tests. This research also
expands further on the software algorithms, describing the method used to obtain accurate real-time optical performance
analysis. Real-time testing has a number of valuable applications, including focus optimization, prototyping, rapid/high-volume
testing, and testing on-the-fly.
A means for calculating the optics MTF of an under-sampled IR imaging system
Using measured quantities, it is possible to arrive at a reasonable approximation for the optics MTF of a
longwave undersampled imaging system. Certain reasonable assumptions concerning the format of the data
from the imaging system must be made to ensure that there are no image processing artifacts. For
systems that contain imaging artifacts, such as an analog output, there are too many secondary effects that
degrade the predicted optics MTF beyond a reasonable approximation.
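Under the usual separability assumption MTF_system = MTF_optics x MTF_detector x MTF_electronics, and taking the electronics MTF as unity, the optics MTF can be approximated by dividing the detector footprint MTF (a sinc for a square active area) out of the measured system MTF. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def optics_mtf(freq, system_mtf, pitch_mm, fill=1.0):
    """Approximate optics MTF by dividing the detector footprint MTF out
    of the measured system MTF (separability model, electronics MTF ~ 1).

    freq       : spatial frequencies, cycles/mm
    system_mtf : measured total MTF at those frequencies
    pitch_mm   : detector pitch in mm; active width = fill * pitch
    """
    width = fill * pitch_mm
    # np.sinc(x) = sin(pi x)/(pi x): the MTF of a square detector aperture.
    det = np.abs(np.sinc(np.asarray(freq, dtype=float) * width))
    det = np.where(det < 0.05, np.nan, det)  # avoid blow-up near sinc zeros
    return np.asarray(system_mtf, dtype=float) / det
```

The guard against small sinc values reflects the caveat in the abstract: near the zeros of the detector MTF, any measurement noise or secondary effect is amplified and the division no longer yields a reasonable approximation.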
Systems and Testing II
Measurement of effective temperature range of fire service thermal imaging cameras
The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the
value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for
design and performance requirements of TIC as used by the fire service. The National Institute of Standards and
Technology facilitates this process by providing recommendations for science-based performance metrics and test
methods to the NFPA technical committee charged with the development of this standard. A suite of imaging
performance metrics and test methods, based on the harsh operating environment and limitations of use particular to the
fire service, has been proposed for inclusion in the standard. The Effective Temperature Range (ETR) measures the
range of temperatures that a TIC can view while still providing useful information to the user. Specifically, extreme heat
in the field of view tends to inhibit a TIC's ability to discern surfaces having intermediate temperatures, such as victims
and fire fighters. The ETR measures the contrast of a target having alternating 25 °C and 30 °C bars while an increasing
temperature range is imposed on other surfaces in the field of view. The ETR also indicates the thermal conditions that
trigger a shift in integration time common to TIC employing microbolometer sensors. The reported values for this
imaging performance metric are the hot surface temperature range within which the TIC provides adequate bar contrast,
and the hot surface temperature at which the TIC shifts integration time.
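The bar-contrast measurement at the core of the ETR can be stated compactly. The sketch below computes a Michelson-style contrast for the alternating-bar target, assuming the bar and background regions are known from the test fixture geometry (a generic formulation, not necessarily the exact NIST definition):

```python
import numpy as np

def bar_contrast(image, bar_cols, space_cols):
    """Michelson-style contrast of an alternating bar target.

    Hypothetical helper: the column indices of the bar and background
    regions are assumed known from the test fixture geometry.
    """
    hot = image[:, bar_cols].mean()
    cold = image[:, space_cols].mean()
    return (hot - cold) / (hot + cold)
```

As the hot surfaces elsewhere in the field of view are heated, this contrast is tracked; the ETR brackets the hot-surface temperature range over which it remains adequate.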
Measurement of the nonuniformity of first responder thermal imaging cameras
Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging
cameras in a very practical way every day. However, few performance metrics have been developed to assist first
responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for
evaluating the nonuniformity of thermal imaging cameras. Several commercially available uncooled focal plane array
cameras were examined. Because of proprietary concerns, each camera was treated as a 'black box'. In these
experiments, an extended area black body (18 cm square) was placed very close to the objective lens of the thermal
imaging camera. The resultant video output from the camera was digitized at a resolution of 640x480 pixels and a
grayscale depth of 10 bits. The nonuniformity was calculated using the standard deviation of the digitized image pixel
intensities divided by the mean of those pixel intensities. This procedure was repeated for each camera at several
blackbody temperatures in the range from 30 °C to 260 °C. It was observed that the nonuniformity initially increases with
temperature, then asymptotically approaches a maximum value. Nonuniformity is also used in the calculation of the
Spatial Frequency Response, where it provides a noise floor. The testing procedures described herein are being developed
as part of a suite of tests to be incorporated into a performance standard covering thermal imaging cameras for first
responders.
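The nonuniformity metric described above is simply the coefficient of variation of a frame viewing the uniform blackbody:

```python
import numpy as np

def nonuniformity(frame):
    """Nonuniformity as described above: the standard deviation of the
    digitized pixel intensities divided by their mean, for a frame
    viewing a uniform extended-area blackbody."""
    frame = np.asarray(frame, dtype=float)
    return frame.std() / frame.mean()
```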
New automatic testing system for today's airborne laser sensors
Modern laser sensors are becoming more and more sophisticated in their capabilities, including detection of direction, distance, threat type, and other parameters. They are also becoming more selective and less affected by confusing environmental signals such as sun illumination and other threat-unrelated sources. As a result, the testing needs in development and production are also becoming more demanding. CI Systems' new AEOT (Automatic Electro-Optic System) is an easy-to-use, computer-controlled instrument suitable for testing the important aspects of laser sensors in R&D and high-volume production. The general instrument design and functionality are presented in this paper.
Relative color delineation testing of visible camera systems
The human eye has the ability to distinguish millions of colors. Employing this attribute along with cognitive spatial cues, a human being can differentiate between even the slightest color variations. The goal of any imager is to collect the maximum amount of information from a scene, both spatially and spectrally; whether it is used for artistic reproduction or camouflage detection, a camera has the same ultimate specifications. While much sensor research and development has been conducted to improve both spatial and intensity resolution, less effort has been directed at color contrast delineation. This specification is not only difficult to define but also complex to test. Most color testing is confined to print or display technology and is supported by a myriad of test equipment and standards. Typical camera color calibration may rely on color standards with defined illuminants but is ineffective for defining contrast resolution. This paper will discuss hardware and software developed by the authors to project precise, dual-color-controlled images and determine the color contrast resolution of an imager. Algorithmic challenges related to human-perceived versus machine-created color, in conjunction with real-time color feedback loops, will be addressed. Design issues including system stability, color resolution, channel matching, and target registration will also be discussed. Calibration routines and verification will be presented along with example results from the complete system.
Clutter and signatures from near infrared testbed sensor
A new tactical airborne multicolor missile warning testbed was developed as part of an Air Force Research Laboratory (AFRL) initiative focusing on the development of sensors operating in the near infrared where commercially available silicon detectors can be used. At these wavelengths, the rejection of solar induced false alarms is a critical issue. Multicolor discrimination provides one of the most promising techniques for improving the performance of missile warning sensors, particularly for heavy clutter situations. This, in turn, requires that multicolor clutter data be collected for both analysis and algorithm development.
The developed sensor testbed, as described in previous papers [1], is a two-camera system with 1004×1004 FPAs coupled with optimized filters integrated with the optics. The collection portion includes a high-speed processor coupled with a high-capacity disk array capable of collecting up to 48 full frames per second. This configuration allows the collection of temporally correlated, radiometrically calibrated data in two spectral bands, providing a basis for evaluating the performance of spectral discrimination algorithms.
The presentation will describe background and clutter data collected from ground and flight locations in both detection and guard bands and the statistical analysis to provide a basis for evaluation of sensor performance. In addition, measurements have been made of discrete targets, both threats and false alarms. The results of these measurements have shown the capability of these sensors to provide a useful discrimination capability to distinguish threats from false alarms.
Poster Session
Infrared imaging spectroscopic system based on a PGP spectrograph and a monochrome infrared camera
A non-intrusive, non-contact near infrared acquisition system based on a PGP spectrometer is presented. This work extends to the whole near infrared range of the spectrum, from 1000 to 2400 nm, a previously designed system for the Vis-NIR range (400-1000 nm). The motivation for this investigation is to improve material characterization and material classification performance. To our knowledge, no imaging spectroscopic system based on a PGP device working in this range has been previously reported. The components of the system and its assembly, alignment, and calibration procedures will be described in detail.
Suppression of fixed pattern noise for infrared image system
In this paper, we propose fixed pattern noise (FPN) suppression and soft-defect compensation to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN becomes visible in the observed image when the non-uniformity compensation (NUC) no longer matches the detector temperature. Soft defects appear as flickering black and white points caused by the time-varying non-uniformity of the IR detector. These effects seriously degrade both image quality and object tracking. The signal processing architecture of the cooled staring IRFPA imaging system uses three tables of reference gain and offset values, for low, normal, and high temperatures. The proposed method operates two offset tables for each of these tables, covering six temperature ranges in total. The proposed soft-defect compensation consists of three stages: (1) separating the image into sub-images, (2) determining the motion distribution of objects between sub-images, and (3) analyzing the statistical characteristics of the stationary fixed pixels. Experimental results show that the proposed method yields an improved image, suppressing in real time the FPN caused by changes in the temperature distribution of the observed scene.
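The gain and offset tables referred to above are conventionally derived by two-point calibration against uniform blackbody scenes at two temperatures. A generic sketch of that building block (not the paper's multi-table, six-range scheme) is:

```python
import numpy as np

def two_point_nuc(raw_lo, raw_hi, t_lo, t_hi):
    """Per-pixel gain and offset from two uniform blackbody frames
    (generic two-point NUC; the paper's multi-table scheme extends this
    idea across several temperature ranges)."""
    gain = (t_hi - t_lo) / (raw_hi - raw_lo)
    offset = t_lo - gain * raw_lo
    return gain, offset

def apply_nuc(frame, gain, offset):
    """Map a raw frame onto the calibrated temperature scale."""
    return gain * frame + offset
```

Because each pixel's response drifts with detector temperature, a single gain/offset pair leaves residual FPN away from the calibration points; switching among several table pairs, as the abstract describes, keeps the correction close to the local response.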
Suite of proposed imaging performance metrics and test methods for fire service thermal imaging cameras
The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the
value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for
design and performance requirements for TIC as used by the fire service. This standard will include performance
requirements for TIC design robustness and image quality. The National Institute of Standards and Technology
facilitates this process by providing recommendations for science-based performance metrics and test methods to the
NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and
test methods based on the harsh operating environment and limitations of use particular to the fire service has been
proposed for inclusion in the standard. The performance metrics include large area contrast, effective temperature range,
spatial resolution, nonuniformity, and thermal sensitivity. Test methods to measure TIC performance for these metrics
are in various stages of development. An additional procedure, image recognition, has also been developed to facilitate
the evaluation of TIC design robustness. The pass/fail criteria for each of these imaging performance metrics are derived
from perception tests in which image contrast, brightness, noise, and spatial resolution are degraded to the point that
users can no longer consistently perform tasks involving TIC due to poor image quality.
Application of spatial frequency response as a criterion for evaluating thermal imaging camera performance
Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging
cameras in a very practical way every day. However, few performance metrics have been developed to assist first
responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for
evaluating spatial resolution using an application of Spatial Frequency Response (SFR) calculations for thermal imaging.
According to ISO 12233, the SFR is defined as the integrated area below the Modulation Transfer Function (MTF) curve
derived from the discrete Fourier transform of a camera image representing a knife-edge target. This concept is modified
slightly for use as a quantitative analysis of the camera's performance by integrating the area between the MTF curve
and the camera's characteristic nonuniformity, or noise floor, determined at room temperature. The resulting value,
which is termed the Effective SFR, can then be compared with a spatial resolution value obtained from human
perception testing of task specific situations to determine the acceptability of the performance of thermal imaging
cameras. The testing procedures described herein are being developed as part of a suite of tests for possible inclusion
into a performance standard on thermal imaging cameras for first responders.
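The Effective SFR described above can be computed directly as the area between the MTF curve and the noise floor. The sketch below integrates with the trapezoidal rule and counts only the region where the MTF exceeds the floor (an assumption where the abstract is silent on the crossing behavior):

```python
import numpy as np

def effective_sfr(freq, mtf, noise_floor):
    """Area between the MTF curve and the camera's nonuniformity noise
    floor, counting only where the MTF exceeds the floor."""
    excess = np.clip(np.asarray(mtf, dtype=float) - noise_floor, 0.0, None)
    df = np.diff(np.asarray(freq, dtype=float))
    # Trapezoidal integration over the (possibly non-uniform) freq grid.
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * df))
```

A higher noise floor (greater nonuniformity) shrinks the integrated area, which is how the metric couples spatial resolution to camera noise.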
Using extended surfaces to reduce the thermal signatures of military assets
Because military assets often operate at temperatures different from their surroundings, they exhibit thermal signatures
observable with infrared (IR) imaging systems operating in the
mid-wave (MWIR) and long-wave (LWIR) infrared
bands of the spectrum. Reducing these thermal signatures provides a means of making these assets blend in with their
environments. Most often the assets are warmer than their environment because they must dissipate heat generated
internally from electronics, engines, or personnel. The radiation emitted by these assets is strongly dependent on the
temperature and the emissivity of the exposed surfaces. Thus, by reducing the emissivity or the temperature, or both, the
thermal signatures can be reduced. The current study addresses the use of extended surfaces, or fins, to reduce the
temperature of heat dissipating surfaces. Analytic, experimental, and computational studies are performed which
demonstrate that extended surfaces offer an effective way to reduce the temperature of exposed surfaces while still
dissipating the heat generated by the asset.
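The underlying reasoning follows Newton's law of cooling, Q = hA(Ts - Tamb): for a fixed dissipated power Q, increasing the effective convective area A with fins lowers the surface temperature Ts. A sketch with illustrative numbers (assumptions, not measurements from the paper):

```python
def surface_temp(q_watts, h_coeff, area_m2, t_ambient_c):
    """Steady-state surface temperature from Q = h*A*(Ts - Tamb),
    assuming convection-dominated heat transfer."""
    return t_ambient_c + q_watts / (h_coeff * area_m2)

# A panel dissipating 100 W with h = 10 W/(m^2*K) into 20 degC air:
t_flat = surface_temp(100.0, 10.0, 0.5, 20.0)    # 0.5 m^2 plate -> 40 degC
# Fins that triple the effective convective area (fin efficiency ~ 1):
t_finned = surface_temp(100.0, 10.0, 1.5, 20.0)  # -> ~26.7 degC
```

The same heat is rejected in both cases; the fins simply spread it over more area, so the exposed surface sits closer to ambient and its emitted radiance contrast drops.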
Applications of super-resolution and deblurring to practical sensors
In the image formation and recording process, many factors affect sensor performance and image
quality, resulting in a loss of high-frequency information. Two common factors are sensor
undersampling and the sensor's blurring function. Two image processing algorithms, super-resolution image
reconstruction and deblur filtering, have been developed based on characterizing these sources of image
degradation. In this paper, we discuss the applications of these
two algorithms to three practical thermal imaging systems. First, super-resolution and deblurring are
applied to a longwave uncooled sensor in a missile seeker. Target resolution is improved in the flight phase
of the seeker operation. Second, these two algorithms are applied to a midwave target acquisition sensor for
use in long-range target identification. Third, the two algorithms are applied to a naval midwave distributed
aperture sensor (DAS) for infrared search and track (IRST) system that is dual use in missile detection and
force protection/anti-terrorism applications. In this case, super-resolution and deblurring are used to
improve the resolution of on-deck activity discrimination.
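As an illustration of the deblur-filtering half of this approach, a standard frequency-domain Wiener filter can be sketched as follows (a generic filter; the paper's exact filter design is not specified here, and the noise-to-signal ratio is a tuning assumption):

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Generic frequency-domain Wiener deconvolution.

    blurred : degraded image
    psf     : blur kernel, same shape as the image, centered in the array
    nsr     : assumed noise-to-signal power ratio
    """
    # Shift the PSF center to the origin so the transfer function H has
    # zero phase for a symmetric kernel.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    # Wiener filter: inverse filter regularized by the noise power, so
    # frequencies the blur has destroyed are attenuated, not amplified.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

The nsr term is what makes the filter usable on real sensor data: at frequencies where the blur transfer function is small relative to the noise, the filter rolls off instead of amplifying noise, recovering mid-frequency detail while leaving unrecoverable frequencies untouched.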