Proceedings Volume 11158

Target and Background Signatures V



Volume Details

Date Published: 16 December 2019
Contents: 8 Sessions, 23 Papers, 15 Presentations
Conference: SPIE Security + Defence 2019
Volume Number: 11158

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 11158
  • Sensor Properties and Target Detection
  • Understanding Scenes
  • Creating Databases
  • Thermal Behaviour
  • Surface Measurements and Modelling
  • Observer Performance
  • Poster Session
Front Matter: Volume 11158
Front Matter: Volume 11158
This PDF file contains the front matter associated with SPIE Proceedings Volume 11158, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Sensor Properties and Target Detection
Target detection performance of hyperspectral imagers
A. D. Cropper, David C. Mann, Milton O. Smith
Traditional performance metrics for hyperspectral imaging (HSI) systems include signal-to-noise ratio (SNR), ground sample distance (GSD), ground resolvable distance (GRD), and noise equivalent spectral radiance (NESR). These metrics characterize the sensor system itself, but there is a gap between them and the system's ability to detect and identify targets in realistic scenes. Three additional metrics, the target-size-to-GRD ratio, target spectral variability, and the receiver operating characteristic (ROC) curve, are evaluated to quantify HSI system performance with scene and mission conditions taken into account. Historically, sensor design efforts have not used ROC curves because they are relatively difficult to calculate and depend on scene parameters as well as sensor parameters (e.g., SNR, GSD, GRD, and NESR).

Data from a recent experiment in which an airborne sensor collected imagery of a variety of targets are used to identify exploitation performance factors that need to be included in a model quantifying end-to-end sensor exploitation performance. The primary targets consisted of an array of blue tarpaulins cut to sizes smaller and larger than one HSI spatial pixel projected to the ground. We designed the experiment to quantify and compare the effects of target size on ROC curves.

One key result of this work is that the radiance of targets in the scene exhibits a large degree of variation among many passes during two days of flight testing. This variability complicates the detection process. Another key result is that detection performance has a strong correlation with target size for subpixel targets. Finally, we demonstrate that in this case, sensor noise has little impact on detection performance.
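The ROC analysis discussed above can be illustrated with a minimal sketch. The scores below are simulated matched-filter outputs, not the authors' flight-test data; the function simply sweeps a detection threshold and records detection and false-alarm probabilities.

```python
import numpy as np

def roc_points(target_scores, background_scores):
    """Sweep a detection threshold over all observed scores and return the
    (probability of false alarm, probability of detection) pairs."""
    thresholds = np.sort(np.concatenate([target_scores, background_scores]))[::-1]
    pd = np.array([(target_scores >= t).mean() for t in thresholds])
    pfa = np.array([(background_scores >= t).mean() for t in thresholds])
    return pfa, pd

# Toy matched-filter scores: background vs. a modestly separated target class
rng = np.random.default_rng(0)
bg = rng.normal(0.0, 1.0, 2000)
tgt = rng.normal(2.0, 1.0, 200)
pfa, pd = roc_points(tgt, bg)
auc = float(np.sum(np.diff(pfa) * (pd[1:] + pd[:-1]) / 2))  # trapezoid AUC
```

Shrinking the separation between the two score distributions, as happens for subpixel targets, pulls the curve toward the diagonal and the AUC toward 0.5.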
A rapid dual-band infrared detection method for aerial targets based on LCM
Target detection in single-band images suffers from poor clutter suppression and high false-alarm rates, whereas the imaging characteristics of aerial targets in the midwave and longwave infrared bands differ markedly and are complementary, so dual-band fusion can be used to improve detection efficiency and performance. A rapid infrared dual-band fusion target detection method based on the local contrast method (LCM) is proposed: targets are searched rapidly and their features extracted in midwave images of the same scene, and the two bands are finally fused for precise positioning. A multi-scale LCM is used to quickly perform full-image background suppression and target enhancement in the lower-resolution longwave images, from which suspicious target positions are obtained and sliced. Guided by the longwave images, target positions are then extracted at the corresponding locations in the higher-resolution midwave images. The two positions from the different-resolution images are fused, and the target is accurately positioned. This method achieves effective, rapid detection of aerial targets in infrared images and has practical engineering application value.
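The local-contrast idea behind LCM-style detectors can be sketched as follows. This is a simplified, single-scale toy version on synthetic data, not the authors' multi-scale dual-band implementation: each pixel's centre-cell maximum is squared and divided by the brightest surrounding-cell mean, which boosts compact targets relative to clutter.

```python
import numpy as np

def local_contrast_map(img, cell=3):
    """Simplified LCM-style map: for each pixel, square the maximum of a
    centre cell and divide by the largest mean of the eight surrounding
    cells, which enhances compact targets and suppresses background clutter."""
    h, w = img.shape
    pad = cell + cell // 2                     # room for centre + neighbour cells
    padded = np.pad(img, pad, mode='edge')
    half = cell // 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            ci, cj = i + pad, j + pad
            centre = padded[ci - half:ci + half + 1, cj - half:cj + half + 1]
            means = [padded[ci + di - half:ci + di + half + 1,
                            cj + dj - half:cj + dj + half + 1].mean()
                     for di in (-cell, 0, cell) for dj in (-cell, 0, cell)
                     if (di, dj) != (0, 0)]
            out[i, j] = centre.max() ** 2 / max(means)
    return out

# Toy longwave frame: warm noisy background with one bright point target
rng = np.random.default_rng(3)
frame = rng.normal(5.0, 0.5, (32, 32))
frame[16, 16] = 20.0
lcm = local_contrast_map(frame)
```

Thresholding the resulting map yields the "suspicious target positions" that the paper then refines in the higher-resolution midwave band.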
A blind pixel dynamic detection method for IRFPA towards point target (Conference Presentation)
In point target detection, blind pixels easily lead to false detections. To address the shortcoming that conventional blind-pixel detection methods cannot detect random blind pixels dynamically, a dynamic blind-pixel detection method for IRFPAs oriented towards point targets is proposed. In this method, suspicious point targets that are rejected by multi-frame point-target detection are chosen as potential blind pixels. A blind-pixel feature model is used to match the potential blind pixels, and the likelihood function of a confirmed blind pixel is characterized by multi-frame accumulation. In subsequent detection, blind pixels are detected dynamically using the blind-pixel likelihood function, which is iteratively updated with the detection results, so that blind pixels are detected adaptively. The results show that this method can dynamically eliminate both random and fixed blind pixels without interrupting the point-target detection process, while improving point-target detection accuracy with a small amount of computation; it is also easy to implement in hardware.
Understanding Scenes
Cloud detection and visibility estimation during night time using thermal camera images
Céline Portenier, Beat Ott, Peter Wellig, et al.
Reduced visibility and adverse cloud cover are major issues for aviation, road traffic, and military activities. Synoptic meteorological stations and LIDAR measurements are common tools to detect meteorological conditions. However, a low density of meteorological stations and LIDAR measurements may limit a detailed spatial analysis. While geostationary satellite data is a valuable source of information for analyzing the spatio-temporal variability of fog and clouds on a global scale, considerable effort is still required to improve the detection of atmospheric variables on a local scale, especially during the night.

In this study we propose to use thermal camera images to (1) improve cloud detection and (2) study visibility conditions during nighttime. For this purpose, we leverage FLIR A320 and FLIR A655sc stationary thermal imagers installed in the city of Bern, Switzerland. We find that these camera data provide detailed information about low clouds and the cloud base height that is usually not seen by satellites. However, clouds with a small optical depth, such as thin cirrus clouds, are difficult to detect because the noise level of the captured thermal images is high.

The second part of this study focuses on the detection of structural features. Predefined targets such as roof windows, an antenna, and a small church tower were selected at distances of 140 m to 1210 m from the camera. We distinguish between active targets (heated targets or targets with insufficient thermal insulation) and passive structural features to analyze the sensor's visibility range. We found that successful detection of some passive structural features depends strongly on incident solar radiation; detection of such features is therefore often hindered at night. Active targets, on the other hand, can be detected without difficulty during the night owing to the large temperature differences between the heated target and its non-heated surroundings. We retrieve response values by cross-correlating master edge signatures of the targets with the edge-detected thermal camera image. These response values are a precise indicator of the atmospheric conditions and allow us to detect restricted visibility conditions.
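The response-value retrieval described above amounts to template matching of an edge signature. The sketch below uses generic normalized cross-correlation on synthetic data; the actual master signatures and camera imagery are not reproduced here.

```python
import numpy as np

def ncc_response(edge_img, template):
    """Peak normalized cross-correlation of a master edge signature with an
    edge-detected frame; a high response indicates the target is visible."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best = -1.0
    for i in range(edge_img.shape[0] - th + 1):
        for j in range(edge_img.shape[1] - tw + 1):
            w = edge_img[i:i + th, j:j + tw]
            w = w - w.mean()
            best = max(best, float((w * t).sum() /
                                   (np.sqrt((w ** 2).sum()) * tn + 1e-12)))
    return best

# Synthetic check: embed a horizontal-edge signature in weak noise
rng = np.random.default_rng(1)
tpl = np.zeros((5, 5))
tpl[2, :] = 1.0
clear = rng.normal(0.0, 0.1, (20, 20))
clear[8:13, 6:11] += tpl
r_clear = ncc_response(clear, tpl)                          # signature present
r_absent = ncc_response(rng.normal(0.0, 0.1, (20, 20)), tpl)  # signature absent
```

As visibility degrades, the embedded signature weakens relative to the noise and the peak response drops, which is what makes the response value usable as a visibility indicator.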
A depth estimation framework based on unsupervised learning and cross-modal translation
In recent years, with the vigorous development of artificial intelligence and autonomous driving technology, scene perception has become increasingly important. Unsupervised deep learning methods have demonstrated a certain level of robustness and accuracy in some challenging scenes. By inferring depth from a single input image without any ground-truth labels, a lot of time and resources can be saved. However, unsupervised depth estimation lacks robustness and accuracy in complex environments, which can be improved by modifying the network structure and incorporating information from other modalities. In this paper, we propose an unsupervised monocular depth estimation network that achieves high speed and accuracy, together with a learning framework that improves depth performance by incorporating images translated across modalities. The depth estimator is an encoder-decoder network that generates a multi-scale dense depth map. A sub-pixel convolutional layer replaces the up-sampling branches to obtain depth super-resolution. Cross-modal depth estimation using near-infrared and RGB images outperforms estimation from RGB images alone. During training, both images are translated to the same modality, and super-resolved depth estimation is then carried out for each stereo camera pair. Compared with the initial depth estimation results using only RGB images, experiments verify that our depth estimation network with the proposed cross-modal fusion system achieves better performance on public datasets and on a multi-modal dataset collected by our stereo vision sensor.
Visual place recognition based on multilevel descriptors for the visually impaired people
Visually Impaired People (VIP) have difficulty perceiving their precise location in daily life, so developing an efficient algorithm to address their localization problems is crucial. Visual Place Recognition (VPR) uses image retrieval algorithms to determine the location of a query image within a database, and is a promising way to help the VIP solve their localization problems. However, the accuracy of VPR is directly affected by changes in scene appearance such as illumination, season and viewpoint. Finding a method to extract image descriptors that are robust to such changes is therefore one of the most critical tasks in current VPR research. In this paper, we propose a VPR approach to assist the localization and navigation of visually impaired pedestrians. The core of our proposal is a combination of multi-level descriptors drawn from the whole image, local regions and key-points, aimed at enhancing the robustness of VPR. The matching procedure between query images and database images comprises three steps. Firstly, we obtain Convolutional Neural Network (CNN) features of the whole images from a pre-trained GoogLeNet, and the Euclidean distances between the query images and the database images are computed to determine the top-10 matches. Secondly, local salient regions are detected in the top-10 matched images, with Non-Maximum Suppression (NMS) used to control the number of bounding boxes. Thirdly, we detect SIFT key-points, extract GeoDesc descriptors of the key-points from the local salient regions, and determine the best match among the top 10. To verify our approach, a comprehensive set of experiments has been conducted on datasets with challenging environmental changes, such as the GardensPointWalking dataset.
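The first matching step, ranking database images by Euclidean distance between global descriptors, can be sketched generically. Random vectors stand in for the GoogLeNet features here; the index and dimensionality are purely illustrative.

```python
import numpy as np

def top_k_matches(query_desc, db_descs, k=10):
    """Stage 1 of the matching pipeline: rank database images by Euclidean
    distance between global CNN descriptors and keep the k best candidates."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

# Hypothetical 256-D descriptors; entry 42 is a near-duplicate of the query
rng = np.random.default_rng(7)
db = rng.normal(size=(100, 256))
query = db[42] + rng.normal(scale=0.05, size=256)
candidates = top_k_matches(query, db)
```

The subsequent region-level and key-point-level stages then re-rank these candidates to select the final match.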
Creating Databases
Exploitable synthetic sensor imagery from high-fidelity, physics-based target and background modeling
Stacy E. Howington, Jerrell R. Ballard Jr., Nagaraju V. Kala, et al.
It is well established that object recognition by human perception and by detection/identification algorithms is confounded by false alarms (e.g., [1]). These false alarms often are caused by static or transient features of the background. Machine learning can help discriminate between real targets and false alarms, but requires large and diverse image sets for training. The potential number of scenarios, environmental processes, material properties and states to be assessed is overwhelming and cannot practically be explored by field/lab collections alone. High-fidelity, physics-based simulation can now augment training sets with accurate synthetic sensor imagery, but at a high computational cost. To make synthetic image generation practical, it should include the fewest processes and coarsest spatiotemporal resolution needed to capture the system physics/state and accomplish the training.

Among the features known or expected to generate false alarms are: (1) soil/material variability (spatial heterogeneity in density, mineral composition, reflectance); (2) non-threat objects (rocks, trash); (3) soil disturbance (physical and spectral effects); (4) soil processes (moisture migration, evaporation); (5) surface hydrology (rainfall runoff and surface ponding); (6) vegetation processes (transpiration, rainfall interception and evaporation, non-saturating rain events, multi-layer canopy including thatch, discrete versus parameterized vegetation); and (7) energy reflected or emitted by other scene components. This paper presents a suite of computational tools that will allow the community to begin to explore the relative importance of these features and determine when and how individual processes must be included explicitly or through simplifying assumptions/parameterizations. The justification for this decision to simplify is driven ultimately by the performance of a detection algorithm with the generated synthetic imagery. Knowing the required level of modeling detail is critical for designing test matrices for building image sets capable of training improved algorithms.

A related consideration in the creation of synthetic sensor imagery is validation of these complex, coupled modeling tools. Very few analytical solutions or laboratory experiments include enough complexity to thoroughly test model formulations. Conversely, field data collection cannot normally be characterized and measured with sufficient spatial and temporal detail to support true validation. Intermediate-scale physical exploration of near-surface soil and atmospheric processes (e.g., Trautz et al., [2]) offers an alternative intermediate between the laboratory column and field scales. This allows many field-scale-dependent processes and effects to be reproduced, manipulated, isolated, and measured within a well-characterized and controlled test environment at requisite spatiotemporal resolutions in both the air and soil.
Semi-synthetic naval scene generation for infrared-guided missile threat analysis with separate setting of apparent temperatures for each target part
N. Scherer-Negenborn, A. Schmied
As previously published, we have developed a semi-synthetic scene generator (SSSG). It takes parts of different scenes from real recorded infrared images or videos and combines them anew. The scene parts used are the sky, the water surface, and the target ship; additional scene parts, such as countermeasures, can be included. Missile paths can be generated arbitrarily, as long as the viewing angles do not deviate too much from a direct approach. It is also possible to include the semi-synthetic rendering in a control loop together with tracking algorithms of the kind assumed to be used in infrared-guided anti-ship missiles, and to analyze the threat to a target in the different generated sequences. In this paper, we describe an extension of the semi-synthetic scene generator that changes the apparent temperature of scene parts synthetically, based on physical behaviour. Usually only a few recorded examples are available for the variation of the apparent temperature of the scene parts. With this new feature it is possible to change the apparent temperature of the target ship, or parts of it, further increasing the number of scenarios that can be simulated in order to obtain statistically more reliable results.
Effective 3D modeling method using indirect information of targets for SAR image prediction
Ji Hee Yoo, Daeyoung Chae, Ji-Hoon Park, et al.
A target database plays a key role in automatic target recognition (ATR) using SAR images. When real targets are not accessible, indirect information about them, such as plastic models, 2D drafts and photographs, can be used to build the 3D target CAD models for the database. No single source of indirect information is sufficient to extract accurate target shape information; the sources must be used together to make the 3D CAD models more accurate. We used 3D graphics models available on the internet, 2D drafts with size information and many pictures of the targets from various aspects to reverse-engineer the 3D shape of the targets in detail. This paper proposes a practical target 3D CAD modeling method using such indirect information and presents results comparing the 3D CAD models made from indirect information with models generated by high-precision laser scanning. In addition to comparing the 3D shapes themselves, RCS calculation, ISAR imaging and scattering center extraction with an electromagnetics (EM) analysis code were used to inspect how the differences between the two models affect the scattering phenomena and the resulting SAR images. The results show that once the building-comparing-correcting process has been repeated several times, target CAD models of decent accuracy can be made using indirect information only.
Thermal Behaviour
Accurate estimation of temperature distributions for IR signature monitoring with a dynamic thermal model and data assimilation
B. J. A. Peet
For infrared signature monitoring, knowledge of the surface temperature distribution of an object is required. TNO has developed a model to estimate the real-time evolution of the surface temperature distribution of an object that requires only sensor data from a standard weather station as input. First, heat fluxes from and to surface elements are estimated based on these meteorological data. The dynamic heat balance is then solved numerically with the Finite Element Method. Radiation exchange, convection, heat conduction between surface facets and shadowing are all taken into account in the model. The model results were compared to half a year of thermocouple data from a CUBI test object. The temperatures of all facets of the CUBI could be estimated with a diurnal root mean squared error smaller than 2 °C. Similar accuracy was achieved when facets of the CUBI cooled down or heated up after being rotated towards or away from the sun. The performance of the model is mainly limited by the estimates of solar irradiance on tilted surfaces. Data assimilation techniques were used to further improve the real-time estimates of the CUBI surface temperature distribution. It is demonstrated that these estimates improve significantly when data from a limited number of thermocouples are used to update the model results with a Kalman filter.
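The assimilation step can be illustrated with a scalar Kalman update that blends a model-predicted facet temperature with a thermocouple reading, weighted by their variances. The numbers are illustrative, not the CUBI data.

```python
def kalman_update(t_model, p_model, t_meas, r_meas):
    """Scalar Kalman update: blend the model's facet-temperature estimate
    (variance p_model) with a thermocouple reading (variance r_meas)."""
    k = p_model / (p_model + r_meas)            # Kalman gain
    t_post = t_model + k * (t_meas - t_model)   # corrected temperature
    p_post = (1.0 - k) * p_model                # reduced posterior uncertainty
    return t_post, p_post

# Model predicts 25.0 °C (variance 4.0); thermocouple reads 23.0 °C (variance 1.0)
t, p = kalman_update(25.0, 4.0, 23.0, 1.0)
```

Because the measurement is the more certain of the two here, the posterior (23.4 °C) sits much closer to the thermocouple reading, and the posterior variance drops below both inputs.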
Infrared signature simulations of a mobile camouflage for a heavy military vehicle
Infrared signature simulations of a heavy military vehicle (a main battle tank, MBT) fitted with a Mobile Camouflage System (MCS) from SAAB Barracuda are presented and compared with measurements. The measurements were conducted at a field trial jointly by the Swedish Defence Materiel Administration (FMV) and SAAB Barracuda. The heat generation and transport simulations were carried out with the thermal and infrared modeling program TAIThermIR from ThermoAnalytics Inc. The vehicle signature model was developed at FOI using a commercial CAD shell model as a basic platform, adjusted with measured data and a laser scan of the real vehicle. The camouflage cover was modeled from laser-scan data together with knowledge of the vehicle and support structure below the camouflage material. The signature simulations show good correspondence with images from the field trial and have been compared to IR images in both the long-wave and mid-wave infrared wavelength ranges.

The signature simulations were carried out for an evening situation, when the sun is below the horizon.

The mobile camouflage system significantly reduced the infrared signature by shielding the radiation from the hottest structures and by directing the exhaust and hot air flows away from visible structures, especially in the front and side directions. The changes in the signature of the MBT were successfully simulated with TAIThermIR and the models of the MBT and camouflage that had been developed. The tool does not, however, allow the exhaust gas radiance contribution to the signature to be calculated.
Analysis of target inversion temperature based on infrared dual band
When the target whose temperature is to be measured is far away and prior information about it, such as the emissivity of the detected object, is unavailable, the traditional infrared single-band temperature measurement method produces large errors. Based on the characteristics of the target's infrared radiation spectrum and assuming that the target is a grey body, this paper uses a dual-band infrared temperature measurement algorithm and the Monte Carlo method to extract the temperature of the object under test, and the effectiveness of the algorithm is analyzed. The influence of parameters such as the number of iterations of the algorithm and the minimum error threshold of the dual-band radiation ratio on the accuracy of target temperature inversion is simulated and analyzed. On this basis, the influence of target emissivity, the separation between wavebands and the detection distance on the inversion accuracy is analyzed. The results show that the dual-band algorithm can extract the target temperature quickly and accurately under the chosen parameters, and that the inversion error is related to the separation between the infrared bands. This provides guidance for improving the accuracy of target temperature measurement in practical measurements.
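The emissivity-cancelling ratio at the heart of dual-band inversion can be sketched as follows. This is a simple bisection on the Planck-radiance ratio under the grey-body assumption; band-integrated radiances are approximated by single wavelengths, which is a simplification of any practical instrument model.

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(lam, temp):
    """Blackbody spectral radiance at wavelength lam (m) and temperature temp (K)."""
    return (2 * H * C ** 2 / lam ** 5) / math.expm1(H * C / (lam * K * temp))

def invert_ratio(lam1, lam2, ratio, lo=200.0, hi=2000.0):
    """Recover temperature from the ratio planck(lam1, T) / planck(lam2, T).
    For a grey body the emissivity cancels in the ratio, which is monotonic
    in T for lam1 < lam2, so simple bisection suffices."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if planck(lam1, mid) / planck(lam2, mid) < ratio:
            lo = mid                      # ratio too small: temperature too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Simulated measurement: grey body at 350 K seen at 4 um (MWIR) and 10 um (LWIR)
eps = 0.85                                # unknown in practice, cancels in the ratio
r = (eps * planck(4e-6, 350.0)) / (eps * planck(10e-6, 350.0))
T_est = invert_ratio(4e-6, 10e-6, r)
```

Note how `eps` multiplies both band radiances and drops out of `r`, which is exactly why the dual-band method needs no prior emissivity knowledge for a grey body.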
Target infrared characteristic measurement in vacuum chamber
Dongxing Tao, Zhe Gong, Qinliang Guo, et al.
The IR characteristics of orbiting targets are the basis for designing IR imaging detection equipment, validating the corresponding digital simulation models, and developing application processing such as target detection and tracking. In this work, an infrared measurement system for a simulated space target is presented. The system can acquire the infrared characteristics of the target under a simulated space environment on the ground. A vacuum chamber and a solar simulator are used to simulate the full orbital environment. An IR window was developed for the system so that the measurement instruments can be placed outside the chamber and capture the target characteristics through the window; a method to calibrate the IR window is also provided. The IR characteristics over the wavelength range of 3 μm to 12 μm can be obtained with the system.
Surface Measurements and Modelling
Evaluation of several vegetation indices to detect deep man-made bunkers using field spectroscopy
This paper presents the results obtained from field spectroscopy campaigns for the detection of deep man-made bunker infrastructures in Cyprus. Several vegetation indices combined with other in-band algorithms were utilized to develop a vegetation-index-based procedure aimed at the detection of deep man-made bunkers. The measurements were taken at two test areas: Area (a), a vegetated area with a deep man-made bunker present, and Area (b), a vegetated area with no bunker. The test areas were identified, analyzed and modeled under different scenarios. Comparing the two test areas, the results show differences in the spectral vegetation indices (SVIs).
Fusion of spectral and directional reflectance information
Marcos López Martínez
The measurement of the bidirectional reflectance distribution function (BRDF) is difficult due to its high dimensionality and complexity. Quantifying this function requires costly facilities, known as gonioreflectometers, and long measurement times owing to the great number of possible combinations of incoming and outgoing directions and, additionally, wavelengths.

We introduce a method for fusing directional information acquired from two different detectors, a spectrometer and a photodiode. The goal is to combine the strength of each for a specific dimension of the BRDF without drastically extending the acquisition times. For that purpose, the spectrometer delivers the spectral information and the photodiode delivers highly directional monochromatic information about the reflectance function. The method is applied to a coated metal sheet in the visual and near-infrared spectral regions, where monochromatic measurements are combined with spectral ones to obtain higher information densities.
Round robin comparison of BRDF measurements
Tomas Hallberg, Daniel A. Pearce, Peter Raven, et al.
The scattering properties of materials such as coated and painted surfaces are important in the design of low observable materials. These properties are also important to enable accurate modeling of targets in a scene of different background materials. The distribution of light scatter from surfaces can be determined by measurements of the Bidirectional Reflectance Distribution Function (BRDF) using devices such as scatterometers. It should ideally be possible to measure the BRDF both in and out of the plane of incidence, in order to characterize both isotropic and anisotropic scatter, with suitably high angular resolving power and signal-to-noise ratio at the wavelengths of interest. Both narrow-band light sources (e.g. lasers) and broad-band light sources in combination with spectral band-pass filters may be used with appropriate detectors. This type of instrumentation may consist of complex mechanically moving parts and optics requiring careful alignment to the sample surface to be measured. To understand the synergies and discrepancies between the outputs of different BRDF instruments measuring the same sample set, we have compared BRDF measurement results between our research laboratories in a round robin comparison of an agreed set of sample surfaces, measurement geometries and wavelengths. In this paper, the results from this study are presented and discussed.
The influence of the water on scene IR signature
The influence of water on scene IR signatures is analyzed. Atmospheric water influence, i.e. the influence of meteorological conditions and humidity, is well known and described throughout the literature. The influence of a water film on the IR signature, however, is far less well covered. In this article we analyze the influence of a water film on the scene signature. The analysis consists of a simplified theoretical model, followed by qualitative experimental results, which are compared with selected published results. The structure of the experimental IR scenes is described. The main sources that contribute to the scene IR signature, differences in emissivity and differences in temperature, are identified and reproduced in the experimental scenes. Two scene types were used in the experimental investigations: (a) scene elements in thermal equilibrium, where the thermal image arises from differences in emissivity; and (b) a uniform background with several objects of the same type at different temperatures. A thin water film was applied to both scenes. The experimental results are presented and explained. They show that a water film significantly influences the appearance of a thermal image generated by emissivity differences. Where temperature differences dominate the scene, the effect of the water film on the thermal image is visible only while the film is being applied, and the influence depends on the quantity of water. The experimental results qualitatively confirm the starting hypothesis as well as the results of the theoretical model evaluation.
Modelling sea clutter infrared synthetic images
B. A. Devecchi, K. W. Benoist, L. C. W. Scheers, et al.
Infrared imaging of the sea surface is used for many purposes, such as remote sensing of large oceanographic structures, environmental monitoring, surveillance applications and platform signature research. Many of these studies rely on determining the contrast of a target feature with its background and therefore benefit from accurately predicting the signature of the underlying sea surface background. We here present a model that synthesizes infrared spectral images of sea surfaces. This model traces explicitly the behaviour of the sea wave structure and light propagation. To self-consistently treat spatial and temporal correlations of the clutter, geometrical realizations of sea surfaces are built based on realistic sea wave spectra and their temporal behaviour is subsequently followed. A camera model and a ray tracer are used to determine which parts of the sea surface are observable by individual camera pixels. Atmospheric input elements of the model, being sky dome, path radiance and transmission, are computed with MODTRAN for a chosen atmosphere.
Observer Performance
Determination of the detection threshold of human observers in acoustic drone detection
Samuel Huber, Peter Wellig, Kurt Heutschi
Background: Nowadays, small drones are inexpensive and can be purchased and used very easily. Unfortunately, they are also relatively easy to convert into weapons. As they become more widespread, these drones may become a serious security risk. One possible way to address this threat could be the early detection of small drones using acoustic cameras. However, the question arises as to how good the detection performance of such cameras is compared to that of a human observer. The goal of this project was to determine the acoustic detection threshold of human observers for drones in the presence of ambient noise. Methods: Nineteen subjects volunteered to take part in the study. The method of constant stimuli was used to determine the detection threshold. During the test, the participants were presented with a recording of a DJI Phantom 2 Vision+ drone whose level varied in steps of 1 dB over a range of 27 dB around the estimated threshold value. The signals were superimposed on three different kinds of ambient noise, presented in three successive test runs. The subjects wore headphones over which they heard the ongoing ambient noise while the drone sound was presented at random intervals and levels. The test signal was on for 2 seconds, during which the subject had to confirm detection of the drone sound by pressing an assigned key on a notebook. Results: We found detection thresholds for white noise, water noise and highway noise of -17 dB, -18 dB and -17 dB respectively, expressed as level differences between test signal and noise. Comparison of our results with the detection performance of human observers in a simulated drone detection scenario, reproduced by loudspeakers in an anechoic chamber, showed good agreement. Furthermore, it appears possible to assess the detection performance of an acoustic camera using our results.
Comparison of land vehicle target detection performance in field observation, photo simulation and video simulation
Ryan Messina, Vivienne C. Wheaton, Alaster Meehan, et al.
Photo-simulation is a widely used method for target detection experimentation. In the defence context, such experiments are often used to derive measures of the effectiveness of camouflage techniques in the field, which assumes a strong link between photo-simulation performance and field performance. In this paper, we report on a three-stage experiment exploring that link. First, a field experiment was conducted in which observers performed a search and detection task, seeking vehicles in a natural environment, while images and video of the scene were captured simultaneously. Next, the still images were used in a photo-simulation experiment, followed by a video-simulation experiment using the captured video. Analysis of the photo-simulation results shows a moderate linear correlation between field and photo-simulation detection results (Pearson correlation coefficient, PCC = 0.64), but the photo-simulation results only moderately fit the field observation results, with a reduced χ² statistic of 1.996. Detectability of targets in the field was mostly slightly higher than in photo-simulation. Analysis of the video-simulation results, using videos of stationary and moving targets, also shows moderate correlation with the field observation results (PCC = 0.62), but these are a better fit to the field observation results, with a reduced χ² statistic of 1.45. However, when videos of moving targets and videos of stationary targets are considered separately, two distinct trends appear: video-simulation detection results are routinely higher than the field observation results for moving targets, while for stationary targets they are mostly lower than the field observations, similar to the trend noted in the photo-simulation results.
There were too few moving-target videos to confidently perform a fit, but the fit statistic for the stationary-target videos becomes similar to that of the photo-simulation, with a reduced χ² = 1.897.
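The two fit statistics quoted above can be sketched in a few lines. The helper functions and the per-point uncertainties below are illustrative assumptions, not the paper's exact procedure:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def reduced_chi2(observed, expected, sigma, n_params=0):
    """Reduced chi-squared of `expected` (e.g. photo-simulation detection
    probabilities) against `observed` field results, with per-point
    uncertainty `sigma`; values near 1 indicate a good fit."""
    chi2 = sum(((o - e) / s) ** 2 for o, e, s in zip(observed, expected, sigma))
    dof = len(observed) - n_params
    return chi2 / dof

# hypothetical per-target detection probabilities: field vs. photo-simulation
field = [0.85, 0.60, 0.40, 0.95, 0.70]
photo = [0.75, 0.55, 0.35, 0.90, 0.60]
sigma = [0.10] * 5  # assumed per-target uncertainty
print(pearson(field, photo), reduced_chi2(field, photo, sigma))
```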
Evaluation of target acquisition performance in photosimulation test
The photosimulation test is one of the preferred methods for evaluating the effectiveness of camouflage covers. The arrangement of a photosimulation test is in principle similar to that used for testing target acquisition performance. The aim of this paper is to report our experience with applying target acquisition performance methods to the results of a robust set of photosimulation trials.
Poster Session
Research on design distance of visible light camouflage on water target
With the increasing threat of reconnaissance and attack against surface targets, visible light camouflage measures are receiving more and more attention, in addition to radar and infrared measures. Camouflage measures such as camouflage colour have been widely applied to small surface targets in recent years. However, since meteorological conditions at sea differ from those on land and in the air, determining the design distance of visible light camouflage for surface targets through scientific analysis remains an open problem. To this end, this paper establishes a method for calculating visible light transmittance over the sea using MODTRAN, combined with human visual characteristics and Johnson's criteria, to calculate the camouflage design distance for targets of different sizes under different contrasts at sea, solving the problem of selecting the camouflage design distance under human-eye observation conditions.
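A much-simplified sketch of the underlying geometry (Koschmieder-style contrast attenuation with a fixed eye contrast threshold, not the paper's MODTRAN-based calculation): apparent contrast decays as C(R) = C0·exp(-σR) along the sight path, so the design distance is the range at which it reaches the observer's threshold. The numbers below are hypothetical:

```python
import math

def design_distance(c0, c_threshold, sigma):
    """Range (km) at which inherent contrast c0, attenuated as
    C(R) = c0 * exp(-sigma * R), drops to the observer's contrast
    threshold c_threshold; sigma is the extinction coefficient (1/km)."""
    if c0 <= c_threshold:
        return 0.0  # target is never above the detection threshold
    return math.log(c0 / c_threshold) / sigma

# hypothetical maritime haze: sigma = 0.39 1/km (roughly 10 km visibility),
# inherent target contrast 0.5, eye contrast threshold 0.02
print(round(design_distance(0.5, 0.02, 0.39), 1))
```

A MODTRAN-based treatment replaces the single extinction coefficient with a spectrally resolved transmittance for the actual maritime atmosphere, and Johnson's criteria additionally tie the required resolution to the target's angular size.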
A new histogram PMHT incorporating pixel noise distribution for dim target tracking
The histogram probabilistic multi-hypothesis tracker (H-PMHT) is an attractive multi-target tracking method that processes raw sensor images directly to detect dim targets. In the H-PMHT, the raw sensor images are converted to histograms, which are assumed to follow multinomial distributions parameterized by mixture density functions in which each mixture component corresponds to a target or to clutter. Combining this measurement model with the expectation-maximization (EM) method, the H-PMHT estimates the states of the targets and the mixture proportions. Recently, by assuming alternative measurement models based on the Poisson and Interpolated Poisson distributions, researchers proposed the Poisson H-PMHT (P-HPMHT) and the Interpolated Poisson PMHT (IP-PMHT) to allow for fluctuating target amplitude.

However, these methods fail to take the distribution of the pixel noise into account in tracking, which degrades detection performance. In this paper, we address this problem by modifying the measurement model of the IP-PMHT to incorporate statistical information about the pixel noise. A key point is that the Interpolated Poisson distribution has a thinning property, which means that the energy from clutter can be modeled with a parameterized Interpolated Poisson distribution in the IP-PMHT. We replace this parameterized Interpolated Poisson with a given distribution that describes the pixel noise, and propose a new tracking method. An important feature of the new method is that it retains the advantages of the H-PMHT while naturally incorporating prior information about the pixel noise into target tracking. Through Monte Carlo simulations, we demonstrate the superiority of the new method in dim target tracking.
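The thinning property the clutter model relies on, in its classical Poisson form, says that keeping each of N ~ Poisson(λ) events independently with probability p leaves a Poisson(pλ) count. A toy numerical check of that property (not the IP-PMHT itself):

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw N ~ Poisson(lam) using Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def thinned_poisson_mean(lam, p, n_trials=200_000, seed=1):
    """Thin Poisson(lam) counts by keeping each event independently with
    probability p; the empirical mean of the kept counts should approach
    p * lam, illustrating the Poisson thinning property."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        n = poisson_sample(rng, lam)
        total += sum(rng.random() < p for _ in range(n))
    return total / n_trials

print(thinned_poisson_mean(4.0, 0.25))  # empirically close to p * lam = 1.0
```

In the tracker, the analogous property lets the clutter energy component be swapped for a distribution describing the actual pixel noise while the rest of the EM machinery is retained.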