Proceedings Volume 10794

Target and Background Signatures IV


Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 16 November 2018
Contents: 9 Sessions, 32 Papers, 19 Presentations
Conference: SPIE Security + Defence 2018
Volume Number: 10794

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10794
  • Characteristics of Vegetation
  • Environmental Effects on Signatures
  • Observer Effects and Trials
  • Target Detection Techniques
  • Machine Learning
  • Scenes and Detection Performance
  • Hardware and Materials
  • Poster Session
Front Matter: Volume 10794
Front Matter: Volume 10794
This PDF file contains the front matter associated with SPIE Proceedings Volume 10794, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Characteristics of Vegetation
Detectability in the SWIR spectral range
Analysis of detectability in the short-wave infrared (SWIR) spectral range shows the sensitivity of this imagery to illumination conditions and to the choice of spectral sub-bands. The paper describes how difficult the assessment of camouflage materials can be under these conditions.
Collecting information for spectral boundaries determination
V. Bárta, J. Hanuš
Each country has different requirements for camouflage parameters. The requirements are used for camouflage quality verification. Spectral reflectance limits are one of the criteria that every material used by the army has to satisfy. Most countries use spectral borders to verify the spectral behaviour of materials, and the measurement of spectral characteristics has a long history in many countries. This article deals with the current application of spectral borders in the Czech Republic. The precise spectral curves may not be published openly; therefore, this article presents the principles that play a significant role in establishing them.
NATO hyperspectral measurement of natural background
This article focuses on the results of an international measurement campaign that took place in a military training area near Baad Shaarow in the summer of 2017. The main goal of the campaign was to collect information about current approaches to the detection of camouflaged targets. Participants measured the spectral signatures of the deployed targets using hyperspectral sensors. Imaging spectroscopy has been in great demand for the last decade. The field was initially used in remote sensing applications; progress in electrotechnology has allowed it to spread into diverse branches, and it has only recently been applied to military technology, where specific tools are involved. The event was attended by research groups from several countries. Each group operated a different hyperspectral device and documented the same targets, so a wide range of instruments working in different spectral regions from VIS to SWIR was used. The main part of this work is focused on the comparison of the hyperspectral data.
Copernicus Sentinel opportunities using field spectroscopy to support deep man-made infrastructures in Cyprus
Satellite remote sensing is considered an increasingly important technology for military intelligence. It can be applied to a wide range of military applications, as shown by various researchers. However, there is a great need to integrate information from a variety of sources, made available at different times and of different qualities, using remote sensing tools. This paper provides a solid methodology to support Sentinel remote sensing detection of deep man-made infrastructures in Cyprus using field spectroscopy. A number of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), Simple Ratio (SR), Difference Vegetation Index (DVI) and Optimized Soil Adjusted Vegetation Index (OSAVI), combined with other in-band algorithms, were utilized for the development of a vegetation index-based procedure aiming at the detection of underground military structures. The measurements were taken at the following test areas: (a) a vegetated area covered with barley, in the presence of an underground military structure, and (b) a vegetated area covered with barley, in the absence of an underground military structure. The test areas were identified, analyzed and modelled under different scenarios. Sentinel-2A is a promising tool for detecting underground structures.
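For reference, the four vegetation indices named above have standard formulas; the sketch below computes them from near-infrared and red reflectance arrays. The band values in the example call are illustrative placeholders, not measurements from the study.

import numpy as np

def vegetation_indices(nir, red):
    """Compute the standard vegetation indices named in the abstract.

    `nir` and `red` are reflectance arrays (e.g. Sentinel-2 bands B8 and B4,
    or field-spectroscopy reflectances resampled to those bands)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    ndvi  = (nir - red) / (nir + red)          # Normalized Difference Vegetation Index
    sr    = nir / red                          # Simple Ratio
    dvi   = nir - red                          # Difference Vegetation Index
    osavi = (nir - red) / (nir + red + 0.16)   # Optimized Soil Adjusted Vegetation Index
    return {"NDVI": ndvi, "SR": sr, "DVI": dvi, "OSAVI": osavi}

# Example: compare the two test areas (reflectance values are placeholders).
over_structure = vegetation_indices(nir=[0.42], red=[0.11])
control_area   = vegetation_indices(nir=[0.48], red=[0.08])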
Environmental Effects on Signatures
Visualizing simulated temperatures of a complex object calculated with FTOM using open source software (BLENDER)
The Fraunhofer thermal object model (FTOM) predicts the temperature of an object as a function of the environmental conditions. The model has an outer layer exchanging radiation and heat with the environment and a stack of layers behind it modifying the thermal behavior. The orientation of the layers is defined by the normal vector of the surface. The innermost layer is at a constant or variable temperature called the core temperature. All layers have a heat capacity and a thermal conductivity. The outer layer's properties are color (visible), emissivity (IR), coefficients of free and forced convection, and a factor for latent heat. The environmental parameters are air temperature, wind speed, relative humidity, solar irradiation, and thermal radiation of the sky and ground. The properties of the model (7 parameters) are fitted to minimize the difference between the prediction and a time series of measured temperatures. The size of the time series is one or more days with 288 values per day (5-minute resolution). The model is usable for very different objects such as backgrounds (meadow, forest, rocks, sand, or bricks) or parts of objects such as vehicles.

The STANDCAM is an unclassified vehicle decoy used to provide a representative thermal signature. It has a complex CAD model with thousands of triangular facets that had to be simplified for the thermal simulation. The CAD model was made available by WTD 52, an agency of the Federal Office of Bundeswehr Equipment, Information Technology and In-Service Support (BAAINBw). Groups of elements of the model facing in the same direction and behaving similarly were cut out and grouped into distinct objects. The calculation of the temperature of the objects is based on measured environmental data, and the model parameters are fitted to measured radiation temperatures of the objects and backgrounds.

For the visualization, the object is surrounded by a world texture. Measured air and meadow temperatures were used for the radiation temperature of the environment and of the ground under the object. The temperature is coded as a color from a palette (here a grey palette), updated regularly throughout the calculation of the scene for the selected view, and stored as a texture bitmap. The animation of the temperature textures is performed directly by BLENDER. The result of the visualization is available as a movie that can be watched in real time or as a time lapse.
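FTOM itself is not publicly available; purely to illustrate the fitting idea described above (an outer surface exchanging solar, thermal and convective heat with the environment and conducting heat to a core, with 7 parameters fitted to 5-minute temperature series), the following is a toy one-node sketch. The energy-balance terms and variable names are simplifying assumptions, not the actual Fraunhofer model.

import numpy as np
from scipy.optimize import least_squares

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def simulate_surface_temp(params, env, dt=300.0, t0=290.0):
    """Toy one-node energy balance loosely following the FTOM idea (not the
    actual Fraunhofer model): an outer surface exchanges solar, thermal and
    convective heat with the environment and conducts heat to a core node.
    `env` holds per-step arrays: t_air [K], wind [m/s], sun and sky [W/m^2]."""
    absorptivity, emissivity, h_free, h_forced, capacity, conduct, t_core = params
    temps, t = [], t0
    for t_air, wind, sun, sky in zip(env["t_air"], env["wind"], env["sun"], env["sky"]):
        h = h_free + h_forced * wind                     # free + forced convection
        q = (absorptivity * sun                          # absorbed solar irradiance
             + emissivity * (sky - SIGMA * t**4)         # net thermal radiation
             + h * (t_air - t)                           # convective exchange
             + conduct * (t_core - t))                   # conduction to core layer
        t += dt * q / capacity
        temps.append(t)
    return np.array(temps)

def fit(measured, env, start):
    """Fit the 7 parameters to one or more days of 5-minute data (288/day)."""
    return least_squares(lambda p: simulate_surface_temp(p, env) - measured,
                         x0=start, method="trf")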
Evolution of the statistical fluctuations in the measured temperature differences between painted metal plates of a CUBI infrared calibration target
G. D. Lewis, P. Merken, M. Vandewal
To validate thermal infrared signature models, we need data from well-characterized targets, typically of simple geometric shape, measured under known atmospheric and environmental conditions. An important parameter of the target's signature is the variability in surface temperature due to its orientation with respect to the dominant heat source, the sun. In this paper, we report on the fluctuations in the temperature differences between surfaces of a CUBI geometrical test target mounted at roof level in an urban environment. Specifically, we investigate those surfaces orientated in a step-like manner. Measurements are recorded at regular intervals by digital temperature sensors operating in a network and remotely accessed via an intranet. Our results assess the statistical variation in relative temperature contrast between surfaces for the entire dataset, where queries to a database highlight patterns in the data. We show not only a pronounced difference in probability density functions due to the influence of the sun by day and radiative cooling by night, but also the statistical variability of the temperature differences with local time. In summary, we show statistical limits on the probable temperature differences between plates as a function of time of day over a number of months, which provides useful insight into the IR signature.
Validation of target and background modeling in midwave infrared band for tropical maritime environment (Conference Presentation)
In February 2018 Australia and Norway jointly conducted a field trial in Darwin collecting IR imagery in adverse weather conditions. The wet season in the Northern Territory is characterised by high temperatures and humidity with intensive rains, storms and cyclones. The monsoon conditions subsided in early February, but the collected data still included the required variety of atmospheric conditions. Two fully instrumented small boats performed a set of pre-designed manoeuvres and data was collected throughout the diurnal cycle. The DST team used FLIR long-wave and mid-wave IR cameras. Weather data (temperature, humidity, barometric pressure, wind speed and direction) was also collected locally for the duration of the trial. The purpose of this paper is to present aspects of the modelling of elements of IR scenes using the DST-developed VIRSuite tool (Virtual Infrared Simulation). Modelling focuses on mid-wave IR rendition and a direct comparison with the collected imagery.
Sensitivity of input parameters to modelling of atmospheric transmission of long-wave infrared radiation at sea under warm and humid conditions
Jan Thomassen, Arthur D. van Rheenen, Eirik Blix Madsen, et al.
A joint Australian-Norwegian field trial (Osprey) was held in February 2018 in Darwin, Australia. The objective of this trial was to measure IR transmission properties of the atmosphere in a marine environment under warm and humid conditions. Darwin is in the tropics (latitude 12° south), and February is the middle of the "wet season". Various temperature-controlled sources (blackbodies) were used during the trial. Land-based weather stations recorded a number of meteorological data. The sensors used in the trial included long-wave, mid-wave and short-wave IR cameras. In this paper we present the analysis of measurements performed on two blackbodies across Darwin Harbour. The scene was recorded with an IRCAM LW camera and calibrated against blackbodies with known temperature. We have modelled the atmospheric transmittance using MODTRAN and, from this, obtained the equivalent blackbody temperature of the scene. In our analysis, we are interested not only in the overall agreement between predictions and data, but also in the sensitivity of the predictions to uncertainties in the input parameters (calibration temperatures, air temperature, humidity, etc.). In order to study this sensitivity, we used variance-based sensitivity analysis and Monte Carlo simulations to compute sensitivity indices, according to methods developed by Saltelli and others. Our main finding is that uncertainties in the calibration parameters (blackbody and camera temperatures) give the dominant contributions to the error in the computed equivalent temperature.
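The variance-based (Saltelli-type) sensitivity analysis mentioned above can be sketched, for illustration, with the SALib package as below. The parameter bounds and the surrogate function standing in for the MODTRAN-based equivalent-temperature computation are placeholders, not values from the Osprey trial.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Input uncertainties (bounds are illustrative placeholders, not trial values).
problem = {
    "num_vars": 4,
    "names": ["T_blackbody", "T_camera", "T_air", "rel_humidity"],
    "bounds": [[302.0, 304.0], [295.0, 299.0], [300.0, 306.0], [0.6, 0.9]],
}

def equivalent_temperature(x):
    """Placeholder for the MODTRAN-based radiative-transfer computation that
    maps the input parameters to an equivalent blackbody temperature."""
    t_bb, t_cam, t_air, rh = x
    return 0.7 * t_bb + 0.2 * t_cam + 0.08 * t_air - 2.0 * rh   # dummy surrogate

X = saltelli.sample(problem, 1024)                   # Monte Carlo sample matrix
Y = np.array([equivalent_temperature(x) for x in X])
Si = sobol.analyze(problem, Y)                       # Saltelli-style Sobol indices
print(dict(zip(problem["names"], Si["S1"])))         # first-order sensitivities
print(dict(zip(problem["names"], Si["ST"])))         # total-effect sensitivities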
A field-based method for evaluating thermal properties of static and mobile camouflage
Reliable and realistic methods for the assessment of thermal infrared signature properties for military purposes are important. Given ongoing developments in imaging technologies, especially towards mass markets, including small handheld cameras and automotive sensors, thermal infrared sensors are expected to pose an increasing detection threat in the future. In this paper, we present a field-based approach that evaluates the thermal contrast of camouflage nets as well as mobile camouflage systems. In the proposed method, relative differences in thermal behavior between target and background are evaluated in a controlled manner in an outdoor environment over extended periods of ten days or more. The camouflage materials under test are mounted identically, in operationally realistic environments, and recorded with a thermal sensor at a rate of 6 images per hour. Hence, thermal contrast values between each target and selected parts of the scene background are obtained over a full 24-hour period. Weather data are collected along with the thermal image data. In the subsequent analysis, average thermal contrasts between targets and selected backgrounds are calculated for certain well-defined time slots, such as night, day and the transition between day and night. Only time slots that satisfy weather-condition requirements are analyzed, as changing weather is expected to affect the thermal response of camouflage systems. We believe the proposed method is a good compromise between controlled lab tests, which are hampered by their lack of transfer value to thermal behavior in theatre, and field measurements during operations, where the reproducibility of data can be low.
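A minimal sketch of the time-slot contrast averaging described above is given below; the column names, slot boundaries and weather thresholds are assumptions chosen for illustration, not the values used in the method.

import pandas as pd

def slot_of(hour):
    """Map an hour of day to a coarse time slot (boundaries are assumptions)."""
    if hour < 5 or hour >= 21:
        return "night"
    if hour < 9:
        return "transition_morning"
    if hour < 17:
        return "day"
    return "transition_evening"

def slot_contrasts(df):
    """Average target-background thermal contrast per time slot.

    `df` is assumed to be indexed by timestamp (6 images per hour) with columns
    't_target' and 't_background' (apparent temperatures from the IR sensor)
    plus weather columns used for filtering; all names are placeholders."""
    df = df.copy()
    df["contrast"] = df["t_target"] - df["t_background"]
    df["slot"] = [slot_of(h) for h in df.index.hour]
    # Keep only records that meet the (assumed) weather-condition requirements.
    stable = df[(df["wind_m_s"] < 5.0) & (df["rain_mm"] == 0.0)]
    return stable.groupby("slot")["contrast"].agg(["mean", "std", "count"])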
Observer Effects and Trials
Methods for measuring time to detect in human observer trials
Joanne B. Culpepper, Vivienne C. Wheaton, Christopher S. Madden, et al.
Evaluating the signature of operational platforms has long been a focus of military research. Human observations of targets in the field are perceived to be the most accurate way to assess a target's visible signature, although the results are limited to the observers present in the field. Field observations do not introduce image capture or display artefacts, nor are they completely static, like the photographs used in screen-based human observation experiments. A number of papers provide advances in the use of photographs and imagery to estimate the detectability of military platforms; however, few describe advances in conducting human observer field trials.

This paper describes the conduct of a set of human field observation trials for detecting small maritime craft in a littoral setting. The trial was conducted from the East Arm Port in Darwin in February 2018 with up to 6 observers at a time and was used to investigate incremental improvements to the observation process compared to small craft trials conducted in 2013. This location features a high number of potential distractors, which make it more difficult to find the small target craft. The experimental changes aimed to test ways to measure time to detect, a result not measured in the previous small craft detection experiment, through the use of video monitoring of the observation line to compare with the use of observer-operated stopwatches. This experiment also included the occasional addition of multiple targets of interest in the field of regard. Initial analysis of time-to-detect data indicates that the video process may accurately assess the time taken by the observers to detect targets, but only if the observers are effectively trained. Ideas on how to further automate the process for the human observer task are also described; however, this system has yet to be implemented. This improved human observer trial process will assist the development of signature assessment models by obtaining more accurate data from field trials, including targets moving through a dynamic scene.
Evaluation of validity of observer test for testing of camouflage patterns
František Racek, Adam Jobánek, Teodor Baláž, et al.
Physical characteristics of camouflage patterns such as color or remission spectra can be tested and measured by objective methods. In the vast majority of camouflage applications, however, it is a human observer who will recognize the camouflaged object. Therefore, the quality of the camouflage pattern is ultimately determined by how a person in a given environment perceives it. Human perception is very subjective, and its assessment cannot be measured by simple physical methods; instead, we process the observer's visual performance when searching for camouflaged objects, which must always be based on the statistical processing of information on the perceived quality of the camouflage by individual observers. One of the methods for assessing the quality of camouflage surfaces is the so-called observer test. The observer test is a simple visual test in which a number of viewers observe a series of images of different scenes containing a camouflaged object. The time taken to find the camouflaged object is measured, and the quality of the camouflage pattern is judged from this time. The time required to find a camouflaged object depends, among other factors, on the arrangement of the scene, the conditions of the observer test, how the observer interacts with the test interface, the observer's properties and, last but not least, the quality of the camouflage pattern. The time taken to find a camouflaged object by a particular observer in a particular frame must therefore be treated as a random variable, because it depends on a large number of independent factors, of which the rated quality of the camouflage pattern is only one. The aim of the experiment we performed was to evaluate the statistical behavior of this random variable so that it can be described by a suitable type of distribution.
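As an illustration of describing the time-to-find random variable by a suitable distribution, the sketch below fits several candidate distributions with SciPy and compares them using a Kolmogorov-Smirnov test; the detection times are invented placeholder values, not data from the experiment.

import numpy as np
from scipy import stats

# Times to find the camouflaged object (seconds); illustrative values only.
detection_times = np.array([3.2, 4.8, 5.1, 6.0, 7.4, 8.9, 11.2, 14.6, 19.3, 25.0])

# Candidate distributions for the random variable "time to detect".
candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma, "expon": stats.expon}
for name, dist in candidates.items():
    params = dist.fit(detection_times)                        # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(detection_times, name, args=params)
    print(f"{name:8s}  KS={ks_stat:.3f}  p={p_value:.3f}")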
Glass detection and recognition based on the fusion of ultrasonic sensor and RGB-D sensor for the visually impaired
With the increasing needs of visually impaired people, developing assistive technology to help them travel effectively and safely has become a research hotspot. The Red, Green, Blue and Depth (RGB-D) sensor has been widely used to help visually impaired people, but the detection and recognition of glass objects is still a challenge, since the depth information of glass cannot be obtained correctly. In order to overcome this limitation, we put forward a method to detect glass objects in natural indoor scenes, based on the fusion of an ultrasonic sensor and an RGB-D sensor on a wearable prototype. Meanwhile, the erroneous depth map of a glass object computed by the RGB-D sensor can also be densely recovered. In addition, under some special circumstances, such as facing a mirror or an obstacle within the minimum detectable range of the RGB-D sensor, we use a similar processing method to regain depth information in the invalid area of the original depth map. The experimental results show that the detection range and precision of the RGB-D sensor are significantly improved with the aid of the ultrasonic sensor. The proposed method is shown to be able to detect and recognize common glass obstacles for visually impaired people in real time, which makes it suitable for real-world indoor navigation assistance.
Novel infrared object detection and tracking algorithm based on visual attention
Lei Liu, Xu Chen, Qi Xia
In this paper, a novel target detection and tracking algorithm based on visual attention is proposed. Firstly, the algorithm extracts a saliency map of the first frame using an improved visual attention algorithm, and then detects targets which are moving very slowly or are even close to stationary after eliminating the interference of background factors. Secondly, it replaces the fixed kernel bandwidth of the mean shift algorithm with a dynamically changing bandwidth, so that it not only retains the features of the traditional mean shift algorithm and can accomplish real-time tracking, but also reduces background interference. Thirdly, the target model is established based on the saliency map, so the model is described by a variety of features; therefore, when a single feature of the target changes, such as size or shape, the target can still be detected. Lastly, the modified mean shift algorithm is used to track moving targets, which reduces the probability of losing the target. Experimental results show that this algorithm is applicable to image sequences of both infrared and visible light and has good tracking performance. Moreover, the algorithm provides the motion information of the moving targets, which makes accurate positioning possible.
Target Detection Techniques
Camouflage evaluation by bio-inspired local conspicuity quantification
In this paper we present a conspicuity quantification model based on anomaly detection. This model extracts numerous local image parameters, in first-order and higher-order (transformation-based) statistics, and calculates local conspicuity by a multiscale center-surround comparison, since a point in an image draws attention to itself if it significantly differs from its surroundings in one or more relevant parameters. This is also biologically substantiated, as many parts of the visual system calculate center-surround differences, for example in color or luminance.

In our work we focused on biologically relevant parameters, as camouflage is targeted against human observers. In the first-order statistics we focused, among others, on local luminance, perceptual color difference in the CIELAB color space, r.m.s. contrast and entropy. In the transformation-based higher-order statistics we focused on spatial frequency distribution, power spectra, orientation bias and quefrency analysis via the Fourier transformation, and on linear feature extraction via the Radon transformation.

This first enables camouflage patterns and textures to be parametrized in a comprehensive way, offering a similarity rating of textures compared to a mean background, but in particular it facilitates the calculation of conspicuity maps, in which eye-catching regions of images are highlighted.

In this work we show that the linear combination of those conspicuity maps, gathered at different scales, provides a good value for local conspicuity and therefore directly acts as a useful quantification of camouflage, since drawing as little attention as possible to the camouflaged object, quantified by a low conspicuity value, results in a good camouflage rating.
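To make the center-surround computation concrete, a minimal single-channel sketch is shown below; the actual model combines many first- and higher-order feature channels, and the Gaussian center/surround scales and equal per-scale weighting used here are assumptions for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def conspicuity_map(feature, scales=(2, 4, 8), surround_ratio=4.0):
    """Minimal center-surround conspicuity sketch for one feature channel
    (e.g. local luminance or an L*a*b* colour-difference map). For each scale
    the 'center' is a fine Gaussian blur, the 'surround' a coarser one; a pixel
    is conspicuous where it differs strongly from its surroundings."""
    feature = np.asarray(feature, dtype=float)
    maps = []
    for sigma in scales:
        center = gaussian_filter(feature, sigma)
        surround = gaussian_filter(feature, sigma * surround_ratio)
        maps.append(np.abs(center - surround))
    # Linear combination of the per-scale maps (equal weights assumed here).
    return np.mean(maps, axis=0)

def camouflage_score(feature, target_mask):
    """Lower mean conspicuity inside the target region = better camouflage."""
    c = conspicuity_map(feature)
    return float(c[target_mask].mean())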
Performance evaluation of NMF methods with different divergence metrics for landmine detection in GPR
Deniz Kumlu, Isin Erer
Ground-penetrating radar (GPR) is a non-destructive geophysical tool used to detect buried objects. However, the detection of shallowly buried objects such as landmines is a challenging problem due to the inherent presence of clutter. Various methods based on subspace decomposition or multiresolution analysis have been proposed for clutter removal in GPR images. The recently proposed subspace-based non-negative matrix factorization (NMF) method is similar to other well-known image decomposition methods; however, it has different constraints, such as requiring all elements of the decomposed matrices to be non-negative, which is more appropriate for our problem. The method is based on a low-rank approximation of the GPR image. Several divergence metrics/cost functions have been proposed in the literature as convergence criteria for NMF, such as the Euclidean (EUC) distance, the Kullback-Leibler (KL) divergence and the Itakura-Saito (IS) divergence. These metrics affect the performance of NMF during the clutter removal process. To find the most suitable divergence metric in NMF for the GPR clutter removal problem, a simulated dataset is constructed using the free gprMax software. The GPR images in the simulated dataset come with ground-truth images and represent challenging scenarios; therefore, quantitative results are given in addition to visual results, which is hard to achieve with real GPR measurements. For the quantitative analysis, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) performance metrics are used, since the ground-truth images are available. Both the quantitative and visual results show that NMF with the KL divergence outperforms the other divergence metrics for GPR imaging.
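As an illustration of switching the NMF divergence, the sketch below uses scikit-learn's beta_loss option ('frobenius' for EUC, 'kullback-leibler', 'itakura-saito') in a simple low-rank clutter-removal scheme and scores the result with PSNR/SSIM. The rank, the selection of the highest-energy component as clutter, and the scoring setup are assumptions, not the paper's exact pipeline.

import numpy as np
from sklearn.decomposition import NMF
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def remove_clutter(bscan, rank=3, beta_loss="kullback-leibler"):
    """Low-rank NMF clutter-removal sketch for a non-negative GPR B-scan.
    beta_loss selects the divergence: 'frobenius' (EUC), 'kullback-leibler'
    or 'itakura-saito' (the latter needs strictly positive data); the
    multiplicative-update solver is required for the non-Frobenius losses."""
    model = NMF(n_components=rank, solver="mu", beta_loss=beta_loss,
                init="nndsvda", max_iter=500)
    W = model.fit_transform(bscan)
    H = model.components_
    # Treat the highest-energy component as clutter (e.g. the ground bounce).
    energies = [np.outer(W[:, k], H[k, :]).sum() for k in range(rank)]
    k = int(np.argmax(energies))
    return np.clip(bscan - np.outer(W[:, k], H[k, :]), 0.0, None)

def score(decluttered, ground_truth):
    """PSNR and SSIM against the gprMax ground-truth image."""
    rng = float(ground_truth.max() - ground_truth.min())
    return (peak_signal_noise_ratio(ground_truth, decluttered, data_range=rng),
            structural_similarity(ground_truth, decluttered, data_range=rng))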
Evaluation of side-scan sonar performance for the detection of naval mines
Autonomous Underwater Vehicles (AUVs) equipped with high-resolution side-scan sonars (SSS) are used to carry out preliminary surveys of potentially hazardous areas in order to counter the threat posed by naval mines and reduce the risk to personnel. The detection and classification of mine-like objects is conducted offline, after a scan has been completed, while the actual identification and neutralization of potential targets is executed in a separate minehunting operation. In this paper, the various influences on the imaging sonar system and on the resulting sonar imagery are assessed with regard to their effect on the Probability of Detection and Classification (PDC). Image quality, sharpness and Signal-to-Noise Ratio (SNR) are among the more obvious and straightforwardly quantifiable factors. The complexity of a sonar image, however, can have a significant impact as well. Image lacunarity is used to characterize the seafloor in order to assess the corresponding minehunting difficulty. Additional factors under consideration are the heading angle of the AUV at any given measurement position as well as horizontal spreading and potential overlapping of successive sonic pulses.
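The lacunarity measure mentioned above is commonly computed with a gliding-box scheme; a minimal sketch is given below. The box sizes and the use of a uniform filter for the box sums are implementation choices made for illustration, not necessarily those of the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def lacunarity(image, box_sizes=(4, 8, 16, 32)):
    """Gliding-box lacunarity of a (grayscale or binary) sonar image patch:
    Lambda(r) = E[M^2] / E[M]^2, where M is the box 'mass' for box size r.
    Higher values indicate a more heterogeneous, gappier seafloor texture."""
    img = np.asarray(image, dtype=float)
    result = {}
    for r in box_sizes:
        # Local sum over r x r boxes via a uniform (mean) filter times the box area.
        mass = uniform_filter(img, size=r, mode="constant") * r * r
        valid = mass[r // 2: img.shape[0] - r // 2, r // 2: img.shape[1] - r // 2]
        result[r] = float((valid ** 2).mean() / valid.mean() ** 2)
    return result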
Feature extraction using high-range resolution profiles for estimating the number of targets
Jung-Won Lee, Gak-Gyu Choi, Kyoungil Na
This paper proposes a new feature extraction method for automatically estimating the number of targets using high range resolution profiles (HRRP). Features of a one-dimensional range profile can be limited or missing because the available information changes over time. The proposed method considers the dynamic movements of targets depending on their radial velocity. The observed HRRP sequence is used to construct a time-range distribution matrix; then, assuming that the diverse radial velocities reflect the number of targets, the proposed method utilizes the characteristics of the gradient distribution on the time-range distribution matrix, which is validated by electromagnetic computation data and dynamic simulation.
Nanosat-based detection and tracking of launch vehicles
Caroline Schweitzer, Max Gulde, Clemens Horch , et al.
Effective sensor technology for space-based early warning and detection is a key component in the defense against immediate threats. These sensors have to be designed and optimized based on realistic infrared signatures of both the background (atmospheric, terrestrial) and the rocket exhaust plume (or ballistic missile exhaust plume). In both cases, the lack of observations necessitates the use of comprehensive simulation tools, either instead of or in addition to ground measurement data. In this paper, we present the preparatory investigations carried out for the conceptual design of an electro-optical (EO) payload based on a nanosatellite platform for the purpose of space-based early warning. Initially, this comprises a description of the atmospheric simulation tool used at Fraunhofer IOSB, its application, and the assessment of detection and tracking algorithms. We then give a short side note on ground measurement data. To conclude the paper, the experimental spacecraft based on nanosatellite technology (ERNST) is introduced, with a special focus on the EO payload designed by Fraunhofer EMI.
Machine Learning
Supporting artificial intelligence with artificial images
Lars Aurdal, Alvin Brattli, Eirik Glimsdal, et al.
Infrared (IR) imagery is frequently used in security/surveillance and military image processing applications. In this article we will consider the problem of outlining military naval vessels in such images. Obtaining these outlines is important for a number of applications, for instance in vessel classification.

Detecting this outline is basically a very complex image segmentation task. We will use a special neural network for this purpose. Neural networks have recently shown great promise in a wide range of image processing applications, and image segmentation is no exception in this regard. The main drawback when using neural networks for this purpose is the need for substantial amounts of data in order to train the networks. This problem is of particular concern for our application due to the difficulty of obtaining IR images of military vessels.

In order to alleviate this problem we have experimented with using alternatives to true IR images for the training of the neural networks. Although such data can in no way capture the exact nature of real IR images, they do capture the nature of IR images to a degree where they contribute substantially to the training and final performance of the neural network.
Detection technology of foreign matter on the ocean for MDA with hyperspectral imaging
Takaaki Ito, Daiki Nakaya, Shin Satori, et al.
Target detection using hyperspectral images is useful for Maritime Domain Awareness (MDA). With a view to future application to MDA, in a previous study, targets on the sea were photographed with a hyperspectral camera mounted on a helicopter to demonstrate target detection using a Reed-Xiaoli detector (RXD). Although the demonstration was successful, there were many erroneous detections due to white waves, so an improvement in detection accuracy was desired. In this study, pixels classified as white waves by a random forest, which is a supervised machine learning method, were removed from the pixels that were regarded as anomalous by the RXD. As a result, 76% of the white waves were successfully removed. This study shows that white wave removal is possible by machine learning, which will improve the detection accuracy of foreign matter on the ocean.
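A minimal sketch of the two-stage idea, global RX anomaly scoring followed by removal of pixels that a trained classifier labels as white waves, is given below; the threshold, the classifier object and the class encoding are assumptions to be supplied by the user, not details from the study.

import numpy as np

def rx_detector(cube):
    """Global Reed-Xiaoli anomaly detector for a hyperspectral cube of shape
    (rows, cols, bands): Mahalanobis distance of every pixel spectrum from the
    scene mean under the scene covariance."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return scores.reshape(rows, cols)

def suppress_white_waves(scores, cube, classifier, threshold):
    """Keep only anomalous pixels that a trained classifier (e.g. a random
    forest on pixel spectra) does NOT label as white waves. `classifier` and
    `threshold` are assumed to be supplied by the user."""
    anomalous = scores > threshold
    spectra = cube[anomalous]
    is_wave = classifier.predict(spectra) == 1       # 1 = white-wave class (assumed)
    detections = anomalous.copy()
    detections[anomalous] = ~is_wave
    return detections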
Target detection with deep learning in polarimetric imaging
Süha Kağan Köse, Selman Ergünay, Beat Ott, et al.
Polarimetric imaging techniques demonstrate enhanced capabilities in advanced object detection tasks through their ability to discriminate man-made objects from natural background surfaces. While spectral signatures carry information only about material properties, the polarization state of an optical field contains information related to surface features of objects, such as shape and roughness. With these additional benefits, polarimetric imaging reveals physical properties usable for advanced object detection tasks that cannot be acquired using conventional imaging. In this work, the primary objective is to utilize state-of-the-art deep learning models designed for object detection on images obtained by polarimetric systems. In order to train deep learning models, it is necessary to have a sufficiently large dataset consisting of polarimetric images containing various classes of objects. We started by constructing such a dataset with an adequate number of visual and infrared (SWIR) polarimetric images obtained using polarimetric imaging systems and by masking the relevant parts for the object detection models. We achieved a high performance score while detecting vehicles with metallic surfaces using polarimetric imaging. Even with a limited number of training samples, polarimetric imaging demonstrated superior performance compared to models trained using conventional imaging techniques. We observed that using models trained with both polarimetric and conventional imaging techniques in parallel gives the best performance score, since these models are able to compensate for each other's weaknesses. In subsequent stages, we plan to expand the study to the application of spiking neural network (SNN) architectures for implementing the detection/classification tasks.
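For context, the degree of linear polarization that gives polarimetric imaging its contrast advantage can be computed from intensity images taken behind a polarizer at four orientations, as sketched below; the division-of-time formulation shown is an assumption, as the abstract does not state which polarimetric architecture was used.

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization (DoLP) from
    four intensity images taken behind a polarizer at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity
    s1 = i0 - i90                            # horizontal vs vertical preference
    s2 = i45 - i135                          # +45 vs -45 preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aop = 0.5 * np.arctan2(s2, s1)           # angle of polarization (radians)
    return s0, dolp, aop

# A DoLP image can then be stacked with the intensity image as extra input
# channels for an object-detection network.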
Scenes and Detection Performance
Improved EO/IR target and background scene simulation with MuSES using a rapid fluid flow solver
Corey D. Packard, David M. Less, Mark D. Klein, et al.
The ability to accurately predict electro-optical signatures for high-value targets in an outdoor scene is a tremendous asset for defense agencies. In the thermal infrared wavebands, physical temperature is the primary contributor to imager-detected radiance. Consequently, enhancements to the fidelity of thermal predictions are desirable, and convection estimates are often the most significant source of uncertainty. One traditional method employed by MuSES applies a single global convection coefficient across the entire scene and assumes that all exposed surfaces are in contact with air at the ambient temperature. This results in a rapid prediction of convection coefficients for a large scene that changes dynamically with wind speed but lacks localized detail concerning how each target surface can experience a different convection coefficient and local air temperature. In practice, wind speed and direction typically change frequently, which, coupled with the thermal mass of most targets, reduces the negative impact this approach can have on thermal predictions; numerous validations of MuSES predictions bear this out. Computational fluid dynamics (CFD) simulations provide additional spatial fidelity in the calculation of localized convection coefficients and air temperatures, but only at great computational cost; a fully transient outdoor scene simulation would be nearly impossible. Boundary layers near surfaces must be resolved with a fine mesh, creating a numerical problem that is difficult to solve at large spatial scales when spanning long periods of simulated time. In this paper, a novel thermal fluid flow solver is presented. This proprietary flow solver models fluid flow at the spatial resolution and accuracy needed for convective heat transfer at the scale viewed by EO/IR (electro-optical/infrared) sensors, avoiding the burdens associated with conventional CFD codes. Many of the same Navier-Stokes equations are solved, albeit in simpler form. ThermoAnalytics-developed correlations directly calculate convection coefficients based on local bulk flow conditions of temperature, velocity, and pressure. The resulting accuracy is a large step beyond using a constant convection correlation universally across the scene. Intelligent simplification of the flow equations provides robust efficiency, requiring minimal effort and expertise compared to CFD codes.
Semi synthetic naval scene generation with engagement simulation for infrared-guided missile threat analysis
N. Scherer-Negenborn, A. Schmied
In this paper, we describe the development of a semi-synthetic scene generator. It takes parts of real recorded infrared images or videos and combines them anew. All scene parts are essentially two-dimensional textures, which may also contain transparent regions; they can be time-invariant textures or time-variant videos. The sky, the water surface, and the target ship are used as scene parts, and additional scene parts, such as countermeasures, can be included. The sky is placed very far away and orthogonal to the viewing direction. The water surface is placed parallel to the viewing direction; during the approach, it must be cross-faded from time to time. The target textures are cropped from videos in which the ship is relatively close, in order to have fine resolution. The target is placed roughly orthogonal to the viewing direction but closer than the sky, and its distance decreases. Missile paths can be generated arbitrarily, as long as the viewing angles do not deviate too much from the direct approach. Comparatively large numbers of semi-synthetic approaches can be generated in order to assess the threat to the ship. It is also possible to include the semi-synthetic rendering in a control loop together with tracking algorithms such as those assumed to be used in infrared-guided anti-ship missiles. The main advantage compared to fully synthetic closed-loop rendering is the drastically lower computational effort, while the synthesized data remain quite realistic.
The IR modeling and simulation of the orbit target with celestial background
An orbit-target IR model can be used to design orbit-target detection sensors and to generate simulation data for validating data processing algorithms such as target detection and tracking. In this work, a novel orbit-target IR model is built. IR detection exploits the difference between the target and the background to detect the target effectively; in order to increase the applicability of the model, it consists of both the orbit target and the celestial background. A geometry module and an IR radiometric module make up the orbit-target IR model. Professional CAD software is used to build the geometry model. Reflections between subassemblies are considered in the radiometric module, because the thermal control coatings of a satellite (such as the optical solar reflector) generally have very high specular reflectance. The Midcourse Space Experiment (MSX) catalog is used to calculate the celestial IR background. The IR radiation provided by the MSX is used to calculate the equivalent temperature and the observation angle by the SPSO (Stochastic Particle Swarm Optimization) method. The transfer algorithm adopted in this paper is compared with the Monte Carlo method, and the results show that the relative deviation between them is less than 10%.
Scene text detection and recognition system for visually impaired people in real world
Visually Impaired (VI) people around the world have difficulties in socializing and traveling due to the limitations of traditional assistive tools. In recent years, practical assistance systems for scene text detection and recognition have allowed VI people to obtain text information from surrounding scenes. However, real-world scene text features complex backgrounds, low resolution, variable fonts and irregular arrangement, which make robust scene text detection and recognition difficult to achieve. In this paper, a scene text recognition system to help VI people is proposed. Firstly, we propose a high-performance neural network to detect and track objects, which is applied to specific scenes to obtain Regions of Interest (ROI). In order to achieve real-time detection, a light-weight deep neural network has been built using depth-wise separable convolutions, which enables the system to be integrated into mobile devices with limited computational resources. Secondly, we train the neural network using textural features to improve the precision of text detection. Our algorithm suppresses the effects of spatial transformation (including translation, scaling, rotation and other geometric transformations) using spatial transformer networks. Open-source optical character recognition (OCR) is used to train scene texts individually to improve the accuracy of text recognition. The interactive system ultimately conveys the number and distance information of inbound buses to visually impaired people. Finally, a comprehensive set of experiments on several benchmark datasets demonstrates that our algorithm achieves an excellent trade-off between precision and resource usage.
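The depth-wise separable convolutions mentioned above replace a full convolution with a per-channel convolution followed by a 1x1 pointwise convolution; a minimal PyTorch sketch of such a block is given below. The channel counts and layer arrangement are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution block of the kind used to keep a
    text-detection network light enough for mobile devices: a per-channel
    (depthwise) 3x3 convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a 32->64 channel block applied to a 1x32x128x128 feature map.
block = DepthwiseSeparableConv(32, 64, stride=2)
out = block(torch.randn(1, 32, 128, 128))   # -> shape (1, 64, 64, 64)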
Sea-land segmentation in SAR images based on multifeature fused boundary clustering
Sea-land segmentation in synthetic aperture radar (SAR) images is a challenge due to the high complexity of the littoral environment and speckle noise. In this work, we focus on developing a new procedure for sea-land segmentation of SAR images based on multi-feature fused boundary clustering. Multi-feature fusion, which combines strong-scattering and high-gradient features, is adopted to obtain fragmented boundaries of the original SAR images. Multi-direction clustering combined with possible geographic information is used to distinguish the real coastlines from the fragmented boundaries. Space-borne SAR images are processed to validate the proposed method. The results demonstrate that the multi-feature fusion technique can improve the accuracy of low-scattering land discrimination and the completeness of coastline detection.
Hardware and Materials
Optical polarization and the dependence of angle of incidence for different surfaces: comparison between different wavelengths from UV to IR
Tomas Hallberg, Johan Eriksson, Stefan Björkert, et al.
As the sensor technology for polarimetric imaging advances into more robust commercial systems, such sensors could soon be expected in, e.g., military surveillance and reconnaissance applications, in addition to more conventional sensor systems. Thus, there might be an upcoming need to understand the limitations of present camouflage systems against this new sensor threat. One of the reasons why polarimetric imaging has drawn attention is the ability to achieve a higher contrast for artificial surfaces against natural backgrounds by analyzing the degree of linear polarization, which in this work has been analyzed for different types of surfaces as a function of wavelength. We also compare with the polarimetric vision of horse-flies and other aquatic insects via the polarization properties of different colors of horse coat hair, in order to give some further insight into the polarimetric vision techniques developed by nature. In this work we have used different measurement techniques, such as angle-dependent polarimetric spectral directional hemispherical reflectance and polarimetric imaging.
Multispectral gonioreflectometer facility for directional reflectance measurements and its use on materials and paints
Marcos López Martínez, Tim Hartmann
Improvements in remote sensing technologies bring associated challenges in other research and production areas. In the field of coating and material science, there is an increase in the requirements on reflectance properties. For many applications the spectral characterization alone no longer suffices, and more knowledge about the directional reflectance characteristic is necessary. This directional information, whose ideal description is given by the bidirectional reflectance distribution function (BRDF), requires costly and complex measurement facilities called gonioreflectometers in order to extract it. We report on the realization of such a facility at Fraunhofer IOSB. The system consists of three parts: a rotation stage holding the sample, a semi-circular metal arm carrying the illumination source, and a robot arm accommodating the sensor. The facility is highly adaptable to a variety of measurements, being capable of measuring anisotropy and retroreflection, which is of critical importance for several applications. Examples of measurements performed on several metallic paints in use in naval and inland applications are shown, as well as a Lambertian reflectance reference for comparison.
Adaptive camouflage panel in the visible spectral range
Alexander Schwarz, Berndt Bartos, Michael Kunzer, et al.
In this work, an adaptive panel for the visible spectral range is presented. Principal possibilities and basic aspects of adaptive camouflage in the VIS are considered and some details are discussed. The panel consists of modular tiles, each containing several high-power four-color LEDs controlled by a microcontroller and a high-current power supply; each tile is designed to operate autonomously. To control the color and the intensity, several color sensors were integrated into the system. The purpose of the panel is to take on a uniform color that best matches its appearance to a given reference color, where both the panel and the reference color are subject to the same environmental conditions. The panel was not designed, however, to produce different camouflage patterns. The tiles on the surface were covered by a dark plastic plate in order to provide dark and saturated colors and to guarantee a dark appearance in the passive state of the system. As was to be expected, extreme situations like high ambient brightness and direct solar illumination turned out to be particularly challenging. Substantial tests and some modifications were performed to achieve a satisfactorily uniform reproduction of a given reference color. Physical measurements as well as observer tests have been performed to demonstrate the capability of the adaptive system.
Poster Session
Water spray infrared extinction calculation and experimental validation
In recent years, the attenuation characteristics of water sprays with log-normal droplet size distributions in the infrared atmospheric windows have been studied in depth. However, there is no report comparing the infrared transmittance of water spray calculated from Mie scattering with the LNMCM method against experiment, and the calculation error has not been publicly discussed. In this paper, we used Fluent to calculate the droplet concentration of the water spray formed by a square arrangement of four FF-12 nozzles. After the water droplet number density distribution was obtained, the LNMCM method was used to calculate the infrared transmittance of the spray formed by the four nozzles. In parallel, an infrared attenuation test platform was built with four FF-12 nozzles, and an infrared imaging measurement was conducted on the water spray formed by the nozzles and on the test panel shielded by it. The article compares the calculated results with the experimental results. Error analysis shows that the calculated infrared radiation intensity in the 7.5-13 μm band deviates from the experimental value by 3.1%. This paper verifies, by test measurement, the accuracy of the LNMCM method and Fluent in calculating the infrared transmittance of a log-normally distributed water spray, and forms a relatively complete method for calculating the infrared radiation and attenuation of water spray.
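For orientation, the transmittance of a spray with a log-normal droplet size distribution follows a Beer-Lambert law with an extinction coefficient integrated over the size distribution; a minimal sketch is shown below. The constant extinction efficiency is a placeholder for the full Mie calculation of the LNMCM method, and all numbers in the example call are invented, not the FF-12 nozzle data.

import numpy as np

def spray_transmittance(path_length_m, number_density_m3, median_diam_um,
                        sigma_g, q_ext=2.0, n_bins=200):
    """Beer-Lambert transmittance sketch through a water spray with a
    log-normal droplet size distribution. The extinction efficiency q_ext is a
    placeholder (~2 in the large-particle limit); the paper obtains it from
    full Mie calculations (LNMCM) and Fluent-computed droplet concentrations."""
    # Log-normal number distribution over droplet diameter D (in metres).
    d = np.logspace(np.log10(median_diam_um / 20), np.log10(median_diam_um * 20),
                    n_bins) * 1e-6
    ln_sigma = np.log(sigma_g)
    pdf = (np.exp(-0.5 * ((np.log(d) - np.log(median_diam_um * 1e-6)) / ln_sigma) ** 2)
           / (d * ln_sigma * np.sqrt(2 * np.pi)))
    pdf /= np.trapz(pdf, d)                         # normalise numerically
    # Extinction coefficient: integral of N(D) * Qext * pi D^2 / 4 over D.
    beta_ext = number_density_m3 * np.trapz(pdf * q_ext * np.pi * d**2 / 4.0, d)
    return np.exp(-beta_ext * path_length_m)

# Illustrative call (all values are placeholders, not the FF-12 nozzle data).
print(spray_transmittance(path_length_m=2.0, number_density_m3=1e8,
                          median_diam_um=100.0, sigma_g=1.6))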
Autoencoder versus pre-trained CNN networks: deep-features applied to accelerate computationally expensive object detection in real-time video streams
Traditional event detection from video frames is based on batch or offline algorithms: it is assumed that a single event is present within each video, and videos are processed, typically via a pre-processing algorithm, which requires enormous amounts of computation and takes a lot of CPU time to complete the task. While this can be suitable for tasks that have specified training and testing phases where time is not critical, it is entirely unacceptable for some real-world applications that require prompt, real-time event interpretation. With the recent success of using multiple models for learning features, such as generative adversarial autoencoders (GANs), we propose a two-model approach for real-time detection. Much as GANs learn a generative model of the dataset and are further optimized by a discriminator that learns per-sample differences between generated images, the proposed architecture uses a pre-trained model built on a large dataset to boost weakly labeled instances, in parallel with deep layers for the small aerial targets, at a fraction of the computation time for training and detection while maintaining high accuracy. We emphasize previous work on unsupervised learning due to the overheads of training labeled data in the sensor domain.
New developments in thermal targets
Pawel Hlosta, Grzegorz Polak, Waldemar Swiderski, et al.
Part of basic training for every soldier is firearms training, during which soldiers learn to master the principles of firearm operation, proper posture, and correct use of weapons including constructing and servicing the weapon. The main objective of this training is to improve their skills with small arms using different targets in different weather conditions. A particularly difficult part of this training is shooting at night. In night conditions, shooting is carried out using optoelectronic sights: night vision and thermovision. The principle of operation of a night vision sight is based on the reinforcement of residual visible light. Thermovision sights for imaging need infrared radiation in two basic so-called 3-5 μm and 8-14 μm windows. Therefore, targets used for daytime shooting, visible in the normal visible range, can’t be seen at night using these sights. Of course, these targets could be lit with reflectors of visible light and would then be visible without the use of night sights, but clearly these are not conditions that occur during real military operations. A variety of heated targets are used, but it is easy to damage (cut) them while shooting, especially the power cables used in their construction. As a result, the target immediately stops working. A consortium consisting of MIAT and OPTIMUM undertook the development of new target solutions, whose utility parameters will be much better than previously used targets. As a result of this project, three types of non-heated targets for both night and daytime shooting, and a heated target were developed. This paper presents both the concept of these targets and testing results of their models.