- Front Matter: Volume 9649
- Active Sensing
- Passive Sensing and Processing
- Multisensor Systems
- Maritime Sensing
- Processing of Electro-Optical Data
- Emerging Technologies
- Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing
- Poster Session
Front Matter: Volume 9649
This PDF file contains the front matter associated with SPIE Proceedings Volume 9649, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Active Sensing
SWIR laser gated-viewing at Fraunhofer IOSB
This paper reviews the work that has been done at Fraunhofer IOSB (and its predecessor institutes) in the past ten years
in the area of laser gated-viewing (GV) in the short-wavelength infrared (SWIR) band. Experimental system
demonstrators in various configurations have been built up in order to show the potential for different applications and to
investigate specific topics. The wavelength of the pulsed illumination laser is around 1.57 μm and lies in the invisible,
retina-safe region, which permits much higher pulse energies with respect to eye safety than wavelengths in the visible
or near-infrared band. All systems are built around gated Intevac LIVAR® cameras based on EBCCD/EBCMOS
detectors sensitive in the SWIR band. This review comprises military and civilian applications in the maritime and land
domains, in particular vision enhancement in bad visibility, long-range applications, silhouette imaging, 3-D imaging by
sliding gates and the slope method, bi-static GV imaging and looking through windows. In addition, theoretical studies that
were conducted – for example estimating 3-D accuracy or modelling range performance – are presented. Finally, an
outlook for future work in the area of SWIR laser GV at Fraunhofer IOSB is given.
Accuracy evaluation of 3D lidar data from small UAV
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution
and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor
on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and
recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over
the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing
parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the
accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height
accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E
lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point
cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with
lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the
navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch,
roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based
(microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved
accuracy compared to processing based solely on INS data.
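The local surface smoothness measure used as an accuracy metric above can be illustrated with a small sketch (an illustrative reconstruction, not the authors' code): fit a plane to points sampled from a nominally planar surface and take the RMS point-to-plane residual.

```python
import numpy as np

def plane_rms_residual(points):
    """RMS distance of 3D points to their least-squares plane (via SVD).
    A small residual on a known planar surface (e.g. a road or wall)
    indicates a locally smooth, internally consistent point cloud."""
    centered = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # normal of the best-fit plane through the centroid.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    residuals = centered @ vt[-1]          # signed point-to-plane distances
    return float(np.sqrt(np.mean(residuals ** 2)))

# Noisy samples of the plane z = 0, with 2 cm vertical noise:
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(500, 3))
pts[:, 2] = 0.02 * rng.standard_normal(500)
print(plane_rms_residual(pts))             # ≈ 0.02 m
```

On real data the residual mixes sensor noise with navigation error, which is why the paper relates it to the processing and platform parameters.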
Pulsed, tunable, single-frequency OP-GaAs OPO for the standoff detection of hazardous chemicals in the longwave infrared
We present our results on the first nanosecond single-frequency optical parametric oscillator (OPO) emitting in the
longwave infrared. It is based on orientation-patterned GaAs (OP-GaAs), and can be pumped by a pulsed single-frequency
Tm:YAP microlaser thanks to its low oscillation threshold of 10 μJ. Stable single-longitudinal-mode emission
of the OPO is obtained owing to Vernier spectral filtering provided by its nested cavity OPO (NesCOPO) scheme.
Crystal temperature tuning covers the 10.3-10.9 μm range with a single quasi-phase-matching period of 72.6 μm. Short-range
standoff detection of ammonia vapor around 10.4 μm is performed with this source. We believe that this
achievement paves the way to differential absorption lidars in the LWIR with increased robustness and reduced footprint.
Reconstruction of time-correlated single-photon counting range profiles of moving objects
Time-correlated single-photon counting (TCSPC) is a laser radar technique that can provide range profiling with sub-centimetre
range resolution. The method relies on accurate time measurements between a laser pulse sync signal and the
registration of a single-photon detection of photons reflected from an object. The measurement is performed multiple
times and a histogram of arrival times is computed to gain information about surfaces at different distances within the
field of view of the laser radar. TCSPC is a statistical method that requires an integration time, and therefore the range
profile of a non-stationary object (target) will be corrupted. However, by dividing the measurement into time intervals
much shorter than the total acquisition time and cross-correlating the histogram from each time interval, it is possible
to calculate how the target has moved relative to the first time interval. The distance as a function of time was fitted to a
polynomial function. This result was used to calculate a distance correction for every single detection event, and the
equivalent stationary histogram was reconstructed. Series of measurements on objects with constant or non-linear
velocities up to 0.5 m/s were performed and compared with stationary measurements. The results show that it is possible
to reconstruct range profiles of moving objects with this technique. Reconstruction of the signal requires no prior
information of the original range profile and the instantaneous and average velocities of the object can be calculated.
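The core of the reconstruction, estimating the inter-interval range shift by cross-correlating histograms, can be sketched as follows (an illustrative toy example; the bin width and profile shape are invented):

```python
import numpy as np

def histogram_shift(h_ref, h, bin_width):
    """Estimate the range shift of histogram h relative to h_ref from the
    peak of their cross-correlation (result in range units)."""
    corr = np.correlate(h - h.mean(), h_ref - h_ref.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(h_ref) - 1)
    return lag * bin_width

# Toy range profiles: one surface return, shifted by 4 bins between the
# first and a later time interval (bin width 5 mm).
bins = np.arange(200)
h_ref = np.exp(-0.5 * ((bins - 80) / 3.0) ** 2)
h_mov = np.exp(-0.5 * ((bins - 84) / 3.0) ** 2)
print(histogram_shift(h_ref, h_mov, bin_width=0.005))  # → 0.02 (metres)
```

Fitting these shifts against time with a polynomial then yields the per-event distance correction described above.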
Lidar measurement as support to the ocular hazard distance calculation using atmospheric attenuation
The reduction of the laser hazard distance range using atmospheric attenuation has been tested with series of lidar
measurements accomplished at the Vidsel Test Range, Vidsel, Sweden. The objective was to find situations with low
level of aerosol backscatter during this campaign, implying a low extinction coefficient, since the lowest
atmospheric attenuation gives the largest ocular hazard distances.
The work included building a ground based backscatter lidar, performing a series of measurements and analyzing the
results. The measurements were performed during the period June to November, 2014.
The results of the lidar measurements showed, on several occasions, very low atmospheric attenuation as a function of height
up to an altitude of at least 10 km. The lowest aerosol backscatter coefficient that can be measured with this
instrument is less than 0.3·10⁻⁷ m⁻¹sr⁻¹. Assuming an aerosol lidar ratio between 30 and 100 sr, this leads to an aerosol
extinction coefficient of about 0.9-3·10⁻⁶ m⁻¹.
Using as an example a designator laser with wavelength 1064 nm, power 0.180 W, pulse length 15 ns, PRF 11.5 Hz,
exposure time of 10 s and beam divergence of 0.08 mrad, it will have a NOHD of 48 km. With the measured aerosol
attenuation, and assuming a molecular extinction coefficient of 5·10⁻⁶ m⁻¹ (calculated using MODTRAN (Ontar
Corp.) assuming no aerosol), the laser hazard distance will be reduced by 51-58%, depending on the lidar ratio
assumption.
The conclusion from the work is that reducing the laser hazard distance within NOHD calculations by means of
atmospheric attenuation is possible, but it should be combined with measurements of the attenuation.
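The underlying calculation can be sketched numerically: compare the per-pulse radiant exposure of a diverging, atmospherically attenuated beam with an MPE threshold and solve for the range where they cross. The MPE value below is an illustrative placeholder chosen so that the unattenuated distance is of the same order as the quoted 48 km NOHD; it is not the standards value for these laser parameters.

```python
import math

def hazard_distance(q_joules, divergence, mpe, mu=0.0):
    """Range beyond which the per-pulse radiant exposure of a diverging
    beam with atmospheric extinction mu falls below the MPE.
    Solved by bisection, since with mu > 0 the equation is transcendental."""
    def exposure(r):
        d = divergence * r                     # beam diameter at range r
        return 4.0 * q_joules * math.exp(-mu * r) / (math.pi * d * d)
    lo, hi = 1.0, 1e6                          # search bracket in metres
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if exposure(mid) > mpe:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Laser parameters from the abstract; the MPE is a hypothetical placeholder.
q = 0.180 / 11.5                               # pulse energy: power / PRF
mpe = 1.4e-3                                   # J/m^2, illustrative only
nohd_vacuum = hazard_distance(q, 0.08e-3, mpe)
nohd_atmos = hazard_distance(q, 0.08e-3, mpe, mu=5e-6)
print(round(nohd_vacuum / 1e3, 1), round(nohd_atmos / 1e3, 1))  # km; attenuated is shorter
```

The actual reductions reported above follow from the full MPE and multi-pulse treatment, which this sketch does not reproduce.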
Passive Sensing and Processing
Evaluating uniformity of IR reference sources
Infrared reference sources such as blackbodies are used to calibrate and test IR sensors and cameras. Applications requiring high thermal uniformity over the emissive surface are becoming more and more frequent. Among these applications are non-uniformity correction of infrared cameras focused at short distance and simultaneous calibration of a set of sensors facing a large-area blackbody. Meeting these demanding applications requires accurately measuring the thermal radiation of each point of the emissive surface of the reference source. The use of an infrared camera for this purpose turns out to be inefficient, since the uniformity of response of such a camera is usually worse than the uniformity of the source to be measured. Consequently, HGH has developed a testing bench for accurate measurement of the uniformity of infrared sources, based on a low-noise radiometer mounted on translating stages and using an exclusive drift correction method. This bench delivers a reliable thermal map of any kind of infrared reference source.
An approach to select the appropriate image fusion algorithm for night vision systems
Gabriele Schwan,
Norbert Scherer-Negenborn
For many years image fusion has been an important subject in the image processing community. The purpose of image
fusion is to carry over the relevant information from two or more images into a single result image. In the past many
fusion algorithms were developed and published. Some attempts were made to assess the results of several fusion
algorithms automatically, with the objective of obtaining the output best suited for human observers. However, it has been
shown that such objective machine assessment does not always correlate with the observer's subjective perception. In this paper a
novel approach is presented, which selects the appropriate fusion algorithm to receive the best image enhancement
results for human observers. Assessment of the fusion algorithms’ results was done based on the local contrasts. Fusion
algorithms are used on a representative data set covering different use cases and image contents. The fusion results for
selected data are judged subjectively by human observers. Then the assessment algorithm with the best fit to the
visual perception is used to select the best fusion algorithm for comparable scenarios.
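A minimal sketch of such a contrast-based selection (the actual assessment algorithm in the paper is more elaborate): score each fusion result by its mean local RMS contrast and pick the highest-scoring one.

```python
import numpy as np

def mean_local_contrast(img, block=8):
    """Average RMS contrast over non-overlapping blocks: a simple proxy
    for a local-contrast quality criterion."""
    h, w = (np.array(img.shape) // block) * block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    return float(blocks.std(axis=(1, 3)).mean())

def select_fusion(results):
    """Pick the fusion result (name -> image) with the highest score."""
    return max(results, key=lambda name: mean_local_contrast(results[name]))

rng = np.random.default_rng(1)
base = rng.uniform(0.0, 1.0, (64, 64))
candidates = {"low_contrast": 0.2 * base, "high_contrast": base}
print(select_fusion(candidates))  # → high_contrast
```

The point of the paper is precisely that such a machine score must first be validated against human judgements before it can stand in for them.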
Field trials for development of multiband signal processing solutions to detect disturbed soil
This paper briefly describes a field trial designed to give a realistic data set on a road section containing areas with
disturbed soil due to buried IEDs. During a time-span of a couple of weeks, the road was repeatedly imaged using a
multi-band sensor system with spectral coverage from visual to LWIR. The field trial was conducted to support a long
term research initiative aiming at using EO sensors and sensor fusion to detect areas of disturbed soil.
Samples from the collected data set are presented in the paper, together with an investigation of basic statistical
properties of the data. We conclude that, upon visual inspection, it is fully possible to discover areas that have been
disturbed using visual and/or IR sensors. Reviewing the statistical analysis, we also conclude that
samples taken from both disturbed and undisturbed soil have well-definable statistical distributions for all spectral bands.
We explore statistical tests to discriminate between different samples, with positive indications that disturbed and
undisturbed soil can potentially be distinguished using statistical methods.
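The statistical tests are not named in the abstract; as one plausible illustration, a two-sample Kolmogorov-Smirnov statistic separates samples drawn from shifted distributions (all data below are synthetic):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two samples."""
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(2)
undisturbed = rng.normal(300.0, 2.0, 1000)   # synthetic band radiances
disturbed = rng.normal(303.0, 2.5, 1000)     # shifted, widened distribution
same_scene = rng.normal(300.0, 2.0, 1000)

# Disturbed vs. undisturbed separates far more strongly than two
# independent draws from the same (undisturbed) distribution:
print(ks_statistic(undisturbed, disturbed) > ks_statistic(undisturbed, same_scene))  # → True
```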
Classification of vegetation types in military region
In the decision-making process regarding planning and execution of military operations, the terrain is a determining factor.
Aerial photographs are a source of vital information for the success of an operation in a hostile region, namely when
cartographic information behind enemy lines is scarce or non-existent. The objective of the present work is the development
of a tool capable of processing aerial photos. The methodology implemented starts with feature extraction, followed by
the application of an automatic feature selector. The next step, using the k-fold cross-validation technique, estimates
the input parameters for the following classifiers: Sparse Multinomial Logistic Regression (SMLR), K Nearest Neighbor
(KNN), Linear Classifier using Principal Component Expansion on the Joint Data (PCLDC) and Multi-Class Support
Vector Machine (MSVM). These classifiers were used in two different studies with distinct objectives: discrimination of
vegetation density and identification of the main vegetation components. It was found that the best classifier in the first
approach is Sparse Multinomial Logistic Regression (SMLR). In the second approach, the implemented
methodology applied to high-resolution images showed that the best performance was achieved by the KNN classifier and
PCLDC. Comparing the two approaches reveals a multiscale issue, in which for different resolutions the best solution to
the problem requires different classifiers and the extraction of different features.
Hue-preserving local contrast enhancement and illumination compensation for outdoor color images
Real-time applications in the field of security and defense use dynamic color camera systems to gain a better
understanding of outdoor scenes. To enhance details and improve the visibility in images it is required to perform
local image processing, and to reduce lightness and color inconsistencies between images acquired under
different illumination conditions it is required to compensate illumination effects. We introduce an automatic
hue-preserving local contrast enhancement and illumination compensation approach for outdoor color images.
Our approach is based on a shadow-weighted intensity-based Retinex model which enhances details and compensates for the illumination effect on the lightness of an image. The Retinex model exploits information from a shadow
detection approach to reduce lightness halo artifacts on shadow boundaries. We employ a hue-preserving color
transformation to obtain a color image based on the original color information. To reduce color inconsistencies
between images acquired under different illumination conditions we process the saturation using a scaling function. The approach has been successfully applied to static and dynamic color image sequences of outdoor scenes
and an experimental comparison with previous Retinex-based approaches has been carried out.
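As background for the approach, a classic single-scale Retinex (on which the shadow-weighted model builds) can be sketched as the log ratio of the intensity image to a Gaussian estimate of the illumination; the shadow weighting and hue-preserving color steps of the paper are not reproduced here.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur used as the illumination estimate."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blur)

def single_scale_retinex(intensity, sigma=3.0, eps=1e-6):
    """Classic single-scale Retinex: log ratio of the image to a Gaussian
    illumination estimate. The shadow-weighted variant of the paper
    additionally modulates this with a shadow mask (not shown here)."""
    return np.log(intensity + eps) - np.log(gaussian_blur(intensity, sigma) + eps)

# Sanity check: a uniformly lit, detail-free image yields ~zero response.
flat = np.ones((32, 32))
print(abs(single_scale_retinex(flat)[16, 16]) < 1e-9)  # → True
```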
Multisensor Systems
3D sensing and imaging for UAVs
This paper summarizes on-going work on 3D sensing and imaging with laser sensors carried by unmanned aerial vehicles (UAVs). We study sensor concepts, UAVs suitable for carrying the sensors, and signal processing for mapping and target detection applications. We also perform user studies together with the Swedish armed forces to evaluate usage in their mission cycle, and conduct interviews to clarify how to present data.
Two ladar sensor concepts for mounting on UAVs are studied. The discussion is based on known performance of commercial ladar systems today and predicted performance in future UAV applications. The small UAV is equipped with a short-range scanning ladar. The system is aimed at quick situational analysis of small areas and at documentation of a situation. The large UAV is equipped with a high-performing photon-counting ladar with a matrix detector. Its purpose is to support large-area surveillance, intelligence and mapping operations. Based on these sensors and their performance, signal and image processing support for data analysis is analyzed. Generated data amounts are estimated, and demands on data storage capacity and data transfer are analyzed.
We have tested the usage of 3D mapping together with military rangers, both in the planning phase and as a last-minute intelligence update of the target. Feedback from these tests will be presented. We are performing interviews with various military professions to get a better understanding of how 3D data are used and interpreted. We discuss approaches for how to present data from a 3D imaging sensor to a user.
Comparison of high speed imaging technique to laser vibrometry for detection of vibration information from objects
The development of camera technology in recent years has made high speed imaging a reliable method in vibration and dynamic measurements. The passive recovery of vibration information from high speed video recordings was reported in several recent papers. A highly developed technique, involving decomposition of the input video into spatial subframes to compute local motion signals, allowed an accurate sound reconstruction. A simpler technique based on image matching for vibration measurement was also reported as efficient in extracting audio information from a silent high speed video. In this paper we investigate and discuss the sensitivity and the limitations of the high speed imaging technique for vibration detection in comparison to the well-established Doppler vibrometry technique. Experiments on the extension of the high speed imaging method to longer range applications are presented.
A proposal for combining mapping, localization and target recognition
Simultaneous localization and mapping (SLAM) is a well-known positioning approach in GPS-denied environments
such as urban canyons and inside buildings. Autonomous/aided target detection and recognition (ATR) is commonly
used in military applications to detect threats and targets in outdoor environments. This paper presents approaches to
combine SLAM with ATR in ways that compensate for the drawbacks in each method. The methods use physical objects
that are recognizable by ATR as unambiguous features in SLAM, while SLAM provides the ATR with better position
estimates. Landmarks in the form of 3D point features based on normal aligned radial features (NARF) are used in
conjunction with identified objects and 3D object models that replace landmarks when possible. This leads to a more
compact map representation with fewer landmarks, which partly compensates for the introduced cost of the ATR.
We analyze three approaches to combining SLAM and 3D data: point-point matching ignoring NARF features, point-point
matching using the set of points selected by NARF feature analysis, and matching of NARF features using
nearest-neighbor analysis. The first two approaches are similar to the common iterative closest point (ICP) algorithm. We
propose an algorithm that combines EKF-SLAM and ATR based on rectangle estimation. The intended application is to
improve the positioning of a first responder moving through an indoor environment, where the map offers localization
and simultaneously helps locate people, furniture and potentially dangerous objects such as gas canisters.
Maritime Sensing
Active-imaging-based underwater navigation
David Monnin,
Gwenaël Schmitt,
Colin Fischer,
et al.
Global navigation satellite systems (GNSS) are widely used for the localization and navigation of unmanned and remotely operated vehicles (ROV). In contrast to ground or aerial vehicles, GNSS cannot be employed for autonomous underwater vehicles (AUV) without a communication link to the water surface, since satellite signals cannot be received underwater. However, underwater autonomous navigation is still possible using self-localization methods, which determine the relative location of an AUV with respect to a reference location using inertial measurement units (IMU), depth sensors and sometimes even radar or sonar imaging. As an alternative or a complementary solution to common underwater reckoning techniques, we present the first results of a feasibility study of an active-imaging-based localization method which uses a range-gated active-imaging system and can yield radiometric and odometric information even in turbid water.
Passive and active EO sensing of small surface vessels
The detection and classification of small surface targets at long ranges is a growing need for naval security. This paper
will present an overview of a measurement campaign which took place in the Baltic Sea in November 2014. The purpose
was to test active and passive EO sensors (10 different types) for the detection, tracking and identification of small sea
targets. The passive sensors covered the visual, SWIR, MWIR and LWIR regions. Active sensors operating at 1.5
μm collected data in 1D, 2D and 3D modes. Supplementary sensors included a weather station, a scintillometer, as well
as sensors for positioning and attitude determination of the boats.
Three boats in the 4-9 m class were used as targets. After registration of the boats at close range, they were sent out
to distances of 5-7 km from the sensor site. At the different ranges the target boats were directed to have different aspect angles
relative to the direction of observation.
Staff from Fraunhofer IOSB in Germany and from Selex (through DSTL) in the UK took part in the tests, alongside FOI,
which arranged the trials. A summary of the trial and examples of data and imagery will be presented.
Experiences from long range passive and active imaging
We present algorithm evaluations for ATR of small sea vessels. The targets are at km distance from the sensors, which
means that the algorithms have to deal with images affected by turbulence and mirage phenomena. We evaluate
previously developed algorithms for registration of 3D-generating laser radar data. The evaluations indicate that our
probabilistic registration method provides some robustness to turbulence- and mirage-induced uncertainties.
We also assess methods for target classification and target recognition on these new 3D data.
An algorithm for detecting moving vessels in infrared image sequences is presented; it is based on optical flow
estimation. Detection of a moving target with an unknown spectral signature in a maritime environment is a challenging
problem due to camera motion, background clutter, turbulence and the presence of mirage. First, the optical flow caused
by the camera motion is eliminated by estimating the global flow in the image. Second, connected regions containing
significant motion that differs from the camera motion are extracted. It is assumed that motion caused by a moving vessel is
more temporally stable than motion caused by mirage or turbulence. Furthermore, it is assumed that the motion caused
by the vessel is more homogeneous with respect to both magnitude and orientation than motion caused by mirage and
turbulence. Sufficiently large connected regions with a flow of acceptable magnitude and orientation are considered
target regions. The method is evaluated on newly collected sequences of SWIR and MWIR images, with varying targets,
target ranges and background clutter.
Finally we discuss a concept for combining passive and active imaging in an ATR process. The main steps are passive
imaging for target detection, active imaging for target/background segmentation and a fusion of passive and active
imaging for target recognition.
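The flow-based detection steps can be sketched as follows (synthetic flow field; the temporal-stability and homogeneity checks described above are omitted): estimate the global camera flow as the median, subtract it, and threshold the residual magnitude.

```python
import numpy as np

def motion_mask(flow, mag_thresh=1.0):
    """Remove global (camera) motion, estimated as the median flow, and
    keep pixels whose residual flow magnitude exceeds a threshold.
    Grouping survivors into connected regions and checking their temporal
    stability would be the subsequent steps."""
    global_flow = np.median(flow.reshape(-1, 2), axis=0)
    residual = flow - global_flow
    magnitude = np.hypot(residual[..., 0], residual[..., 1])
    return magnitude > mag_thresh

# Synthetic flow field: uniform 2 px/frame camera pan, plus a 10x10
# "vessel" patch moving an extra 3 px/frame to the right.
flow = np.zeros((64, 64, 2))
flow[..., 0] = 2.0
flow[20:30, 40:50, 0] += 3.0
mask = motion_mask(flow)
print(mask.sum())  # → 100 pixels flagged
```

The median is a simple robust stand-in for the global flow estimate; mirage and turbulence produce residuals too, which is why the additional stability checks are needed on real imagery.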
Maritime target identification in gated viewing imagery
The growing interest in unmanned surface vehicles, accident avoidance for naval vessels and automated maritime surveillance leads to a growing need for automatic detection, classification and pose estimation of maritime objects at medium and long ranges. Laser radar imagery is a well-proven tool for near to medium range, but up to now, for greater distances, neither the sensor range nor the sensor resolution has been satisfying. As a result of these limitations of laser radar imagery, the potential of laser-illuminated gated viewing for automated classification and pose estimation was investigated. The paper presents new techniques for segmentation, pose estimation and model-based identification of naval vessels in gated viewing imagery, in comparison with the corresponding results for long-range data acquired with a focal plane array laser radar system. The pose estimation in the gated viewing data is directly connected with the model-based identification, which makes use of the outline of the object. By setting a sufficiently narrow gate, the distance gap between the upper part of the ship and the background leads to an automatic segmentation. By setting the gate, the distance to the object is roughly known. With this distance and the imaging properties of the camera, the width of the object perpendicular to the line of sight can be calculated. For each ship in the model library, a set of possible 2D appearances at the known distance is calculated and the resulting contours are compared with the measured 2D outline. The result is a match error for each reasonable orientation of each model in the library. The result gained from the gated viewing data is compared with the results of target identification by laser radar imagery of the same maritime objects.
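The width computation mentioned above follows from simple geometry; a sketch with hypothetical numbers (the IFOV and pixel count are invented, not taken from the paper):

```python
import math

def object_width(distance_m, n_pixels, ifov_rad):
    """Width perpendicular to the line of sight, from the gate-derived
    distance, the object's extent in pixels and the per-pixel IFOV."""
    return 2.0 * distance_m * math.tan(0.5 * n_pixels * ifov_rad)

# Hypothetical values: a ship spanning 120 pixels at 4 km range with a
# 25 microradian per-pixel IFOV.
print(round(object_width(4000.0, 120, 25e-6), 1))  # → 12.0 (metres)
```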
Processing of Electro-Optical Data
Methodology for the conception of speckle reduction elements in the case of short pulse illumination
One of the most efficient ways to decrease the speckle contrast in the field of laser illumination is to increase the
spatial diversity of coherent laser sources. For very short laser pulses such as those required for flash laser
imaging, the spatial diversity should take place instantaneously and no time averaging effect can be used. The
spatial diversity is realized by sampling the laser beam into m beamlets with incrementally increased optical path lengths.
The path length difference has to be greater than or equal to the coherence length of the laser beam. In this case, the beamlets
are no longer able to interfere with each other. According to Goodman's theory of speckle
reduction, the speckle contrast is then reduced by a factor of 1/√m. Unfortunately, in the case of multimode
lasers, the number of uncorrelated beamlets is not infinite but is limited by a periodicity function resulting from
the laser resonator length itself. The speckle reduction possibility is therefore limited and is directly linked to
each laser source where the coherence length and cavity length are defined.
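Goodman's 1/√m contrast reduction is easy to verify in simulation: the intensity of fully developed speckle is exponentially distributed with contrast 1, and summing m uncorrelated patterns reduces the contrast to 1/√m.

```python
import numpy as np

rng = np.random.default_rng(3)

def speckle_contrast(intensity):
    """Speckle contrast C = std(I) / mean(I)."""
    return float(intensity.std() / intensity.mean())

def speckle_pattern(n=100_000):
    # Fully developed speckle: intensity is exponentially distributed.
    return rng.exponential(1.0, n)

single = speckle_contrast(speckle_pattern())
summed = speckle_contrast(sum(speckle_pattern() for _ in range(16)))
print(round(single, 2), round(summed, 2))  # ≈ 1.0 and ≈ 1/sqrt(16) = 0.25
```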
In this work we present a methodology to determine experimentally the optical path length difference as well as
the number of beamlets for de-speckling a laser source. An experimental realization is presented in which both the
coherence length and the periodicity function are measured with a Michelson interferometer, analyzing only the speckle
contrast of the two beams from its arms. For the validation of the method, the chosen laser source is
a single-emitter 660 nm laser diode. Two cylindrical steppers made of diamond-turned PMMA have been
realized. Both elements yield similar results, in accordance with the theory of spatial
diversity. The speckle contrast could be reduced from about 10% to a value close to 4%. These values confirm
and validate the methodology presented in this work.
Steppers can also be a promising solution for the reduction of interference fringes which appear when using a
lightpipe in a laser illuminator design.
Automatic structural matching of 3D image data
A new image matching technique is described. It is implemented as an object-independent hierarchical structural
juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural
matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of
two sets of structural elements. The algorithm was initially developed for 2D images such as aerospace
photographs, and it turned out to be sufficiently robust and reliable to successfully match pictures of natural
landscapes taken in differing seasons, from differing aspect angles and by differing sensors (visible optical, IR and SAR
pictures, as well as depth maps and geographical vector-type maps). In the version reported here, the
algorithm has been enhanced by additionally using the third spatial coordinate of the observed points on object
surfaces. Thus, it is now capable of matching images of 3D scenes in tasks of automatic navigation of extremely
low-flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and
matching of images are described, and examples of image matching are presented.
Results of implementation of the dynamic laser goniometer for non-contact measurement of angular movement
The report presents results of implementation of the dynamic laser goniometer in the mode of non-contact measurements
of an object's angular position. One of the obtained results concerns the determination of the time dependence of the
scanning mirror's angular position. Another is the determination of the parameters of a test table's oscillatory
movement. The obtained results show that the use of the LG makes it possible to calibrate various kinds of
test beds performing angular oscillations or angular movement following some other law.
Emerging Technologies
Modern fibre-optic coherent lidars for remote sensing
Chris Hill
This paper surveys some growth areas in optical sensing that exploit near-IR coherent laser sources and fibre-optic
hardware from the telecoms industry. Advances in component availability and performance are promising benefits
in several military and commercial applications. Previous work has emphasised Doppler wind speed measurements and
wind / turbulence profiling for air safety, with recent sharp increases in numbers of lidar units sold and installed, and with
wider recognition that different lidar / radar wavebands can and should complement each other. These advances are also
enabling fields such as micro-Doppler measurement of sub-wavelength vibrations and acoustic waves, including
non-line-of-sight acoustic sensing in challenging environments.
To shed light on these different applications we review some fundamentals of coherent detection, measurement
probe volume, and parameter estimation - starting with familiar similarities and differences between "radar" and "laser
radar". The consequences of changing the operating wavelength by three or four orders of magnitude – from millimetric
or centimetric radar to a typical fibre-optic lidar working near 1.5 μm - need regular review, partly because of continuing
advances in telecoms technology and computing.
Modern fibre-optic lidars tend to be less complicated, more reliable, and cheaper than their predecessors; and they
more closely obey the textbook principles of easily adjusted and aligned Gaussian beams. The behaviours of noise and
signals, and the appropriate processing strategies, are, as expected, different for the different wavelengths and applications.
For example, the effective probe volumes are easily varied (e.g. by translating a fibre facet) through six or eight orders of
magnitude; as the average number of contributing scatterers varies, from <<1 through ~1 to >>1, we should review any
assumptions about "many" scatterers and Gaussian statistics.
Finally, some much older but still relevant scientific work (by A G Bell, E H Armstrong and their colleagues) is
recalled, in the context of remote sensing of acoustic vibrations.
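One of the fundamentals reviewed is the Doppler shift measured by a coherent lidar; for a monostatic system it is f_D = 2v/λ, e.g.:

```python
def doppler_shift(velocity_mps, wavelength_m=1.55e-6):
    """Doppler frequency shift of a monostatic coherent lidar:
    f_D = 2 v / lambda (the factor 2 comes from the round trip)."""
    return 2.0 * velocity_mps / wavelength_m

# A 1 m/s radial velocity at 1.55 um gives a shift of about 1.29 MHz:
print(round(doppler_shift(1.0) / 1e6, 2))  # → 1.29
```

The same formula at a centimetric radar wavelength gives a shift four orders of magnitude smaller, which is one reason the wavelength trade-offs discussed above need regular review.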
AlGaInN laser diode technology and systems for defence and security applications
AlGaInN laser diodes are an emerging technology for defence and security applications such as underwater communications and sensing, atomic clocks and quantum information. The AlGaInN material system allows laser diodes to be fabricated over a very wide range of wavelengths, from the UV (~380 nm) to the visible (~530 nm), by tuning the indium content of the GaInN quantum well. AlGaInN laser diode technology is thus a key enabler for the development of new disruptive system-level applications in displays, telecoms, defence and other industries. Ridge waveguide laser diodes are fabricated to achieve single-mode operation with optical powers up to 100 mW in the 400-440 nm wavelength range with high reliability. Visible free-space and underwater communication at frequencies up to 2.5 GHz is reported using a directly modulated 422 nm GaN laser diode. Low-defectivity and highly uniform GaN substrates allow arrays and bars to be fabricated. High-power operation of AlGaInN laser bars with up to 20 emitters has been demonstrated at optical powers up to 4 W in a CS package with a common-contact configuration. An alternative package configuration for AlGaInN laser arrays makes each laser individually addressable, allowing complex free-space or optical-fibre system integration with a very small form factor.
Microoptical gyros on the base of passive ring cavities
This review paper considers the state of the art in small-size optical gyros based on passive ring-shaped optical cavities (resonators).
Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing
Compact full-motion video hyperspectral cameras: development, image processing, and applications
A. V. Kanaev
The emergence of spectral pixel-level color filters has enabled the development of hyperspectral Full Motion Video (FMV)
sensors operating at visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens up broad
possibilities for military and industrial use. Indeed, such cameras can classify materials as
well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the
benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant
computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic
array cameras. The first is spectral computation that provides essential spectral content analysis e.g. detection or
classification. The second is presentation of the video to an operator that can offer the best display of the content
depending on the performed task e.g. providing spatial resolution enhancement or color coding of the spectral analysis.
These processing streams can be executed in parallel, or they can use each other's results. While spectral analysis
algorithms have been developed extensively, demosaicking of more than three equally sampled spectral bands
has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and
show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band
SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the
demosaicking results to those of multi-frame super-resolution as well as of combined multi-frame and multi-band
processing.
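The sampling problem behind such demosaicking can be seen in a minimal baseline (a sketch that assumes a 3x3 superpixel mosaic for the 9 bands; the paper's multi-band super-resolution approach is considerably more sophisticated): each band is observed only on a sparse grid and must be interpolated back to full resolution.

```python
import numpy as np

def demosaic_nn(raw, pattern=3):
    """Nearest-neighbour demosaicking of a pattern x pattern band mosaic.

    raw : 2D array whose pixel (i, j) samples band (i % pattern) * pattern
    + (j % pattern).  Returns an (H, W, pattern**2) cube in which each
    band is filled by replicating its sparse samples -- a crude baseline
    against which super-resolution methods can be judged."""
    H, W = raw.shape
    bands = []
    for r in range(pattern):
        for c in range(pattern):
            sparse = raw[r::pattern, c::pattern]      # this band's samples
            full = np.repeat(np.repeat(sparse, pattern, 0),
                             pattern, 1)[:H, :W]       # replicate to full size
            bands.append(full)
    return np.stack(bands, axis=-1)
```

Each of the nine bands is sampled at only 1/9 of the pixels, which is the spatial-versus-spectral trade-off the abstract refers to.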
Airborne thermal infrared hyperspectral imaging of buried objects
Characterization of hazardous lands using ground-based techniques can be very challenging. For this reason, airborne
surveys are often preferred. Thermal infrared imaging represents an interesting approach because surveys can be
carried out under various illumination conditions and because the presence of buried objects typically modifies the thermal
inertia of their surroundings. In addition, the burial or presence of an object will modify the particle size, texture,
moisture and mineral content of a small region around it. All these parameters may lead to emissivity contrasts which will
make thermal contrast interpretation very challenging. In order to illustrate the potential of airborne thermal infrared
hyperspectral imaging for buried object characterization, various metallic objects were buried in a test site prior to an
airborne survey. Airborne hyperspectral images were recorded using the targeting acquisition mode, a unique feature of
the Telops Hyper-Cam Airborne system, which allows recording of successive maps of the same ground area. Temperature-emissivity
separation (TES) was carried out on the hyperspectral map obtained upon scene averaging. The thermodynamic
temperature map estimated after TES highlights the presence of hot spots within the investigated area. Mineral mapping
was carried out by linear unmixing of the spectral emissivity datacube obtained after TES. The results show how the
combination of thermal information and mineral distribution leads to a better characterization of test sites containing buried
objects.
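The mineral-mapping step can be sketched as pixel-wise linear unmixing of the emissivity datacube against a set of endmember spectra (an illustrative implementation with hypothetical endmembers; the actual Telops processing chain is not reproduced here).

```python
import numpy as np

def unmix(cube, endmembers):
    """Linear spectral unmixing of an emissivity datacube.

    cube : (H, W, B) spectral emissivity; endmembers : (M, B) reference
    spectra (a hypothetical mineral library).  Solves, per pixel, the
    least-squares abundances a minimizing ||E^T a - s||, then clips
    negatives and renormalizes to sum to one -- a crude stand-in for a
    fully constrained unmixing algorithm."""
    H, W, B = cube.shape
    S = cube.reshape(-1, B).T                         # (B, N) pixel spectra
    A, *_ = np.linalg.lstsq(endmembers.T, S, rcond=None)  # (M, N) abundances
    A = np.clip(A, 0.0, None)
    A /= A.sum(axis=0, keepdims=True).clip(min=1e-12)
    return A.T.reshape(H, W, -1)
```

The resulting abundance maps are what would be overlaid on the TES temperature map to characterize the site.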
SIBI: A compact hyperspectral camera in the mid-infrared
Recent developments in unmanned aerial vehicles have increased the demand for more and more compact
optical systems. In order to bring solutions to this demand, several infrared systems are being developed at
ONERA such as spectrometers, imaging devices, multispectral and hyperspectral imaging systems. In the field
of compact infrared hyperspectral imaging devices, ONERA and Sagem Défense et Sécurité have collaborated
to develop a prototype called SIBI, which stands for "Spectro-Imageur Birefringent Infrarouge". It is a static
Fourier transform imaging spectrometer which operates in the mid-wavelength infrared spectral range and
uses a birefringent lateral shearing interferometer. Up to now, birefringent interferometers have rarely been
used for hyperspectral imaging in the mid-infrared because of the lack of crystal manufacturers, in contrast
to the visible spectral domain, where the production of uniaxial crystals such as calcite is mastered for various
optical applications. In the following, we present the design and realization of SIBI as well as the first
experimental results.
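The principle of a static Fourier-transform imaging spectrometer can be sketched as follows: each pixel records an interferogram versus optical path difference (OPD), and a Fourier transform recovers the spectrum. This is a textbook illustration; SIBI's calibration, apodization and phase-correction steps are omitted.

```python
import numpy as np

def spectrum_from_interferogram(igram, opd_step):
    """Recover a spectrum from a static FT-spectrometer interferogram.

    igram : interferogram samples versus OPD, sampled every opd_step (cm).
    Returns the wavenumber axis (cm^-1) and the magnitude spectrum."""
    igram = igram - igram.mean()                  # remove the DC pedestal
    spec = np.abs(np.fft.rfft(igram))
    sigma = np.fft.rfftfreq(len(igram), d=opd_step)  # wavenumber axis
    return sigma, spec
```

For a monochromatic source the interferogram is a cosine in OPD, and the recovered spectrum peaks at the source wavenumber; the spectral resolution is set by the maximum OPD sampled.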
Poster Session
Structural and optical properties of TiO2–Al2O3 nanolaminates produced by atomic layer deposition
Structural and optical properties of Al2O3/TiO2 nanolaminates fabricated by atomic layer deposition (ALD) were
investigated. We performed Raman spectroscopy, transmission electron microscopy (TEM), X-Ray reflectivity (XRR),
UV-Vis spectroscopy, and photoluminescence (PL) spectroscopy to characterize the Al2O3/TiO2 nanolaminates. The
main structural and optical parameters of the Al2O3/TiO2 nanolaminates were calculated. It was established that the
band gap energy increases with decreasing layer thickness, owing to the quantum size effect associated with the
reduced nanograin size. It was also shown that an interdiffusion layer exists at the Al2O3/TiO2 interface, which
plays a crucial role in explaining the optical properties of the Al2O3/TiO2 nanolaminates. The correlation between structural and
optical parameters is discussed.
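The reported band-gap widening with decreasing layer thickness is consistent with a simple effective-mass (infinite quantum well) estimate. The sketch below uses an assumed bulk gap of 3.2 eV (anatase TiO2) and an illustrative effective mass; neither value is taken from the paper.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s
ME   = 9.1093837015e-31  # free electron mass, kg
EV   = 1.602176634e-19   # J per eV

def confined_gap(d_nm, eg_bulk_ev=3.2, m_eff=1.0):
    """Effective-mass estimate of the band gap of a layer of thickness d.

    Infinite-well model: Eg(d) = Eg_bulk + hbar^2 pi^2 / (2 m* d^2).
    eg_bulk_ev (anatase TiO2, ~3.2 eV) and m_eff (in units of the free
    electron mass) are illustrative assumptions, not fitted values."""
    d = d_nm * 1e-9
    shift = (HBAR * np.pi / d) ** 2 / (2.0 * m_eff * ME) / EV
    return eg_bulk_ev + shift
```

With these assumptions the confinement shift is roughly 0.09 eV for a 2 nm layer and grows as 1/d^2, reproducing the qualitative trend the abstract reports.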