Proceedings Volume 10796

Electro-Optical Remote Sensing XII


Volume Details

Date Published: 16 November 2018
Contents: 7 Sessions, 26 Papers, 9 Presentations
Conference: SPIE Security + Defence 2018
Volume Number: 10796

Table of Contents

  • Front Matter: Volume 10796
  • Active Sensing I
  • Active Sensing II
  • Active Sensing III
  • Measurement Techniques
  • Signal Processing
  • Poster Session
Front Matter: Volume 10796
This PDF file contains the front matter associated with SPIE Proceedings Volume 10796, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Active Sensing I
Influence of the obscurants and the illumination wavelengths on a range-gated active imaging system performance
F. Christnacher, J.-M. Poyet, E. Bacher, et al.
Range-gated active imaging is a prominent technique for night vision, remote sensing and vision through obstacles (fog, smoke, camouflage netting, etc.). By means of "range gating" (also called "time gating"), the backscattering produced as the illuminating light propagates through scattering environments such as rain, snow, fog, mist, haze or smoke can be eliminated, which leads to a significant increase in the vision range in harsh environments. Surprisingly, although many authors assume that range-gated imaging brings a gain when used in scattering environments, no study has systematically investigated and quantified the real gain over classical imaging systems across different controlled obscurant densities.

We show that the penetration-depth improvement varies drastically with the type of obscurant and with the illumination wavelength: it ranges from more than a factor of 10 for specific smokes down to only a factor of 1.5 for water-droplet-based fog. In this paper, we thoroughly examine the performance enhancement of laser range gating in comparison with a color camera representing human vision. On the one hand, we study the influence of the different types of obscurants and show that they lead to very different results; on the other hand, we examine the influence of the illumination wavelength.

As the global attenuation of an obscurant is the sum of its absorption and its scattering, we also report on experimental results in which we tried to separate the influence of each of these two contributions. To isolate the influence of absorption while keeping scattering constant, we worked with the same type of smoke in different colors; to vary the level of scattering, we kept the particle material constant and varied the particle diameter.
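The gate timing behind the range-gating technique reduces to simple time-of-flight arithmetic. A minimal sketch (ours, not from the paper) of how gate delay and gate width select a depth slice:

```python
# Illustrative sketch: gate timing for a range-gated imager.
C = 299_792_458.0  # speed of light, m/s

def gate_timing(r_min, r_max):
    """Return (delay_s, width_s) so the camera integrates only photons
    whose round-trip time corresponds to ranges in [r_min, r_max] metres.
    Backscatter from nearer scatterers (rain, fog, smoke) returns before
    the gate opens and is therefore rejected."""
    delay = 2.0 * r_min / C            # gate opens when light from r_min returns
    width = 2.0 * (r_max - r_min) / C  # gate stays open across the slice
    return delay, width
```

For a slice from 150 m to 180 m this gives a gate delay of about 1 µs and a gate width of about 200 ns.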
Mitigation of crosstalk effects in multi-LiDAR configurations
Axel L. Diehm, Marcus Hammer, Marcus Hebel, et al.
In this paper, we examine crosstalk effects that can arise in multi-LiDAR configurations, and we show a data-based approach to mitigate these effects. Due to their ability to acquire precise 3D data of the environment, LiDAR-based sensor systems (sensors based on "Light Detection and Ranging", e.g., laser scanners) increasingly find their way into various applications, e.g. in the automotive sector. However, with an increasing number of LiDAR sensors operating in close vicinity, the problem of potential crosstalk between these devices arises. "Crosstalk" denotes the following effect: in a typical LiDAR-based sensor, short laser pulses are emitted into the scene and the distance between sensor and object is derived from the time measured until an "echo" is received. If multiple laser pulses of the same wavelength are emitted at the same time, the detector may not be able to distinguish between correct and false matches of laser pulses and echoes, resulting in erroneous range measurements and 3D points. During operation of our own multi-LiDAR sensor system, we were able to observe crosstalk effects in the acquired data. Having compared different spatial filtering approaches for the elimination of erroneous points in the 3D data, we propose a data-based spatio-temporal filter and show its results, which may be sufficient depending on the application. However, technical solutions are desired for future LiDAR sensors.
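A rough illustration of spatio-temporal filtering in this spirit (our sketch; the paper's actual filter is not specified here): keep a 3D point only if it has enough neighbours that are close in both space and time, since crosstalk returns tend to be isolated.

```python
import math

def spatio_temporal_filter(points, radius=0.5, dt=0.2, min_support=2):
    """Toy spatio-temporal filter: a point (x, y, z, t) is kept only if
    at least `min_support` other points lie within `radius` metres AND
    within `dt` seconds.  Isolated returns -- typical of crosstalk
    pulses -- are removed.  O(n^2) for clarity; a real system would use
    a voxel grid or k-d tree."""
    kept = []
    for i, (x, y, z, t) in enumerate(points):
        support = 0
        for j, (x2, y2, z2, t2) in enumerate(points):
            if i == j or abs(t - t2) > dt:
                continue
            if math.dist((x, y, z), (x2, y2, z2)) <= radius:
                support += 1
        if support >= min_support:
            kept.append((x, y, z, t))
    return kept
```

The `radius`, `dt` and `min_support` parameters are hypothetical tuning knobs, not values from the paper.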
Novel high-energy short-pulse laser diode source for 3D lidar systems
C. Canal, A. Laugustin, A. Kohl, et al.
Recent research efforts on detector arrays in the near-infrared (NIR) and short-wave infrared (SWIR) wavelength ranges have led to compact and robust 3D flash lidar architectures [1]. This technique can provide real-time 3D mapping of an area in various weather conditions for surveillance, but also for the detection of moving objects in the guidance of unmanned vehicles. Today, the main size limitation for compact, portable systems is the pulsed laser source. An alternative to solid-state or fiber lasers could be laser diodes, since they have higher efficiency and their beam profile is compatible with active imaging by flash illumination. This paper presents recent advances in mJ-class pulsed laser diodes made by Quantel Laser, France. Experimental results are described in both the NIR and SWIR bands.
Active Sensing II
Panoramic single-photon counting 3D lidar
Markus Henriksson, Lars Allard, Per Jonsson
In defense and security applications it is often of interest to find out whether something is hidden inside the forest edge on the other side of an open field. High-resolution 3D data can allow segmentation of hidden objects from branches and leaves, and provide data for object identification. Multiple measurements with single-photon counting lidar can provide very high range resolution to separate reflections from objects at different distances within the system's instantaneous field of view. Geiger-mode avalanche photodiode (GmAPD) arrays provide high-frame-rate lidar data collection, but with array sizes that are in most cases smaller than the scenes to be imaged. The high frame rate of GmAPD arrays and the low laser pulse energy required, however, make panoramic single-photon counting 3D lidar of large scenes by sweeping the array field of view practically applicable. Indoor and outdoor measurements at up to 350 m range have been performed at FOI using single-pixel and GmAPD-array single-photon counting 3D lidar systems. Panoramic imaging with a 128×32 GmAPD array 3D camera has been performed over an angle of 20° using 4 s total collection time in daylight conditions, resulting in 128×1400 panorama pixels, where every pixel contains a histogram representing the laser radar response in that direction. A flat surface at ca 235 m distance had a standard deviation of point-to-plane distances of 10.6 mm. The laser excitation was far below the optimum level that allows for over 0.5 photon detections per pixel and laser pulse; with a higher-power laser the accuracy can hence be further improved, or alternatively the necessary collection time can be reduced. A single panorama provides rapid high-resolution 3D data for target detection. Continuous surveillance from a fixed sensor position allows change detection to segment interesting activity in the large data volume.
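The per-pixel photon-arrival histograms described above map directly to range via the round-trip time. A toy sketch (ours, assuming uniform timing bins and a single dominant return):

```python
def range_from_histogram(counts, bin_width_s):
    """Recover range from a photon-counting histogram: take the peak
    bin, convert its centre to a round-trip time, and halve the
    corresponding path length."""
    C = 299_792_458.0  # speed of light, m/s
    peak = max(range(len(counts)), key=counts.__getitem__)
    return (peak + 0.5) * bin_width_s * C / 2.0
```

A real system would fit the pulse shape around the peak for sub-bin precision rather than taking the raw maximum.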
A dual-frequency coherent noise lidar for robust range-Doppler measurements
Daniel Onori, José Azaña
In this work, we propose and characterize in the laboratory a novel dual-frequency coherent noise lidar architecture for simultaneous velocity and range measurements. In this system, range/Doppler detection is obtained by correlating two detections achieved through two distinct optical noise signals, in order to reduce the noise that affects the received echo. Moreover, in this scheme, the two optical noise waveforms are generated by simply exploiting two continuous-wave lasers with broad linewidth, avoiding the need for RF arbitrary waveform generators and synthesizers.
Portable bi-λ SWIR/NIR GV gated viewing system for surveillance and security applications
F. Christnacher, E. Bacher, N. Metzger, et al.
Active imaging is an emerging technology in the field of surveillance for security and military applications. Numerous works have demonstrated the value of active imaging in target recognition and identification, vision through fog, underwater vision and three-dimensional (3D) imaging. However, surveillance applications in civilian and military fields require eye-safe illumination. Unfortunately, in this spectral region there is still a lack of ITAR-free, commercially available and efficient intensified cameras and laser sources, which are the two main components of an active imaging system. Nevertheless, a few years ago, ISL demonstrated the feasibility of a portable SWIR night-vision goggle with the PELIS system. This goggle was based on continuous illumination associated with an InGaAs camera, but with this technique the fundamental properties of range gating were not exploited.

In this paper, we report on a new portable, range-gated night-vision goggle operating in the SWIR spectral region. This goggle will be a useful eye-safe device for surveillance and imaging under all weather conditions. At 1.5 μm, it is well known that human skin appears black, making face recognition ineffective. For applications which need facial identification as legal proof, we implemented a bi-wavelength laser from which a pulse of light can also be extracted at a second wavelength (1.06 μm), at which the skin appears with the same reflectance as in the visible spectrum. After a theoretical analysis, we describe the goggle technology and show some lab and outdoor recordings.
Shadows in laser imaging and mapping
Ove Steinvall, Lars Sjökvist, Per Jonsson, et al.
So far, the utilization of shadows and silhouettes in laser imaging and mapping has been limited. By contrast, for Synthetic Aperture Radar (SAR) and Sonar (SAS), shadows are important observables for target detection, location, tracking and even identification. Compared to SAR systems, however, direct-detection laser systems suffer no shadow degradation due to platform and target movements or diffraction effects. This means that the shape of the shadow can be directly related to the shape of the target if the illumination geometry and the background behind the target are known. This paper reviews some of the shadow characteristics of SAR, SAS and laser radar. Examples of shadows from different types of laser imaging and mapping systems are presented. Finally, different types of laser sensors are discussed in relation to shadow generation for target detection and classification.
Compressive sensing for active imaging in SWIR spectral range
Compressive sensing (CS) is an imaging method that enables the replacement of expensive matrix detectors by small and cheap detectors with one or a few detector elements. A high-resolution image is realized from a series of individual single-value measurements, each of which captures the image of an object or scene after coding by a well-defined pattern. The reconstruction of the high-resolution image requires a number of measurements significantly smaller than the number of full-frame image pixels, because most natural images can be sparsely coded, i.e. an appropriate basis can be found for which most coefficients are close to zero. This paper reports CS experiments under pulsed laser illumination at 1.55 μm. The light collected from the observed scene is spatially modulated using a digital micromirror device (DMD) and projected onto a single-pixel detector. The applied binary patterns are generated from a Hadamard matrix. Different approaches for pattern selection have been implemented and compared.
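A toy single-pixel measurement model along these lines can be sketched as follows (our illustration, using the full orthogonal Hadamard basis so reconstruction is a simple transpose; a truly compressive system would keep only a subset of patterns and use a sparse solver):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    Rows serve as +/-1 DMD patterns."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_measure(scene, patterns):
    """Each single-pixel measurement is the inner product of the scene
    with one pattern displayed on the DMD."""
    return patterns @ scene.ravel()
```

With all n patterns, the scene is recovered exactly as `H.T @ y / n`, since `H @ H.T = n * I`; compressive operation drops rows and trades this exactness for fewer measurements.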
Active Sensing III
Software defined multifunction LIDAR
Peter D. Kightley, Matthew J. Croydon
There are many different types of LIDAR instrument serving a multitude of sensing objectives (range finding, velocimetry, vibrometry and optical communications, to name a few), but what they all share is the transmission of laser light. What discriminates one sensing modality from another is the manner in which that light is modulated and subsequently demodulated. Traditionally the modulation format has been 'hard-wired' into the LIDAR instrument, but advances in digital waveform synthesis at useful bandwidths and bit depths, combined with telecoms fibre-optic technology, allow sophisticated optical modulation schemes to be driven from software in real time. Coupled with high-bandwidth digitization in the receive chain and real-time digital demodulation software, the sensing modality can now be entirely defined and redefined on the fly, offering unprecedented active optical sensing flexibility in a single instrument: so-called "Software Defined Multifunction LIDAR" (SDML). This paper presents the physical principles of SDML, reviews progress to date in the field, and discusses potential applications.
Comparative assessment of different active imaging technologies for imaging through obscurants
Philip Soan, Mark Silver, John Parsons, et al.
Natural and man-made obscurants such as fog, cloud, smoke and dust are an impediment to the conduct of military operations, preventing effective pilotage, denying the ability to carry out surveillance and reconnaissance, and restricting situational awareness. Additionally, there is growing interest in the ability to penetrate haze and fog for the safe navigation of autonomous vehicles. Several electro-optic technologies offer an improved ability to image through obscurants [1,2]. In this study the authors assessed four different active imaging technologies in the presence of an artificial smoke, and obtained 3D imagery of targets at ranges of 100 m out to 1400 m. The four systems tested were:
  • a scanned time-correlated single-photon counting (TCSPC) sensor using an InGaAs/InP single-photon avalanche diode (SPAD) detector operating at λ ~ 1.55 µm [2];
  • a 32 × 32 InGaAs/InP SPAD array using TCSPC at λ ~ 1.55 µm;
  • a coherent frequency-modulated continuous-wave (FMCW) scanned lidar system at λ ~ 1.55 µm [1];
  • a CMOS SPAD array camera operating as a time-gated imager at λ ~ 670 nm.
The selection of sensors enables comparisons to be drawn between scanning and staring systems, between direct and coherent detection, and between short-wave infrared and visible wavelengths. Three-dimensional structured targets were placed at ranges of 100–150 m and smoke was introduced between the targets and the sensors. The smoke transmission was measured with a separate laser device to correlate the imagery with the level of attenuation presented by the smoke and thereby relate image quality to the degree of optical loss in the system. For the coherent lidar system, long-range 3D images were obtained out to a distance of 1400 m, and imaging through smoke of a target at 900 m was achieved.
Under the test conditions, at least two of the systems demonstrated the ability to obtain images through more than 4 attenuation lengths of obscurant between transceiver and target, and work is progressing on image-processing approaches to reconstruct images at greater levels of loss. Imagery from the systems will be presented, the relative merits of the different techniques discussed, and the prospects for future practical systems explored.
[1] M. Silver, P. Feneyrou, L. Leviander, A. Martin and J. Parsons, "Demonstration of frequency modulated continuous wave (FMCW) eye-safe, coherent LIDAR to see through clouds," OPTRO, Jan. 2018.
[2] R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher and G. S. Buller, "Depth imaging through obscurants using time-correlated single-photon counting," Proc. SPIE Vol. 10659, April 2018.
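For the coherent FMCW system above, the textbook relation between beat frequency and range can be sketched as follows (our illustration with assumed parameter names, not the authors' implementation):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat, bandwidth, sweep_time):
    """For a linear FMCW chirp of bandwidth B swept over time T, the
    beat frequency between transmitted and received light from a target
    at range R is f_b = 2*R*B / (c*T); invert this for R."""
    return f_beat * C * sweep_time / (2.0 * bandwidth)
```

For example, with a 1 GHz chirp swept in 1 ms, a 100 m target produces a beat frequency of roughly 667 kHz.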
Extending the 3D range of a short-wave infrared laser-gated viewing system capable of correlated double sampling
Primarily, a laser gated-viewing (GV) system provides range-gated 2D images without any range resolution within the range gate. By combining two GV images with appropriately overlapping range gates, 3D information within part of these range gates can be obtained. The resulting depth resolution is finer (super-resolution) than the minimal gate-shift step size in a tomographic sequence of the scene. For a state-of-the-art system with a typical frame rate of 30 Hz, the time difference between the two required GV images is approximately 33 ms, which may be too long in a dynamic scenario with fast-moving objects.

In a previous work, we therefore applied this approach to the reset-level and signal-level images of a short-wave infrared laser GV camera whose read-out integrated circuit is capable of correlated double sampling, a feature originally designed for the reduction of kTC (reset) noise. This camera consists of a 640 × 512 avalanche-photodiode focal-plane array based on mercury cadmium telluride with a pixel pitch of 15 μm. The great advantage of this idea is that both images are extracted from one single laser pulse with a marginal time difference between them, which allows 3D imaging of fast-moving objects. However, a drawback of this method is the very limited 3D range in which 3D reconstruction is possible.

In this paper, we describe and discuss two measures to extend the 3D range. First, refining the algorithm for 3D reconstruction is investigated, in particular using a quadratic model instead of the linear model of previous work. Second, we use an illumination laser with a longer pulse duration than before to study the influence of laser pulse length on the 3D range in real experiments. Based on these measured data, we simulate further temporal stretching of the laser pulse to evaluate the potential of this approach to extend the 3D range.
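The basic idea of recovering depth from two overlapping gates can be illustrated with a toy linear model (ours, not the authors' calibrated algorithm): within the overlap, the ratio of the two gate intensities encodes the target position.

```python
def range_from_gate_ratio(i1, i2, overlap_start, overlap_len):
    """Toy linear model: i1 and i2 are the intensities of one target
    in two GV images with overlapping gates.  A target at the start of
    the overlap appears only in gate 1; at the end, only in gate 2; in
    between, the fraction i2/(i1+i2) varies linearly with depth."""
    return overlap_start + overlap_len * (i2 / (i1 + i2))
```

A quadratic model, as investigated in the paper, would replace the linear fraction with a calibrated second-order mapping to extend the usable 3D range.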
Measurement Techniques
Development of inkjet-deposited test standards for optical sensors
Raphael Moon, Kevin Hung, Erik Roese, et al.
The US Army Research, Development and Engineering Command – Chem Bio Center (RDECOM-CB) is leading an inter-agency working group to expand chemical inkjet printing techniques and to fabricate surface standards in a controlled, uniform and quantifiable fashion for the evaluation of stand-off active and passive optical systems. A Commercial Off-the-Shelf (COTS) standard inkjet printer was redesigned to deposit precise amounts of chemicals and explosive material on defense-relevant surfaces, allowing the generation of calibration test standards. RDECOM-CB is currently utilizing the inkjet techniques to support an Army forensics detection program, where inkjet samples are used for detection of trace energetic materials and illicit drugs of abuse within residual latent fingerprints, as well as leading a North Atlantic Treaty Organization (NATO) Task Group (TG) to develop and recommend to NATO a reference standard methodology (or methodologies) for fabricating quantifiable surface standards for the evaluation of stand-off active and passive optical systems. QA/QC was performed on printed materials to determine accuracy and precision. Raman imaging and the ImageJ software package were used to calculate particle statistics such as size distribution, average particle size and fill factor. The software algorithm finds individual particles and calculates their area from a brightfield image montage; an approximate diameter of each particle and the total fractional area of the surface covered are also calculated. For qualitative analysis, Raman chemical imaging is performed to confirm the chemical make-up of the deposited samples. For quantitative analysis, printed samples were analyzed either by Ion Chromatography with Conductivity Detection (IC-CD) for potassium chlorate-based explosives or by LC-MS/MS for RDX.
We will present the results of inkjet samples produced for the Army forensics program as well as the NATO benchmark exercise, which consisted of printing trace amounts of explosive inkjet samples and performing QA/QC procedures to determine accuracy, precision and mass-transport efficiency.
Application of nonlinear voxel distribution grid for computational speed-up for linear tomosynthesis reconstruction
This research considers the possibility of computational speed-up during image reconstruction based on the SART algorithm through the application of a nonlinear voxel grid. The motion scheme of linear tomosynthesis served as the foundation, with a nonlinear voxel grid used in the reconstruction area.

The proposed method allows the coefficients of the SART system matrix to be calculated for only 2 planes, thus significantly reducing the computation time (up to 6-fold). In addition, the amount of stored data decreases by a factor of approximately 295. The method also allows parallel computation for each vertical layer in the reconstruction area, which provides a 10-fold gain in reconstruction rate.
Signal Processing
Computational sensing approaches for enhanced active imaging
Computational imaging is an emerging technology in the field of computer vision which enables optical sensing with new perception capacities and sensing approaches. In contrast to classical approaches, computational imaging uses a strong mathematical model on both parts of the imaging process: data acquisition and data analysis. In this paper, we present examples where the principles of computational imaging are adapted to active imaging and laser gated viewing. Whereas classical active imaging relies on projecting a remote line-of-sight scene onto a sensor of fixed resolution (array size) and measures time of flight at a predefined sampling rate, we demonstrate super-resolution time-of-flight (range) measurement and spatial sampling beyond the sensor resolution. Further, we demonstrate the analysis of scattered photons to enhance the perception range and to obtain information on non-line-of-sight targets which are hidden from direct view.
Error-free coding of range gates for super-resolution three-dimensional imaging
Error-free coding of range gates is a key requirement for super-resolution depth mapping and reliable compressed range imaging. Until now, coding of range gates has suffered from non-linear errors in erroneous coding sequences, but Gray codes and other Hamiltonian-type coding sequences can be used to eliminate ambiguities. In this paper, different coding sequences based on 4-bit Gray codes as well as further 4-bit coding schemes were investigated and applied to perform three-dimensional imaging over a range of 4.2 m with super-resolution.
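The reflected binary Gray code underlying such sequences is easily generated; a minimal sketch (ours) illustrating the property that makes it attractive for coded range gates:

```python
def gray_code(bits):
    """Reflected binary Gray code: consecutive codewords differ in
    exactly one bit, so a single gate-timing error in a coded sequence
    shifts the decoded range by at most one gate rather than producing
    a large non-linear error."""
    return [i ^ (i >> 1) for i in range(2 ** bits)]
```

For 4 bits this yields the 16 codewords 0, 1, 3, 2, 6, 7, 5, 4, ... each a single bit-flip away from its neighbour.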
Persistent surveillance with small Unmanned Aerial Vehicles (sUAV): a feasibility study
In typical Intelligence, Surveillance and Reconnaissance (ISR) missions, persistent surveillance is commonly defined as the exercise of automatic intelligence discovery by monitoring a wide area from high altitude with an aerial platform (manned or unmanned). Such a platform can be large enough to carry a matrix of high-resolution cameras and a rack of high-performance image processing and exploitation units (PEU). The majority of small unmanned aerial vehicles (sUAV) are able to carry optical payloads, allowing them to take aerial images from strategic viewpoints. This capability is a key enabler for an immense number of applications, such as crowd monitoring, search and rescue, surveillance scenarios and industrial inspection. The constrained onboard processing power, together with the strict limits on sUAV flight time, is among the serious challenges that have to be overcome to enable cost-effective persistent surveillance based on sUAV platforms. In this paper, we conduct a feasibility study for developing a potential sUAV-based persistent surveillance system with a tethered power supply.
Poster Session
Research and development of a high-energy radiation imaging system based on SiPM and coding aperture
A high-energy radiation imaging system is presented which is based on the scintillation method. To detect and visualize high-energy radiation, a SiPM array is used together with a CsI(Tl) scintillation crystal. At the current stage of development, gamma or X-ray sources are not used to verify the operation of the system and initiate scintillation; instead, vacuum ultraviolet (VUV) radiation is used. The possibility of using a VUV lamp as a source of high-energy radiation has been proven computationally and experimentally. This eliminates the need for gamma or X-ray sources and special protective equipment at the initial stage of development. A coded aperture is used as the imaging device; it is made by laser evaporation of a sputtered titanium layer from the surface of a transparent substrate. The results of this study show methods and materials which allow the high-energy radiation imaging system to be investigated at the initial stage without using sources of hazardous radiation.
A novel jamming suppression method for spaceborne multi-channel synthetic aperture radar
Xingji Tang, Xianbin Li, Li Zhang, et al.
In this paper, a new method to suppress both barrage jamming and deceptive jamming is proposed, based on spaceborne azimuth multichannel synthetic aperture radar (AMSAR). The relationship between signals received by an arbitrary channel and a reference channel is obtained by analyzing the signal models of the jammed AMSAR. Based on this relationship, a system of equations whose solution contains the SAR echo and all jamming signals is established. In order to obtain high-precision solutions, the system noise has to be removed and accurate direction-of-arrival (DOA) estimation of the jammers is required. For these purposes, a singular value decomposition (SVD) based method for noise reduction and a least-squares method for the two-dimensional locations of the jammers are put forward. By solving the equations, the SAR echoes can be recovered. Finally, several simulation experiments illustrate the effectiveness of the proposed method.
Robust monocular model-based pose tracking of markerless rigid objects
Gang Wang, Hongliang Zhang, Xiaochun Liu, et al.
This paper presents a robust, accurate and real-time model-based tracking method for markerless objects in complex environments, intended to replace the conventional 3D tracking approach based on cooperative targets. A known 3D model of the object is projected onto a 2D plane and occlusion culling is performed with the precalibrated intrinsic parameters and an initialized pose. The correspondences between a 3D object model and 2D image edges are commonly used to estimate the camera pose, so the pose optimization problem is transformed into 3D/2D model-to-image registration. For each visible model sample point, a one-dimensional search for putative image edge points is performed along the direction perpendicular to its line, as in state-of-the-art methods. However, false correspondences frequently occur due to cluttered backgrounds or partial occlusion. To overcome this problem, a new search scheme obtaining line correspondences instead of edge-point correspondences is implemented. Outliers among the 3D/2D line correspondences are then effectively detected and removed with algebraic outlier rejection, and the camera pose is iteratively optimized from the correct 3D/2D line correspondences by minimizing the perpendicular distances from the endpoints of the 3D model lines to their corresponding projection planes. The method has been validated on both synthetic images and real data. The experimental results show that it is robust to strong noise, drastic illumination changes and highly cluttered backgrounds, while easily satisfying real-time requirements.
Visual tracker fusion and outlier detection on thermal image sequences
Sebastian Thome, Norbert Scherer-Negenborn, Michael Arens
Visual object tracking is a challenging task in computer vision, especially if there are no constraints on the scenario and the objects are arbitrary. The number of tracking algorithms is very large, and all have different advantages and disadvantages. Often they are developed for a single task and fail in other scenarios, and their failures typically occur at different moments in a sequence. So far, there is no tracker which can handle all scenarios robustly and accurately. One possible approach to this problem is to use a collection of tracking algorithms and fuse them. Various strategies exist to fuse tracking algorithms; in some of them only the resulting outputs are fused, which means that new algorithms can be integrated with little effort. This fusion can be called "high-level" because the tracking algorithms only interact through the last step of their procedure. Trackers in the collection which have lost the object are called outliers; to ensure the robustness of the fusion methods, these outliers should be detected and reinitialized or removed from the tracking process. Three fusion methods are investigated: weighted-mean fusion, MAD fusion and attraction-field fusion. To evaluate the performance of the fusion methods and the outlier detection, a collection of thermal image sequences has been investigated.
Numerical modeling and measurement of polydimethylsiloxane deformation with fiber Bragg grating sensor
Pavel Mec, Marcel Fajkus, Jan Nedoma, et al.
This paper deals with the mechanical properties of polydimethylsiloxane (PDMS) as a material for the preparation of deformation sensors. PDMS is widely used, especially in the medical field, and several applications of PDMS as a material for fiber-optic deformation sensors have been prepared. Due to the hyper-elastic and viscoelastic behavior of this material, it is complicated to use under long-term loading and large deformations. In this paper, the authors compare numerical models of the prepared sensor, built using hyper-elastic material models, with mechanical experiments at large deformation.
Application of FBG in the experimental measurements of structural elements deformation from cement composites
Pavel Mec, Martin Stolarik, Stanislav Zabka, et al.
Monitoring the deformation of structures and testing supporting elements are important activities in the life cycle of structures, as large measured deformations may indicate a defect or failure of supporting structures. The aim of the experiments is to demonstrate another, less frequently used strain measurement method: a fiber Bragg grating sensor was used to measure strain changes in a simply supported concrete beam. These optical-fiber-based sensors measure the strain of the fiber through the change in the wavelength of light in the deformed fiber. The article describes an experiment conducted at the Faculty of Civil Engineering, VŠB – Technical University of Ostrava. The values measured by these sensors are compared with measurements using traditional methods.
Deformation sensor composed of fiber Bragg grating and the strain gauge for use in civil engineering
Monitoring of building structures is an integral part not only of the construction phase but also of their use throughout their lifetime. Fiber-optic technology offers a number of unrivaled features and forms an interesting alternative for these applications. From the point of view of the safety of large building structures such as buildings, bridges or tunnels, increased attention should be paid to the reliability of monitoring devices. One option is to duplicate standard strain-gauge sensors with modern fiber-optic sensors such as fiber Bragg gratings. This article presents a new hybrid sensing element, which consists of a Bragg grating and a standard foil strain gauge. The proposed sensor was implemented in concrete structures and loaded with force and temperature effects. The results show the functionality of the proposed hybrid sensor implemented in concrete structures, as well as the long-term reliability and independence of both parts of the hybrid sensor.
Similarity-transform invariant similarity measure for robust template matching
Cong Sun, Shengyi Chen, Mengna Jia, et al.
A good similarity measure is the key to robust template matching. In this paper, we present a Similarity-Transform invariant Best-Buddies Similarity (SiTi-BBS) to deal with template matching under obvious geometric distortion. Like the classic BBS, SiTi-BBS adopts Best-Buddies Pairs (BBPs) to vote. However, differing from the classic BBS, which acquires point pairs via bidirectional matching in xyRGB space, SiTi-BBS uses only the color information (RGB components) to acquire BBPs, while the position information (xy components) of each BBP is employed to calculate the geometric distortion between the template and the matching window. To further improve the robustness of template matching, we employ interval voting to accommodate the case where the two images do not strictly satisfy a similarity transformation; therefore, SiTi-BBS can, to a certain extent, also be applied to affine and perspective transformations. The highest number of votes is taken as the similarity measure between the two images. Mathematical analysis indicates that the proposed method is capable of dealing with obvious geometric distortion between images, and the test results on simulated and real challenging images show the outstanding performance of the proposed similarity measure for template matching.
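The Best-Buddies Pair criterion at the core of BBS-style measures can be sketched as mutual nearest neighbours (our minimal illustration in feature space, not the authors' full SiTi-BBS):

```python
import numpy as np

def best_buddies_count(P, Q):
    """Count Best-Buddies Pairs between two point sets (rows are
    feature vectors, e.g. RGB values): (p_i, q_j) is a pair iff p_i's
    nearest neighbour in Q is q_j AND q_j's nearest neighbour in P is
    p_i.  The classic BBS similarity is this count, normalised by the
    set sizes."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    nn_pq = d.argmin(axis=1)  # for each p, index of its closest q
    nn_qp = d.argmin(axis=0)  # for each q, index of its closest p
    return sum(1 for i, j in enumerate(nn_pq) if nn_qp[j] == i)
```

Mutual nearest-neighbour pairs are robust to outliers because a background point in one set rarely "buddies" back to a foreground point in the other.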
Influence analysis of the projection forming method on the reconstruction quality in the digital tomosynthesis
The aim of tomosynthesis is to reconstruct the internal structure of a three-dimensional object from a set of its projections in a space of smaller dimension. The foundation of the three-dimensional reconstruction is the operation of backprojection; without additional transformations, however, its result features low contrast and is significantly blurred due to the overlap effect. The quality of reconstruction is also affected by the number of projections of the object, the range of viewing angles, and the instrumental error of the geometrical configuration of the X-ray unit when obtaining each of the projections. This work is aimed at studying the influence of the latter factor. By instrumental error in this context, one should understand the positioning accuracy of the X-ray source and detector, the projection angle, and the focus-point position.

The study was carried out on a three-dimensional mathematical phantom. For the reconstruction we used the filtered backprojection algorithm (the Feldkamp algorithm) and an algebraic reconstruction technique (ART). In the process of reconstruction, noise with a specified RMS was added to the data describing the projection angles and the position of the focus point. The results of the study are the dependences of the normalized mean-square error and the normalized absolute error of the reconstruction, for different layers, on the RMS of the introduced noise.