Proceedings Volume 9896

Optics, Photonics and Digital Technologies for Imaging Applications IV



Volume Details

Date Published: 26 September 2016
Contents: 9 Sessions, 48 Papers, 0 Presentations
Conference: SPIE Photonics Europe 2016
Volume Number: 9896

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9896
  • Displays
  • Computational Imaging
  • Holography
  • 3D Imaging by Nonconventional Methods I
  • 3D Imaging by Nonconventional Methods II
  • Testing and Image Quality Assessment
  • Image Analysis and Transformations
  • Poster Session
Front Matter: Volume 9896
Front Matter: Volume 9896
This PDF file contains the front matter associated with SPIE Proceedings Volume 9896, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Displays
Polychromatic see-through near-eye display design with two waveguides and a large field-of-view
Jianming Yang, Philippe Gérard, Patrice Twardowski, et al.
We propose a new kind of waveguide near-eye display (WGNED) with new in-coupling and propagation subsystems to vertically enlarge the field-of-view (FOV). Two waveguides are stacked with a 0.1 mm air gap so that light can propagate inside each waveguide independently. The light from a micro-display is coupled into the first waveguide by the in-coupling subsystem. Light propagates inside the first waveguide until reaching a cylindrical mirror at its edge. In the area near the mirror, the two waveguides are joined. The light is then reflected by the cylindrical mirror and coupled into the second waveguide. The out-coupling from the second waveguide is realized either by a holographic optical element or by cascaded micro-mirrors. The cylindrical mirror allows most of the out-coupled light to reach the viewer’s eye. A two-lens subsystem with freeform surfaces is used as an in-coupling element to correct the aberrations. An advantage of our design is that the chief ray of each object field converges to the eye with an enlarged FOV in the vertical direction. The system has been simulated in mixed sequential and non-sequential mode in Zemax®. It can achieve a 20°×55° total FOV which is, to our knowledge, larger than that of published WGNED designs.
LED-based projection source based on luminescent concentration
The concept of an LED-based source with high lumen density is described. It contains a luminescent rod in which LED light is converted to light of a longer wavelength, which is extracted from a small face of the rod. The fundamental limitations and possibilities are discussed, as well as the required components. Results are shown for two realized high-lumen-density sources. A source with YAG:Ce as phosphor is extensively characterized and the results are compared to modeling results. A source with an optimized green-emitting phosphor is used for projection. With 64 pump LEDs at 490 W peak electrical input and 50% duty cycle, a peak luminous flux of 18000 lm and a peak luminance of over 1000 cd/mm² are obtained, with a peak efficacy of 37 lm/W.
Computational Imaging
Compensating for colour artefacts in the design of technical kaleidoscopes
Šárka Němcová, Vlastimil Havran, Jiří Čáp, et al.
In computer graphics and related fields, the bidirectional texture function (BTF) is used for realistic and predictive rendering. BTF allows for the capture of fine appearance effects such as self-shadowing, inter-reflection and subsurface scattering needed for true realism when used in rendering algorithms. The goal of current research is to obtain a surface representation indistinguishable from the real world. We developed, produced and tested a portable instrument for BTF acquisition based on kaleidoscopic imaging. Here we discuss the colour issues we experienced after the initial tests. We show that the same colour balance cannot be applied to the whole picture, as the spectral response of the instrument varies with position in the image. All optical elements were inspected for their contributions to the spectral behaviour of the instrument. The off-the-shelf parts were either measured or the manufacturer’s data were considered. The custom-made mirrors’ spectral reflectivity was simulated, and a mathematical model of the instrument was built. We found a way to implement all these contributions in the image processing pipeline. In this way, a correct white balance for each individual pixel in the image is found and applied, allowing for a more faithful colour representation. An optimized dielectric protective layer for the kaleidoscope’s mirrors is also proposed.
Sensorless adaptive optics system based on image second moment measurements
Temitope E. Agbana, Huizhen Yang, Oleg Soloviev, et al.
This paper presents experimental results of a static aberration control algorithm based on the linear relation between the mean square of the aberration gradient and the second moment of the point spread function, which is used to generate the control signal for a deformable mirror (DM). Results presented in the work of Yang et al.1 suggested good feasibility of the method for correction of static aberration for point and extended sources. However, a practical realisation of the algorithm had not been demonstrated. The goal of this article is to check the method experimentally under real conditions of noise, the finite dynamic range of the imaging camera, and system misalignments. The experiments have shown a strong dependence of the linearity of the relationship on image noise and overall image intensity, which in turn depends on the aberration level. The restoration capability and the rate of convergence of the AO system for aberrations generated by the deformable mirror are also experimentally investigated. The presented approach as well as the experimental results find practical application in the compensation of static aberration in adaptive microscopic imaging systems.
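The control signal in this approach is driven by the image second moment, a scalar computed directly from pixel intensities. As a minimal illustrative sketch (not the authors' implementation; all array sizes are hypothetical), the second moment about the image centroid can be evaluated as:

```python
import numpy as np

def second_moment(img):
    """Second moment of an intensity distribution about its centroid."""
    img = img / img.sum()                      # normalize to unit total intensity
    y, x = np.indices(img.shape)
    cy, cx = (img * y).sum(), (img * x).sum()  # intensity centroid
    return (img * ((y - cy) ** 2 + (x - cx) ** 2)).sum()

# A broader spot (a more aberrated PSF) yields a larger second moment
narrow = np.zeros((33, 33)); narrow[15:18, 15:18] = 1.0
broad = np.zeros((33, 33)); broad[10:23, 10:23] = 1.0
assert second_moment(broad) > second_moment(narrow)
```

In a sensorless scheme of this kind, minimizing such a scalar over the DM actuator commands reduces the mean-square aberration gradient without any wavefront sensor.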
Coded access optical sensor (CAOS) imager and applications
Nabeel A. Riza
Starting in 2001, we proposed and extensively demonstrated (using a DMD: Digital Micromirror Device) an agile-pixel Spatial Light Modulator (SLM)-based optical imager based on single-pixel photo-detection (also called a single-pixel camera) that is suited for operation with both coherent and incoherent light across broad spectral bands. This imager design operates with the agile pixels programmed in a limited-SNR, staring, time-multiplexed mode, where acquisition of image irradiance (i.e., intensity) data is done one agile pixel at a time across the SLM plane on which the incident image radiation is present. Motivated by modern advances in RF wireless, optical wired communications and electronic signal processing technologies, and building on our prior SLM-based optical imager design, we describe, using a surprisingly simple approach, a new imager design called the Coded Access Optical Sensor (CAOS) that can alleviate some of the key fundamental limitations of prior imagers. The agile pixel in the CAOS imager can operate in different time-frequency coding modes such as Frequency Division Multiple Access (FDMA), Code Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA). Data from a first CAOS camera demonstration are described along with novel designs of CAOS-based optical instruments for various applications.
Agile wavefront splitting interferometry and imaging using a digital micromirror device
Juan Pablo La Torre, M. Junaid Amin, Nabeel A. Riza
Since 1997, we have proposed and demonstrated the use of the Texas Instruments (TI) Digital Micromirror Device (DMD) for various non-display applications including optical switching and imaging. In 2009, we proposed the use of the DMD to realize wavefront-splitting interferometers as well as a variety of imagers. Specifically, we proposed agile, electronically programmable wavefront-splitting interferometer designs using a Spatial Light Modulator (SLM) such as (a) a transmissive SLM, (b) a DMD SLM and (c) a beamsplitter with a DMD SLM. The SLM operates with on/off or digital-state pixels, much like a black-and-white optical window that controls passage/reflection of incident light. SLM pixel locations can be spatially and temporally modulated to create custom wavefronts for near-common-path optical interference at optical detectors such as a CCD/CMOS sensor, a Focal Plane Array (FPA) sensor or a point photodetector. This paper describes the proposed DMD-based wavefront-splitting interferometer and imager designs and their relevant experimental results.
Holography
Slightly off-axis holography with partially coherent illumination implemented into a standard microscope
We have recently reported a simple, low-cost and highly stable way to convert a standard microscope into a holographic one [Opt. Express 22, 14929 (2014)]. The method, named Spatially-Multiplexed Interferometric Microscopy (SMIM), implements an off-axis holographic architecture on a regular (non-holographic) microscope with minimal modifications: the use of coherent illumination and a properly placed and selected one-dimensional diffraction grating. In this contribution, we report on the implementation of partially (temporally reduced) coherent illumination in SMIM as a way to improve quantitative phase imaging. The use of low-coherence sources forces the application of a phase-shifting algorithm instead of off-axis holographic recording to recover the sample’s phase information, but improves phase reconstruction thanks to coherence noise reduction. In addition, a less restrictive field-of-view limitation (1/2) is implemented in comparison with our previously reported scheme (1/3). The proposed modification is experimentally validated on a regular Olympus BX-60 upright microscope using calibration samples (resolution test and microbeads) and two different microscope objectives (10X and 20X).
Superresolution imaging system by color-coded tilted-beam illumination in digital in-line holographic microscopy
Digital in-line holographic microscopy (DIHM) achieves microscopic imaging without lenses, working in the regime of holography. In essence, DIHM proposes a simple layout where a point source of coherent light illuminates the sample and the diffracted wavefront is recorded by a digital sensor. However, DIHM lacks high numerical aperture (NA) due to both geometrical distortion and the mandatory compromise between the illumination pinhole diameter, the illumination wavelength, and the need to obtain a reasonable light efficiency. One way to improve the resolution in DIHM is superresolution imaging by angular multiplexing using tilted-beam illumination. Such illumination brings on-axis spatial frequency content of the sample’s spectrum that differs from the content accessed with on-axis illumination. After recovering this additional spectral content, a synthetic numerical aperture (SNA), expanding the cutoff frequency of the system compared with the on-axis illumination case, can be assembled in a digital post-processing stage. In this contribution, we present a method to achieve one-dimensional (1-D) superresolved imaging in DIHM with single-shot illumination using color-coded tilted beams. The method is named L-SESRIM (Lensless Single-Exposure Super-Resolved Interferometric Microscopy). Although the technique was previously presented with very preliminary results [34], in this contribution we expand the experimental characterization (USAF resolution test target) and derive the theoretical framework for SNA generation using different illumination wavelengths.
Speckle noise reduction for computer generated holograms of objects with diffuse surfaces
Digital holography is mainly used today for metrology and microscopic imaging and is emerging as an important potential technology for future holographic television. To generate the holographic content, computer-generated holography (CGH) techniques convert geometric descriptions of 3D scene content. To model different surface types, an accurate model of light propagation has to be considered, including, for example, specular and diffuse reflection. In previous work, we proposed a fast CGH method for point cloud data using multiple wavefront recording planes, look-up tables (LUTs) and occlusion processing. This work extends our method to account for diffuse reflections, enabling rendering of deep 3D scenes in high resolution with wide viewing-angle support. This is achieved by modifying the spectral response of the light propagation kernels contained in the look-up tables. However, holograms encoding diffusely reflective surfaces exhibit significant amounts of speckle noise, a problem inherent to holography. Hence, techniques to reduce speckle noise are evaluated in this paper. Moreover, we also propose a technique to suppress aperture diffraction during numerical, view-dependent rendering by apodizing the hologram. Results are compared visually and in terms of their respective computational efficiency. The experiments show that by modelling diffuse reflection in the LUTs, a more realistic yet computationally efficient framework for generating high-resolution CGH is achieved.
Sparsity assisted phase retrieval of complex valued objects
Charu Gaur, Kedar Khare
Iterative phase retrieval of complex-valued objects (phase objects) suffers from the twin-image problem due to the presence of features of the image and its complex conjugate in the recovered solution. The twin-image problem becomes more severe when the object support is centro-symmetric. In this paper, we demonstrate that by modifying the standard Hybrid Input-Output (HIO) algorithm with an adaptive sparsity enhancement step, the twin-image problem can be addressed successfully even when the object support is centro-symmetric. The adaptive sparsity-enhanced algorithm and numerical simulations for binary as well as grayscale phase objects are presented. The high-quality phase recovery results presented here show the effectiveness of the adaptive sparsity-enhanced algorithm.
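The constraint-satisfaction loop behind HIO-type phase retrieval alternates between a Fourier-magnitude projection and an object-domain projection. The following is a simplified sketch using the basic error-reduction variant with support and positivity constraints; the adaptive sparsity step and the HIO feedback term of the paper are not reproduced, and the object, support and iteration count are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real-valued object on a known support; 'mag' plays the role of data
obj = np.zeros((32, 32))
obj[12:20, 12:20] = rng.random((8, 8))
support = obj > 0
mag = np.abs(np.fft.fft2(obj))               # "measured" Fourier magnitudes

g = rng.random(obj.shape)                    # random initial guess
for _ in range(200):
    G = np.fft.fft2(g)
    G = mag * np.exp(1j * np.angle(G))       # enforce measured magnitudes
    g = np.real(np.fft.ifft2(G))
    g = np.where(support & (g > 0), g, 0.0)  # enforce support and positivity
```

A sparsity-enhancing step would additionally zero the weakest in-support pixels at selected iterations; it is this extra pruning that helps suppress the twin image when the support is centro-symmetric and the support constraint alone cannot break the conjugate ambiguity.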
3D Imaging by Nonconventional Methods I
3D digitization methods based on laser excitation and active triangulation: a comparison
3D reconstruction of surfaces is an important topic in computer vision and corresponds to a large field of applications: industrial inspection, reverse engineering, object recognition, biometry, archeology… Because of the large variety of applications, one can find in the literature many approaches, which can be classified into two families: passive and active [1]. Certainly because of their reliability, active approaches, which use an imaging system with an additional controlled light source, seem to be the most commonly used in the industrial field. In this domain, the 3D digitization approach based on active 3D triangulation has seen important developments during the last ten years [2] and seems mature today, considering the large number of systems proposed by manufacturers. Unfortunately, the performance of active 3D scanners depends on the optical properties of the surface to digitize. As an example, in Fig 1.a, a 3D shape with a diffuse surface has been digitized with a Comet V scanner (Steinbichler). The 3D reconstruction is presented in Fig 1.b. The same experiment was carried out on a similar object (same shape) but presenting a specular surface (Fig 1.c and Fig 1.d); it can clearly be observed that specularity influences the performance of the digitization.
A fiber-compatible spectrally encoded imaging system using a 45° tilted fiber grating
Guoqing Wang, Chao Wang, Zhijun Yan, et al.
We propose and demonstrate, for the first time to the best of our knowledge, the use of a 45° tilted fiber grating (TFG) as an in-fiber lateral diffraction element in an efficient and fiber-compatible spectrally encoded imaging (SEI) system. Under proper polarization control, the TFG has significantly enhanced diffraction efficiency (93.5%) due to strong tilted reflection. Our conceptually new fiber-optics-based design eliminates the need for bulky and lossy free-space diffraction gratings, significantly reduces the volume and cost of the imaging system, improves energy efficiency, and increases system stability. As a proof-of-principle experiment, we use the proposed system to perform a one-dimensional (1D) line-scan imaging of a customer-designed three-slot sample, and the results show that the constructed image matches well with the actual sample. The angular dispersion of the 45° TFG is measured to be 0.054°/nm and the lateral resolution of the SEI system is measured to be 28 μm in our experiment.
3D transient temperature measurement in homogeneous solid material with THz waves
M. Romano, A. Sommier, J.-C. Batsale, et al.
The first imaging system that is able to measure transient temperature phenomena taking place inside a bulk by 3D tomography is presented. This novel technique combines the power of terahertz waves and the high sensitivity of infrared imaging. The tomography reconstruction is achieved by the 3D motion of the sample at several angular positions followed by inverse Radon transform processing to retrieve the 3D transient temperatures. The aim of this novel volumetric imaging technique is to locate defects within the whole target body as well as to measure the temperature in the whole volume of the target. This new-fashioned thermal tomography will revolutionize the non-invasive monitoring techniques for volume inspection and in-situ properties estimations.
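The tomographic reconstruction described above follows the standard Radon-transform formalism: projections acquired at several angles are inverted to recover the volume. As a minimal, hypothetical illustration with only two projection angles (a real reconstruction uses many angles and filtered back-projection via the inverse Radon transform):

```python
import numpy as np

# Hypothetical 2D slice with a localized temperature rise (a "hot spot")
img = np.zeros((64, 64))
img[40, 20] = 1.0

# Projections at 0° and 90°: line integrals reduce to sums along the axes
p0 = img.sum(axis=0)                       # projection onto the x-axis
p90 = img.sum(axis=1)                      # projection onto the y-axis

# Unfiltered back-projection: smear each projection back across the slice
bp = p0[np.newaxis, :] + p90[:, np.newaxis]

# The back-projected field peaks at the hot spot's location
peak = np.unravel_index(np.argmax(bp), bp.shape)  # → (40, 20)
```

With only two angles the localization is coarse and star-shaped artifacts remain; adding angular positions and a ramp filter (filtered back-projection) sharpens the reconstruction, which is why the sample is rotated through many angular positions in the described system.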
Measurement of 3D displacement fields from few tomographic projections
Thibault Taillandier-Thomas, Clément Jailin, Stéphane Roux, et al.
The present paper aims at providing 3D volume images of a deformed specimen based on i) a full 3D image describing the reference state as obtained e.g., from conventional computed tomography and ii) the 3D displacement field accounting for its motion. The displacement field, which is described by much fewer degrees of freedom than the specimen volume itself, is here proposed to be determined from very few projections. The reduction in number of needed projections may be larger than two orders of magnitude. In the proposed approach, the displacement field is described over an unstructured mesh composed of tetrahedra with linear shape functions. The mesh is based on the reconstructed reference volume so that it provides a faithful and accurate description of the specimen, including its boundary. Nodal displacements are determined from the minimization of the quadratic difference between the computed projections of the deformed configuration and the acquired projections (radiographs) for the selected orientations. Well-posedness of the problem requires the number of kinematic unknowns to be small. However, in cases where the geometry is complex, the displacement field may call for many parameters. To deal with such conflicting demands it is proposed to use a regularization based on the mechanical modeling of the displacement field using a linear elastic description.
3D Imaging by Nonconventional Methods II
3D high- and isotropic resolution in tomographic diffractive microscopy by illumination angular scanning, specimen rotation and improved data recombination
Tomographic diffractive microscopy allows for imaging unlabeled specimens, with a better resolution than conventional microscopes, giving access to the index of refraction distribution within the specimen, and possibly at high speed. Principles of image formation and reconstruction are presented, and progresses towards realtime, three-dimensional acquisition, image reconstruction and final display, are discussed, as well as towards three-dimensional isotropic-resolution imaging.
Motionless active depth from defocus system using smart optics for camera autofocus applications
M. Junaid Amin, Nabeel A. Riza
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
Evaluation of computational radiometric and spectral sensor calibration techniques
Alkhazur Manakov
Radiometric and spectral calibration are essential for enabling the use of digital sensors for measurement purposes. Traditional optical calibration techniques require expensive equipment such as specialized light sources, monochromators, tunable filters, calibrated photo-diodes, etc. The trade-offs between computational and physics-based characterization schemes are, however, not well understood. In this paper we perform an analysis of existing computational calibration schemes and elucidate their weak points. We highlight the limitations by comparing against ground truth measurements performed in an optical characterization laboratory (EMVA 1288 standard). Based on our analysis, we present accurate and affordable methods for the radiometric and spectral calibration of a camera.
Short review of polarimetric imaging based method for 3D measurements
In the domain of 3D measurement for inspection purposes, standard systems based on triangulation approaches, can be limited by the nature of the observed surface. When the surface is not lambertian, i.e. highly reflective or transparent, other strategies to ease the measurement need to be developed. The idea of using the polarization property of the light is one of them. This was explored to be a complementary modality, in triangulation or shape from distortion methods, or used in a standalone system through the concept of "shape from polarization". In this paper we propose a short state of the art of the usage of polarimetric imaging for the study of surface and the measurement of 3D data. We focus on recent development applied to the industrial domain, or the health domain.
Testing and Image Quality Assessment
Presence capture cameras - a new challenge to the image quality
Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. This work concentrates on the quality challenges of presence capture cameras, which still face the same quality issues as previous phases of digital imaging, but also numerous new ones. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras adds a new dimension to these quality factors, and new quality features must be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? This work describes the quality factors which remain valid for presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view, and the work considers how well current measurement methods can be applied to presence capture cameras.
Speckle perception and disturbance limit in laser based projectors
We investigate the level of speckle that can be tolerated in a laser cinema projector. For this purpose, we equipped a movie theatre room with a prototype laser projector. A group of 186 participants was gathered to evaluate the speckle perception of several short movie trailers in a subjective ‘Quality of Experience’ experiment. This study is important as the introduction of lasers in projection systems has been hampered by the presence of speckle in projected images. We identify a speckle disturbance threshold by statistically analyzing the observers’ responses for different values of the amount of speckle, which was monitored using a well-defined speckle measurement method. The analysis shows that the speckle perception of a human observer is not only dependent on the objectively measured amount of speckle, but is also strongly influenced by the image content. As is also discussed in [Verschaffelt et al., Scientific Reports 5, art. nr. 14105, 2015], we find that, for moving images, the speckle becomes disturbing if the speckle contrast becomes larger than 6.9% for the red, 6.0% for the green, and 4.8% for the blue primary colors of the projector, whereas for still images the speckle detection threshold is about 3%. As we could not independently tune the speckle contrast of each of the primary colors, this speckle disturbance limit seems to be determined by the 6.9% speckle contrast of the red color, as this primary color contains the largest amount of speckle. The speckle disturbance limit for movies thus turns out to be substantially larger than that for still images, and hence is easier to attain.
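The speckle contrast figures quoted above follow the standard definition C = σ/⟨I⟩ over the measured intensity pattern. A minimal sketch of the metric (illustrative data, not the paper's measurement method):

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast: standard deviation over mean of the intensity."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

# Fully developed speckle has exponential intensity statistics and C close to 1,
# far above the few-percent disturbance thresholds reported for projection.
rng = np.random.default_rng(1)
fully_developed = rng.exponential(scale=1.0, size=100_000)
uniform_field = np.full(100, 0.5)          # a perfectly uniform field: C = 0
```

Projector despeckling (by angular, wavelength or polarization diversity) works by averaging many independent speckle patterns within the eye's integration time, pushing C down toward the quoted 3-7% thresholds.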
Image quality metrics applied to digital pathology
Ana Jiménez, Gloria Bueno, Gabriel Cristóbal , et al.
Several full-reference and blind metrics from the literature have been tested on a set of digitized pathology slides under different known distortion conditions. Those showing the most uniform behavior are presented in this paper. In addition, an algorithm that provides a blur map of whole slide images (WSIs) has been implemented based on one of these methods.
Feedforward operation of a lens setup for large defocus and astigmatism correction
Hans R. G. W. Verstraete, Mitra Almasian, Paolo Pozzi, et al.
In this manuscript, we present a lens setup for large defocus and astigmatism correction. A deformable defocus lens and two rotational cylindrical lenses are used to control the defocus and astigmatism. The setup is calibrated using a simple model that allows the calculation of the lens inputs so that a desired defocus and astigmatism are actuated on the eye. The setup is tested by determining the feedforward prediction error, imaging a resolution target, and removing introduced aberrations.
Image Analysis and Transformations
An active contour framework based on the Hermite transform for shape segmentation of cardiac MR images
Early detection of cardiac conditions is fundamental to providing treatment that preserves the patient’s life. Since heart disease is one of the main causes of death in most countries, analysis of cardiac images is of great value for cardiac assessment, and cardiac MR has become essential for heart evaluation. In this work we present a segmentation framework for shape analysis in cardiac magnetic resonance (MR) images. The method consists of an active contour model which is guided by the spectral coefficients obtained from the Hermite transform (HT) of the data. The HT is used as a model to code features of the analyzed images. Region- and boundary-based energies are coded using the zeroth- and first-order coefficients. An additional shape constraint based on an elliptical function is used to control the active contour deformations. The proposed framework is applied to the segmentation of the endocardial and epicardial boundaries of the left ventricle using MR images with short-axis view. The segmentation is sequential for both regions: the endocardium is segmented first, followed by the epicardium. The algorithm is evaluated on several MR images at different phases of the cardiac cycle, demonstrating the effectiveness of the proposed method. Several metrics are used for performance evaluation.
Local retrodiction models for photon-noise-limited images
Matthias Sonnleitner, John Jeffers, Stephen M. Barnett
Imaging technologies working at very low light levels acquire data by attempting to count the number of photons impinging on each pixel. Especially in cases with, on average, less than one photocount per pixel the resulting images are heavily corrupted by Poissonian noise and a host of successful algorithms trying to reconstruct the original image from this noisy data have been developed. Here we review a recently proposed scheme that complements these algorithms by calculating the full probability distribution for the local intensity distribution behind the noisy photocount measurements. Such a probabilistic treatment opens the way to hypothesis testing and confidence levels for conclusions drawn from image analysis.
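The retrodictive idea can be illustrated with the conjugate gamma-Poisson pair: given k photocounts at a pixel, Bayes' theorem yields a full probability distribution for the underlying intensity rather than a single estimate. This is a simplified, hypothetical illustration, not the local model developed in the paper:

```python
def intensity_posterior_mean(k, alpha=1.0, beta=1.0):
    """Posterior mean of the intensity behind k photocounts, assuming a
    Gamma(alpha, beta) prior; conjugacy with the Poisson likelihood
    gives a Gamma(alpha + k, beta + 1) posterior."""
    return (alpha + k) / (beta + 1.0)

# Zero counts do not imply zero intensity: the retrodicted mean stays
# positive, the kind of statement a probabilistic treatment supports.
print(intensity_posterior_mean(0))  # 0.5
print(intensity_posterior_mean(5))  # 3.0
```

Having the full posterior (not just its mean) is what enables the hypothesis testing and confidence levels mentioned above, e.g. asking how probable it is that two neighboring pixels share the same intensity.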
A current-assisted CMOS photonic sampler with two taps for fluorescence lifetime sensing
H. Ingelberts, M. Kuijk
Imaging based on fluorescence lifetime is becoming increasingly important in medical and biological applications. State-of-the-art fluorescence lifetime microscopes either use bulky and expensive gated image intensifiers coupled to a CCD or single-photon detectors in a slow scanning setup. Numerous attempts are being made to create compact, cost-effective all-CMOS imagers for fluorescence lifetime sensing. Single-photon avalanche diode (SPAD) imagers can have very good timing resolution and noise characteristics but have low detection efficiency. Another approach is to use CMOS imagers based on demodulation detectors. These imagers can be either very fast or very efficient but it remains a challenge to combine both characteristics. Recently we developed the current-assisted photonic sampler (CAPS) to tackle these problems and in this work, we present a new CAPS with two detection taps that can sample a fluorescence decay in two time windows. In the case of mono-exponential decays, two windows provide enough information to resolve the lifetime. We built an electro-optical setup to characterize the detector and use it for fluorescence lifetime measurements. It consists of a supercontinuum pulsed laser source, an optical system to focus light into the detector and picosecond timing electronics. We describe the structure and operation of the two-tap CAPS and provide basic characterization of the speed performance at multiple wavelengths in the visible and near-infrared spectrum. We also record fluorescence decays of different visible and NIR fluorescent dyes and provide different methods to resolve the fluorescence lifetime.
Non-diffracting super-airy beam with intensified main lobe
Brijesh Kumar Singh, Roei Remez, Yuval Tsur, et al.
We study, theoretically and experimentally, the concept of a non-diffracting super-Airy beam, whose main lobe is nearly half the size of, and more intense than, the main lobe of the Airy beam. Reducing the main lobe size does not affect the transverse acceleration and non-spreading features of the beam. Furthermore, we observed that during propagation the super-Airy main lobe self-reconstructs after an obstruction faster than the Airy main lobe. We therefore envision that a beam with a smaller lobe size and higher intensity can outperform the Airy beam in applications such as nonlinear optics, curved plasma generation, laser micromachining, and micro-particle manipulation, while the faster reconstruction of the super-Airy main lobe can surpass the Airy beam in applications involving scattering and turbulent media.
Poster Session
Performance prediction of optical image stabilizer using SVM for shaker-free production line
HyungKwan Kim, JungHyun Lee, JinWook Hyun, et al.
Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under hand-shake conditions. However, compared to the non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction trained with a support vector machine on the following module-characterizing features: the noise spectral density of the gyroscope, and the optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
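The classification step can be sketched with a minimal linear SVM on synthetic module features; the feature values and the hinge-loss subgradient trainer below are our illustrative stand-ins for the production classifier:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Minimal linear SVM: subgradient descent on the regularized
    hinge loss. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1.0              # margin violators
        w -= lr * (lam * w - (X[mask].T @ y[mask]) / n)
        b -= lr * (-(y[mask].sum()) / n)
    return w, b

rng = np.random.default_rng(0)
# hypothetical per-module features: gyro noise density, Hall/actuator
# linearity, cross-axis movement (numbers are illustrative only)
good = rng.normal([0.2, 0.95, 0.05], 0.02, size=(50, 3))
bad = rng.normal([0.6, 0.80, 0.20], 0.02, size=(50, 3))
X = np.vstack([good, bad])
y = np.array([1.0] * 50 + [-1.0] * 50)
w, b = train_linear_svm(X, y)
train_acc = (np.sign(X @ w + b) == y).mean()
```

In production one would evaluate recall on held-out modules labeled by the shaker-table ground truth rather than training accuracy.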
High-speed digital color fringe projection technique for three-dimensional facial measurements
Cheng-Yang Liu, Li-Jen Chang, Chung-Yi Wang
Digital fringe projection techniques have been widely studied in industrial applications because of the advantages of high accuracy, fast acquisition, and non-contact operation. In this study, a single-shot high-speed digital color fringe projection technique is proposed to measure three-dimensional (3-D) facial features. The light source used in the measurement system is structured light with color fringe patterns. A projector with digital light processing is used as the light source to project color structured light onto the face. The distorted fringe pattern image is captured by a 3-CCD color camera and separated into red, green, and blue channels. The phase-shifting algorithm and a quality-guided path unwrapping algorithm are used to calculate the absolute phase map. The viewing angle of the color camera is adjusted with a motorized stage. Finally, a complete 3-D facial feature is obtained by our technique. We have achieved simultaneous 3-D phase acquisition, reconstruction, and display within 0.5 s. The experimental results may provide a novel, accurate, and real-time 3-D shape measurement for facial recognition systems.
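A single-shot color scheme typically assigns one phase-shifted fringe pattern to each of the R, G, and B channels; a minimal three-step phase recovery, assuming 2π/3 shifts between channels, looks like:

```python
import numpy as np

def three_step_phase(Ir, Ig, Ib):
    """Wrapped phase from three fringe patterns with -2*pi/3, 0,
    +2*pi/3 shifts (here: the R, G, B channels of one color frame):
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (Ir - Ib), 2.0 * Ig - Ir - Ib)

# synthetic channels with a known phase ramp
phi_true = np.linspace(0.1, 3.0, 64)
Ir, Ig, Ib = (0.5 + 0.4 * np.cos(phi_true + d)
              for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3))
phi = three_step_phase(Ir, Ig, Ib)
```

The wrapped result then goes through quality-guided path unwrapping to produce the absolute phase map described in the abstract.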
Efficient detection and recognition algorithm of reference points in photogrammetry
Weimin Li, Gang Liu, Lichun Zhu, et al.
In photogrammetry, we propose an approach for automatic detection and recognition of reference points that meets the requirements on their detection and matching. The reference points used here are circular coded targets (CCTs), which consist of two parts: a round target point in the central region and a circular encoding band in the surrounding region. First, the contours of the image are extracted, and noise and disturbances are filtered out by a series of criteria, such as the contour area and the correlation coefficient between contour regions. Second, cubic spline interpolation is applied to the central contour region of the CCT; the contours of the interpolated image are extracted again, and least-squares ellipse fitting is performed to calculate the center coordinates of the CCT. Finally, the encoded value is obtained from the angle information of the circular encoding band. The experimental results show that the presented algorithm locates the CCT with sub-pixel precision, and the recognition accuracy remains high even when the image background is complex and full of disturbances. In addition, the algorithm is robust and fast.
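The least-squares ellipse fitting step can be sketched as a general conic fit whose gradient vanishes at the center (one standard formulation, not necessarily the authors' exact implementation):

```python
import numpy as np

def fit_ellipse_center(x, y):
    """Least-squares conic fit A x^2 + B xy + C y^2 + D x + E y + F = 0,
    taking the SVD null vector under the constraint ||coeffs|| = 1;
    the ellipse center is where the conic gradient vanishes."""
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    A, B, C, D, E, _ = np.linalg.svd(M)[2][-1]
    return np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])

# noiseless contour points on an ellipse centered at (3.5, -1.25)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
cx, cy = fit_ellipse_center(3.5 + 4.0 * np.cos(t),
                            -1.25 + 2.0 * np.sin(t))
```

On real CCT images the contour points come from the interpolated edge extraction, and the fit averages out pixel-level noise, which is what yields the sub-pixel center location.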
Products recognition on shop-racks from local scale-invariant features
Jacek Zawistowski, Grzegorz Kurzejamski, Piotr Garbat, et al.
This paper presents a system designed for multi-object detection and adjusted for product search on market shelves. The system uses well-known binary keypoint detection algorithms to find characteristic points in the image. One of the main ideas is object recognition based on the Implicit Shape Model method, to which the authors propose many improvements. Originally, fiducial points are matched with a very simple function, which limits the number of object parts that can be successfully separated, while various classification methods may be validated to achieve higher performance. Such an extension implies research on a training procedure able to deal with many object categories. The proposed solution opens new possibilities for many algorithms demanding fast and robust multi-object recognition.
3D phase stepping optical profilometry using a fiber optic Lloyd's mirror
This study presents measurements of three-dimensional rigid-body shapes using a fiber optic Lloyd's mirror. A fiber optic Lloyd's mirror assembly creates an optical interference pattern from real point light sources and their images. The fringe pattern generated by this technique is deformed when projected onto an object's surface. The introduced surface profilometry algorithm relies on a multi-step phase-shifting process. The deformed fringe patterns containing information about the object's surface profile are captured by a digital CCD camera. As each frame is captured, the required π/2 phase shifts of the interference fringe pattern are obtained by mechanically sliding the Lloyd assembly with an ordinary micrometer stage. Preprocessing algorithms are applied to the frames, which are then processed to obtain 3D topographies. Finally, the continuous phase data determine the depth information and the surface topography of the object. The experimental setup is simple and inexpensive to construct, and it is insensitive to the ambient temperature fluctuations and environmental vibrations that cause unwanted effects on the projected fringe pattern. Such a fiber optic Lloyd's system, which provides accurate non-contact measurement without contaminating or harming the object surface, has a wide range of applications, from laser-interference lithography at the nanoscale to macro-scale interferometry.
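With π/2 steps, four frames suffice for the standard four-step phase-shifting formula; a minimal sketch with synthetic fringes and numerical unwrapping:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four frames with pi/2 steps,
    Ik = A + B*cos(phi + (k-1)*pi/2): phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

# synthetic deformed fringes along one scan line
x = np.linspace(0.0, 4 * np.pi, 256)
phi_true = x + 0.5 * np.sin(x)             # carrier plus surface term
frames = [1.0 + 0.8 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = np.unwrap(four_step_phase(*frames))  # remove the 2*pi jumps
```

The unwrapped phase maps linearly to height once the projection geometry is calibrated.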
Influence of array photodetectors characteristics on the accuracy of the optical-electronic system with optical equisignal zone
Vadim F. Gusarov, Anton A. Maraev, Aleksandr N. Timofeev, et al.
In this paper we assess the possibility of applying an array photodetector and digital information processing in positioning systems based on an optical equisignal zone. Basic theoretical information about the working principles of a system with an equisignal zone and the formation of the base direction is given. The influence on the positioning accuracy of parameters such as the number and size of the receiver array elements, the bit depth of the analog-to-digital converter, and aberrations of the optical elements is investigated. The possibility of registering angular displacements (rotations) of the controlled object using the developed system is shown.
Deformably registering and annotating whole CLARITY brains to an atlas via masked LDDMM
Kwame S. Kutten, Joshua T. Vogelstein, Nicolas Charon, et al.
The CLARITY method renders brains optically transparent to enable high-resolution imaging in the structurally intact brain. Anatomically annotating CLARITY brains is necessary for discovering which regions contain signals of interest. Manually annotating whole-brain, terabyte CLARITY images is difficult, time-consuming, subjective, and error-prone. Automatically registering CLARITY images to a pre-annotated brain atlas offers a solution, but is difficult for several reasons. Removal of the brain from the skull and subsequent storage and processing cause variable non-rigid deformations, thus compounding inter-subject anatomical variability. Additionally, the signal in CLARITY images arises from various biochemical contrast agents which only sparsely label brain structures. This sparse labeling challenges the most commonly used registration algorithms that need to match image histogram statistics to the more densely labeled histological brain atlases. The standard method is a multiscale Mutual Information B-spline algorithm that dynamically generates an average template as an intermediate registration target. We determined that this method performs poorly when registering CLARITY brains to the Allen Institute's Mouse Reference Atlas (ARA), because the image histogram statistics are poorly matched. Therefore, we developed a method (Mask-LDDMM) for registering CLARITY images, that automatically finds the brain boundary and learns the optimal deformation between the brain and atlas masks. Using Mask-LDDMM without an average template provided better results than the standard approach when registering CLARITY brains to the ARA. The LDDMM pipelines developed here provide a fast automated way to anatomically annotate CLARITY images; our code is available as open source software at http://NeuroData.io.
Long-distance eye-safe laser TOF camera design
Anton V. Kovalev, Vadim M. Polyakov, Vyacheslav A. Buchenkov
We present a new TOF camera design based on a compact, actively Q-switched, diode-pumped solid-state laser operating in the 1.5 μm range and a receiver based on a short-wave infrared InGaAs PIN-diode focal plane array with an image intensifier and a special readout integration circuit. The compact camera is capable of depth imaging up to 4 kilometers at 10 frames/s with a 1.2 m error. The camera can be applied to airborne and space geodesy, location, and navigation.
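The quoted figures follow from basic pulsed time-of-flight arithmetic; a small sketch (the 26.7 μs round-trip time below is our illustrative value for a ~4 km target, not a measured number):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_s):
    """Pulsed TOF: target range from the pulse round-trip time."""
    return C * round_trip_s / 2.0

def timing_for_depth_error(depth_err_m):
    """Timing resolution required to reach a given depth error."""
    return 2.0 * depth_err_m / C

r = tof_range(26.7e-6)             # round trip for a ~4 km target
dt = timing_for_depth_error(1.2)   # timing budget for 1.2 m error (~8 ns)
```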
Robust Bessel beam scanning without mechanical movement
Maria Eloisa M. Ventura, Giovanni A. Tapang, Caesar A. Saloma
Bessel beams are sensitive to tilt, which limits their use in confocal scanning microscopy. We discuss a method for scanning a Bessel beam using a spatial light modulator (SLM) without the need for mechanical movement. Adding a grating phase to the SLM input produces robust point spread functions across the imaging plane whose normalized fidelity deviates from unity by less than 1.5%.
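The grating-phase steering can be illustrated with a scalar Fourier-optics toy model: adding a linear phase ramp across the aperture shifts the far-field spot by a known number of pixels (an illustration of the principle, not the authors' SLM pipeline):

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 < (n // 4) ** 2).astype(float)

def spot_shift(kx):
    """Add a linear grating phase of kx cycles across the aperture;
    the far-field (FFT) focal spot then shifts by kx pixels."""
    field = aperture * np.exp(2j * np.pi * kx * X / n)
    far = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))
    iy, ix = np.unravel_index(np.argmax(far), far.shape)
    return ix - n // 2, iy - n // 2

dx, dy = spot_shift(10)
```

Because the shift is set purely by the displayed phase, the scan involves no moving parts; the SLM refresh rate sets the scan speed.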
Sparsity metrics for autofocus in digital holographic microscopy
Xin Fan, John J. Healy, Yan Guanshen, et al.
Digital holographic microscopy is an opto-electronic technique that enables the numerical reconstruction of the complex wave-field reflected from, or transmitted through, a target. Together with phase unwrapping, this method permits a height profile, a thickness profile, and/or a refractive index profile to be extracted, in addition to the reconstruction of the image intensity. Digital holographic microscopy is unlike classical imaging systems in that one can obtain the focused image without situating the camera in the focal plane; indeed, it is possible to recover the complex wave-field at any distance from the camera plane. In order to reconstruct the image, the captured interference pattern is first processed to remove the virtual image and DC component, and then back-propagated using a numerical implementation of the Fresnel transform. A necessary input parameter to this algorithm is the distance from the camera to the image plane, which may be measured independently, estimated by eye following reconstruction at multiple distances, or estimated automatically using a focus metric. Autofocus algorithms are commonly used in microscopy to estimate the depth at which the image comes into focus by adjusting the microscope stage; in digital holographic microscopy the hologram can be reconstructed at multiple depths, and the autofocus metric can be evaluated for each reconstructed image intensity. In this paper, fifteen sparsity metrics are investigated as potential focus metrics for digital holographic microscopy, whereby the metrics are applied to a series of reconstructed intensities. These metrics are tested on the hologram of a biological cell. The results demonstrate that many of the metrics produce similar profiles, and groupings of the metrics are proposed.
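One representative sparsity metric is the l2/l1 ratio of the gradient magnitude, evaluated over a stack of reconstructions and maximized at focus; a sketch with a synthetic defocus stack (box-filter blur stands in for Fresnel propagation, and this metric is only one of the fifteen studied):

```python
import numpy as np

def grad_sparsity(img):
    """l2/l1 ratio of the gradient magnitude: larger when the
    gradients are sparser (sharper image)."""
    gx, gy = np.gradient(img)
    g = np.hypot(gx, gy).ravel()
    return np.linalg.norm(g) / (np.abs(g).sum() + 1e-12)

def box_blur(img, passes):
    """Crude defocus stand-in: repeated 5-point averaging."""
    for _ in range(passes):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

Y, X = np.mgrid[0:64, 0:64]
sharp = ((X // 8 + Y // 8) % 2).astype(float)           # in-focus object
stack = [box_blur(sharp, p) for p in (4, 2, 0, 2, 4)]   # "depth" stack
best = int(np.argmax([grad_sparsity(im) for im in stack]))
```

In the holographic setting, each stack entry would be a Fresnel back-propagation at a candidate distance, and the argmax gives the autofocus estimate.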
The rejection of vibrations in adaptive optics systems using a DFT-based estimation method
Dariusz Kania, Józef Borkowski
Adaptive optics systems are commonly used in many optical setups to reduce perturbations and increase system performance. A problem in such systems is undesirable vibrations caused by effects such as shaking of the whole structure or the tracking process. This paper presents a frequency, amplitude, and phase estimation method for a multifrequency signal that can be used to adaptively reject these vibrations. The estimation method is based on the FFT procedure. The undesirable signals are usually exponentially damped harmonic oscillations. The estimation error depends on several parameters and consists of a systematic component and a random component. The systematic error depends on the signal phase, the number of samples N in the measurement window, the value of CiR (the number of signal periods in the measurement window), the THD value, and the time window order H. The random error depends mainly on the noise variance and the SNR value. This paper studies the influence of the signal phase and the estimation of the parameters of exponentially damped sinusoids. The error signals are periodic, with a period related to the signal period and the sliding measurement window. For CiR = 1.6 and a damping ratio of 0.1%, the error was on the order of 10⁻⁵ Hz/Hz, 10⁻⁴ V/V, and 10⁻⁴ rad for the frequency, amplitude, and phase estimates respectively. The information provided in this paper can be used to determine the approximate efficiency of the vibration elimination process before starting it.
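A minimal DFT-based estimator in the same spirit — a windowed FFT followed by interpolation around the spectral peak — can be sketched as follows (a generic single-tone estimator, not the paper's exact method):

```python
import numpy as np

def estimate_tone(x, fs):
    """Frequency of a single tone: Hann-windowed FFT plus parabolic
    interpolation of the log-magnitude around the peak bin."""
    n = len(x)
    mag = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(mag[1:-1])) + 1        # skip the DC/Nyquist ends
    a, b, c = np.log(mag[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # fractional-bin offset
    return (k + delta) * fs / n

fs, n, f_true = 1000.0, 1024, 123.4
t = np.arange(n) / fs
f_est = estimate_tone(0.7 * np.sin(2 * np.pi * f_true * t + 0.3), fs)
```

The interpolation recovers sub-bin resolution, which is what makes errors on the order of 10⁻⁵ Hz/Hz achievable for well-conditioned signals.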
Singlet oxygen detection in water by means of digital holography and digital holographic tomography
The paper presents results on singlet oxygen detection in aqueous solutions of a photosensitizer, based on the reconstruction of 3D temperature gradients resulting from the nonradiative deactivation of excited oxygen molecules. 3D temperature distributions were reconstructed by means of the inverse Abel transformation from a single digital hologram in the case of a cylindrically symmetric temperature gradient, and by a holographic tomography algorithm with filtered back projection in the nonsymmetric case. Major features of the applied techniques are discussed, and results obtained by the two methods are compared.
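For the cylindrically symmetric case, the inverse Abel step can be sketched with a simple onion-peeling discretization (one standard numerical route; the paper's implementation may differ):

```python
import numpy as np

def onion_peeling_matrix(n):
    """Upper-triangular forward-Abel matrix for n concentric annuli of
    unit width: A[i, j] is the chord length of line-of-sight i through
    annulus j, assuming a piecewise-constant radial profile."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2 * (np.sqrt((j + 1) ** 2 - i**2)
                           - np.sqrt(max(j, i) ** 2 - i**2))
    return A

n = 50
r = np.arange(n) + 0.5
f_true = np.exp(-(r / 12.0) ** 2)        # radial (e.g. temperature) profile
A = onion_peeling_matrix(n)
projection = A @ f_true                  # line-of-sight integrals
f_rec = np.linalg.solve(A, projection)   # inverse Abel by back-substitution
```

In the holographic measurement, the projection data come from the phase map of a single hologram; the asymmetric case instead requires multiple viewing angles and filtered back projection.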
Optical-electronic system for real-time structural health monitoring of roofs
Sergey V. Mikheev, Igor A. Konyakhin, Oleg A. Barsukov
The paper reports the results of computational and physical modeling of an optical-electronic measurement system for real-time position control of extended objects with active tags. We propose an original method for solving systems of differential equations to calculate the coordinates of the objects, and an original multichannel monitoring optical-electronic system based on orthogonal channels. We created a physical model of this system for controlling the position of a pool's roof.
Robust object tracking techniques for vision-based 3D motion analysis applications
Vladimir A. Knyaz, Sergey Yu. Zheltov, Boris V. Vishnyakov
Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition based on different physical principles (accelerometry, magnetometry, time-of-flight, vision), optical motion capture systems offer advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for marker-less object motion capture, is developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
High resolution spectroscopic mapping imaging applied in situ to multilayer structures for stratigraphic identification of painted art objects
The development of non-destructive techniques is a reality in the field of conservation science. These techniques are usually less accurate than analytical micro-sampling techniques; however, the proper development of soft-computing techniques can improve their accuracy. In this work, we propose a real-time, fast-acquisition spectroscopic mapping imaging system that operates from the ultraviolet to the mid-infrared (UV/Vis/nIR/mIR) region of the electromagnetic spectrum and is supported by a set of soft-computing methods to identify the materials present in a stratigraphic structure of paint layers. The system acquires spectra in diffuse-reflectance mode, scanning a region of interest (ROI) over a wavelength range from 200 up to 5000 nm. A fuzzy c-means clustering algorithm, the particular soft-computing method used, produces the mapping images. The method was evaluated on a Byzantine painted icon.
Shape extraction in fetal ultrasound images using a Hermite-based filtering approach and a point distribution model
In this work we present a segmentation framework applied to fetal cardiac images. One of the main problems of segmentation in ultrasound images is the speckle pattern, which makes it difficult to model image features such as edges and homogeneous regions. Our approach is based on two main processes. The first enhances the ultrasound image using a noise reduction scheme; the Hermite transform is used for this purpose. In the second, a previously trained Point Distribution Model (PDM) is used for segmentation of the desired object. The filtering process is employed before the segmentation stage with the aim of improving the results and making the segmentation more robust. We evaluate the proposed method on segmentation of the left ventricle in fetal ultrasound data. Different metrics are used to validate and compare the performance with other methods applied to fetal echocardiographic images.
Position estimation for fiducial marks based on high intensity retroreflective tape
Anna Trushkina, Mariya Serikova, Anton Pantyushin
3D position estimation of an object usually involves computer vision techniques, which require fiducial markers attached to the object's surface. Modern technology provides high-intensity retroreflective material in the form of a tape, which is easy to mount on the object and can be used as a base for fiducial marks. An inevitable drawback of the tapes with the highest retroreflective intensity, however, is the presence of a technological pattern that affects the spatial distribution of retroreflected light and deforms the border of any print on the tape's surface. In this work we compare various mark shapes and examine Fourier-descriptor-based image processing to estimate the accuracy of mark image position. To verify the results, we developed a setup consisting of a camera based on a Sony ICX274 CCD, a 25 mm lens, 800 nm LED lighting, and high-intensity microprismatic tape. The experiment showed no significant difference between the proposed mark shapes, nor between direct and indirect contrast, when the proposed image processing is used. The experiments confirmed that image processing implemented without elimination of the non-reflective netting pattern can only provide an accuracy of coordinate extraction close to 1 pixel.
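Fourier descriptors of a closed contour separate position from shape: the zeroth coefficient is the centroid (the mark position), while the normalized higher coefficients describe shape. A minimal sketch (illustrative only, not the authors' processing chain):

```python
import numpy as np

def fourier_descriptors(x, y):
    """FFT of the complex contour z = x + iy, uniformly sampled.
    c[0] is the centroid (mark position); |c[k]|/|c[1]| for k >= 1
    gives scale-normalized shape descriptors."""
    c = np.fft.fft(x + 1j * y) / len(x)
    return c[0], np.abs(c[1:]) / (np.abs(c[1]) + 1e-12)

t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
centroid, shape = fourier_descriptors(10.25 + 3.0 * np.cos(t),
                                      -4.75 + 3.0 * np.sin(t))
```

Truncating the descriptor series low-pass filters the contour, which is one way to suppress border deformation caused by the tape's netting pattern.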
Accurate and high-performance 3D position measurement of fiducial marks by stereoscopic system for railway track inspection
Alexey A. Gorbachev, Mariya G. Serikova, Ekaterina N. Pantyushina, et al.
Modern demands on railway track measurements require high accuracy (about 2-5 mm) of rail placement along the track to ensure smooth, safe, and fast transportation. As a means of railway geometry measurement, we suggest a stereoscopic system that measures the 3D position of fiducial marks arranged along the track using image processing algorithms. The system accuracy was verified in laboratory tests by comparison with the indications of a precise laser tracker. An accuracy of ±1.5 mm within a measurement volume of 150×400×5000 mm was achieved. This confirms that the stereoscopic system demonstrates good measurement accuracy and can potentially be used as a fully automated means of railway track inspection.
Simulator of human visual perception
Vitalii V. Bezzubik, Nickolai R. Belashenkov
A Difference of Circs (DoC) model that simulates the response of retinal ganglion cells to stimuli is presented and studied in relation to the receptive fields of the human retina. According to this model, the neuron response reduces to simple arithmetic operations, and the results of these calculations correlate well with experimental data over a wide range of stimulus parameters. The simplicity of the model and the reliability with which it reproduces responses allow us to propose the concept of a device that simulates the signals generated by ganglion cells in reaction to presented stimuli. The signals produced according to the DoC model are considered a result of the primary processing of information received from receptors, independent of their type, and may be sent to higher levels of the nervous system for subsequent processing. Such a device may be used as a prosthesis for a disabled organ.
Texel-based image classification with orthogonal bases
Periodic variations in patterns within a group of pixels provide important information about the surface of interest and can be used to identify objects or regions. Hence, a proper analysis can be applied to extract particular features according to specific image properties. Recently, texture analysis using orthogonal polynomials has gained attention, since polynomials characterize the pseudo-periodic behavior of textures through the projection of the pattern of interest onto a group of kernel functions. However, the maximum polynomial order is often linked to the size of the texture, which in many cases implies complex calculation and introduces instability at higher orders, leading to computational errors. In this paper, we address this issue and explore a pre-processing stage to compute the optimal size of the window of analysis, called a "texel." We propose Haralick-based metrics to find the main oscillation period, such that it represents the fundamental texture and captures the minimum information sufficient for classification tasks. This procedure avoids the computation of large polynomials and substantially reduces the feature space with small classification errors. Our proposal is also compared against different fixed-size windows. We also show similarities between full-image representations and texel-based ones in terms of visual structures and feature vectors, using two different orthogonal bases: Tchebichef and Hermite polynomials. Finally, we assess the performance of the proposal using well-known texture databases from the literature.
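Finding the main oscillation period that sets the texel size can be sketched with a row-autocorrelation estimate (a simplified stand-in for the Haralick-based metrics used in the paper):

```python
import numpy as np

def texel_period(img):
    """Estimate the dominant horizontal texture period as the lag of
    the first peak of the mean row autocorrelation."""
    rows = img - img.mean(axis=1, keepdims=True)
    n = img.shape[1]
    # FFT-based linear autocorrelation (zero-padded), averaged over rows
    spec = np.abs(np.fft.rfft(rows, n=2 * n, axis=1)) ** 2
    ac = np.fft.irfft(spec, axis=1).mean(axis=0)[:n]
    # first local maximum after the zero-lag peak
    for k in range(1, n - 1):
        if ac[k] > ac[k - 1] and ac[k] >= ac[k + 1]:
            return k
    return n

Y, X = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * X / 16.0)   # synthetic texture, period 16
p = texel_period(img)
```

The estimated period then fixes the window size over which the low-order Tchebichef or Hermite projections are computed, keeping polynomial orders small and stable.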
Perspective projection for variance pose face recognition from camera calibration
M. M. Fakhir, W. L. Woo, J. A. Chambers, et al.
Variance pose is an important research topic in face recognition. The alteration of distance parameters across face features under varying pose is challenging. We provide a solution to this problem using perspective projection for variance pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable projection of the image plane into 3D. After this, face box tracking and eye-centre detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses of the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
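The perspective-projection bookkeeping behind such eye measurements can be sketched with a standard pinhole camera model (the intrinsics and eye coordinates below are illustrative values, not FERET calibration data):

```python
import numpy as np

def project(points3d, K):
    """Pinhole projection: divide by depth, then apply intrinsics K."""
    p = points3d / points3d[:, 2:3]
    return (K @ p.T).T[:, :2]

# illustrative intrinsics: 800 px focal length, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
eyes = np.array([[-0.03, 0.0, 0.5],   # left eye centre, metres
                 [0.03, 0.0, 0.5]])   # right eye centre
uv = project(eyes, K)
ipd_px = np.linalg.norm(uv[1] - uv[0])  # inter-eye distance in pixels
```

Rotating the 3D eye points before projection shows how the apparent inter-eye distance shrinks with pose, which is the geometric effect the recognition system compensates for.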