Proceedings Volume 10997

Three-Dimensional Imaging, Visualization, and Display 2019



Volume Details

Date Published: 26 July 2019
Contents: 10 Sessions, 28 Papers, 12 Presentations
Conference: SPIE Defense + Commercial Sensing 2019
Volume Number: 10997

Table of Contents


  • Front Matter: Volume 10997
  • 3D Imaging and Related Technologies I
  • 3D Displays
  • 3D Imaging and Related Technologies II
  • 3D Imaging and Related Technologies III
  • Digital/Electro-Holography and Related I
  • Digital/Electro-Holography and Related II
  • Digital/Electro-Holography and Related III
  • 3D Imaging and Related Technologies IV
  • Poster Session
Front Matter: Volume 10997
Front Matter: Volume 10997
This PDF file contains the front matter associated with SPIE Proceedings Volume 10997, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
3D Imaging and Related Technologies I
Compressive sensing with variable density sampling for 3D imaging
Compressive Sensing (CS) can alleviate the sensing effort involved in the acquisition of three-dimensional (3D) image data. The most common CS sampling schemes employ uniformly random sampling because it is universal and thus applicable to almost any signal. However, by considering general properties of images and properties of the acquisition mechanism, it is possible to design random sampling schemes with variable density that have improved CS performance. We introduced the concept of non-uniform CS random sampling for holography a decade ago. In this paper we review the evolution of the non-uniform CS sampling concept and its application to coherent holography, incoherent holography, and 3D LiDAR imaging.
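As an illustration of the idea (not the authors' actual scheme), the sketch below builds a variable-density random sampling mask in NumPy: the sampling probability decays with distance from the spectrum center, so the low frequencies that carry most natural-image energy are sampled more densely. The quadratic decay profile and the normalization are illustrative assumptions.

```python
import numpy as np

def variable_density_mask(shape, rate, power=2.0, seed=0):
    """Random sampling mask whose density decays with distance from the
    spectrum center (low frequencies sampled densely, high sparsely).
    `power` controls how fast the density falls off toward the edges."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    # normalized distance from the center, in [0, 1]
    r = np.hypot((y - ny / 2) / (ny / 2), (x - nx / 2) / (nx / 2)) / np.sqrt(2)
    density = (1.0 - r) ** power                      # high near center
    density *= rate * density.size / density.sum()    # rescale to target rate
    return rng.random(shape) < np.clip(density, 0, 1)

mask = variable_density_mask((256, 256), rate=0.25)
```

A uniformly random mask is the special case `power=0`; increasing `power` concentrates more of the sampling budget near the spectrum center.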
3D image processing using deep neural network
In the field of 3D image processing, much research has been conducted on topics such as multi-view image coding and data compression, view interpolation, coded-aperture-based light field acquisition, and light field display signal calculation. The challenge common to these technologies is that they usually require heavy computation due to the large amount of data involved. In this paper, we report the results of experiments in which we replace these computations with deep neural networks (DNNs) and convolutional neural networks (CNNs). In some cases, DNNs and CNNs outperform conventional methods in both quality and calculation speed.
Sampling requirements for lightfield displays with a large depth of field
Tim Borer
Lightfield displays potentially offer a new form of video content providing greater immersion and a stronger sense of presence. Invented by Gabriel Lippmann in 1908, lightfield displays can present natural 3D images that have motion parallax in both the horizontal and vertical directions. Importantly, they also allow the viewer to focus at different depths within the image, which is not possible with stereoscopic displays. Ideally, they require a large depth of field. The depth of field is the range of depths, perpendicular to the display, over which the full resolution of the display (at zero depth) is maintained. To make the best use of lightfield displays, we need to know the depth of field and how image resolution decreases outside this range. Prior literature provides an indication of the depth of field for displays with shallow depths of field. Such calculations are based on the Nyquist limit for multidimensional (angular and spatial) sampling. Extrapolating this approach to larger depths of field indicates that an infinite number of elemental pixels would be required to achieve an infinite depth of field. If true, this would be a disappointing result. However, such calculations are based on the physical lightfield and do not take account of the observer. Taking the observer into account indicates that only a finite number of elemental pixels are required to achieve an infinite depth of field. This paper presents formulas, and their derivation, for the depth of field of lightfield displays with a large depth of field.
Comparison of reconstructed image quality in 3D display using optimized binary phase modulation
We have been investigating a three-dimensional (3D) display system using binary phase modulation. Eliminating the amplitude distribution and binarizing the phase distribution degrade the reconstructed image quality and enhance the influence of speckle; therefore, optimization of the binary phase pattern is required. So far, the Gerchberg–Saxton and modified Fresnel ping-pong algorithms have proven to be powerful iterative phase retrieval methods for 3D displays, and the error diffusion technique is available for gray-scale phase distributions. In this paper, we compare the reconstructed image quality of two-dimensional images floating in the air obtained using these methods.
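For reference, a minimal Gerchberg–Saxton loop with a hard binary (0/π) phase constraint can be sketched as follows. This is a generic textbook variant, not the authors' optimized implementation: the far field is modeled by a plain FFT, and the binarization rule (sign of the real part) is an illustrative choice.

```python
import numpy as np

def gs_binary_phase(target, iters=50, seed=0):
    """Gerchberg-Saxton iteration retrieving a binary (0 or pi) phase
    pattern whose Fourier transform approximates the `target` amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.choice([0.0, np.pi], size=target.shape)
    for _ in range(iters):
        field = np.fft.fft2(np.exp(1j * phase))        # hologram -> far field
        field = target * np.exp(1j * np.angle(field))  # impose target amplitude
        back = np.fft.ifft2(field)                     # far field -> hologram
        # hard binarization: keep only the sign of the real part
        phase = np.where(back.real >= 0, 0.0, np.pi)
    return phase

# toy target: a bright square on a dark background
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
phase = gs_binary_phase(target)
```

Because the binary-phase hologram is real-valued, its spectrum is Hermitian, so a twin image appears at the point-mirrored position; choosing a roughly centered target makes the twin overlap the signal.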
Manipulation of material perception with light-field projection
Reflected illumination from a surface provides a variety of information relevant to material perception. A precisely designed illumination pattern projected with a video projector changes the reflected illumination, thereby manipulating our material perception beyond apparent texture and color. In this paper, several works on perceptual material appearance manipulation that employ projector-camera systems are presented. Psychophysics-based image processing algorithms that utilize a co-axial optical configuration facilitate successive manipulation. In our latest work, viewing-direction-dependent appearance manipulation using light-field projection onto an anisotropic reflection surface is proposed for advanced material perception manipulation.
3D Displays
Aquatic information display and its applications for behavioral biology experiments
Hirotsugu Yamamoto, Erina Abe, Masaki Yasugi, et al.
This paper proposes a new display that forms an information screen in water or around a water tank. The information screen is formed with aerial imaging by retro-reflection (AIRR). AIRR employs three elements: a light source, a beam splitter, and a retro-reflector. The retro-reflector converges the retro-reflected light to the position plane-symmetrical to the light source with respect to the beam splitter. AIRR features a wide viewing angle, large-size scalability, and low cost owing to mass-producible processes. An information screen inside a water tank can be formed with the following three types of optical systems. In Type I, we place a beam splitter above the water surface to reflect light from the light source and transmit the retro-reflected light. Type II uses the water surface as a beam splitter, because the interface between water and air causes reflection and refraction. Type III uses the bottom of the water tank as a beam splitter. Preliminary experimental results show that Type III forms the clearest and most stable image inside water. By using a cone-shaped beam splitter and a flat panel display, we have formed an omni-directional information screen that surrounds a cylindrical water tank. We have utilized this omni-directional display to investigate the optomotor reactions of fish to visual stimuli. A medaka in the water tank followed the rotating stripes shown on the surrounding screen. Furthermore, a medaka reacted to a biological motion image shown in water. Thus, the proposed display is useful for behavioral biology experiments.
Aktina vision: full-parallax light field display system with resolution of 330,000 pixels using top-hat diffusing screen
Hayato Watanabe, Naoto Okaichi, Takuya Omura, et al.
Light-field display technologies are popular glasses-free three-dimensional (3D) display methods whereby natural 3D images can be viewed by precisely reproducing the light rays from objects. However, sufficient display performance cannot be obtained with conventional display techniques, because high-quality 3D images require the reproduction of a great number of high-density light rays. Therefore, we developed a novel light-field display method named Aktina Vision, which consists of a special 3D screen with isotropic narrow diffusion characteristics and a display optical system for projecting high-density light rays. In this method, multi-view images with horizontal and vertical parallax are projected onto the 3D screen at various angles in a superposed manner. The 3D screen has a narrow diffusion angle and top-hat diffusion characteristics, optimally widening the light rays according to the discrete intervals between them. 3D images with high resolution and depth reproducibility can be displayed by suppressing crosstalk between light rays and reproducing them with a continuous luminance distribution. We prototyped a display system using 14 exclusively designed 4K projectors and developed a light-field calibration technique. By projecting 350 multi-view images in a superposed manner, the system reproduces 3D images with a resolution of approximately 330,000 pixels, which is three times higher than that of conventional display methods using a lens array, and viewing angles of 35.1° in the horizontal direction and 4.7° in the vertical direction.
Calibration method applied to a tunable tensor display system
David Carmona-Ballester, Viana L. Guadalupe-Suárez, Juan M. Trujillo-Sevilla, et al.
We present our latest advances in the design and implementation of a tunable automultiscopic display based on the tensor display model. A design comprising a three-layer display was introduced. In this design, the front and rear layers can be controlled with six degrees of freedom relative to the central layer of the system. A calibration method consisting of displaying a checkerboard pattern in each layer was proposed. By computing the homography of these patterns with respect to the reference plane, it was possible to estimate the needed adjustments. An implementation based on this design was carried out and calibrated following the aforementioned technique. The obtained results demonstrate the feasibility of the implementation.
AR optics using two depths
Sung Kyu Kim, Yong Won Kwon, Ki-Hyuk Yoon
AR optics that provides only one virtual depth cannot be applied to a wide range of depths because of the vergence-accommodation conflict (VAC). We have developed a way to overcome this mismatch problem by devising an AR optics system with two depths. From the analysis of the FOV, ER, and EB, together with experimental results, we show that the depth of the virtual image can cover a wide range.
3D Imaging and Related Technologies II
Reproducibility of depth distance by one-dimensional integral photography
We developed integral photography with parallax only in the horizontal direction, namely one-dimensional integral photography. A 4K LCD and lenticular lenses were used for the display equipment. To reproduce the three-dimensional image, a fixation point was set for the camera array in the computer, a multi-view stereoscopic image was captured, pixel position conversion was performed, and an elemental image was generated and displayed on the LCD. The display characteristics of resolution and depth distance of the prototyped one-dimensional integral photography are described. Furthermore, from the results of subjective evaluation experiments, we describe the degree to which the reduction of vertical spatial frequency influences depth perception and how the toed-in capturing method influences the reproduction of depth distance.
Active 3D fluorescence imaging based on holography
3D fluorescence imaging based on incoherent digital holography is very useful in the biological field. We have proposed common-path, off-axis incoherent digital holography that uses a diffraction grating embedded with a lens to form the self-interference pattern. For observing fluorescence images, active illumination is more efficient for the holographic method because it avoids unwanted exposure of other areas. We present a feasibility experiment showing the potential characteristics of our proposed system.
Accurate and consistent depth estimation for light field camera arrays
Sang-Heon Shim, Jae Woo Kim, Sang-Eek Hyun, et al.
In this paper, we propose a depth estimation framework for light field camera arrays. The goal of the proposed framework is to compute consistent depth information over the multiple cameras, which is difficult to achieve with conventional approaches based on pairwise stereo matching. We first perform stereo matching on adjacent image pairs using a convolutional neural network-based correspondence scoring model. Once the local disparity maps are estimated, we consolidate the disparity values to make them globally sharable over the multiple views. We finally refine the depth values in the image domain by introducing a novel edge-aware image segmentation method to obtain a semantic-aware global depth map. The proposed framework is evaluated on three different real-world scenarios, and the experimental results validate that the proposed method produces accurate and consistent depth maps for images captured by light field camera arrays.
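The pairwise stereo matching that such a framework builds on, and the conversion from disparity to depth, can be sketched generically (naive SAD block matching on a rectified pair, not the paper's CNN-based scoring model; the window size and disparity range are illustrative):

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, win=5):
    """Naive SAD block matching between a rectified stereo pair: for each
    left-image pixel, find the horizontal shift d into the right image
    that minimizes the sum of absolute differences over a win x win window."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            costs = [np.abs(patch - right[y - pad:y + pad + 1,
                                          x - d - pad:x - d + pad + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Pinhole stereo relation: Z = f * B / d (same units as the baseline)."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1), np.inf)
```

Consolidating such per-pair disparities into a globally consistent depth map is exactly the part the paper addresses with its consolidation and segmentation-based refinement steps.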
3D Imaging and Related Technologies III
Emulation of three-dimensional vision in plants in the red/far-red region by artificial photosynthesis
Ji-Hoon Kang, Minjeong Kim, Hyejin Jung, et al.
Animals see the world through their eyes. Even though plants do not have organs of the visual system, they are receptive to their visual environment. However, the exact mechanism of vision in plants has yet to be determined. For plants, vision is one of the most important senses because they store energy from light: light is not only the source of growth but also a vector of information. Photosynthesis, the process that converts light energy into chemical energy and produces oxygen, is one of the typical phenomena in which light induces a response from plants. In this study, we have emulated three-dimensional vision in plants by artificial photosynthesis. Instead of using real plant cells, we exploited the artificial photosynthetic properties of a photoelectrochemical (PEC) cell. A silicon-based PEC cell sensitive to the red/far-red region (600-850 nm) was used as a single-pixel sensor, and a mechanical scanner was used to emulate a two-dimensional sensor array with this single-pixel sensor. We successfully obtained images by measuring the photocurrents generated by photosynthetic water splitting.
Capturing of a light field image and its real-time aerial reconstruction with AIRR
This paper proposes a novel system that enables an aerial light-field television (TV). Our aerial light-field TV system consists of a light-field camera, a light-field display, and aerial imaging optics. The light-field camera is composed of an imaging lens and a micro-lens array on a high-resolution image sensor. The light-field display is composed of a lens array, a slightly diffusing plate, and a high-resolution flat panel display; the diffusing plate eliminates noticeable fringes caused by the black matrix on the flat panel display. The aerial imaging optics is composed of a projection lens, a beam splitter, and a retro-reflector, based on the principle called AIRR (aerial imaging by retro-reflection). In this work, AIRR forms the aerial image of the projection lens: light coming out of the projection lens passes through the beam splitter and impinges on the retro-reflector, and the retro-reflected light reflects off the beam splitter and converges to the aerial image position of the projection lens. When the viewer's eyes are in front of the aerial image position, the viewer can recognize the reconstructed aerial light-field image. In a conventional light-field camera and display system, image conversion within each elemental image is needed in order to show the true 3D depth, because the imaging optics forms an inverted image within each elemental image. In contrast, our aerial light-field TV system requires no image conversion because the aerial imaging optics converts the depth. Thus, it is possible to reconstruct the light-field image in the air in real time.
Digital/Electro-Holography and Related I
Dedicated computer for computer holography and its future outlook
Takashi Nishitsuji, Yota Yamamoto, Takashige Sugie, et al.
Electro-holography is a prospective television technology for realizing photorealistic three-dimensional (3D) movies. However, the enormous computational power required to generate the computer-generated holograms (CGHs) that digitally record the 3D information of the displayed image has been a barrier to the practical application of electro-holography. To solve this problem, our team has developed a dedicated computer for electro-holography, namely Holographic Reconstruction (HORN). HORN is a peripheral board-type computer comprising field-programmable gate arrays (FPGAs) and a PCI Express interface for configuring cluster systems. In this paper, we introduce the detailed structure of HORN-8 and the algorithms implemented on it. Moreover, we discuss future prospects for improving its visual performance in light of experimental results.
New product development through fusion of hologram technology
Hideyoshi Horimai, Toshihiro Kasezawa
Holograms have many unique features and functions, and we have been developing several new products by fusing hologram technology into them. In Holo-Table, by synchronously projecting parallax images onto a rotating holographic scanning plate, we succeeded in displaying a stereoscopic image that can be observed from 360 degrees. In Holo-Window, we are developing a brand-new building-integrated photovoltaic (BIPV) system made by simply pasting a hologram film onto a window. The hologram film guides sunlight into the window glass and leads it to a small solar cell that generates electricity. In this paper, we introduce further examples of our product developments based on hologram technology.
Digital/Electro-Holography and Related II
Marker-free automatic quantification of red blood cell fluctuations with different storage periods by holographic imaging
Inkyu Moon, Keyvan Jaferzadeh
This paper overviews methods to quantitatively measure the cell membrane fluctuation (CMF) rate of red blood cells (RBCs) with different storage periods, with millisecond temporal sensitivity at the single-cell level, by using marker-free holographic imaging techniques. We quantitatively measured fluctuations of the ring and dimple regions of the discocyte-shaped RBC membrane in the case of storage lesion, using time-lapse phase images of RBCs. Our experimental results demonstrate that normal RBCs with a discocyte shape become stiffer with storage period and that there is a significant negative correlation between CMFs and the sphericity coefficient, which describes RBC morphology. The correlation between CMF and projected surface area is also examined.
Digital holographic microscopy as a screening technology for diabetes
Ana Doblas, Jorge Garcia-Sucerquia, Genaro Saavedra, et al.
Label-free quantitative phase imaging (QPI) is the hallmark of digital holographic microscopy (DHM). One of the most interesting medical applications of QPI-DHM is the analysis, from the acquisition of a single image, of illnesses in which the refractive index and/or the morphology of cells or tissues is distorted. In this contribution, we obtain phase maps of red blood cell (RBC) samples from patients suffering from diabetes mellitus type 1 (DM1) by using DHM. Our experimental results show that the measured phase values differ significantly between non-diabetic controls and diabetic patients. The high correlation between the phase values and the glycated hemoglobin (HbA1c) values determined by the gold-standard method for screening diabetes, together with the clear separation between the two groups, indicates that DHM may potentially be used to evaluate long-term glycemic control in diabetic patients as well as to diagnose diabetes.
Digital/Electro-Holography and Related III
Hologram image quality of binary modulating SLM-based holographic display (Conference Presentation)
We have successfully implemented and reported 360-degree viewable tabletop-style holographic display prototype systems. In order to support a 360-degree horizontal viewing angle, much wider than in previous systems, we used a binary amplitude-modulating DMD device as the SLM of the display system. A DMD has a higher total data rate, denoted as extended SBP or eSBP, than other SLM devices such as LCD or LCoS, owing to its high refresh rate. This is highly beneficial for system designs aiming at a wider viewing angle or a larger image size. However, binary amplitude-modulating holograms have inherent limitations in the resulting hologram image quality, and thus there have been many studies on quality-improving coding algorithms such as BERD or DBS. In this paper, the pros and cons of using a DMD in a holographic display system are discussed, and an in-depth analysis and experimental results are presented on the behavior and limitations of the reconstructed image quality based on our prototype system. Image quality is measured with various metrics, such as 3D-MTF, depth resolution, and color reproduction fidelity, for pixel resolutions from QVGA up to 4K UHD. The 3D-MTF represents the lateral image resolution, while the depth resolution relates to the degree to which holographic displays support the accommodation of human vision. Based on these observations, we project the pixel resolutions required of a binary modulating SLM device to achieve acceptable hologram image quality.
Phase imaging in-line digital holography with random phase modulation
In-line digital holography using random phase modulation is proposed. Owing to the random phase modulation, the twin-image problem inherent in the in-line optical setup is relieved. Furthermore, because no multiplexed recording is required, dynamic phenomena can be recorded. This idea is inspired by double-random phase encryption; the difference between the proposed method and double-random phase encryption is briefly described. Preliminary experimental results confirm the feasibility of the proposed method. The effect of the coherence of the light source is also discussed.
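Reconstruction in in-line digital holography typically relies on numerical propagation of the recorded field; a minimal angular-spectrum propagator (a standard method, shown here without the paper's random phase modulation, with illustrative wavelength and sampling values) can be sketched as:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z using the angular
    spectrum method; evanescent components are suppressed. Negative z
    backpropagates, which is how a hologram is refocused to the object."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2          # (kz / 2pi)^2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)    # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# forward-propagate a simple aperture, then backpropagate to refocus it
aperture = np.zeros((64, 64), dtype=complex)
aperture[24:40, 24:40] = 1.0
holo = angular_spectrum(aperture, 633e-9, 10e-6, 0.02)
refocus = angular_spectrum(holo, 633e-9, 10e-6, -0.02)
```

With a known random phase mask, the same propagation step is applied after the recorded hologram has been demodulated by the mask, which is what suppresses the twin image.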
Quality of electro-holographic image measured with Shack-Hartmann wavefront sensor (Conference Presentation)
Images reconstructed by digital holography are laden with many distortions, whose main cause is known to be the finite size of the pixels in the display panel/chip. Because of this finite size, the starting position of the reconstructed rays within each pixel can be anywhere in the pixel, and hence can differ from the recording beam position, which is usually taken as the center of each pixel. This difference means that the reconstructed rays are no longer the phase-conjugated counterparts of their corresponding recording rays; their wavefronts are somewhat distorted. To estimate these wavefront distortions, a Shack-Hartmann wavefront sensor is placed in the path of the reconstructed beam. The phase distribution obtained with the sensor reveals that, as expected, the distortion is larger for bigger pixel sizes and for images with more reconstructed image points. This result indicates that the sensor is a reasonable means of estimating the distortions in the reconstructed image. The same sensor is also used to estimate the functional performance of holographic optical elements for image projection.
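A Shack-Hartmann sensor recovers local wavefront slopes from the displacement of each lenslet's focal spot; a minimal sketch of the centroid-and-slope computation (generic, with illustrative parameter names, not this paper's processing pipeline) is:

```python
import numpy as np

def spot_centroid(sub):
    """Intensity-weighted centroid (y, x) of one lenslet sub-image, in pixels."""
    ys, xs = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
    total = sub.sum()
    return np.array([(ys * sub).sum(), (xs * sub).sum()]) / total

def wavefront_slopes(spots, refs, pixel_pitch, focal_length):
    """Local wavefront slope per lenslet: the focal-spot displacement from
    its reference position (in meters) divided by the lenslet focal length."""
    return (np.asarray(spots) - np.asarray(refs)) * pixel_pitch / focal_length
```

Integrating these slope measurements over the lenslet grid yields the phase distribution referred to in the abstract.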
3D Imaging and Related Technologies IV
FPGA-based phase measuring profilometry system
Albrecht Hess, Christina Junger, Maik Rosenberger, et al.
This paper proposes an architecture for a phase measuring profilometry system that can be efficiently implemented on a Xilinx Zynq-7000 SoC. After a brief system overview, the paper starts with the first step of such a task: camera calibration. A calibration procedure using OpenCV functions is outlined, and the calculation of compressed rectification maps is described in more detail. The compressed rectification maps are used for lens undistortion and rectification while reducing the memory load. The hardware-accelerated part of the system comprises the image acquisition, the lens undistortion and image rectification, the phase accumulation followed by phase unwrapping, the phase matching, and the 3D reconstruction. For phase unwrapping, a multi-frequency approach is used that can be easily implemented on the given architecture. The interfacing of the hardware modules follows a fully pipelined implementation scheme so that the image processing can be done in real time.
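The multi-frequency (temporal) phase-unwrapping step can be sketched generically: a coarse phase that is unambiguous over the whole field predicts the fringe order of the fine, wrapped phase. This is the standard hierarchy, shown here in NumPy rather than as the paper's FPGA implementation; the fringe ratio of 6 is an illustrative choice.

```python
import numpy as np

def unwrap_with_hierarchy(phi_high, phi_low, ratio):
    """Temporal (multi-frequency) unwrapping: the coarse phase phi_low
    (no 2*pi ambiguity across the field) predicts the fringe order k of
    the fine wrapped phase phi_high, which has `ratio` times more fringes."""
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))  # fringe order
    return phi_high + 2 * np.pi * k

# toy example: a linear "true" phase spanning 6 full fringes
true_phase = np.linspace(0, 6 * 2 * np.pi, 512)
phi_high = np.angle(np.exp(1j * true_phase))   # wrapped fine phase
phi_low = true_phase / 6                       # coarse phase, ratio = 6
unwrapped = unwrap_with_hierarchy(phi_high, phi_low, 6)
```

Because the decision is a per-pixel rounding with no spatial propagation, this approach maps naturally onto a pipelined hardware architecture.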
A HMD with automatic control of interocular distance
Beom-Ryeol Lee, Wook-Ho Son, Jung-Young Son, et al.
A design concept for a goggle-type HMD (head-mounted display) capable of automatically adjusting to the user's interocular distance is introduced. A linear motor is employed for each of the left and right pupillary distance controls, based on measurement of the interocular distance with a micro-camera located at the top of the micro-projector for each eye. A half mirror for each eye connects the projector/camera pair to the corresponding eye. Each camera measures its corresponding eye's pupil with high accuracy under illumination from an infrared light located near the camera. The controllable distance range is 55 mm to 75 mm. The maximum travel distance of each linear motor with the four optical components is 10 mm.
3D visualization in multifocus fluorescence microscopy
Julia R. Alonso, Alejandro Silva, Miguel Arocena
Limited depth of field can be overcome through computational optical imaging. In this work, a custom-built fluorescence microscope with an electrically tunable focus lens is used to acquire a multifocus image sequence (z-stack) of a 3D fluorescent sample. Image registration between the acquired images is often needed as a preprocessing step before reconstructing images with new characteristics. A multifocus image fusion algorithm is then applied to the registered z-stack to reconstruct an all-in-focus image. Computational perspective shifts are also implemented, allowing the reconstruction of stereoscopic pairs of the sample and its three-dimensional visualization.
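A minimal version of multifocus fusion (per-pixel selection by a smoothed absolute-Laplacian focus measure, a common generic choice rather than the authors' exact algorithm) can be sketched as:

```python
import numpy as np

def box_blur(img, r=2):
    """Crude (2r+1) x (2r+1) box blur using circular shifts."""
    return sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in range(-r, r + 1) for dx in range(-r, r + 1)) / (2 * r + 1) ** 2

def all_in_focus(stack):
    """Per pixel, keep the z-slice with the largest smoothed |Laplacian|
    (a standard focus measure); returns the fused image and the index map."""
    stack = np.stack(stack)
    lap = np.abs(np.gradient(np.gradient(stack, axis=1), axis=1)
                 + np.gradient(np.gradient(stack, axis=2), axis=2))
    measure = np.stack([box_blur(m, r=1) for m in lap])
    idx = np.argmax(measure, axis=0)
    fused = np.take_along_axis(stack, idx[None], axis=0)[0]
    return fused, idx
```

The index map doubles as a coarse depth map, which is what makes the perspective shifts and stereoscopic-pair reconstruction mentioned in the abstract possible.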
Extracting sound from flow measured by parallel phase-shifting interferometry using spatio-temporal filter
Risako Tanigawa, Kenji Ishikawa, Kohei Yatabe, et al.
We have proposed a method of simultaneously measuring aerodynamic sound and fluid flow using parallel phase-shifting interferometry (PPSI). PPSI can observe the phase of light instantaneously and quantitatively. This method is useful for understanding aerodynamic sound because PPSI can measure near the sound source. However, the components of sound and flow must be separated in order to observe detail near the source of sound inside a region of flow. Therefore, we consider separating the sound component from simultaneously visualized images of sound and flow. In previous research, a spatio-temporal filter was used to extract components satisfying the wave equation. Flow and sound are different physical phenomena, and flow cannot be expressed by the wave equation; hence, the spatio-temporal filter should enable us to separate the sound component from the simultaneously visualized images. In this paper, we propose a method for separating flow and sound using a spatio-temporal filter in order to visualize the aerodynamic sound component near its source. We conducted a separation experiment on data measured by PPSI. The results show that the spatio-temporal filter can extract the sound from the airflow, except for the sound near objects and boundaries.
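The separation idea can be sketched as a 3D Fourier mask that keeps only components near the dispersion relation of the wave equation, |ω| = c|k|, and rejects slowly convecting flow structures that do not satisfy it. This is a generic sketch of such a filter, not the authors' implementation; the tolerance and grid parameters are illustrative assumptions.

```python
import numpy as np

def sound_cone_filter(frames, dx, dt, c=340.0, tol=0.2):
    """Keep only spatio-temporal Fourier components close to the wave
    equation's dispersion relation |omega| = c|k|; components off the
    cone (e.g. convecting flow structures) are rejected.
    frames: (nt, ny, nx) stack of phase images; dx, dt: sample spacings."""
    nt, ny, nx = frames.shape
    F = np.fft.fftn(frames)
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    W, KY, KX = np.meshgrid(w, ky, kx, indexing='ij')
    K = np.hypot(KY, KX)
    mask = np.abs(np.abs(W) - c * K) <= tol * c * np.maximum(K, 1e-9)
    return np.real(np.fft.ifftn(F * mask))
```

A flow structure convecting at some speed U much smaller than c lies on the line ω = U·k, far from the acoustic cone, which is why this single mask can split the two phenomena.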
Poster Session
Overview of automated sickle cell disease diagnosis by analysis of spatio-temporal cell dynamics in digital holographic microscopy
We overview a previously reported system for automated diagnosis of sickle cell disease based on red blood cell (RBC) membrane fluctuations measured via digital holographic microscopy. A low-cost, compact, 3D-printed shearing interferometer is used to record video holograms of RBCs. Each hologram frame is reconstructed to form a spatio-temporal data cube from which features describing membrane fluctuations are extracted. These motility-based features are combined with static morphology-based cell features and input to a random forest classifier, which outputs the disease state of the cell with high accuracy.
Three-dimensional ghost imaging based on differential optical path
Jie Cao, Fanghua Zhang, Kaiyu Zhang, et al.
We present a novel structure based on a differential optical path (DOP). The performance of three-dimensional ghost imaging (3DGI) is improved by the DOP, which offers high sensitivity and suppresses common-mode noise owing to the benefit of extracting the zero-crossing point (i.e., the target position of interest). Simulation results agree well with the theoretical analysis. Moreover, the relation between the time slice and the signal-to-noise ratio of 3DGI is discussed, and the optimal differential distance is obtained, thus motivating the development of high-performance 3DGI.
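Conventional ghost-imaging reconstruction correlates the fluctuations of the single-pixel (bucket) signal with the illumination patterns; a minimal computational-GI sketch (without the paper's differential optical path or the time-sliced 3D extension) is:

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Conventional GI estimate: average of (B - <B>) * I(x, y) over
    pattern realizations, which is proportional to the object at each pixel."""
    b = bucket - bucket.mean()
    return np.tensordot(b, patterns, axes=(0, 0)) / len(bucket)

rng = np.random.default_rng(1)
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                            # toy transmissive object
patterns = rng.random((4000, 16, 16))            # random illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))       # single-pixel measurements
g = ghost_image(patterns, bucket)
```

In a 3DGI system, one such correlation is evaluated per time slice of the returned signal, which is where the time-slice/SNR trade-off discussed in the abstract arises.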
Robust object recognition in 3D scene by stereo vision image processing with the generalized Hough transform
Ariel Fernández, Juan M. Llaguno
Object recognition is an automated image processing application of great interest in areas ranging from defect inspection to robot vision. In this regard, the generalized Hough transform (GHT) is a well-established technique for recognizing geometrical features in binary images, even when corrupted by noise or when the target is partially occluded. In order to improve on the original algorithm's ability to detect a given geometrical feature from a single image, we consider transforming a stereo pair of a 3D scene under the GHT: one image is transformed using the template we are looking for, and the other using its correspondent under the perspective transformation that relates the images of the stereo pair. Validation experiments using partially occluded targets in noisy environments are presented.
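A translation-only GHT can be sketched as follows: an R-table, indexed by quantized gradient direction, maps template edge points to offsets toward a reference point, and every image edge point casts votes for candidate reference locations. This is the generic single-image algorithm, without the stereo-pair extension proposed in the paper; the edge threshold and 8-bin quantization are illustrative choices.

```python
import numpy as np

def edge_points(img, thresh=0.3):
    """Edge pixels of an image with gradient direction quantized to 8 bins."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh)
    dirs = np.round(np.arctan2(gy[ys, xs], gx[ys, xs]) / (np.pi / 4)).astype(int) % 8
    return ys, xs, dirs

def ght_translation(template, image):
    """Generalized Hough transform restricted to translation: build the
    R-table from the template, then accumulate votes from image edges."""
    ty, tx, td = edge_points(template)
    ref = (template.shape[0] // 2, template.shape[1] // 2)
    rtable = {d: [] for d in range(8)}
    for y, x, d in zip(ty, tx, td):
        rtable[d].append((ref[0] - y, ref[1] - x))
    acc = np.zeros(image.shape)
    iy, ix, idirs = edge_points(image)
    for y, x, d in zip(iy, ix, idirs):
        for dy, dxo in rtable[d]:
            yy, xx = y + dy, x + dxo
            if 0 <= yy < acc.shape[0] and 0 <= xx < acc.shape[1]:
                acc[yy, xx] += 1                 # vote for a candidate center
    return acc
```

Peaks in the accumulator mark detected instances of the template; because votes from occluded or noisy edge points simply go missing or scatter, the peak degrades gracefully, which is the robustness property the abstract relies on.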
An overview of spatial-temporal human gesture recognition under degraded environments using integral imaging
Xin Shen, Hee-Seung Kim, Satoru Komatsu, et al.
We overview a previously reported method for spatial-temporal human gesture recognition under degraded environmental conditions using three-dimensional (3D) integral imaging (InIm) technology with correlation filters. The degraded conditions include low illumination environment and occlusion in front of the human gesture. The human gesture is captured by passive integral imaging, the signal is then processed using computational reconstruction algorithms and denoising algorithms to decrease the noise and remove partial occlusion. Gesture recognition is finally processed using correlation filters. Experimental results show that the proposed approach is promising for human gesture recognition under degraded environmental conditions compared with conventional recognition algorithms.