Proceedings Volume 10883

Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXVI

Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 7 June 2019
Contents: 11 Sessions, 21 Papers, 28 Presentations
Conference: SPIE BiOS 2019
Volume Number: 10883

Table of Contents

  • Front Matter: Volume 10883
  • Molecular Imaging and Tracking
  • High Speed and Automation in Microscopy
  • Algorithms and Modeling
  • Compressive Sensing in Microscopy
  • Cell and Tissue Imaging I
  • Cell and Tissue Imaging II
  • Advances in Instrument Design
  • Quantitative Phase and Holographic Methods
  • Nonlinear and Fluorescence Microscopy
  • Poster Session
Front Matter: Volume 10883
Front Matter: Volume 10883
This PDF file contains the front matter associated with SPIE Proceedings Volume 10883, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Molecular Imaging and Tracking
Simultaneous detection of 3D orientation and 3D spatial localization of single emitters for nanoscale structural imaging (Conference Presentation)
Measuring single-molecule 3D orientational behavior is a challenge that, if solved in addition to 3D localization, would provide key elements for super-resolution structural imaging. Orientation indeed contains information on the local conformational properties of proteins, while orientational fluctuations are signatures of local steric, charge, or viscosity constraints. Neither of these properties is perceptible in pure super-resolution imaging, which relies on position localization measurements. Imaging 3D orientation together with 3D localization is, however, not easily accessible due to the intrinsic coupling between spatial deformation of the single molecules’ point spread function (PSF) and their out-of-plane orientations, as well as the requirement to measure six parameters which are not directly distinguishable (two angles of orientation, the aperture of angular fluctuations, and three spatial position coordinates). In this work, we report a method that is capable of resolving these six parameters in a modality that is compatible with super-resolution imaging. The method is based on the use of a stress-engineered, spatially variant birefringent phase plate placed in the Fourier plane of the microscope detection path. This modifies the PSF of single emitters in a way that can be unambiguously decomposed onto the nine 3D analogs of the Stokes parameters. Moreover, the use of two complementary co- and counter-circular polarization projections provides an unambiguous determination of the 3D spatial position of single emitters with tens-of-nanometers precision. This method, which opens the way to nanoscale structural imaging of protein organization, is demonstrated on model nano-bead emitters and applied to single fluorophores used for cytoskeleton labelling.
3D molecular orientation imaging by polarization IR microscopy (Conference Presentation)
A non-tomographic analysis method is proposed to determine the 3D angles and the order parameter of molecular orientation using polarization-dependent infrared (IR) spectroscopy. Conventional polarization-based imaging approaches provide only 2D-projected orientational information for single chromophores or vibrational modes. The newly proposed method concurrently analyses the polarization-dependent absorption profiles of two non-parallel transition dipole moments. The relative phase angle and the maximum-to-minimum ratios of the two polarization-dependent absorption profiles are used to calculate the 3D angles and the order parameter of molecular orientation. Because these intermediate observables are relative quantities, the analysis outputs are unaffected by variations in concentration, thickness, absorption peak, and absorption cross-section, which can occur in typical imaging conditions. The analysis is based on a single-step, non-iterative calculation that does not require any analytical model function for the orientational distribution function (ODF). This concurrent polarization analysis method is demonstrated using two simulated data examples, and the error propagation analysis is discussed as well. Applying this robust spectral analysis method to polarization IR microscopy will provide a full molecular orientation image without the sample tilting that tomographic approaches require. In this talk, I describe this new approach that non-iteratively determines the 3D angles and the orientational order parameter without assuming a model function for an ODF. I will then demonstrate an application of this analysis using experimental image data acquired from a semicrystalline polymer film with polarization IR microscopy. The results clearly show how the 3D angles and the order parameter are determined for every pixel using straightforward formulas without iterative calculation.
Fluorescence Recovery After Photobleaching (FRAP) with simultaneous Fluorescence Lifetime and time-resolved Fluorescence Anisotropy Imaging (FLIM and tr-FAIM)
Y. Teijeiro-Gonzalez, A. Le Marois, A. M. Economou, et al.
We report the simultaneous combination of three powerful techniques in fluorescence microscopy: Fluorescence Lifetime Imaging (FLIM), Fluorescence Anisotropy Imaging (FAIM) and Fluorescence Recovery After Photobleaching (FRAP), also called F3 microscopy. An exhaustive calibration of the setup was carried out with several rhodamine 6G (R6G) solutions in water-glycerol mixtures, and from the combination of the FAIM and FRAP data the hydrodynamic radius of the dye was directly calculated. The F3 data were analyzed with a home-built MATLAB script, and the setup is currently being explored further with Green Fluorescent Protein (GFP). Molecular dynamics (MD) simulations are currently being run to help interpret the experimental anisotropy data.
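The hydrodynamic radius follows from standard diffusion relations: the rotational correlation time measured by anisotropy imaging obeys the Stokes-Einstein-Debye equation, and the translational diffusion coefficient measured by FRAP obeys the Stokes-Einstein equation. Below is a minimal sketch of both estimates; all numerical values (viscosity, correlation time, diffusion coefficient) are assumed for illustration rather than taken from the paper.

```python
import numpy as np

# Sketch: hydrodynamic-radius estimates from FAIM (rotational) and FRAP (translational)
# data. All numbers below are illustrative placeholders.

kB = 1.380649e-23            # J/K
T = 293.15                   # K
eta = 1.2e-3                 # Pa*s, assumed viscosity of a water-glycerol mixture

theta_rot = 0.9e-9           # s, rotational correlation time from the anisotropy decay
V_h = theta_rot * kB * T / eta                       # Stokes-Einstein-Debye: theta = eta*V/(kB*T)
r_rot = (3.0 * V_h / (4.0 * np.pi)) ** (1.0 / 3.0)   # radius of the equivalent sphere

D_trans = 2.5e-10            # m^2/s, translational diffusion coefficient from FRAP
r_trans = kB * T / (6.0 * np.pi * eta * D_trans)     # Stokes-Einstein

print(f"r_h from rotation: {r_rot*1e9:.2f} nm, from translation: {r_trans*1e9:.2f} nm")
```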
High Speed and Automation in Microscopy
Evaluation of a high speed multispectral light source for stroboscopic differential imaging for endocardial examination of Daphnia magna
Marcus Wittig, Georg N. Rogler, Alexander Kabardiadi, et al.
In medicine and biology there are fast, repetitive movements at the microscopic level which influence the overall dynamics and behavior of the system. In order to reveal details of these fast movements, which exceed the temporal resolution of the human eye, a new stroboscopic multispectral imaging system was developed. Daphnia magna, a good test organism due to its transparent shell, served as the test animal to evaluate this new imaging system. The heart rate of Daphnia magna is about 400 beats per minute, so the dynamics of the individual heart contractions can no longer be clearly differentiated using standard microscopy. These cardiac phases were visualized by stroboscopic illumination with a pulse duration of 500 ns with the aid of a microscope. The stroboscopic illumination was realized by a pulsed light source consisting of four light-emitting diodes (LEDs). In general, the spectral range of the illumination is configurable using combinations of these LEDs; in this instance the wavelengths were selected according to the known absorption of haemoglobin: 410 nm, 470 nm, 680 nm and 870 nm. Furthermore, it was also possible to use the four available wavelength differences to generate images of Daphnia magna that exploit the transmission and absorption properties of biological tissue and its surrounding environment. In addition to a clear representation of the heart, the blood flow in the open cardiovascular system [1, 2, 3] of Daphnia magna was imaged by observing the absorption of the macromolecule haemoglobin at different wavelengths.
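For orientation, the strobe timing follows from simple arithmetic on the heart rate and pulse duration quoted above. A minimal sketch, with the number of sampled cardiac phases and all variable names chosen for illustration:

```python
# Sketch: phase-locked strobe timing for sampling a ~400 bpm cardiac cycle.
# All names and values are illustrative, not taken from the paper.

def strobe_delays(heart_rate_bpm=400.0, n_phases=20, pulse_ns=500.0):
    """Return the delay (in ms) of each strobe pulse relative to a cycle trigger."""
    period_ms = 60_000.0 / heart_rate_bpm               # ~150 ms per heartbeat
    delays = [k * period_ms / n_phases for k in range(n_phases)]
    duty = pulse_ns * 1e-6 / period_ms                   # fraction of the cycle illuminated
    return delays, duty

delays, duty = strobe_delays()
print(f"cycle = {60_000.0/400:.1f} ms, duty cycle per pulse = {duty:.2e}")
```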
Development of an ultrahigh-speed line-scan OCT for measuring hemodynamics of the chicken embryo's developing heart (Conference Presentation)
Shau Poh P. Chong, Zhen Yu Gordon Ko, Nanguang Chen
Hemodynamics is a critical factor for healthy embryonic and fetal development and, when altered, can result in congenital heart defects (CHD), the most common birth defect in newborns. The fluid mechanical forces in the blood flow during early cardiac development can influence the overall morphogenesis of the cardiovascular system. Though near-infrared (NIR) point-scan OCT has been used to quantitatively assess hemodynamics in the embryo, high-speed visualization of the developing chicken embryo is still lacking. Here, we developed a line-scanning NIR OCT for high-speed visualization of chicken embryo hemodynamics, which dramatically improves the overall imaging throughput and also relaxes the maximum exposure limit on the power delivered to the sample. The noise performance of the supercontinuum light source, with up to 200 MHz pulse repetition rate, will be characterized across different pulse repetition rates, camera exposure times, and wavelengths. An improved spectrometer employing a 1200 lpmm reflective grating, Zeiss Interlock® lenses and a two-dimensional high-speed CMOS camera was built to optimize the maximum sensitivity and sensitivity rolloff. Furthermore, a phase scanning mechanism in the reference arm will also be implemented to remove image artifacts and double the imaging range. The performance of the line-scan OCT system in terms of maximum sensitivity, imaging speed, and contrast will be assessed by imaging the developing heart in chicken embryos. The structural and functional information on dynamic cardiac tissue deformation and blood flow at ultrahigh spatiotemporal resolution will further enhance our understanding of the roles of hemodynamics in embryonic development.
Automated large-volume confocal imaging system (Conference Presentation)
Confocal microscopy has been a standard tool for acquiring 3D fluorescence biological images at sub-micron resolution. Scattering in turbid tissue and the specifications of high-NA objective lenses limit the image dimensions, so confocal microscopy typically provides images of micro-anatomy. However, high-quality large-volume tomography is still desired to provide correlative images between micro- and macro-anatomy. In this presentation, we extend the dimensions of micro-imaging at single-cell resolution from the tens-of-micrometers scale to the multi-millimeter scale by integrating tissue clearing, vibratome sample sectioning, stepper image stitching, and confocal imaging techniques; we name this system Serial Tiled-Z axial (STZ) tomography. Mapping the whole-body connectome, a wiring diagram of the entire nervous system, is the first application of STZ tomography; it provides the whole-body neural circuits governing internal body functions and external behaviors. STZ tomography generates high-resolution in situ datasets for accurate registration of structural and functional data collected from different individuals into a common three-dimensional space for big-data storage, search, sharing, analysis, and visualization. By inserting a super-resolution module, STZ tomography opens the door to super-resolution imaging in routine systematic neuroanatomy of large tissues, such as the whole mouse and human brain. The second application is to map tumor tissue samples, which are free from the distortion caused by dehydration in the H&E protocol.
Fast stimulated Raman projection tomography with iterative reconstruction from sparse projections
As an emerging volumetric imaging technique, stimulated Raman projection tomography (SRPT) can provide the quantitative distribution of chemical components in a three-dimensional (3D) volume in a label-free manner. Currently, the filtered back-projection (FBP) algorithm is used to reconstruct the 3D volume in SRPT. However, to obtain a satisfactory reconstruction, the FBP algorithm requires a certain amount of projection data, usually at least 180 projections over a half circle. This leads to a long data acquisition time and hence limits dynamic and longitudinal observation of living systems. Iterative reconstruction from sparsely sampled data may reduce the total data acquisition time by reducing the number of projections used in the reconstruction. In this work, two total-variation-regularized iterative reconstruction algorithms were selected and used in SRPT: the simultaneous algebraic reconstruction technique (SART) and the two-step iterative shrinkage/thresholding algorithm (TwIST). The well-known distance-driven model was utilized as the forward and back-projector. We evaluated these two algorithms with numerical simulations. Using the original image as the reference, we quantified the quality of the reconstructed images. Simulation results showed that both SART and TwIST performed better than the FBP algorithm, with larger structural similarity (SSIM) values. Furthermore, the number of projection images can be greatly reduced when an iterative reconstruction algorithm is used. In particular, with SART the projection number could be reduced to 15 while still providing a satisfactory reconstruction (SSIM larger than 0.9).
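To make the SART idea concrete, here is a minimal sketch of one relaxed SART sweep for a linear tomography model A x = b. The explicit matrix A, the relaxation factor, and the non-negativity clip are illustrative simplifications; the paper uses a distance-driven projector rather than a stored matrix.

```python
import numpy as np

# Sketch of one SART sweep for A @ x = b (A: projections, b: measured data).
# All names are placeholders.

def sart_sweep(A, b, x, lam=0.5):
    row_sums = A.sum(axis=1)                         # per-ray normalisation
    col_sums = A.sum(axis=0)                         # per-voxel normalisation
    residual = (b - A @ x) / np.maximum(row_sums, 1e-12)
    x = x + lam * (A.T @ residual) / np.maximum(col_sums, 1e-12)
    return np.clip(x, 0.0, None)                     # enforce non-negative concentrations

# Usage: iterate sweeps until the residual stops decreasing.
# for _ in range(50): x = sart_sweep(A, b, x)
```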
Visualization of three-way and higher order data sets (Conference Presentation)
Data sets of order three or more are increasingly common in areas ranging from biomedical imaging to threat detection, and are output from a number of spectroscopy (e.g. NIR, Raman, excitation-emission fluorescence) and spectrometry (e.g. SIMS) methods. Various chemometrics methods can be used to reduce the dimensionality of these data sets, and the resulting compressed data can then be visualized. These methods include Principal Components Analysis (PCA), Multivariate Curve Resolution (MCR), and Maximum Autocorrelation Factors (MAF), as well as numerous data clustering methods (e.g. HCA, DBSCAN, KNN) and classification techniques (e.g. PLS-DA, SIMCA). These methods can also be combined with traditional image analysis techniques such as particle analysis. This talk gives examples of how up-front chemometric modeling can be used to extract relevant information which can then be visualized in two and three dimensions, and in time.
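As a simple illustration of the dimensionality-reduction step, the sketch below runs PCA on a three-way data cube by unfolding its spatial modes; the array shapes and the random stand-in data are purely illustrative.

```python
import numpy as np

# Sketch: PCA of a three-way data set (e.g. x * y * wavelength) by unfolding the
# spatial dimensions, a common chemometrics workflow. Data are placeholders.

cube = np.random.rand(64, 64, 200)                 # stand-in for a hyperspectral image
X = cube.reshape(-1, cube.shape[-1])               # (pixels, channels) matrix
X = X - X.mean(axis=0)                             # mean-centre each channel

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
scores = U[:, :3] * s[:3]                          # first three component scores
score_maps = scores.reshape(64, 64, 3)             # refold for visualisation
loadings = Vt[:3]                                  # spectral loadings of the components
```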
Algorithms and Modeling
Learning approach to computational microscopy
Emerging deep-learning-based computational microscopy techniques promise novel imaging capabilities beyond traditional techniques. In this talk, I will discuss two microscopy applications. First, high space-bandwidth product microscopy typically requires a large number of measurements. I will present a novel physics-assisted deep learning (DL) framework for large space-bandwidth product (SBP) phase imaging,1 enabling a significant reduction in the required measurements and opening up real-time applications. In this technique, we design asymmetric coded illumination patterns to encode high-resolution phase information across a wide field-of-view. We then develop a matching DL algorithm to provide large-SBP phase estimation. We demonstrate this technique on both static and dynamic biological samples, and show that it can reliably achieve 5x resolution enhancement across 4x FOVs using only five multiplexed measurements. In addition, we develop an uncertainty learning framework to provide a predictive assessment of the reliability of the DL prediction. We show that the predicted uncertainty maps can be used as a surrogate for the true error. We validate the robustness of our technique by analyzing the model uncertainty. We quantify the effect of noise, model errors, incomplete training data, and “out-of-distribution” testing data by assessing the data uncertainty. We further demonstrate that the predicted credibility maps allow identification of spatially and temporally rare biological events. Our technique enables scalable DL-augmented large-SBP phase imaging with reliable predictions. Second, I will turn to the pervasive problem of imaging in scattering media. I will discuss a new deep-learning-based technique that is highly generalizable and resilient to statistical variations of the scattering media.2 We develop a statistical ‘one-to-all’ deep learning technique that encapsulates a wide range of statistical variations so the model is resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable deep learning approach for imaging through scattering media.
An open source software tool for arbitrary vector beams in free-space and stratified media (Conference Presentation)
Peter R. T. Munro
Vectorial models of focused beams are important to the field of advanced optical microscopy and to a variety of other fields including lithography, optical physics and biomedical imaging. This has led to the development of many models which calculate how beams of various profiles are focused, both in free space and in the presence of stratified media. The majority of existing models begin with a vectorial diffraction formula, often referred to as the Debye-Wolf integral, which must be evaluated partially analytically and partially numerically. The complexity of both the analytic and numerical evaluations increases significantly when exotic beams are modeled or a stratified medium is located in the focal region. However, modern computing resources permit this integral to be evaluated entirely numerically for most applications. This allows for the development of a vectorial model of focusing in which the focusing itself, the interaction with a stratified medium, and the incident beam specification are independent, yielding a model of unprecedented flexibility. In this presentation we outline the theory upon which this model is developed and show examples of how the model can be used in applications including optical coherence tomography, high numerical aperture microscopy and the properties of cylindrical vector beams. We have made the computer code freely available.
Multi-layer Born scattering: an efficient model for 3D phase tomography with multiple scattering objects (Conference Presentation)
3D quantitative phase (refractive index) microscopy reveals the volumetric structure of biological specimens. Optical diffraction tomography (ODT) is a common technique for 3D phase imaging. By angularly scanning a spatially coherent light source and measuring the scattered fields on the imaging plane, the 3D refractive index (RI) is recovered by solving an inverse problem. However, ODT often linearizes the process using a weakly scattering model, e.g. the first Born or Rytov approximation, which underestimates the RI and fails to reconstruct the realistic shape of high-RI-contrast, multiply scattering objects. On the other hand, non-linear models such as the multi-slice or beam propagation methods mitigate artifacts by modeling multiple scattering. However, they ignore back-scattering and intra-slice scattering, and make a paraxial approximation by assuming each slice is infinitesimally thin. In this work, we propose a new 3D scattering model, Multi-layer Born (MLB), which treats the object as thin 3D slabs of finite thickness and applies the first Born approximation to each slab as the field propagates through the object, significantly increasing accuracy. At the same time, its computational complexity is similar to that of previously proposed multi-slice models. Therefore, MLB can achieve accuracy similar to that of FDTD or SEAGLE, a frequency-domain solver, with orders of magnitude less computation time. Unlike existing models, MLB also captures multiple back-scattering effects in addition to forward scattering. We apply MLB to recover the RI distribution of 3D phantoms and biological samples from intensity-only measurements with an LED array microscope and show that the results are superior to existing methods.
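The slab-wise propagate-and-scatter structure shared by multi-slice-type models (of which MLB is a refinement) can be sketched as below; this is a generic first-order loop with illustrative parameters, not the authors' MLB formulation.

```python
import numpy as np

# Sketch of a slab-wise scattering loop in the spirit of multi-slice / first-Born
# forward models (not the authors' full MLB formulation). Parameters are illustrative.

def angular_spectrum(field, dz, wavelength, dx):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))       # crude handling of evanescent terms
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def propagate_through_object(field, delta_n, dz, wavelength, dx, n0=1.33):
    k0 = 2 * np.pi / wavelength
    for slab in delta_n:                                   # delta_n: (nz, ny, nx) RI contrast
        field = angular_spectrum(field, dz, wavelength / n0, dx)
        field = field + 1j * k0 * dz * slab * field        # first-order (Born-like) scattering term
    return field
```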
Towards the analytical modeling of backscattering polarimetric patterns recorded from multiply scattering systems (Conference Presentation)
Hidayet Günhan Akarçay, Manes Hornung, Arushi Jain, et al.
A better understanding of the interaction of polarized light with biological tissue can lead to the development of valuable and minimally invasive diagnostic tools. Yet, this can be a challenging undertaking in the presence of multiple scattering. We have built a polarimetric microscope to probe multiply scattering systems in the backscattering geometry. Our apparatus has been calibrated and validated by comparing measurements with the outcomes of Monte Carlo simulations. We have recorded the spatially distributed Stokes vectors of the light backscattered from various standard samples (colloidal suspensions with varying sphere sizes, birefringent cellophane tape, etc.). To understand the behavior of these samples, we have developed: (i) a methodology that makes explicit the dependence of the polarimetric properties on the polarization state of the probing light beam; and (ii) a forward analytical model that is based on the coherency matrix and generates backscattered polarimetric patterns. Our findings demonstrate that, in addition to the ‘classical’ polarimetric properties (diattenuation, retardance, and depolarization), the helicity flip induced by the samples should also be included in the parametrization. This makes it possible not only to reproduce the measurements, but also to discriminate between the different samples and to identify the birefringent properties and structural anisotropy of the probed samples. This study is in line with research aimed at re-evaluating the polarimetric properties of multiply scattering systems and constitutes the groundwork for the accompanying contribution entitled «Polarimetric imaging of the light backscattered from multiply scattering nanofibrous PVDFhfp scaffolds».
Computational improvement in single-pixel imaging contrast and resolution (Conference Presentation)
Robert J. Stokoe, Patrick A. Stockton, Ali Pezeshki, et al.
Single-pixel imaging is a developing family of techniques which offer several advantages over conventional imaging with a segmented detector, including higher speed and improved availability and quality of detectors at long wavelengths. Examples include laser-scanning microscopy, frequency-domain techniques, ghost imaging, and methods employing an orthogonal mask sequence such as Hadamard masks. We analyze this class of imaging techniques in terms of frame theory, which concerns sets of vectors that span a given vector space but, unlike a basis, need not be linearly independent. The use of frames (rather than bases) allows for redundant measurements, which can improve the signal-to-noise ratio (SNR) of the reconstructed image. Many current single-pixel techniques admit an intuitive, physically motivated reconstruction scheme, but a reconstruction method is not always obvious. Our analysis provides a prescription for reconstruction with any single-pixel imaging scheme. For example, illumination with speckle-like patterns which lack the statistical properties associated with speckle does not allow accurate reconstruction with conventional methods, but frame-theory-inspired analysis allows production of high-contrast, diffraction-limited images. Even for schemes where reconstruction methods exist, the theory can improve contrast, accuracy and resolution. Frame-theory-motivated reconstruction from simulated ghost imaging data results in markedly improved contrast and resolution. This analysis makes viable new single-pixel techniques which lack intuitive reconstruction strategies, and enables tuning of imaging properties, such as noise, for specific applications.
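A minimal sketch of the frame-style reconstruction idea: redundant, non-orthogonal single-pixel measurements inverted with the canonical dual frame, here via the Moore-Penrose pseudoinverse. Pattern statistics, sizes, and names are illustrative, not the authors' scheme.

```python
import numpy as np

# Sketch: single-pixel measurements y_k = <phi_k, image> inverted with the
# pseudoinverse (canonical dual frame) of a redundant measurement matrix.

rng = np.random.default_rng(0)
n = 32                                   # image is n x n
image = np.zeros((n, n)); image[10:22, 14:18] = 1.0

m = 2 * n * n                            # redundant set of patterns (a frame, not a basis)
Phi = rng.standard_normal((m, n * n))    # speckle-like illumination patterns
y = Phi @ image.ravel()                  # single-pixel measurements

recon = np.linalg.pinv(Phi) @ y          # dual-frame (least-squares) reconstruction
recon = recon.reshape(n, n)
```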
Compressive Sensing in Microscopy
Exploiting patterned illumination and detection in optical projection tomography (Conference Presentation)
Samuel P. Davis, Sunil Kumar, Laura Wisniewski, et al.
Optical projection tomography (OPT), the optical equivalent of x-ray computed tomography, reconstructs the 3D structure of a sample from a series of wide-field 2D projections acquired at different angles [1]. OPT is used to map the optical attenuation and/or fluorescence distributions of intact transparent samples without the need for mechanical sectioning. While it is typically applied to chemically cleared samples, it can also be used to image inherently transparent or weakly scattering live organisms, including adult zebrafish up to ~1 cm in diameter [2]. When applying OPT to live samples it is important to minimise the data acquisition time while maximising the image quality in the presence of scattering. The former issue can be addressed using compressive sensing to reduce the number of projections required [3]. Scattered light can be rejected using structured illumination [4], but this removes emission from regions the excitation modulation does not reach and reduces the available dynamic range. To address this, we have explored the rejection of scattered light by acquiring projections with parallel semi-confocal line illumination and detection, in an approach we describe as slice-OPT (sl-OPT). The impact of optical scattering can also be reduced by imaging at longer wavelengths [5]. We are exploring OPT in the NIR-I and NIR-II spectral windows. However, exotic array detectors, e.g. for short-wave infrared light, are costly, and so we are also developing a single-pixel camera approach [6]. We will present our progress applying these techniques to 3D imaging of vasculature and tumour burden in live adult zebrafish. [1] Sharpe et al., Science, vol. 296, no. 5567, pp. 541-545, 2002. [2] Kumar et al., Oncotarget, vol. 7, no. 28, pp. 43939-43948, 2016. [3] Correia et al., PLoS ONE, vol. 10, no. 8, p. e0136213, 2015. [4] Kristensson et al., Optics Express, vol. 20, no. 13, pp. 14437-14450, 2012. [5] Shi et al., Journal of Biophotonics, vol. 9, no. 1-2, pp. 38-43, 2016. [6] Duarte et al., IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.
Monte-Carlo model for speckle contrast imaging
Speckle contrast imaging has been shown to be useful in measuring blood flow in vivo. Recently, spatial frequency domain imaging has been combined with speckle contrast to provide information about the depth of the blood vessels. Here we develop a Monte-Carlo model for speckle contrast imaging using a pattern of binary illumination bars. Our model is constructed in such a way that we can simulate different time steps in the imaging process from a single Monte-Carlo run. We show that the technique can measure flow using either spatial or temporal contrast, and that differences in depth can be seen in the pattern of speckle contrast. With this tool, we can investigate different processing algorithms to optimize the imaging process and extract useful depth information. Further work will exploit the harmonic frequencies in the Fourier transform of the binary bar pattern.
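For reference, the basic quantity simulated here is the speckle contrast K = σ/μ. Below is a minimal sketch of computing spatial (and, analogously, temporal) contrast from a raw speckle frame; window size and the input frame are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Sketch: spatial speckle contrast K = sigma / mean in a sliding window, the basic
# quantity behind laser speckle contrast imaging. Window size is illustrative.

def speckle_contrast(raw, window=7):
    frame = raw.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame ** 2, window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# Temporal contrast is analogous: take the standard deviation and mean over a
# stack of frames at each pixel instead of over a spatial window.
```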
Model-based iterative reconstruction for spectral-domain optical coherence tomography
Jonathan H. Mason, Yvonne Reinwald, Ying Yang, et al.
Spectral-domain optical coherence tomography (OCT) offers high-resolution multidimensional imaging, but generally suffers from defocussing, intensity falloff and shot noise, causing artifacts and image degradation along the imaging depth. In this work, we develop an iterative statistical reconstruction technique, based upon the interferometric synthetic aperture microscopy (ISAM) model with additive noise, to actively compensate for these effects. For the ISAM re-sampling, we use a non-uniform FFT with Kaiser-Bessel interpolation, offering efficiency and high accuracy. We then employ an accelerated gradient-descent-based algorithm to minimize the negative log-likelihood of the model, and include spatial- or wavelet-sparsity-based penalty functions to provide appropriate regularization for given image structures. We evaluate our approach on titanium oxide micro-bead and cucumber samples with a commercial spectral-domain OCT system, under various subsampling regimes, and demonstrate superior image quality over traditional reconstruction and ISAM methods.
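The penalized gradient-descent idea can be sketched with a plain ISTA loop on a generic linear forward model; the matrix A merely stands in for the ISAM operator, and all sizes, step sizes and the l1 penalty weight are illustrative (the paper uses an accelerated variant and wavelet penalties).

```python
import numpy as np

# Sketch of a proximal-gradient (ISTA-style) loop minimising
# 0.5*||A x - y||^2 + lam*||x||_1. A is a placeholder forward operator.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, step=None, n_iter=100):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # gradient of 0.5*||A x - y||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x
```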
Novel time-resolved detector for fluorescence lifetime imaging (Conference Presentation)
Time-resolved imaging is a fundamental tool for biomedical applications such as fluorescence lifetime imaging microscopy (FLIM) and mapping of tissue optical parameters. FLIM, in particular, enables us to study the micro-environment of fluorophores in cell biology, providing relevant information such as pH, ion concentration and molecular coupling (e.g. FRET). Compressed sensing approaches, based on image sparsity, have recently been proposed as a novel imaging paradigm allowing information content to be preserved while significantly reducing the number of measurements. The single-pixel camera (SPC) approach is one possible implementation of this idea. The object is imaged onto a spatially modulated system (e.g. a DMD or SLM); by focusing the exiting light onto a single-pixel detector, the inner product between image and pattern is measured. In this work we propose and validate a novel time-resolved camera scheme with picosecond temporal resolution in which all the elements required for compressed sensing are combined into a single chip, allowing a significant cost reduction, compactness and performance improvement. The proposed device is based on a high-density array of detection elements, operating in the single-photon regime, which can be selectively enabled or disabled. All pixels are connected to one single time-to-digital converter (TDC). In order to experimentally validate the imaging and temporal capabilities of the proposed system, fluorescence lifetime imaging and time-gated imaging in a diffusive medium have been carried out. We believe the proposed time-resolved camera can be a convenient approach in many biomedical applications where a gated camera or a time-resolved scanning system is currently used.
Cell and Tissue Imaging I
Feasibility study of limited-angle reconstruction based in vivo optical projection tomography
Optical projection tomography (OPT) provides an approach for recreating three-dimensional images of small biological specimens. Light travels through the specimen along straight lines, achieving homogeneous illumination. Because specimens in conventional OPT could not survive, or survived only briefly, this paper proposes a new sample fixation method for OPT imaging. The specimen was anaesthetized in a petri dish, and the dish was fixed under the rotational stage of our home-made OPT system for imaging. This method reduces damage to the specimen and is better suited to continuous observation in in vivo OPT. However, this sample fixation leads to insufficient sampling. To obtain optical projection tomographic images from insufficient samples, this paper uses an iterative reconstruction algorithm combined with prior information to solve the inverse reconstruction problem.
Label-free cellular viability imaging in 3D tissue spheroids with dynamic optical coherence tomography (Conference Presentation)
Ahbid Zein-Sabatto, Julia S. Lee, Madison Kuhn, et al.
The recent development of 3D tissue spheroids aims to address current limitations of traditional 2D cell cultures in various studies, including cancer drug screening and environmental toxin testing. In these studies, measurements of cellular viability are commonly used to assess the effects of drugs or toxins. Existing methods include live/dead assays, colorimetric assays, fluorescence calcium imaging, and immunohistochemistry. However, those methods involve the addition of histological stains, fluorescent proteins, or other labels to the sample; some methods also require sample fixation. Fixation-based methods preclude the possibility of longitudinal viability studies, and confocal fluorescence imaging-based methods suffer from insufficient delivery of labels near the center of 3D spheroids. Here, we demonstrate the use of label-free optical coherence tomography (OCT) for quantitative cellular viability imaging of 3D tissue spheroids. OCT intensity and decorrelation signals acquired from neurospheroids exhibited changes correlated with cellular viability as manipulated with ethanol. Interestingly, when we repeated the imaging while cells gradually became less viable, the intensity and decorrelation signals exhibited different time courses, suggesting that they may represent different cellular processes in cell death. More quantitative measurements of viability using dynamic light scattering optical coherence microscopy (DLS-OCM) will also be presented. DLS-OCM enables us to obtain 3D maps of the diffusion coefficient, and we found that the diffusion coefficient of intra-cellular motility correlated with cellular viability manipulated by changes in temperature and pH. Finally, applications of these novel methods to human-cell 3D spheroids will be discussed.
Cell and Tissue Imaging II
3D phase imaging for thick biological samples (Conference Presentation)
Regina Eckert, Michael Chen, Li-Hao Yeh, et al.
Phase imaging provides quantitative structural data about biological samples as an alternative or complementary contrast method to the functional information given by fluorescence imaging. In certain cases, fluorescence imaging is undesirable because it may harm the development of living cells or add time and complexity to imaging pipelines. However, current 3D phase reconstruction methods, such as optical diffraction tomography [1], are often limited to a single-scattering approximation. This limits the amount of scattering that such 3D reconstruction algorithms can successfully handle, and therefore effectively limits the sample thickness that can be successfully reconstructed. More recent methods such as 3D Fourier ptychographic microscopy (FPM) have used intensity-only images combined with multiple-scattering models in order to reconstruct 3D volumes [2]. In practice, however, continuous biological samples on the order of 100 µm thick are not well reconstructed by 3D FPM, due to a lack of diverse information across the volume, which creates an ill-posed inverse problem. To mitigate this, we introduce simultaneous detection coding, in the form of pupil control, to the 3D FPM capture scheme. Simple pupil coding schemes enabled us to capture diverse information across the volume. In concert with a beam propagation model that takes into account multiple scattering, this combination of illumination- and detection-side coding allows us to more stably reconstruct 3D phase for larger biological samples. [1] E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153-156 (1969). [2] L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104-111 (2015).
Vessel segmentation with deep learning (Conference Presentation)
Xiaojun Cheng, Sreekanth Kura, David Boas
Vessel segmentation, distinguishing blood vessels from the surrounding tissue in images, is a pre-processing step that is often required for the analysis of a vascular network. Three-dimensional segmentation is often challenging in the presence of noise, and a simple thresholding method usually does not work well. Here we have integrated features extracted from 3D images obtained by in vivo two-photon microscopy with deep learning to perform vessel segmentation. The inputs are the eigenvalues of the Hessian matrix at each voxel for three different Gaussian filters of widths 2, 3 and 4 μm, together with the intensity normalized within the x-y plane. The network is composed of 3-5 layers, each with 3-6 hidden units, and is trained on two mouse brain vasculature networks and tested on a third. The results show a significant improvement compared to a simple thresholding method and will be compared with other segmentation methods such as particle filters and enhancement filters. Preliminary results on segmenting OCT data have also been obtained.
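A minimal sketch of the multi-scale Hessian eigenvalue features described above, computed with Gaussian derivative filters; sigmas are given in voxels here and the stand-in volume is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: per-voxel Hessian eigenvalues of a 3D volume at several Gaussian scales,
# of the kind used as network inputs above. Sigmas are in voxels (illustrative).

def hessian_eigenvalues(volume, sigma):
    vol = volume.astype(float)
    H = np.empty(vol.shape + (3, 3))
    for i in range(vol.ndim):
        for j in range(vol.ndim):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1                                  # mixed/second Gaussian derivatives
            H[..., i, j] = gaussian_filter(vol, sigma, order=order)
    return np.linalg.eigvalsh(H)                           # sorted eigenvalues per voxel

vol = np.random.rand(32, 64, 64)                           # stand-in two-photon volume
features = [hessian_eigenvalues(vol, s) for s in (2, 3, 4)]
```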
Refractive index properties of the retina accessed by multi-wavelength digital holographic microscopy
Álvaro Barroso Peña, Steffi Ketelhut, Peter Heiduschka, et al.
The refractive index (RI) of the retina and its dispersion are essential parameters in ophthalmologic imaging. However, the spatial RI distribution in retinal tissue is difficult to access. We explored the capabilities of multispectral quantitative phase imaging (QPI) with digital holographic microscopy (DHM) for label-free refractive index characterization of dissected murine retina. The retrieved tissue refractive indices are in agreement with previously reported values for living cells and dissected tissues. Moreover, the detected spatial refractive index distributions correlate with results from complementary conducted OCT investigations. In summary, multispectral DHM is a promising tool for label-free characterization of optical retina properties.
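For context, the standard QPI relation used to convert a measured phase delay into an integral refractive index is n_sample = n_medium + Δφ·λ/(2π·d). A minimal sketch with all numerical values assumed for illustration, not taken from the paper:

```python
import numpy as np

# Sketch: integral refractive index from a quantitative phase measurement.
# All values are illustrative placeholders.

wavelength = 532e-9          # m, one of several wavelengths in a multispectral measurement
n_medium = 1.337             # refractive index of the surrounding buffer (assumed)
thickness = 10e-6            # m, geometrical thickness of the tissue section (assumed)
delta_phi = 2.1              # rad, measured phase delay relative to the medium

n_sample = n_medium + delta_phi * wavelength / (2 * np.pi * thickness)
print(f"mean refractive index: {n_sample:.4f}")
```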
Modeling cell-matrix interactions in ovarian cancer through image based 3D biomimetic scaffolds created by multiphoton excited fabrication (Conference Presentation)
A profound remodeling of the collagen in the extracellular matrix (ECM) occurs in human ovarian cancer, but it is unknown how this affects tumor growth; such an understanding could lead to better diagnostic and therapeutic approaches. Here, we investigate the role of these specific alterations in collagen morphology on cell function by using multiphoton excited (MPE) polymerization to fabricate 3D biomimetic models of the ovarian stroma based on second harmonic generation images. This process is akin to 3D printing, except that it is performed at much higher resolution (~0.5 microns) and with the proteins that comprise the native ECM. We use this technique to create collagen scaffolds with complex, 3D sub-micron morphology representing the morphology of normal stroma, high-risk stroma, benign tumors, and high-grade ovarian cancer tissues. The models are seeded with different cancer cell lines, which allows decoupling of the roles of cell characteristics (metastatic potential) and ECM structure and composition (normal vs cancer) in migration dynamics. We found that the malignant stromal structure promoted enhanced motility, as well as cell and cytoskeletal alignment with respect to fibers. Conversely, normal and cancer cells on the normal stroma had the weakest response to the matrix morphology. While collagen alignment is known to affect cell dynamics, we further found that small changes in the collagen fiber morphology (e.g. periodicity) had large effects on the resulting migration dynamics. These models cannot be synthesized by other conventional fabrication methods, and we suggest that the MPE image-based fabrication method will enable a variety of studies in cancer biology.
Non-destructive real-time monitoring of brain tumor excision by combining high-frequency ultrasounds with infrared spectroscopy
The purpose of the current paper is the development of a non-destructive imaging system for diagnostic purposes, consisting of high-frequency ultrasonic transducers and an infrared spectrometer, enabling monitoring of spatial variation in the brain and detection of cancerous cells spreading into healthy tissue during neurosurgical tumor excision. Using ultrasound during neurosurgery, where a part of the skull is temporarily removed, can provide a new perspective on imaging techniques. The proposed device combines transducers of different center frequencies in order to achieve sufficient penetration depth inside the brain together with fine spatial resolution. Moreover, infrared spectroscopy is combined with the ultrasound to recognize cancerous cells from their infrared fingerprint. The drawback of the poor penetration depth of infrared electromagnetic waves is overcome by inserting a small-diameter probe near the location of the main tumor. The probe is integrated into the same structure as the ultrasound device so that both signals are received from the same spot. Finally, advanced signal processing techniques are used to maximize the information from each independent system separately, and data fusion will then be attempted.
Advances in Instrument Design
A system calibration protocol for widefield optical microscopy (Conference Presentation)
Sung Yong You, Sreevidhya Ramakrishnan, Jerry Chao, et al.
Widefield fluorescence microscopy has long been an invaluable tool in biomedical research. More recently, application of this technique has further increased with the introduction of new fluorophores and significant advances in optical instrumentation. More specifically, this technique has been used in a broad range of biological studies involving, for example, colocalization analysis, ratiometric imaging, 3D single molecule tracking and multicolor super-resolution imaging. Advanced widefield microscopy setups are generally implemented with many optical components, including mirrors, dichroic filters, excitation/emission filters, beam splitters, objective lenses, and optical cameras. The complexity of such microscopy configurations imposes an inherent risk of optical aberrations and systematic errors which can affect the quality and analysis of the acquired image data. Many methods have been introduced over the years to characterize specific aspects of fluorescence microscopy such as the system’s point spread function and field illumination. However, methods for the assessment of the various optical components, such as the objective lens, optical filter, and other key components required for microscope imaging, are lacking, and in general, there is a shortage of software tools for this analysis. Therefore, we present here a comprehensive system calibration protocol for improving the entire experimental pipeline, starting with image acquisition and ending with automated data analysis and workflow documentation. The protocol details the characterization of optical components, the assessment of data quality and validity, and correction of aberrations to allow for the attainment of the performance limits of the imaging system. This protocol allows researchers ranging from novice microscopists to imaging professionals to implement an optimal widefield imaging system.
Enhanced spectral lightfield fusion microscopy via deep computational optics for whole-slide pathology
Lensfree on-chip microscopy, which harnesses holography principles to capture interferometric light-field encodings without the need for lenses, is an emerging microscopy modality of widespread interest given its large field-of-view (FOV) compared to lens-based microscopy systems. In particular, there is growing interest in the development of high-quality lensfree on-chip color microscopy. In this study, we propose multi-laser spectral lightfield fusion microscopy using deep computational optics to achieve lensfree on-chip color microscopy. We will demonstrate that leveraging deep computational optics can enable imaging resolution beyond the diffraction limit without the use of any complex hardware-based super-resolution techniques, such as aperture scanning. The capabilities of the microscope are examined for whole-slide pathology. The superior imaging resolution of the instrument is demonstrated by imaging a series of biological specimens, showcasing its true-color imaging capability and its large FOV.
Motion free micro-endoscopic system for imaging in freely behaving animals at variable focal depths using liquid crystal lenses (Conference Presentation)
Arutyun Bagramyan, Tigran Galstian, Armen Saghatelyan
MICRO-ENDOSCOPE: The novel micro-endoscopic system we present was designed and simulated using Zemax optical software in order to predict key imaging parameters such as the magnification, field of view, resolution, and focal shift. A classical epi-fluorescence (reflected-light illumination) imaging configuration was considered. SolidWorks engineering software was used for the mechanical design and simulations. The mechanical parts of the micro-endoscope were mainly printed with a 3D laser printer (hard plastic) at a theoretical resolution of 25 µm, or directly fabricated and assembled in the mechanical workshop. TUNEABLE LIQUID CRYSTAL LENSES: We used an optimized modal lens approach to design a polarization-insensitive optical probe that requires relatively low driving voltages to perform endoscopic depth imaging. For a single LC lens, a thin, weakly conductive ZnO film was cast over the hole-patterned electrode to form the control layer, which generates the gradually varying electric field profile along the z-axis applied to the NLC layer. Two perpendicularly oriented double-layer NLC “half” lenses were required to form the custom four-layer design of the TLCL (tuneable liquid crystal lens). GRIN PROBE: To enable depth imaging, the four-layer TLCL was optically coupled to an imaging probe composed of two different GRIN lenses: an imaging GRIN lens (high NA) and a coupling GRIN lens (low NA), glued together with index-matched optical adhesive. RESULTS: The combination of the TLCL and the GRIN probe enabled a focal shift of approximately 90 ± 3 µm while maintaining a constant magnification and a lateral resolution of ≈1 µm. The potential of our system to visualise and differentiate small neuronal structures at variable focal depths was tested by imaging neurons, dendrites and spines in thick brain sections and in a freely behaving mouse (Flex-GFP), in deep brain regions such as the subventricular zone (SVZ) and rostral migratory stream (RMS).
Wave-optical calibration of a light-field fluorescence microscope
J. Pribošek, J. Steinbrener, M. Baumgart, et al.
In light-field microscopy, a single point emitter gives rise to a complex diffraction pattern, which varies with the position of the emitter in object space. In order to use deconvolution-based wave-optical reconstruction schemes for light-field imaging systems, established methods rely on theoretical estimation of such diffraction patterns. In this paper we propose a novel method for the direct experimental estimation of the light-field point spread function. Our approach relies on a modified reversed micro-Hartmann test to acquire a composite light-field point spread function of several thousand point emitters in the object plane simultaneously. By using fiducial markers and a custom image processing algorithm, we separate the contributions of individual point emitters directly in the raw light-field images, allowing the forward imaging model to be constructed without any prior assumptions about the optical system. The constructed forward imaging model can finally be applied in a 3D-deconvolution-based wave-optical reconstruction scheme.
Quantitative Phase and Holographic Methods
Stereo in-line holographic digital microscope
Biologists use optical microscopes to study plankton in the lab, but their size, complexity and cost make widespread deployment of microscopes in lakes and oceans challenging. Monitoring the morphology, behavior and distribution of plankton in situ is essential, as plankton are excellent indicators of marine environmental health and provide a large share of Earth’s oxygen production and carbon sequestration. Direct in-line holographic microscopy (DIHM) eliminates many of these obstacles, but image reconstruction is computationally intensive and produces monochromatic images. By using one laser and one white LED, it is possible to obtain the 3D location of plankton by triangulation, limiting holographic reconstruction to only the voxels occupied by the plankton and reducing computation by several orders of magnitude. The color information from the white LED assists in the classification of plankton, as phytoplankton contain green-colored chlorophyll. The reconstructed plankton images are rendered in a 3D interactive environment, viewable from a browser, giving the user the experience of observing plankton from inside a drop of water.
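The triangulation step can be sketched with simple parallax geometry: a particle's projections from two laterally separated sources shift on the sensor in proportion to its height. The geometry and the numbers below are illustrative, not the authors' exact layout.

```python
# Sketch: parallax triangulation of a particle's height above the sensor from two
# point-source illuminations (e.g. a laser and a white LED offset laterally).
# Variable names and values are illustrative.

def particle_height(shift_on_sensor, source_separation, source_height):
    """Similar-triangles estimate of particle height z above the sensor.

    shift_on_sensor   : lateral separation of the two projected images (m)
    source_separation : lateral distance between the two sources (m)
    source_height     : height of the sources above the sensor (m)
    """
    s, d, L = shift_on_sensor, source_separation, source_height
    return s * L / (d + s)          # from s = d * z / (L - z)

z = particle_height(shift_on_sensor=40e-6, source_separation=5e-3, source_height=0.1)
print(f"estimated particle height: {z*1e3:.2f} mm")
```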
Controlling the depth-of-field in 3D spatial-frequency-projection fluorescence microscopy with holographic refocusing (Conference Presentation)
While fluorescence microscopy has revolutionized a broad range of biological studies, one of several remaining challenges is the need to increase image acquisition rates for imaging dynamic specimens. Spatial-frequency-projection techniques, such as CHIRPT and SPIFI, utilize spatiotemporally structured illumination patterns to enable rapid acquisition of multidimensional images with a single-pixel detector. CHIRPT in particular shows promise for enhancing image acquisition rates because it encodes the spatial phase difference between two interfering illumination beams into temporal modulations of the fluorescent light emitted from the specimen. Consequently, the complex-valued 1D image measured with CHIRPT can be digitally propagated to recover a 2D image of the fluorophore distribution in the specimen. Moreover, the depth-of-field (DOF) in CHIRPT with planar illumination approaches 100x the conventional limit and thus allows large volumes of the specimen to be imaged simultaneously. Unfortunately, all configurations of CHIRPT reported to date require the use of focused light sheets to form 2D and 3D images, thereby restricting the effective DOF provided by CHIRPT to the conventional limit. In this work, we show that the addition of a confocal slit to the CHIRPT microscope allows one to control the effective DOF with linearly excited fluorescence. We present experimental data and a complete optical theory that describes the experimental results. Confocal CHIRPT may enable rapid imaging by dramatically reducing the number of axial translations required to form a complete 3D image, particularly when coupled with remote focusing of the confocal filter.
Signal evaluation in chromatic confocal spectral interferometry via k-space phase equality approach
D. Claus, T. Boettcher, L. Ding, et al.
Chromatic confocal spectral interferometry combines the benefits of scanning-free acquisition of the axial dimension with interferometrically increased depth accuracy. However, so far it has been difficult to separate the confocal signal from the interferometric signal. It is, of course, possible to apply established CCM evaluation methods; in that case, however, the available phase information, which offers decreased measurement uncertainty and, to some degree, the removal of disturbing artifacts at steep surface inclinations, is not taken into account. In fact, it is not straightforward to interpret the signal. In comparison to white-light interference microscopy, the signal suffers from a chirp. This means that it cannot be associated with a single beating frequency corresponding to the interferometrically encoded z-value. However, a modified lock-in technique has in the past been applied successfully, demonstrating a significant advantage over conventional CCM procedures. Here, we introduce the concept of k-space phase equality, which enables the separation of the confocal and interferometric signals and furthermore offers an extended measurement range. The principle is based on signal modification in z-space, which corresponds to the Fourier domain of the recorded spectral signal. The evaluation is then performed in the spectral domain, where the phase signals for all z-positions are evaluated with respect to the corresponding wavelength. As a result, a phase signal with reduced aberration terms, similar to an interferometric signal, is obtained, which can then be evaluated using established techniques.
Projection multiplexing for enhanced acquisition speed in holographic tomography
Arkadiusz Kuś, Maria Baczewska, Michał Ziemczonok, et al.
In holographic tomography (HT), the 3D refractive index distribution within a weakly scattering, phase-only biological object is retrieved. This key property of the technique is one of its most significant strengths compared to labelling-based methods of cell analysis such as fluorescence microscopy. As a consequence, however, a set of holograms must be acquired at several viewing directions, which limits the measurement speed. In this paper we explore the prospect of multiplexing projections in order to decrease the number of scanning positions required for a full measurement. The presented analysis is based on experimental data acquired in a limited-angle holographic tomography system and emulates the performance of a spatial-light-modulator-based system in which multiple projections may be acquired simultaneously by generating a distribution of multiple point sources in the Fourier plane of the condenser lens. The increase in acquisition speed depends directly on the number of multiplexed holograms, but comes at the cost of decreased reconstruction quality. The performance of the system is demonstrated and analyzed with biological objects - human keratinocyte cells.
Nonlinear and Fluorescence Microscopy
Single- and multi-photon shaped illumination for light-sheet fluorescence microscopy (Conference Presentation)
The use of exotic optical modes is becoming increasingly widespread in microscopy. In particular, propagation-invariant beams, such as Airy and Bessel beams and optical lattices, have proved useful in light-sheet fluorescence microscopy (LSFM), as they enable high-resolution imaging over a large field-of-view (FOV), possess a resistance to the deleterious effects of specimen-induced light scattering, and can potentially reduce photo-toxicity. Although these propagation-invariant beams can resist the effects of light scattering to some degree, and there has been some interest in adaptive-optical methods to correct for beam aberrations when they cannot, scattering and absorption of the illuminating light sheet limit the penetration of LSFM into tissues and result in non-uniform intensity across the FOV. A new degree of control over the intensity evolution of propagation-invariant beams can overcome beam losses across the FOV, restoring uniform illumination intensity and therefore image quality. This concept is compatible with all types of propagation-invariant beams and is characterised in the context of light-sheet image quality. Another property to control is the wavelength of light used. Optical transmission through tissue is greatly improved at longer wavelengths into the near-infrared due to reduced Rayleigh scattering, and two-photon excitation has proved beneficial for imaging at greater depth in LSFM. Three-photon excitation has already been demonstrated as a powerful tool to increase tissue penetration in deep-brain confocal microscopy, and when combined with beam shaping it can also be a powerful illumination strategy for LSFM. Recent progress in shaping optical fields for LSFM will be presented.
Hyperspectral imaging fluorescence excitation scanning (HIFEX) microscopy for live cell imaging
In the past two decades, spectral imaging technologies have expanded the capacity of fluorescence microscopy for accurate detection of multiple labels, separation of labels from cellular and tissue autofluorescence, and analysis of autofluorescence signatures. These technologies have been implemented using a range of optical techniques, such as tunable filters, diffraction gratings, prisms, interferometry, and custom Bayer filters. Each of these techniques has associated strengths and weaknesses with regard to spectral resolution, spatial resolution, temporal resolution, and signal-to-noise characteristics. We have previously shown that spectral scanning of the fluorescence excitation spectrum can provide greatly increased signal strength compared to traditional emission-scanning approaches. Here, we present results from utilizing a Hyperspectral Imaging Fluorescence Excitation Scanning (HIFEX) microscope system for live cell imaging. Live cell signaling studies were performed using HEK 293 cells and rat pulmonary microvascular endothelial cells (PMVECs), transfected with either a cAMP FRET reporter or a Ca2+ reporter. Cells were further labeled to visualize subcellular structures (nuclei, membrane, mitochondria, etc.). Spectral images were acquired using a custom inverted microscope (TE2000, Nikon Instruments) equipped with a 300 W Xe arc lamp and a tunable excitation filter (VF-5, Sutter Instrument Co., equipped with VersaChrome filters, Semrock), and run through MicroManager. Time-lapse spectral images were acquired from 350 to 550 nm in 5 nm increments. Spectral image data were linearly unmixed using custom MATLAB scripts. Results indicate that the HIFEX microscope system can acquire live cell image data at acquisition speeds of 8 ms per wavelength band with minimal photobleaching, sufficient for studying moderate-speed cAMP and Ca2+ events.
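Linear unmixing of excitation-scan data amounts to a pixel-wise non-negative least-squares fit against a library of endmember spectra. The authors use custom MATLAB scripts; the Python sketch below is only an illustration with placeholder data and endmember names.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch: pixel-wise linear spectral unmixing of an excitation-scan image stack.
# Endmember library and image data are random placeholders.

n_bands = 41                                   # e.g. 350-550 nm in 5 nm steps
endmembers = np.random.rand(n_bands, 4)        # columns: reporter, labels, autofluorescence (assumed)
image = np.random.rand(128, 128, n_bands)      # stand-in spectral image stack

pixels = image.reshape(-1, n_bands)
abundances = np.array([nnls(endmembers, spectrum)[0] for spectrum in pixels])
abundance_maps = abundances.reshape(128, 128, -1)   # one abundance map per endmember
```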
Three-dimensional fluorescence imaging with an automated light-field microscope
J. Steinbrener, J. Pribošek, M. Baumgart, et al.
In addition to the two-dimensional intensity distribution in the image plane, light-field microscopes capture information about the angle of the incident radiation. This information can be used to extract depth information about the object, calculate all-in-focus images, and perform three-dimensional reconstructions from a single exposure. In combination with automated microscopy setups, this makes the technique a promising tool for high-throughput, three-dimensional cell assay evaluation, which could substantially improve drug development and screening. To this end, we have developed a novel generalized calibration and three-dimensional reconstruction scheme for a light-field fluorescence microscope setup. The scheme can handle Keplerian and Galilean light-field camera configurations added to infinity-corrected microscopes configured to be telecentric as well as non-telecentric or hypercentric. The latter provides a significant advantage over the state of the art, as it allows for an application-specific optimization of lateral and axial resolution, field-of-view, and depth-of-focus. The reconstruction itself is performed iteratively using an expectation-maximization algorithm. Super-resolved reconstructions can be achieved by including experimentally measured point-spread functions. To reduce the required computational power, the sparsity and periodicity of the system matrix relating object space to light-field space are exploited. This is particularly challenging for the non-telecentric cases, where the voxel size of the reconstructed object space depends on the axial coordinate. We provide details on the experimental setup and the reconstruction algorithm, and present results on the experimental verification of theoretical performance parameters as well as successful reconstructions of fluorescent beads and three-dimensional cell spheroids.
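The iterative expectation-maximization reconstruction can be illustrated with a minimal sketch, assuming the system matrix H relating object voxels to light-field pixels has already been built (e.g., from measured point-spread functions); this is the standard multiplicative (Richardson-Lucy-style) EM update, not the authors' exact implementation:

```python
import numpy as np

def em_reconstruct(lightfield, H, n_iter=50, eps=1e-12):
    """EM / Richardson-Lucy reconstruction for a linear light-field model.

    lightfield: measured light-field image flattened to a vector (n_pixels,)
    H:          system matrix (n_pixels, n_voxels) mapping object voxels
                to light-field pixels (assumed to be precomputed)
    returns:    reconstructed object volume as a vector (n_voxels,)
    """
    x = np.ones(H.shape[1])                 # non-negative initial guess
    norm = H.sum(axis=0) + eps              # per-voxel sensitivity
    for _ in range(n_iter):
        forward = H @ x + eps               # predicted light field
        x *= (H.T @ (lightfield / forward)) / norm   # multiplicative EM update
    return x
```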
Temporal focusing-based multiphoton excitation fluorescence images with background noise cancellation via Hilbert-Huang transform
Yvonne Yuling Hu, Yuan-Rong Lo, Chun-Yu Lin, et al.
Temporal focusing multiphoton excitation microscopy provides a wide field-of-view together with optical sectioning, and by using a digital micromirror device it also provides patterned illumination. However, when the back aperture of the objective lens is not fully filled, the axial confinement is limited to a few micrometers, so out-of-focus fluorophores are excited and the image is blurred. In this study, the Hilbert-Huang transform is proposed to reduce this background noise. Empirical mode decomposition is first applied to disassemble the image into intrinsic mode functions, and the image is then reconstructed via the Hilbert transform after the background residues have been removed. The axial confinement can be enhanced from 2.79 μm to 0.73 μm for a structure frequency of 1.06 μm⁻¹.
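A minimal one-dimensional sketch of the EMD-plus-Hilbert idea is given below, applied to a single image line; the PyEMD package and the choice of which low-frequency IMFs to treat as background are assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def hht_background_removal(line, n_background_imfs=1):
    """Suppress slowly varying background in one image line via EMD + Hilbert.

    The line is decomposed into intrinsic mode functions (IMFs); the last,
    lowest-frequency IMFs are treated as background and discarded, and the
    envelope of the remaining signal is recovered from the analytic signal.
    """
    imfs = EMD()(np.asarray(line, dtype=float))
    signal = imfs[:-n_background_imfs].sum(axis=0)   # drop low-frequency IMFs
    envelope = np.abs(hilbert(signal))               # analytic-signal amplitude
    return envelope
```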
Poster Session
Spatial encoded polarization dependent nonlinear optical analysis for local tensors imaging of collagenous tissue
Changqin Ding, James R. W. Ulcickas, Fengyuan Deng, et al.
Rapid local hyperpolarizability tensor imaging of collagenous tissue was achieved with spatially encoded polarization-dependent nonlinear optical measurements. Second harmonic generation (SHG) is well suited to polarization-dependent measurements due to its unique symmetry requirements, providing rich information on the local structure of protein crystals and biological tissues. Fast polarization-dependent measurements reduce 1/f noise and suppress motion blur for in vivo imaging. In this work, spatially encoded polarization-dependent SHG was used for local hyperpolarizability tensor imaging of z-cut quartz and collagenous tissue by using a single patterned microretarder array (μRA). The μRA was designed with a pattern of half-wave retardance whose fast-axis azimuthal orientation varies spatially. When placed in the rear conjugate plane of a beam-scanning microscope, the μRA enabled spatial modulation of the incident light, with polarization states varying across the field of view. This 'snapshot' approach allowed polarization-dependent measurements of a uniform sample, such that one image contained a complete set of polarization modulations from different pixels. Combined with sample translation, the method was also able to recover the local hyperpolarizability tensor of non-uniform samples. This strategy was successfully used to extract local nonlinear optical tensors for z-cut quartz and collagenous tissue, in good agreement with traditional polarization-dependent measurements, providing an alternate approach for fast polarization analysis of collagenous tissue with minimal modifications to current beam-scanning nonlinear optical systems.
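As a hedged illustration only (not the authors' analysis), per-pixel tensor information is commonly extracted by fitting the polarization-dependent SHG response to a truncated Fourier series in the incident polarization angle; a least-squares version of such a fit might look like:

```python
import numpy as np

def fit_shg_fourier(intensities, angles_rad):
    """Least-squares fit of a polarization-dependent SHG response.

    For collagen-like symmetry the detected SHG intensity versus incident
    polarization angle t is well described by
        I(t) = a0 + a2*cos(2t) + b2*sin(2t) + a4*cos(4t) + b4*sin(4t);
    the fitted coefficients relate to ratios of hyperpolarizability tensor
    elements (the exact mapping depends on the assumed symmetry).
    """
    t = np.asarray(angles_rad, dtype=float)
    A = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t),
                         np.cos(4 * t), np.sin(4 * t)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float), rcond=None)
    return coeffs  # [a0, a2, b2, a4, b4]
```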
Compressed ultrafast transmission electron microscopy: a simulation study
Bringing ultrafast temporal resolution to transmission electron microscopy (TEM) has historically been challenging. Despite significant recent progress in this direction, it remains difficult to achieve sub-nanosecond temporal resolution with imaging from a single electron pulse. To address this limitation, here we propose a methodology that combines laser-assisted TEM with computational imaging methodologies based on compressed sensing (CS). In this technique, a two-dimensional (2D) transient event [i.e., (x, y) frames that vary in time] is recorded through a CS paradigm. The 2D streak image generated on a camera is used to reconstruct the datacube of the ultrafast event, with two spatial and one temporal dimensions, via a CS-based image reconstruction algorithm. Using numerical simulation, we find that the reconstructed results are in good agreement with the ground truth, which demonstrates the applicability of CS-based computational imaging methodologies to laser-assisted TEM. Our proposed method, complementing the existing ultrafast stroboscopic and nanosecond single-shot techniques, opens up the possibility of single-shot, spatiotemporal imaging of irreversible structural phenomena with sub-nanosecond temporal resolution.
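A generic sketch of the CS reconstruction step is shown below; the forward operator and its adjoint depend on the specific encoding and shearing geometry and are left abstract here, and the simple ISTA solver stands in for whatever reconstruction algorithm the authors actually used:

```python
import numpy as np

def ista(A, At, y, x_shape, lam=0.01, step=1.0, n_iter=200):
    """Generic ISTA solver for min_x 0.5*||A(x) - y||^2 + lam*||x||_1.

    A, At:   callables implementing the forward operator (datacube -> streak
             image) and its adjoint; their form is assumed, not specified here
    y:       measured 2D streak image
    x_shape: shape of the (x, y, t) datacube to recover
    """
    x = np.zeros(x_shape)
    for _ in range(n_iter):
        grad = At(A(x) - y)                                       # data-term gradient
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```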
A real-time driver monitoring system using a high sensitivity camera
Leyi Tan, Masashi Hakamata, Chen Cao, et al.
Traffic accidents and mental stress are strongly correlated: drivers under stress are more likely to cause accidents, so a system that could describe the mental state of a driver would help avoid such accidents. Multiple indices derived from analysis of heart rate variability (HRV) can be used to estimate mental state in humans; moreover, in recent years, methods for non-contact heart rate estimation have been studied extensively and have reached high accuracy. Building on both, we developed a real-time driver monitoring system that not only estimates the driver's heart rate but also indicates whether the driver is under stress. The system delivers two outputs: heart rate (HR) and a mental stress level (stress index). We utilized an 18-bit camera to grab frontal facial frames and independent component analysis (ICA) to extract the haemoglobin signal from each frame. After temporal filtering and peak detection, the R-R interval (RRI) is obtained and the HR is measured. Mental stress estimation starts 30 seconds after the first RRI sample is obtained; a power spectrum analysis is then applied to the preceding 30 seconds of HRV data to obtain the powers of the low-frequency (LF) and high-frequency (HF) bands. The ratio of these two band powers, the so-called LF/HF ratio, is delivered as a stress index that quantifies the degree of mental stress. Finally, the validity of the stress index is verified on an arithmetic-calculation task and a number of driving-simulation scenarios.
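A minimal sketch of the LF/HF computation from R-R intervals is given below; the interpolation rate, band edges, and Welch parameters are common HRV defaults rather than the authors' exact settings:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(rri_s, fs_interp=4.0):
    """Compute the LF/HF stress index from a series of R-R intervals (in seconds).

    The irregularly sampled RRI tachogram is resampled to a uniform grid,
    its power spectral density is estimated with Welch's method, and the
    power in the LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands is summed.
    """
    rri_s = np.asarray(rri_s, dtype=float)
    t = np.cumsum(rri_s)                                    # beat times
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rri_uniform = interp1d(t, rri_s, kind='cubic')(t_uniform)
    f, psd = welch(rri_uniform - rri_uniform.mean(), fs=fs_interp,
                   nperseg=min(256, len(rri_uniform)))
    df = f[1] - f[0]
    lf = psd[(f >= 0.04) & (f < 0.15)].sum() * df
    hf = psd[(f >= 0.15) & (f < 0.40)].sum() * df
    return lf / hf
```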
Morphological cell image analysis for real-time monitoring of stem cell culture
The capability of mesenchymal stem cells (MSCs) to self-renew is reflected in their morphological phenotype. Cells that rapidly self-replicate (RS) are spindle-shaped and fibroblastic, while cells that slowly replicate (SR) are flattened and cuboidal. In addition to slow replication, SR cells lose most of their ability to differentiate into multiple cell lineages and promote tissue repair. Morphological evaluation can therefore be used as a rapid screening technique to monitor culture viability in real time and minimize the need for time-consuming validation assays during expansion. We have developed an image analysis algorithm to quantitatively determine morphological features, with the goal of non-invasive and automated prediction of culture viability. The algorithm includes cell segmentation and classification. Following initial thresholding for cell localization, individual cells are segmented using region-based edge detection, while clustered cells are segmented using a marker-based watershed method. Classification of cell phenotype as RS or SR is then accomplished using a logistic regression model. Results were validated against visual inspection by twenty individuals trained to evaluate the morphological phenotypes of MSCs. The segmentation algorithm demonstrated an accuracy of 94.03% and a mean Dice-Sørensen score of 0.71 across 15 images containing 67 cells. The classification results for the test dataset demonstrated an accuracy of 83.33%, an AUC of 0.87 +/- 0.08, and an F-measure of 0.87.
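A compact sketch of the segmentation-plus-classification pipeline, built from standard scikit-image and scikit-learn components rather than the authors' code (Otsu thresholding, marker-based watershed, and logistic regression on simple shape features), might look like:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops
from sklearn.linear_model import LogisticRegression

def segment_cells(image):
    """Threshold the image, then split touching cells with a marker-based watershed."""
    mask = image > threshold_otsu(image)
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask, min_distance=10)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)

def shape_features(labels):
    """Simple per-cell morphology features (area, eccentricity, solidity)."""
    return np.array([[r.area, r.eccentricity, r.solidity] for r in regionprops(labels)])

# Classification into RS vs. SR phenotype, given annotated training data:
# clf = LogisticRegression().fit(train_features, train_labels)
# predictions = clf.predict(shape_features(segment_cells(test_image)))
```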