Proceedings Volume 9403

Image Sensors and Imaging Systems 2015


Volume Details

Date Published: 3 April 2015
Contents: 7 Sessions, 24 Papers, 0 Presentations
Conference: SPIE/IS&T Electronic Imaging 2015
Volume Number: 9403

Table of Contents


  • Front Matter: Volume 9403
  • High-Performance Sensors
  • Sensors, Color, and Spectroscopy
  • Sensor Performance and Modeling
  • Smart Sensors
  • Noise
  • Interactive Paper Session
Front Matter: Volume 9403
Front Matter: Volume 9403
This PDF file contains the front matter associated with SPIE Proceedings Volume 9403 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
High-Performance Sensors
2.2um BSI CMOS image sensor with two layer photo-detector
H. Sasaki, A. Mochizuki, Y. Sugiura, et al.
Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact the backside photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fit well with those estimated from light-absorption theory for silicon detectors. Our measurement results show that the keys to realizing the 2LPD in BSI are (1) reducing crosstalk to the VCT from adjacent pixels and (2) controlling the variance of the backside photo detector thickness to reduce color signal variations.
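As a rough illustration of the light-absorption estimate mentioned in the abstract (a sketch under idealized Beer-Lambert assumptions, ignoring reflection, recombination and crosstalk; the thicknesses d_b and d_f are placeholders, not values from the paper), the expected responses of the backside and circuit-side layers of a 2LPD illuminated from the back can be written as:

```latex
\eta_\text{back}(\lambda)  \approx 1 - e^{-\alpha(\lambda)\, d_b}, \qquad
\eta_\text{front}(\lambda) \approx e^{-\alpha(\lambda)\, d_b}\left(1 - e^{-\alpha(\lambda)\, d_f}\right)
```

where α(λ) is the absorption coefficient of silicon and d_b, d_f are the layer thicknesses; since α decreases with wavelength, the backside layer favors shorter wavelengths and the deeper layer longer ones.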
A compact THz imaging system
Aleksander Sešek, Andrej Švigelj, Janez Trontelj
The objective of this paper is the development of a compact, low-cost THz imaging system usable both for observation of objects near the system and for stand-off detection. The performance of the system matches the high standard of more expensive and bulkier systems on the market. It is easy to operate, as it does not depend on any fine mechanical adjustment. Because it is compact and consumes little power, a portable version was also developed for stand-off detection of concealed objects under textile or inside packages. These requirements rule out optical systems such as Time Domain Spectroscopy systems, which need fine optical component positioning and require a large amount of time to perform a scan and capture the image pixel by pixel. They are also largely unsuitable for stand-off detection due to their low output power. In the paper the antenna-bolometer sensor microstructure is presented and the THz system is described. Analysis and design guidelines for the bolometer itself are discussed. Measurement results for both near and stand-off THz imaging are also presented.
Signal conditioning circuits for 3D-integrated burst image sensors with on-chip A/D conversion
R. Bonnard, J. Segura Puchades, F. Guellec, et al.
Ultra High Speed (UHS) imaging has been at the forefront of imaging technology for some years now. These image sensors are used to capture high-speed phenomena that require on the order of a hundred images at mega-frames-per-second rates, such as detonics, plasma forming and laser ablation. At such speeds the data read-out is a bottleneck, and CMOS and CCD image sensors store a limited number of frames (a burst) on-chip before a slow read-out. Moreover, in recent years 3D integration has made significant progress in terms of interconnection density. It appears as a key technology for the future of UHS imaging, as it allows highly parallel integration, shorter interconnects and an increased fill factor. In previous work we proposed a 3D-integrated burst image sensor with on-chip A/D conversion that overcomes the state of the art in terms of frames per burst. This sensor is made of three stacked layers performing, respectively, the signal conditioning, the A/D conversion and the burst storage. We present here different solutions to implement the analogue front-end of the first layer. We describe three circuits for three purposes (high frame rate, power efficiency and sensitivity) and provide simulation results to support our point. All these front-ends perform global-shutter acquisition.
A 4MP high-dynamic-range, low-noise CMOS image sensor
Cheng Ma, Yang Liu, Jing Li, et al.
In this paper we present a 4 Megapixel high-dynamic-range, low-dark-noise and low-dark-current CMOS image sensor, which is ideal for high-end scientific and surveillance applications. The pixel design is based on a 4-T PPD structure. During readout of the pixel array, signals are first amplified and then fed to a low-power column-parallel ADC array previously presented in [1]. Measurement results show that the sensor achieves a dynamic range of 96 dB and a dark noise of 1.47 e- at 24 fps. The dark current is 0.15 e-/pixel/s at -20°C.
Multi-camera synchronization core implemented on USB3 based FPGA platform
Ricardo M. Sousa, Martin Wäny, Pedro Santos, et al.
Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, synchronizing multiple cameras is required. In this work, the challenge of synchronizing multiple self-timed cameras over only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
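The frequency-locking idea described above can be sketched as a simple per-camera control loop (a minimal sketch, not Awaiba's actual firmware; the proportional gain, target line period and supply-DAC interface are hypothetical):

```python
def sync_step(measured_line_period, target_line_period, vdd_code,
              kp=0.05, vdd_min=0, vdd_max=255):
    """One control iteration: nudge the camera's supply-voltage DAC code so
    that its measured line period approaches the Master's (target) period.

    A self-timed sensor generally runs faster at a higher supply voltage,
    so a line period that is too long is corrected by raising VDD.
    """
    error = measured_line_period - target_line_period   # > 0: camera too slow
    vdd_code += kp * error                               # proportional adjustment
    return int(min(max(vdd_code, vdd_min), vdd_max))     # clamp to DAC range
```

In the Master-Slave arrangement described above, each Slave would run such a loop against the Master's measured line period rather than a fixed setpoint.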
Sensors, Color, and Spectroscopy
Compressed hyperspectral sensing
Grigorios Tsagkatakis, Panagiotis Tsakalides
Acquisition of high-dimensional Hyperspectral Imaging (HSI) data using limited-dimensionality imaging sensors has led to designs with restricted capabilities that hinder the proliferation of HSI. To overcome this limitation, novel HSI architectures strive to relax the strict requirements of HSI by introducing computation into the acquisition process. A framework that allows the integration of acquisition with computation is the recently proposed framework of Compressed Sensing (CS). In this work, we propose a novel HSI architecture that exploits the sampling and recovery capabilities of CS to achieve a dramatic reduction in HSI acquisition requirements. In the proposed architecture, signals from multiple spectral bands are multiplexed before being recorded by the imaging sensor. Reconstruction of the full hyperspectral cube is achieved by exploiting a dictionary of elementary spectral profiles in a unified minimization framework. Simulation results suggest that high-quality recovery is possible from a single or a small number of multiplexed frames.
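To make the multiplexing-plus-dictionary recovery concrete, the sketch below (an illustration only, with a generic random mixing matrix and a plain ISTA solver standing in for the authors' architecture and minimization) reconstructs a per-pixel spectrum from a few multiplexed measurements:

```python
import numpy as np

def ista_recover(y, Phi, D, lam=0.05, step=None, n_iter=200):
    """Recover sparse dictionary coefficients a such that y ~ Phi @ D @ a,
    then return the reconstructed spectrum D @ a.

    y   : (M,)   multiplexed measurements for one pixel (M << number of bands)
    Phi : (M, B) band-multiplexing matrix (which bands contribute to each frame)
    D   : (B, K) dictionary of elementary spectral profiles
    """
    A = Phi @ D
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe ISTA step size
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ a - y)                        # gradient of 0.5*||Aa - y||^2
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return D @ a
```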
Acousto-optic imaging with a smart-pixels sensor
K. Barjean, K. Contreras, J.-B. Laudereau, et al.
Acousto-optic imaging (AOI) is an emerging technique in the field of biomedical optics which combines the optical contrast offered by diffuse optical tomography with the resolution of ultrasound (US) imaging. In this work we report the implementation, for that purpose, of a CMOS smart-pixels sensor dedicated to the real-time analysis of speckle patterns. We implemented a highly sensitive lock-in detection in each pixel in order to extract the tagged photons after appropriate in-pixel post-processing. With this system we can acquire images in scattering samples with a spatial resolution in the 2 mm range, with an integration time compatible with the dynamics of living biological tissue.
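For readers unfamiliar with lock-in detection, the digital equivalent of what each smart pixel does is shown below (a simplified software sketch, not the in-pixel circuit; the modulation frequency and sampling rate are placeholders):

```python
import numpy as np

def lockin_amplitude(samples, f_mod, fs):
    """Extract the amplitude of the component of `samples` modulated at f_mod.

    Demodulate with in-phase and quadrature references and low-pass filter
    (here: average), which isolates the 'tagged photon' signal from the
    unmodulated speckle background.
    """
    t = np.arange(len(samples)) / fs
    i_comp = np.mean(samples * np.cos(2 * np.pi * f_mod * t))   # in-phase
    q_comp = np.mean(samples * np.sin(2 * np.pi * f_mod * t))   # quadrature
    return 2.0 * np.hypot(i_comp, q_comp)                        # modulation amplitude
```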
Design, fabrication and characterization of a polarization-sensitive focal plane array
Dmitry Vorobiev, Zoran Ninkov
Measurement of polarization is a powerful yet underutilized technique, with potential applications in remote sensing, astronomy, biomedical imaging and optical metrology. We present the design, fabrication and characterization of a CCD-based polarization-sensitive focal plane array (FPA). These devices are compact, permanently aligned detectors capable of determining the degree and angle of linear polarization in a scene, with a single exposure, over a broad spectral range. To derive the polarization properties, we employ a variation of the division-of-focal-plane modulation strategy. The devices are fabricated by hybridizing a micropolarizer array (MPA) with a CCD. The result is a "general-purpose" polarization-sensitive imaging sensor that can be placed at the focal plane of a wide range of imaging systems (and even spectrographs). We present our efforts to date in developing this technology and examine the factors that fundamentally limit the performance of these devices.
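For a division-of-focal-plane sensor with a 0°/45°/90°/135° micropolarizer mosaic, the degree and angle of linear polarization follow from the standard Stokes relations (a generic sketch; demosaicing and calibration of the real MPA are omitted):

```python
import numpy as np

def linear_polarization(i0, i45, i90, i135):
    """Compute Stokes S0, S1, S2 and the degree/angle of linear polarization
    from four intensity images taken behind 0/45/90/135-degree polarizers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)            # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)               # radians
    return s0, s1, s2, dolp, aolp
```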
High dynamic, spectral, and polarized natural light environment acquisition
Philippe Porral, Patrick Callet, Philippe Fuchs, et al.
In the field of image synthesis, the simulation of a material's appearance requires a rigorous solution of the light transport equation. This implies taking into account all the elements that may influence the spectral radiance perceived by the human eye. Obviously, the reflectance properties of the materials have a major impact on the calculations, but other significant properties of light, such as its spectral distribution and polarization, must also be taken into account in order to expect correct results. Unfortunately, real maps of the polarized or spectral environment corresponding to a real sky do not exist. It therefore seemed necessary to focus our work on capturing such data, in order to have a system that qualifies all the properties of light and is capable of feeding simulations in rendering software. Consequently, in this work we develop and characterize a device designed to capture the entire light environment, taking into account both the dynamic range of the spectral distribution and the polarization states, in a measurement time of less than two minutes. We propose a data format inspired by polarimetric imaging and suited to a spectral rendering engine, which exploits the Stokes-Mueller formalism.
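The Stokes-Mueller formalism referred to above describes light by a four-component Stokes vector and each optical interaction by a 4x4 Mueller matrix; for instance (standard textbook form, not the authors' specific data format):

```latex
S = \begin{pmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{pmatrix}, \qquad
S_\text{out} = M\, S_\text{in}, \qquad
M_\text{pol}(0^\circ) = \frac{1}{2}
\begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
```

where S_0 is the total (spectral) radiance, S_1 and S_2 encode linear polarization, S_3 encodes circular polarization, and M_pol(0°) is the Mueller matrix of an ideal horizontal linear polarizer.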
A high-sensitivity 2x2 multi-aperture color camera based on selective averaging
Bo Zhang, Keiichiro Kagawa, Taishi Takasawa, et al.
To demonstrate the low-noise performance of a multi-aperture imaging system using a selective averaging method, an ultra-high-sensitivity multi-aperture color camera with 2×2 apertures is being developed. In low-light conditions, random telegraph signal (RTS) noise and dark current white defects become visible, which greatly degrades image quality. To reduce these kinds of noise as well as to increase the number of incident photons, a multi-aperture imaging system composed of an array of lenses and CMOS image sensors (CIS) is used, together with selective averaging to minimize the synthetic sensor noise at every pixel. It is verified by simulation that the effective noise at the peak of the noise histogram is reduced from 1.44 e- to 0.73 e- in a 2×2-aperture system, where RTS noise and dark current white defects have been successfully removed. In this work, a prototype based on low-noise color sensors with 1280×1024 pixels fabricated in 0.18um CIS technology is considered. The pixel pitch is 7.1μm×7.1μm. The sensor noise is around 1 e-, based on folding-integration and cyclic column ADCs, and low voltage differential signaling (LVDS) is used to improve noise immunity. The synthetic F-number of the prototype is 0.6.
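A minimal per-pixel version of the selective-averaging idea is sketched below (assuming a pre-calibrated noise map per aperture, e.g. from dark frames; the simple "pick the subset that minimizes synthetic noise" rule is an illustration, not the authors' exact implementation):

```python
import numpy as np

def selective_average(pixels, sigmas):
    """Combine co-located pixel values from N apertures.

    pixels : (N,) values of the same scene point seen through each aperture
    sigmas : (N,) calibrated per-pixel noise of each aperture (RTS noise and
             white defects show up as large sigma and are therefore excluded)

    Averaging the k lowest-noise apertures gives synthetic noise
    sqrt(sum(sigma_i^2)) / k; we pick the k that minimizes it.
    """
    order = np.argsort(sigmas)
    best_k, best_noise = 1, np.inf
    for k in range(1, len(pixels) + 1):
        noise = np.sqrt(np.sum(sigmas[order[:k]] ** 2)) / k
        if noise < best_noise:
            best_k, best_noise = k, noise
    return np.mean(pixels[order[:best_k]])
```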
Sensor Performance and Modeling
Analysis of pixel gain and linearity of CMOS image sensor using floating capacitor load readout operation
S. Wakashima, F. Kusuhara, R. Kuroda, et al.
In this paper, we demonstrate that the floating capacitor load readout operation has higher readout gain and a wider linearity range than the conventional pixel readout operation, and we explain why. The pixel signal readout gain is determined by the transconductance, the backgate transconductance and the output resistance of the in-pixel driver transistor, together with the load resistance. In floating capacitor load readout operation, since there is no current source and the load is the sample/hold capacitor only, the load resistance approaches infinity; therefore the readout gain is larger than in the conventional readout operation. In addition, because there is no current source, the voltage drop is smaller than in the conventional readout operation, so the linearity range is enlarged at both the high and low voltage limits. This linearity-range enlargement becomes more advantageous as the power supply voltage is reduced for lower power consumption. To confirm these effects, we fabricated a prototype chip using a 0.18um 1-Poly 3-Metal CMOS process technology with a pinned PD. As a result, we confirmed that floating capacitor load readout operation increases both the readout gain and the linearity range.
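The gain argument can be summarized with the usual source-follower small-signal expression (a simplified sketch including body effect and channel-length modulation, other parasitics ignored):

```latex
A_v \;=\; \frac{g_m}{\,g_m + g_{mb} + g_{ds} + 1/R_L\,}
\;\xrightarrow[\;R_L \to \infty\;]{}\;
\frac{g_m}{\,g_m + g_{mb} + g_{ds}\,}
```

so removing the current-source load and leaving only the sample/hold capacitor (R_L approaching infinity) pushes the gain toward its upper bound, consistent with the higher readout gain reported above.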
Addressing challenges of modulation transfer function measurement with fisheye lens cameras
Brian M. Deegan, Patrick E. Denny, Vladimir Zlokolica, et al.
Modulation transfer function (MTF) is a well-defined and accepted method of measuring image sharpness. The slanted edge test, as defined in ISO 12233, is a standard method of calculating MTF and is widely used for lens alignment and auto-focus algorithm verification. However, there are a number of challenges which should be considered when measuring MTF in cameras with fisheye lenses. Due to trade-offs related to Petzval curvature, planarity of the optical plane is difficult to achieve in fisheye lenses. It is therefore critical to be able to accurately measure sharpness throughout the entire image, particularly for lens alignment. One challenge for fisheye lenses is that, because of the radial distortion, the slanted edges will have different angles depending on the location within the image and on the distortion profile of the lens. Previous work in the literature indicates that MTF measurements are robust for angles between 2 and 10 degrees; outside of this range, MTF measurements become unreliable. Also, the slanted edge itself will be curved by the lens distortion, causing further measurement problems. This study summarises the difficulties in the use of MTF for sharpness measurement in fisheye lens cameras, and proposes mitigations and alternative methods.
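For reference, a bare-bones slanted-edge MTF computation looks like the following (a didactic sketch only: real ISO 12233 implementations add edge-angle estimation and 4x oversampled binning, which are precisely the steps that become problematic in the fisheye cases discussed above):

```python
import numpy as np

def simple_mtf(edge_profile):
    """Estimate MTF from a 1-D edge-spread function (ESF) sampled across an
    edge: differentiate to get the line-spread function (LSF), then take the
    normalized magnitude of its Fourier transform."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.diff(esf)                          # ESF -> LSF
    lsf *= np.hanning(len(lsf))                 # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                               # normalize to DC
    freqs = np.fft.rfftfreq(len(lsf), d=1.0)    # cycles/pixel
    return freqs, mtf
```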
Smart Sensors
A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement
The design and measurements of a CMOS 64 × 64 Single-Photon Avalanche-Diode (SPAD) array with in-pixel Time-to-Digital Converter (TDC) are presented. This paper thoroughly describes the imager at the architectural and circuit level, with particular emphasis on the characterization of the SPAD-detector ensemble. It is aimed at 2D imaging and 3D image reconstruction in low-light environments. It has been fabricated in a standard 0.18μm CMOS process, i.e. without high-voltage or low-noise features. Under these circumstances, we face a high number of dark counts and low photon detection efficiency. Several techniques have been applied to ensure proper functionality, namely: i) a time-gated SPAD front-end with a fast active-quenching/recharge circuit featuring tunable dead-time, ii) a reverse start-stop scheme, iii) programmable time resolution of the TDC based on a novel pseudo-differential voltage-controlled ring oscillator with fast start-up, and iv) a global calibration scheme against temperature and process variation. Measurement results of individual SPAD-TDC ensemble jitter, array uniformity and time resolution programmability are also provided.
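As a reminder of what the 145 ps figure means for 3D reconstruction, the round-trip time-of-flight relation gives (a standard relation, not a measurement from the paper):

```latex
d = \frac{c\,\Delta t}{2} \quad\Rightarrow\quad
\Delta d \approx \frac{3\times 10^{8}\,\mathrm{m/s}\times 145\,\mathrm{ps}}{2} \approx 2.2\,\mathrm{cm}
```

i.e. a timing accuracy of 145 ps translates into roughly 2 cm of depth accuracy.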
Neuro-inspired smart image sensor: analog Hmax implementation
The neuro-inspired vision approach, based on models from biology, reduces computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From a computational point of view, V1 corresponds to the stage of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) with such processing inside, we studied and realized, in 0.35μm CMOS technology, prototypes of two image sensors that implement the V1 and V2 processing of the Hmax model.
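In software terms, the V1/V2 stages described above correspond to a directional filter bank followed by local max pooling; a compact numpy/scipy sketch is given below (illustrative only; the chip implements these operations in the analog domain, and the filter parameters are placeholders):

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

def gabor_kernel(theta, size=9, sigma=2.0, wavelength=4.0):
    """Oriented Gabor kernel (V1-like directional filter)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def v1_v2(image, n_orientations=4, pool=3):
    """Filter at several orientations (V1), then take local maxima over
    space and orientation (V2)."""
    responses = [np.abs(convolve2d(image, gabor_kernel(t), mode='same'))
                 for t in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    pooled = [maximum_filter(r, size=pool) for r in responses]   # spatial max
    return np.max(pooled, axis=0)                                # max over orientations
```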
A 12-bit 500KSPS cyclic ADC for CMOS image sensor
Zhaohan Li, Gengyun Wang, Leli Peng, et al.
At present, the single-slope analog-to-digital converter (ADC) is widely used in the readout circuits of CMOS image sensors (CIS), but its main drawback is the high demand it places on the system clock frequency: the more pixels and the higher the ADC resolution the image sensor system needs, the higher the required system clock frequency. To overcome this problem in high-dynamic-range CIS systems, this paper presents a 12-bit 500-kS/s cyclic ADC whose system clock frequency is 5 MHz. Therefore, compared with the system clock frequency of 2^N × f_S required by a single-slope ADC, where f_S and N are the sampling frequency and resolution respectively, a higher ADC resolution does not require a higher system clock frequency. The circuit layout is realized in a 0.18μm CMOS process and occupies an area of 8μm×374μm. Post-layout simulation results show that the Signal-to-Noise-and-Distortion Ratio (SNDR) and Effective Number of Bits (ENOB) reach 63.7 dB and 10.3 bits, respectively.
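The clock-frequency advantage is easy to quantify for the numbers quoted above (simple arithmetic based on the stated figures):

```latex
f_\text{clk,single-slope} = 2^{N} f_S = 2^{12}\times 500\,\mathrm{kS/s} \approx 2.05\,\mathrm{GHz},
\qquad
f_\text{clk,cyclic} = 5\,\mathrm{MHz}
```

i.e. roughly a 400x relaxation of the system clock for the same resolution and sampling rate.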
14-bit pipeline-SAR ADC for image sensor readout circuits
Gengyun Wang, Can Peng, Tianzhao Liu, et al.
A two-stage 14-bit pipeline-SAR analog-to-digital converter for image sensor readout circuits, comprising a 5.5-bit zero-crossing MDAC and a 9-bit asynchronous SAR ADC and built in a 0.18um CMOS process, is described, offering low power dissipation as well as small chip area. In this design, we employ comparators instead of a high-gain, high-bandwidth amplifier, consuming as little as 20 mW of power to achieve a sampling rate of 40 MS/s at 14-bit resolution.
Noise
Power noise rejection and device noise analysis at the reference level of ramp ADC
Peter Ahn, JiYong Um, EunJung Choi, et al.
Sources of noise that corrupt the reference level VREF during ramp ADC operation are identified and analyzed. For the power noise analysis, the PSR of the bandgap reference and the current generator is investigated through small-signal circuits. For the device noise appearing at the reference level, the contribution of each device is expressed in terms of design variables. The identified design variables are arranged in a table to serve as a guide for low-noise CMOS imager design.
The effect of photodiode shape on dark current for MOS imagers
Steven Taylor, Bruce E. Dunne, Lihong Jiao
The effect of photodiode (PD) shape was studied in an attempt to reduce the dark current in MOS imagers. In such imaging systems, each pixel ideally produces a voltage directly proportional to the intensity of light incident on the PD. Because of non-idealities, the PD performance is compromised by the presence of dark current, which becomes the most significant source of noise degrading overall image quality, particularly in low-light environments. Unfortunately, due to the statistical variability of dark current, it is not possible to simply correct the readout voltage error via subtraction. To minimize the effect, recent research suggests that PD shape and features have an influence on dark current levels. We test that assertion by considering PDs with different corners while maintaining high fill factors, along with rectangular and triangular shapes chosen to exploit charge transfer characteristics. In all, five PD geometries were built to test the influence of PD shape on the dark current signal: a traditional square shape, two square shapes with increasingly rounded corners (135 and 150 degrees), a triangular design with sharp corners and, finally, a triangular design with 120 degree corners. Results indicate that the PDs with a square shape and 90 degree corners exhibit the lowest dark current and highest readout voltage. Furthermore, the triangular shape suggests improved charge transfer characteristics; however, this improvement appears to be negated by an increase in dark current response. Therefore, our findings indicate that the traditional square PD shape is the preferred design.
High-speed binary CMOS image sensor using a high-responsivity MOSFET-type photodetector
In this paper, a complementary metal oxide semiconductor (CMOS) binary image sensor based on a gate/body-tied (GBT) MOSFET-type photodetector is proposed. The proposed CMOS binary image sensor was simulated and measured using a standard 0.18-μm CMOS process. The GBT MOSFET-type photodetector is composed of a floating gate (n+ polysilicon) tied to the body (n-well) of a p-type MOSFET. The size of an active pixel sensor (APS) using the GBT photodetector is smaller than that of an APS using a photodiode, which means that the resolution of the image can be increased. The high-gain GBT photodetector has a higher photosensitivity than the p-n junction photodiode used in a conventional APS. Because the GBT has high sensitivity, fast binary processing is possible. A CMOS image sensor with binary processing can be designed with simple circuits composed of a comparator and a D flip-flop, while a complex analog-to-digital converter (ADC) is not required. In addition, the binary image sensor has low power consumption and high-speed operation, with the ability to switch back and forth between a binary mode and an analog mode.
Design considerations for a low-noise CMOS image sensor
Ana González-Márquez, Alexandre Charlet, Alberto Villegas, et al.
This paper reports a low-noise CMOS image sensor. Low-noise operation is achieved through the combination of a noise-enhanced pixel, the use of a two-step ADC architecture, and the analysis and optimization of the noise contributed by the readout channel. The paper gathers the sensor architecture, the ADC converter architecture, the outcome of the noise analysis and some basic characterization data. The general low-noise design framework is discussed in the companion presentation.
Interactive Paper Session
Short wave infrared hyperspectral imaging for recovered post-consumer single and mixed polymers characterization
Giuseppe Bonifazi, Roberta Palmieri, Silvia Serranti
Post-consumer plastics from packing and packaging represent about 60% of the total plastic waste (i.e. 23 million tons) produced in Europe. The EU Directive (2014/12/EC) sets the target that 60%, by weight, of packaging waste has to be recovered or thermally valorized. When recovered, the same directive establishes that packaging waste has to be recycled in a percentage ranging between 55% (minimum) and 60% (maximum). Failure to meet these rules can mean that large quantities of end-of-life plastic products, specifically those utilized for packaging, are disposed of, with a strong environmental impact. The application of recycling strategies aimed at polymer recovery can represent an opportunity to reduce: i) the utilization of non-renewable raw materials (i.e. oil), ii) carbon dioxide emissions and iii) the amount of plastic waste disposed of. The aim of this work was to perform a full characterization of different end-of-life polymer-based products, constituted not only of single polymers but also of mixtures, in order to realize their identification for quality control and/or certification assessment. The study was specifically addressed to characterizing the different recovered products resulting from a recycling plant where classical processing flow-sheets, based on milling, classification and separation, are applied. To reach this goal, an innovative sensing technique based on a HyperSpectral Imaging (HSI) device working in the SWIR region (1000-2500 nm) was investigated. Following this strategy, recovered single polymers and/or mixed polymers were correctly recognized. The main advantage of the proposed approach is the possibility of performing "on-line" analyses, that is, directly on the different material flow streams resulting from processing, without any physical sampling or classical laboratory "off-line" determination.
Designing and construction of a prototype of (GEM) detector for 2D medical imaging application
Abdulrahman S. Alghamdi, Mohammed S. AlAnazi, Abdullah F. Aldosary, et al.
Several technologies able to produce a digital X-ray image have limited resolution and accuracy at very high rates; micro-pattern technology can achieve these features, its most effective example being the gas electron multiplier (GEM). The main objective of this project is to develop a two-dimensional imaging system that can be used for medical imaging purposes. The project consists of the theoretical part of the process, including simulation of the best detector dimensions, geometry and energy range of the applied radiation, as well as the construction of a large-active-area triple-GEM detector and the preparation of the necessary setup components for the planned medical imaging system. This paper presents the design and construction of a prototype triple-GEM detector (10 cm x 10 cm) that can achieve these goals as a first step toward completing the project. In addition, preliminary results from X-ray and some gamma sources obtained while testing the prototype detector are presented, together with a discussion of the outlined tasks and achievements. The paper also shows the future plan of the whole project and gives more details about the next stages.
Enhanced correction methods for high density hot pixel defects in digital imagers
Glenn H. Chapman, Rahul Thomas, Rohit Thomas, et al.
Our previous research has found that the main defects in digital cameras are “Hot Pixels” which increase at a nearly constant temporal rate. Defect rates have been shown to grow as a power law of the pixel size and ISO, potentially causing hundreds to thousands of defects per year in cameras with <2 micron pixels, thus making image correction crucial. This paper discusses a novel correction method that uses a weighted combination of two terms - traditional interpolation and hot pixel parameters correction. The weights are based on defect severity, ISO, exposure time and complexity of the image. For the hot pixel parameters component, we have studied the behavior of hot pixels under illumination and have created a new correction model that takes this behavior into account. We show that for an image with a slowly changing background, the classic interpolation performs well. However, for more complex scenes, the correction improves when a weighted combination of both components is used. To test our algorithm’s accuracy, we devised a novel laboratory experimental method for extracting the true value of the pixel that currently experiences a hot pixel defect. This method involves a simple translation of the imager based on the pixel size and other optical distances.