Proceedings Volume 7536

Sensors, Cameras, and Systems for Industrial/Scientific Applications XI

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 28 January 2010
Contents: 8 Sessions, 27 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2010
Volume Number: 7536

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: 7536
  • Color and Multispectral Techniques
  • Single Photon Detection
  • Low-light Level
  • Applications
  • Modeling
  • Novel Imaging Devices and Applications
  • Interactive Paper Session
Front Matter: 7536
Front Matter: Volume 7536
This PDF file contains the front matter associated with SPIE Proceedings Volume 7536, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Color and Multispectral Techniques
Stacked color image sensor using wavelength-selective organic photoconductive films with zinc-oxide thin film transistors as a signal readout circuit
Hokuto Seo, Satoshi Aihara, Masakazu Namba, et al.
Our group has been developing a new type of image sensor overlaid with three organic photoconductive films, each sensitive to only one of the primary color components (blue (B), green (G), or red (R) light), with the aim of building a compact, high-resolution color camera without any color-separation optics. In this paper, we first describe the unique characteristics of organic photoconductive films. The photoconductive properties of a film, in particular its wavelength selectivity, can be tuned simply by the choice of organic material, and the selectivity is good enough to divide the incident light into the three primary colors. Color separation with vertically stacked organic films is also demonstrated. In addition, a shooting experiment using a camera tube confirmed that the resolution of organic photoconductive films is sufficient for high-definition television (HDTV). Second, as a step toward our goal, we fabricated a stacked organic image sensor with G- and R-sensitive organic photoconductive films, each with a zinc oxide (ZnO) thin-film transistor (TFT) readout circuit, and demonstrated image pickup at a TV frame rate. A color image with a resolution corresponding to the pixel count of the ZnO TFT readout circuit was obtained from the stacked sensor. These results show the potential for developing high-resolution prism-less color cameras with stacked organic photoconductive films.
Improved sensitivity high-definition interline CCD using the KODAK TRUESENSE color filter pattern
James DiBella, Marco Andreghetti, Amy Enge, et al.
The KODAK TRUESENSE Color Filter Pattern has, for the first time, been applied to a commercially available interline CCD. This 2/3" true-HD sensor is described along with its performance attributes, including the sensitivity improvement over the Bayer CFA version of the same sensor. In addition, an overview of the system developed for demonstration and evaluation is provided. Examples of the benefits of the new technology in specific applications, including surveillance and intelligent traffic systems, are discussed.
Single Photon Detection
Development of FOP-HARP imaging device
Kazunori Miyakawa, Yuji Ohkawa, Tomoki Matsubara, et al.
The high-gain avalanche rushing amorphous photoconductor (HARP) camera tube achieves ultrahigh sensitivity by using avalanche multiplication. The applications of this tube extend beyond broadcasting into other fields. It is attracting a great deal of attention especially for radiation diagnosis, such as synchrotron radiation microangiography, because it can obtain high-resolution, high-contrast images with a low dose of radiation. However, in the present system, the fluorescent screen and the photoconductive film of the HARP tube are connected optically by a lens-coupling method, and low light throughput remains a big problem. To improve the light throughput with a fiber-coupling method, we applied a fiber-optic plate (FOP) to the substrate of a HARP tube. The FOP consists of three types of glass with differing hardnesses and elastic coefficients, which makes it difficult to flatten the FOP surface enough to form the HARP film. We therefore introduced a new mechanical polishing method and succeeded in realizing avalanche multiplication in the FOP-HARP tube. Shooting experiments applying the FOP-HARP to microangiography showed that a spatial resolution of over 20 line pairs/mm was obtained. Moreover, rat femoral arteries 150-200 μm in diameter could be visualized as motion pictures with a one-fourth lower concentration of contrast material than that needed for ordinary microangiography. Another potential application of the FOP-HARP is an ultrahigh-sensitivity near-infrared (NIR) image sensor made by fiber-coupling with an image intensifier (I.I.). The image sensor provides high-quality images and should be a powerful tool for NIR imaging.
Single-photon camera for high-sensitivity high-speed applications
We present a high-speed Single-Photon Camera for demanding applications in biology, astrophysics, telecommunications, 3D imaging and security surveillance. The camera is based on a 32-by-32 array of "smart pixels" processed in a standard high-voltage technology. Every pixel is a completely independent photon-counting channel. Sensitivity is at the single-photon level and no readout noise affects the measurement. The camera has high Photon-Detection Efficiency (PDE) in the blue/green visible spectrum (45% at 450 nm) and a low Dark-Counting Rate (DCR) even at room temperature (usually lower than 2 kcps). The use of microlenses makes it possible to further increase the effective pixel fill factor. The camera can be configured by means of cross-platform, user-friendly software that communicates with the camera through a fast USB link. The integration time window may range from a few tens of nanoseconds to milliseconds. The maximum frame rate for the whole 1,024 pixels is about 100 kframe/s, while the minimum 20 ns dead time between frames boosts the sensor dynamic range. The camera is equipped with a standard C-mount connector. A gating input pin can be used to quickly gate the integration on and off. The camera works in One-Shot mode for maximum acquisition speed, Real-Time mode for very long measurements and Live mode for setup alignment.
Photon counting with an EMCCD
In order to make faint-flux imaging efficient with an EMCCD, the Clock Induced Charges (CIC) must be reduced to a minimum. Some techniques have been proposed to reduce the CIC, but until now neither commercially available CCD controllers nor commercial cameras have been able to implement them and get satisfying results. CCCP, the CCD Controller for Counting Photons, has been designed with the aim of reducing the CIC generated when an EMCCD is read out. It is optimized for driving EMCCDs at high speed (≥ 10 MHz), but may also be used for driving conventional CCDs (or the conventional output of an EMCCD) at high, moderate, or low speed. This new controller provides an arbitrary clock generator, yielding a timing resolution of ~20 ps and a voltage resolution of ~2 mV for the overlap of the clocks used to drive the EMCCD. The frequency components of the clocks can be precisely controlled, and the inter-clock capacitance effect of the CCD can be nulled to avoid overshoots and undershoots. Using this controller, CIC levels as low as 0.001 - 0.002 electrons per pixel per frame were measured on a 512×512 CCD97 operating in inverted mode at an EM gain of ~2000. This is 5 to 10 times less than what is usually seen in commercial EMCCD cameras using the same EMCCD chip.
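As an illustration of the photon-counting regime this controller targets, the sketch below shows the usual thresholding approach to EMCCD photon counting and how a CIC level in events per pixel per frame can be estimated from zero-exposure dark frames; the 5-sigma cut and all function names are our assumptions, not the paper's procedure.

```python
import numpy as np

def photon_count(frame_adu, bias_adu, read_noise_adu, k_sigma=5.0):
    """Threshold an EMCCD frame into binary single-photon events: with a
    high EM gain, an amplified photo-electron stands far above the read
    noise, so a cut at k_sigma times the read noise rejects almost all
    read noise while keeping most events."""
    return (frame_adu - bias_adu) > k_sigma * read_noise_adu

def cic_per_pixel_per_frame(dark_frames, bias_adu, read_noise_adu):
    """Estimate clock-induced charge as the mean number of thresholded
    events per pixel per frame over a stack of zero-exposure dark frames."""
    events = [photon_count(f, bias_adu, read_noise_adu) for f in dark_frames]
    return float(np.mean(events))
```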
Low-light Level
A 5.5Mpixel 100 frames/sec wide dynamic range low noise CMOS image sensor for scientific applications
Boyd Fowler, Chiao Liu, Steve Mims, et al.
In this paper we describe a 5.5Mpixel 100 frames/sec wide-dynamic-range low-noise CMOS image sensor (CIS) designed for scientific applications. The sensor has 6.5μm pitch 5T pixels with pinned photodiodes and integrated microlenses. The 5T pixel architecture enables low noise rolling and global shutter operation. The measured peak quantum efficiency of the sensor is greater than 55% at 550nm, the Nyquist MTF is greater than 0.4 at 550nm, and the linear full well capacity is greater than 35ke-. The measured rolling and global shutter readout noise are 1.28e- RMS and 2.54e- RMS respectively at 30 f/s and 20°C. The pinned photodiode dark current is less than 3.8pA/cm2 at 20°C. The sensor achieves an intra-scene linear dynamic range in rolling shutter operation of greater than 86dB (20000:1) at room temperature. In global shutter readout the shutter efficiency is greater than 1000:1 with 500nm illumination.
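For reference, the quoted intra-scene dynamic range follows from the usual relation DR = 20·log10(full well / read noise); the short worked example below, using the abstract's own rolling-shutter figures (for illustration only), shows that the quoted numbers are mutually consistent.

```python
import math

# Dynamic range in decibels from full well and read noise.
full_well_e = 35_000        # linear full well capacity, e-
read_noise_e = 1.28         # rolling-shutter read noise, e- RMS

print(20 * math.log10(full_well_e / read_noise_e))   # ~88.7 dB, i.e. greater than 86 dB
print(20 * math.log10(20_000))                       # 86.0 dB, the quoted 20000:1 ratio
```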
StarCam SG100: a high-update rate, high-sensitivity stellar gyroscope for spacecraft
Anup Katake, Christian Bruccoleri
A typical attitude estimation system for a spacecraft consists of multiple star trackers coupled with gyroscopes and coarse sensors such as Sun or Earth sensors. The combination of these sensors results in high mass, volume and power requirements. StarVision has been actively developing a unique sensor that is capable of providing attitude and angular rate information from star measurements at update rates of 100 Hz. The SG100 stellar gyroscope is built around an image-intensified imaging core that is sensitive to stars as faint as visual magnitude 10 at very small exposure times. In this paper we report the development of the SG100 sensor. Results from sensitivity analysis as well as night-sky tests at angular rates as high as 20 deg/s are presented. Test data show that the SG100 far exceeds the requirements for star sensitivity and noise levels even under non-ideal imaging conditions.
Applications
Using the EMVA1288 standard to select an image sensor or camera
Selecting an image sensor or a camera for an industrial application is a difficult task. Data sheets usually provide incomplete performance information, and even when detailed performance information is provided, each supplier has its own test and data reporting methods. Many customers therefore choose to evaluate each candidate sensor or camera in their own lab, an approach that is time consuming and requires appropriate lab equipment. EMVA1288 is a measurement and reporting standard developed to address this problem by defining uniform methods for measuring and reporting the performance of image sensors and cameras.
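As background on the kind of evaluation EMVA1288 standardizes, the following minimal sketch illustrates the basic photon-transfer computation of system gain, quantum efficiency and temporal dark noise from two bright and two dark frames; it is a simplified illustration under assumed inputs, not the standard's full procedure.

```python
import numpy as np

def photon_transfer(bright_pair, dark_pair, mean_photons):
    """Minimal photon-transfer evaluation in the spirit of EMVA1288.

    bright_pair, dark_pair: two frames each, taken under identical
    conditions; differencing the two frames separates temporal noise
    from fixed-pattern noise.
    mean_photons: mean number of photons per pixel during exposure.
    """
    def mean_and_temporal_var(pair):
        a, b = pair
        mu = 0.5 * (a.mean() + b.mean())
        var = 0.5 * np.var(a.astype(float) - b.astype(float))
        return mu, var

    mu_y, var_y = mean_and_temporal_var(bright_pair)
    mu_d, var_d = mean_and_temporal_var(dark_pair)

    gain_K = (var_y - var_d) / (mu_y - mu_d)       # system gain, DN per electron
    qe = (mu_y - mu_d) / (gain_K * mean_photons)   # quantum efficiency
    dark_noise_e = np.sqrt(var_d) / gain_K         # temporal dark noise, e- RMS
    return gain_K, qe, dark_noise_e
```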
High-speed document sensing and misprint detection in digital presses
Guillaume Leseur, Nicolas Meunier, Georgios Georgiadis, et al.
We design and analyze a high-speed document sensing and misprint detection system for real-time monitoring of printed pages. We implemented and characterized a prototype system, comprising a solid-state line sensor and a high-quality imaging lens, that measures in real time the light reflected from a printed page. We use sensor simulation software and signal processing methods to create an expected sensor response given the page that is being printed. The measured response is compared with the predicted response based on a system simulation. A computational misprint detection system measures differences between the expected and measured responses, continuously evaluating the likelihood of a misprint. We describe several algorithms to identify rapidly any significant deviations between the expected and actual sensor response. The parameters of the system are determined by a cost-benefit analysis.
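As a rough illustration of the comparison step described above (not the authors' algorithms), a per-scanline misprint score might be computed as a normalized residual between the measured line-sensor response and the response predicted by the sensor simulation, as in the sketch below; the threshold is an assumed tuning parameter.

```python
import numpy as np

def misprint_score(measured_line, expected_line, noise_sigma):
    """Per-scanline score: mean squared residual between measured and
    predicted responses, normalized by the expected noise level.
    Larger values indicate a likely misprint."""
    residual = measured_line.astype(float) - expected_line.astype(float)
    return float(np.mean((residual / noise_sigma) ** 2))

def is_misprint(measured_line, expected_line, noise_sigma, threshold=2.0):
    """Flag a scanline when its score exceeds an assumed threshold."""
    return misprint_score(measured_line, expected_line, noise_sigma) > threshold
```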
Fake fingerprint detection based on image analysis
Sang-il Jin, You-suk Bae, Hyun-ju Maeng, et al.
Fingerprint recognition systems have become prevalent in various security applications. However, recent studies have shown that it is not difficult to deceive such systems with fake fingerprints made of silicon or gelatin. The fake fingerprints have almost the same ridge-valley patterns as those of genuine fingerprints, so conventional systems are unable to detect them without a dedicated detection method. Many previous approaches to fake-finger detection required extra sensors and thus lacked practicality. This paper proposes a practical and effective method that detects fake fingerprints using only an image sensor. Two criteria are introduced to differentiate genuine and fake fingerprints: the histogram distance and the Fourier spectrum distance. In the proposed method, after identifying an input fingerprint of a user, the system computes the two distances between the input and a reference derived from the registered fingerprints of that user. Depending on the two distances, the system classifies the input as a genuine fingerprint or a fake. In the experiment, 2,400 fingerprint images including 1,600 fakes were tested, and the proposed method showed a high recognition rate of 95%. All of the fake fingerprints were accepted by a commercial system, which validates their use in the experiment.
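The two criteria lend themselves to a compact implementation. The sketch below is a hypothetical rendering of the histogram-distance and Fourier-spectrum-distance tests with assumed L1 metrics and thresholds; it illustrates the decision structure rather than the paper's exact formulas.

```python
import numpy as np

def histogram_distance(img_a, img_b, bins=256):
    """L1 distance between normalized gray-level histograms."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256), density=True)
    return float(np.abs(ha - hb).sum())

def fourier_distance(img_a, img_b):
    """L1 distance between normalized Fourier magnitude spectra."""
    fa = np.abs(np.fft.fft2(img_a.astype(float)))
    fb = np.abs(np.fft.fft2(img_b.astype(float)))
    fa /= fa.sum()
    fb /= fb.sum()
    return float(np.abs(fa - fb).sum())

def is_fake(input_img, reference_img, t_hist, t_fourier):
    """Classify the input as fake when either distance to the enrolled
    reference exceeds its threshold (thresholds are assumed to be
    chosen from genuine/fake training data)."""
    return (histogram_distance(input_img, reference_img) > t_hist or
            fourier_distance(input_img, reference_img) > t_fourier)
```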
Measurement of surface resistivity/conductivity of metallic alloys in aqueous solutions by optical interferometry techniques
Optical interferometry techniques were used for the first time to measure the surface resistivity/conductivity of pure aluminium (in seawater at room temperature), UNS.No305 stainless steel (in seawater at room temperature), and pure copper (in tap water at room temperature) without any physical contact. This was achieved by applying an electrical potential across the alloys and measuring the current density flowing through them during cyclic polarization tests of the alloys in the different solutions. Meanwhile, an optical interferometry technique, holographic interferometry, was used in situ to measure the orthogonal surface displacement of the alloys resulting from the applied electrical potential. In addition, a mathematical model was derived to correlate the ratio of the electrical potential to the current density flow (electrical potential / electronic current flow = resistance) with the surface (orthogonal) displacement of the metallic samples. In other words, a proportionality constant (surface resistivity, with conductivity = 1/surface resistivity) between the measured electrical resistance and the surface displacement measured by optical interferometry was obtained. Consequently, the surface resistivity (ρ) and conductivity (σ) of pure aluminium (in seawater at room temperature), UNS.No305 stainless steel (in seawater at room temperature), and pure copper (in tap water at room temperature) were obtained. Electrical resistivity values (ρ) from other sources were also used for comparison with the values calculated in this investigation. The study revealed that the measured resistivity of pure aluminium (7.7×10^10 Ohm-cm in seawater at room temperature) is in good agreement with the value found in the literature for aluminium oxide, 85% Al2O3 (5×10^10 Ohm-cm in air at 30 °C). Unfortunately, no measured resistivity values for cupric oxide (CuO), cuprous oxide (Cu2O), or the oxide of UNS.No304 stainless steel are available in the literature to compare with the values measured in this study.
Carotenoid pixels characterization under color space tests and RGB formulas for mesocarp of mango's fruits cultivars
Ahmed Yahya Hammad, Farid Saad Eid Saad Kassim
This study examined the pulp (mesocarp) of fourteen healthy, ripe cultivars of mango fruit (Mangifera indica L.), selected after picking, namely Taimour [Ta], Dabsha [Da], Aromanis [Ar], Zebda [Ze], Fagri Kelan [Fa], Alphonse [Al], Bulbek heart [Bu], Hindi-Sinnara [Hi], Compania [Co], Langra [La], Mestikawi [Me], Ewais [Ew], Montakhab El Kanater [Mo] and Mabroka [Ma]. Seven color space tests were applied: (RGB: Red, Green and Blue), (CMY: Cyan, Magenta and Yellow), (HSL: Hue, Saturation and Lightness), (CMYK%: Cyan%, Magenta%, Yellow% and Black%), (HSV: Hue, Saturation and Value), (HºSB%: Hueº, Saturation% and Brightness%) and (Lab). In addition, nine color-space formulas were used (sRGB 0÷1, CMY, CMYK, XYZ, CIE-L*ab, CIE-L*CH, CIE-L*uv, Yxy and Hunter-Lab), together with (RGB 0÷FF/hex triplet) and a Carotenoid Pixels Scale. Digital color photographs were used as a tool to obtain the natural color information for each cultivar, and the results were then interpreted together with chemical pigment estimations. The study focuses on the visual yellow to orange color range of the visible electromagnetic spectrum, with wavelengths between ~570 and 620 nm and frequencies between ~480 and 530 THz. The results showed that carotene has a very strong influence in the Red band, while chlorophyll (a & b) is much lower; consequently, the values in the Green band were depressed. The general percentage ratios for carotenoid pixels in the Red, Green and Blue bands were approximately 50%, 39% and 11%, respectively, compared with approximately 63%, 22% and 16% for carotene, chlorophyll a and chlorophyll b. These pigments accordingly influence all color space tests and RGB formulas. The Yellow% band in the (CMYK%) color test acts as a signature color for carotene, while the K% and C bands were equal to zero in almost all cells, indicating an implicit signature for chlorophyll (a & b). The results identify two bands that can be regarded as numeric chromatic filters. In the RGB formulas, the digits of the carotenoid pixels under the various bands follow two effects (separation and isotopic), which can be considered a numeric chromatography; the digits of carotenoid pixels mostly differ in trend and features under each band. The RGB formulas provide a treatment for the symmetrical values in the data columns of the total pigment percentages and the color space tests. Our objective is a physical study of the carotene pigments to provide a standard for pigment estimation, and to study the possibility of obtaining a numeric chromatography for accurate separation of the pigments.
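As a small illustration of the band accounting and of one of the listed color-space tests, the sketch below computes per-band percentage contributions over a set of mesocarp pixels and a plain RGB-to-CMYK% conversion; the simple percentage definition and function names are ours, not the study's.

```python
import numpy as np

def band_percentages(rgb_pixels):
    """Per-channel contribution (%) of R, G and B summed over a set of
    pixels, the kind of figure quoted in the abstract (e.g. roughly
    50% R, 39% G, 11% B for carotenoid pixels)."""
    totals = rgb_pixels.reshape(-1, 3).sum(axis=0).astype(float)
    return 100.0 * totals / totals.sum()

def rgb_to_cmyk_percent(r, g, b):
    """Plain RGB -> CMYK% conversion (one of the listed color-space
    tests); inputs in 0..255, outputs in percent."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(rp, gp, bp)
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 100.0
    c = (1.0 - rp - k) / (1.0 - k)
    m = (1.0 - gp - k) / (1.0 - k)
    y = (1.0 - bp - k) / (1.0 - k)
    return 100.0 * c, 100.0 * m, 100.0 * y, 100.0 * k
```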
Modeling
Analyzing the impact of ISO on digital imager defects with an automatic defect trace algorithm
Jenny Leung, Glenn H. Chapman, Yong H. Choi, et al.
Reliability of image sensors is limited by the continuous development of in-field defects. Laboratory calibration of 21 DSLRs has revealed hot pixels as the main defect type found in all tested cameras, with 78% of the identified defects having a time-independent offset. The expanded ISO range available in new cameras enables natural-light photography; however, the gain applied to all pixels also enhances the appearance of defects. Analysis of defects at varying ISO levels shows that, compared to the number of defects at ISO 400, the number of defects at ISO 1600 is 2-3 times higher. Amplifying the defect parameters helps differentiate faults from noise, thus detecting larger defect sets, but also causes some hot pixels to become saturated. The distribution of defect parameters at various ISO levels shows that the gain applied to faults with moderate defect magnitude caused 2-10% of the defects to saturate at short exposure times (0.03-0.5 s). With our expanded defect collection, spatial analysis confirmed the uniform distribution of defects, indicating a random defect source. In our extended study, the temporal growth of defects is analyzed using our defect-tracing algorithm. We introduce an improved defect model which incorporates the ISO gain, allowing the detection of defects even in short-exposure images at high ISO and thus providing a wider selection of historical images and more accurate defect tracing. Larger-area sensors show more hot pixels, while hot pixel rates grow strongly as the pixel size decreases to 2.2 microns.
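The ISO-dependent defect model described above can be pictured with a simple sketch: a hot pixel's dark response (offset plus dark-current term) is scaled by the ISO gain and clipped at saturation, so more small-magnitude defects cross a fixed detection threshold at high ISO. The formulation and parameter names below are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def hot_pixel_response(exposure_s, dark_rate, offset, iso, iso_ref=100, full_scale=1.0):
    """Hot-pixel dark response: (offset + dark_rate * exposure) scaled
    by the ISO gain relative to a reference ISO, clipped at saturation."""
    gain = iso / iso_ref
    return np.clip(gain * (offset + dark_rate * exposure_s), 0.0, full_scale)

def detect_hot_pixels(dark_frame, threshold=0.05):
    """Flag pixels whose dark-frame value exceeds a fixed output-referred
    threshold; with the ISO gain applied, more small-magnitude defects
    cross this threshold at high ISO, consistent with the reported
    2-3x increase from ISO 400 to ISO 1600."""
    return dark_frame > threshold
```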
Enhanced sensitivity achievement using advanced device simulation of multifinger photo gate active pixel sensors
A 2-dimensional device simulation of multifinger photo gate active pixel sensors is investigated for obtaining enhanced pixel sensitivity. Photo gate APS use a MOS capacitor that captures incident illumination with a potential well created under the photo gate. The major drawback of this technology is the absorption of shorter wavelengths by the polysilicon gate, resulting in higher sensitivity in the red part of the visible spectrum than in the blue. In our previous work we implemented, in 0.18 μm CMOS, a standard photo gate and multi-fingered photo gate designs in which the enclosed detection area is divided by 3, 5 and 7 fingers. The experimental results showed that the fringing-field-created potential wells of the 3- and 5-finger photo gate designs collect 1.7 times more photo carriers than the standard photo gate. The device simulation showed that fringing fields from the edges of the poly gates created potential wells that fully covered the open silicon areas, allowing light conversion without optical absorption in the polysilicon gates. Extending the simulations to 0.5 μm, 0.25 μm and 0.18 μm multifinger poly gates showed that the fringing fields stayed the same width as the gates shrank, so that as the number of fingers increased the potential well in the open areas became more uniform. The device sensitivity, based on the potential well locations and previous experimental results, suggested peak efficiency at 7 fingers for the 0.5 μm design, 9 fingers at 0.25 μm and 11 fingers at 0.18 μm. Peak efficiency was projected to be 2.2 times that of a standard photogate.
Modeling and measurements of MTF and quantum efficiency in CCD and CMOS image sensors
Ibrahima Djité, Pierre Magnan, Magali Estribeau, et al.
Sensitivity and image quality are two of the most important characteristics of all image sensing systems. The Quantum Efficiency (QE) and the Modulation Transfer Function (MTF) are the common metrics used to quantify them, but inter-pixel crosstalk analysis is also of interest. Because a large number of parameters influence the MTF, its analytical calculation and the predetermination of crosstalk are not easy tasks for an image sensor, particularly in the case of a CMOS Image Sensor (CIS). Classical models used to calculate the MTF of an image sensor generally solve the steady-state continuity equation for a sinusoidal illumination and determine the MTF value by a contrast calculation. One of the major drawbacks of this approach is the difficulty of evaluating the crosstalk analytically. This paper describes a new theoretical three-dimensional model of the diffusion and collection of photo-carriers created by a point-source illumination. The model can take into account lightly-doped EPI layers grown on highly-doped substrates. It allows us to evaluate accurately the crosstalk distribution, the quantum efficiency and the MTF at any wavelength of interest. The model is compared with QE and MTF measurements performed on different pixel types.
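For orientation, the link between a sampled point-source (point-spread) response and the MTF is the usual Fourier relation; the sketch below shows that relation in one dimension under assumed sampling, and is not the paper's three-dimensional diffusion model.

```python
import numpy as np

def mtf_from_psf(psf_1d, pixel_pitch_um):
    """MTF as the normalized magnitude of the Fourier transform of a
    sampled 1-D point-spread (or pixel) response. Returns spatial
    frequencies in cycles/mm and the corresponding MTF values."""
    psf = np.asarray(psf_1d, dtype=float)
    otf = np.fft.rfft(psf / psf.sum())                       # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(psf.size, d=pixel_pitch_um * 1e-3)  # cycles per mm
    return freqs, np.abs(otf)
```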
Characterization and correction of dark current in compact consumer cameras
Justin C. Dunlap, Erik Bodegom, Ralf Widenhorn
A study of dark current in digital imagers within consumer grade digital cameras is presented. Dark current is shown to vary with temperature, exposure time, and ISO setting. Further, dark current is shown to increase in successive images during a series of images. Consumer cameras are often designed to be as compact as possible and therefore the digital imagers within the camera frame are prone to heat generated by nearby elements within the camera body. It is the scope of this work to characterize the dark current in such cameras and to show that the dark current, in part due to heat generated by the camera itself, can be corrected for by using hot pixels on the imager. This method generates computed dark frames based on the dark current indicator value of the hottest pixels on the chip. We compare this method to standard methods of dark current correction.
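A simplified reading of the hot-pixel-based correction is sketched below: the mean value of pre-identified hot pixels serves as a dark-current indicator that selects and scales a library dark frame. This is an assumed, minimal interpretation for illustration, not the authors' exact procedure.

```python
import numpy as np

def computed_dark_frame(image, hot_idx, library_darks):
    """Predict a dark frame for 'image' from its hottest pixels.

    hot_idx: indices of pre-identified hot pixels (tuple of index arrays).
    library_darks: list of calibration dark frames taken under varied
    temperature/exposure/ISO conditions.
    The library frame whose hot-pixel indicator best matches the image
    is linearly scaled to the image's indicator value.
    """
    indicator = image[hot_idx].mean()
    lib_indicators = np.array([d[hot_idx].mean() for d in library_darks])
    best = int(np.argmin(np.abs(lib_indicators - indicator)))
    scale = indicator / lib_indicators[best] if lib_indicators[best] > 0 else 1.0
    return scale * library_darks[best]
```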
Novel Imaging Devices and Applications
Experiment and device simulation for photo-electron overflow characteristics on a pixel-shared CMOS image sensor using lateral overflow gate
Shin Sakai, Yoshiaki Tashiro, Lei Hou, et al.
A wide dynamic range CMOS image sensor has been developed in which a lateral overflow integration capacitor (Cs) is shared between two pixels by using a lateral overflow gate (LO-gate) that directly connects the photodiode to the overflow photoelectron integration capacitor. In this paper, the characteristics of the saturated photoelectrons overflowing to the floating diffusion (FD) and to the Cs are discussed by comparing the results of experiments and device simulations. It is possible to integrate all the saturated photoelectrons in the Cs without leakage to the shared FD by controlling the voltages of the gate electrodes of the transfer transistor and the LO-gate in the pixel irradiated by strong light. The CMOS image sensor, with a 1/3.3 inch optical format, 3 μm pixel pitch and 1280(H) × 960(V) pixels, was fabricated in a 0.18 μm 2P3M CMOS technology with a buried pinned photodiode process and has achieved 84 μV/e- photoelectric conversion gain, 6.9 × 10^4 e- full well capacity and 90 dB dynamic range in one exposure.
A new function of the optical-multiplex image-acquisition system
We have previously reported an image capturing system called the optical-multiplex system, comprising an image sensor and a multi-lens array, which is compact and light and has a deep depth of field. In the system, light passes through five lenses in an aperture sheet and the object is detected by an image sensor. This system is unique because the object data passing through each of the lenses is a specific range of data, which is coordinated on the pixel array. This is in contrast to current multi-lens systems, in which the object data from each lens is completely independent. We can now report that our system can coordinate data from both an object at a very far distance and an object at a measurable distance, suggesting that information from these two categories of objects can be separated by the optical-multiplex system using a new algorithm.
A 2.2M CMOS image sensor for high-speed machine vision applications
Xinyang Wang, Jan Bogaerts, Guido Vanhorebeek, et al.
This paper describes a 2.2 Megapixel CMOS image sensor made in a 0.18 μm CMOS process for high-speed machine vision applications. The sensor runs at 340 fps with digital output using 16 LVDS channels at 480 MHz. The pixel array has 2048 × 1088 pixels with a 5.5 μm pitch. The unique pixel architecture supports true correlated double sampling and thus yields a noise level as low as 13 e- and a pixel parasitic light sensitivity (PLS) of 1/60 000. The sensitivity of the sensor is measured to be 4.64 V/lux·s and the pixel full well charge is 18k e-.
Reducing crosstalk in vertically integrated CMOS image sensors
Orit Skorka, Dileepan Joseph
Image sensors can benefit from 3D IC fabrication methods because photodetectors and electronic circuits may be fabricated using significantly different processes. When fabricating the die that contains the photodetectors, it is desirable to avoid pixel level patterning of the light sensitive semiconductor. But without a physical border between adjacent photodetectors, lateral currents may flow between neighboring devices, which is called "crosstalk". This work introduces circuits that can be used to reduce crosstalk in vertically-integrated (VI) CMOS image sensors with an unpatterned photodetector array. It treats the case of a VI-CMOS image sensor composed of a silicon die with CMOS read-out circuits and a transparent die with an unpatterned array of photodetectors. A reduction in crosstalk can be achieved by maintaining a constant electric potential at all nodes, at which the photodetector array connects with the readout circuit array. This can be implemented by designing a pixel circuit that uses an operational amplifier with a logarithmic feedback to control the voltage at the input node. The work presents several optional circuit configurations for the pixel circuit, and indicates the one that is the most power efficient. Afterwards, it uses a simplified small-signal model of the pixel circuit to address stability and compensation issues. Lastly, the method is validated through circuit simulation for a standard CMOS process.
A CMOS vision system on-chip with multicore sensory processing architecture for image analysis above 1,000F/s
Angel Rodríguez-Vázquez, Rafael Domínguez-Castro, Francisco Jiménez-Garrido, et al.
This paper describes a Vision-System-on-Chip (VSoC) capable of image acquisition, image processing through on-chip embedded structures, and the generation of pertinent reaction commands at rates of thousands of frames per second. The chip employs a distributed processing architecture with a pre-processing stage consisting of an array of programmable sensory-processing cells and a post-processing stage consisting of a digital microprocessor. The pre-processing stage operates as a retina-like sensor front-end. It performs parallel processing of the images captured by the sensors, which are embedded together with the processors. This early processing serves to extract image features relevant to the intended tasks. The front-end also incorporates smart readout structures conceived to transmit only these relevant features, thus precluding full gray-scale frames from being coded and transmitted. The chip is capable of closing action-reaction loops based on the analysis of visual flow at rates above 1,000 F/s with a peak power budget below 1 W. Also, the incorporation of processors close to the sensors enables signal-dependent, local adaptation of the sensor gains and hence high-dynamic-range signal acquisition.
Interactive Paper Session
Electron-multiplying CCD astronomical photometry
Alejandro Ferrero, Riccardo Felletti, Lorraine Hanlon, et al.
The electron-multiplying CCD (EMCCD) is a CCD sensor technology that reduces readout noise to less than one electron. We study how the use of this technology affects astronomical photometry and improves the temporal resolution of the measurements. We show the effect of this technology on individual celestial sources and on the limiting magnitude. We propose a criterion for choosing the optimal EM gain for a specific integration time. We explain a straightforward procedure to characterize the actual EM gain and the readout noise, expressed in photo-electrons, for every software-displayed gain, and we applied this procedure to the Andor Ixon DU-888E-C00-BV EMCCD.
High-speed charge transfer pinned-photodiode for a CMOS time-of-flight range image sensor
Hiroaki Takeshita, Tomonari Sawada, Tetsuya Iida, et al.
This paper presents a structure and a range-calculation method for CMOS time-of-flight (TOF) range image sensors using pinned photodiodes. In the proposed method, an LED light with a short pulse width and small duty ratio irradiates the objects, and the back-reflected light is received by the CMOS TOF range imager. Each pixel has a pinned photodiode optimized for high-speed charge transfer and unwanted-charge draining. In TOF range image sensors, high-speed charge transfer from the light-receiving part to a charge accumulator is essential. It was found that the fastest charge transfer is realized when the lateral electric field along the axis of charge transfer is constant, and this condition is met when the shape of the diode exactly follows the relationship between the fully-depleted potential and the width. A TOF range imager prototype is designed and implemented in a 0.18 μm CMOS image sensor technology with pinned-photodiode four-transistor (4T) pixels. The measurement results show that the charge transfer time from the pinned photodiode to a charge accumulator is a few ns.
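For context, a common two-tap pulsed-TOF range formulation (an assumed textbook relation, not necessarily this sensor's exact scheme) converts the ratio of charges accumulated in two consecutive windows into a round-trip delay and hence a range:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def pulsed_tof_range(q1, q2, pulse_width_s):
    """Two-tap pulsed time-of-flight range estimate: q1 and q2 are the
    charges accumulated in two consecutive windows of length
    pulse_width_s, and the charge ratio encodes the round-trip delay
    of the reflected light pulse."""
    delay = pulse_width_s * q2 / (q1 + q2)
    return 0.5 * C_LIGHT * delay   # range in meters
```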
A three-phase time-correlation image sensor using pinned photodiode active pixels
Sangman Han, Tomohiro Iwahori, Tomonari Sawada, et al.
A time correlation (TC) image sensor is a device that produces 3-phase time-correlated signals between the incident light intensity and three reference signals. A conventional implementation of the TC image sensor in a standard CMOS technology works at low frequency and with low sensitivity. In order to achieve a higher modulation frequency and higher sensitivity, a TC image sensor with a dual potential structure using a pinned diode is proposed. The dual potential structure is created by changing the impurity doping concentration in the two different potential regions. In this structure, high-frequency modulation can be achieved while maintaining a sufficient light-receiving area. A prototype TC image sensor with 366×390 pixels is implemented in a 0.18-μm 1P4M CMOS image sensor technology. Each pixel, with a size of 12 μm × 12 μm, has one pinned photodiode with the dual potential structure, 12 transistors and 3 capacitors to implement three-parallel-output active pixel circuits. The fundamental operation of the implemented TC sensor is demonstrated.
Dynamic range extension of an active pixel sensor by combining output signals from photodiodes with different sensitivities
Jae-Sung Kong, Sung-Hyun Jo, Kyung-Hwa Choi, et al.
A dynamic range (DR) extension technique based on a 3-transistor (3-Tr.) active pixel sensor (APS) and dual image sampling is proposed. The key feature of the proposed APS is that it uses two photodiodes with different sensitivities: a high-sensitivity photodiode and a low-sensitivity photodiode. Operation of the proposed APS was simulated using a 128×128 pixel array. Compared with previously proposed wide DR (WDR) APS, the proposed approach has several advantages: no external equipment or signal processing for combining images, no additional time requirement for extra charge accumulation, adjustable DR extension, and no temporal disparity.
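The dual-sensitivity combination can be pictured with a minimal sketch, assuming a simple switch-over rule: use the high-sensitivity output while it is below saturation, otherwise rescale the low-sensitivity output by the sensitivity ratio. The rule and parameter names are illustrative, not the paper's circuit-level method.

```python
import numpy as np

def combine_dual_sensitivity(high_sig, low_sig, sens_ratio, sat_level):
    """Merge the outputs of a high-sensitivity and a low-sensitivity
    photodiode into one wide-dynamic-range value per pixel."""
    high_sig = np.asarray(high_sig, dtype=float)
    low_sig = np.asarray(low_sig, dtype=float)
    return np.where(high_sig < sat_level, high_sig, sens_ratio * low_sig)
```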
An efficient spectral-based calibration method for RGB white-balancing gains under various illumination conditions for cell-phone cameras
Reza Safaee-Rad, Milivoje Aleksic
Significant sensitivity variations among cell-phone camera modules have been observed. As a result, effective and reliable white balancing requires per-module calibration/estimation of the RGB ratios under various illumination conditions. Herein, a new technique is proposed which minimizes and simplifies the RGB-ratios calibration/estimation process. The proposed method can be based on either direct image capture or spectral numerical processing; the latter is shown to be more flexible and accurate.
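As a sketch of the spectral route (the general idea only, not the proposed technique itself), per-module white-balance gains can be derived by integrating measured channel sensitivities against an illuminant power spectrum over a neutral patch and normalizing to the green response:

```python
import numpy as np

def wb_gains_from_spectra(sens_r, sens_g, sens_b, illuminant_spd):
    """Compute (R gain, G gain, B gain) for a neutral scene under a
    given illuminant: each channel response is the sum over wavelength
    of its measured spectral sensitivity times the illuminant spectral
    power distribution, and R/B gains are normalized to the G response.
    All arrays are assumed to be sampled on the same wavelength grid."""
    r = float(np.sum(sens_r * illuminant_spd))
    g = float(np.sum(sens_g * illuminant_spd))
    b = float(np.sum(sens_b * illuminant_spd))
    return g / r, 1.0, g / b
```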