Proceedings Volume 8659

Sensors, Cameras, and Systems for Industrial and Scientific Applications XIV

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 March 2013
Contents: 8 Sessions, 28 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2013
Volume Number: 8659

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8659
  • Session 1
  • Smart Sensors
  • High-Performance Sensors
  • Noise and Characterization
  • Technological Improvements
  • Applications
  • Interactive Paper Session
Front Matter: Volume 8659
Front Matter: Volume 8659
This PDF file contains the front matter associated with SPIE Proceedings Volume 8659, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Session 1
Fundamental performance differences of CMOS and CCD imagers: part V
James R. Janesick, Tom Elliott, James Andrews, et al.
Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL’s SoloHi Solar Orbiter Mission, 5) a progress report on an ultra-large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOS CCD proton and electron radiation damage data for dose levels up to 10 Mrd, 8) discussions and data for a new class of PMOS pixel CMOS imagers, and 9) future CMOS development work planned.
Kirana: a solid-state megapixel uCMOS image sensor for ultrahigh speed imaging
J. Crooks, B. Marsh, R. Turchetta, et al.
This paper describes a solid-state sensor for ultra-high-speed (UHS) imaging. The ‘Kirana’ sensor was designed and manufactured in a 180 nm CMOS technology to achieve full-frame 0.7 Megapixel video capture at speeds of up to 2 MHz. The 30 μm pixels contain a pinned photodiode, a set of 180 low-leakage storage cells, a floating diffusion, and a source follower output structure. Both the individual cells and the way they are arranged in the pixel are novel. The pixel architecture allows correlated double sampling for low noise operation.

In the fast mode, the storage cells are operated as a circular buffer, where 180 consecutive frames are stored until receipt of a trigger; up to 5 video-bursts per second can be read out. In the ‘slow’ mode, the storage cells act like a pipeline; the sensor can be read out like a conventional sensor at a continuous frame rate of 1,180 fps. The sensor architecture is fully scalable in resolution since memory cells are located inside each pixel. The pixel architecture is scalable in memory depth (number of frames) as a trade-off with pixel size, dependent on application. The present implementation of 0.7 Mpixels has a focal-plane array optimized for standard 35 mm optics, whilst offering a competitive 180-frame recording depth.

The sensor described has been manufactured and is currently being characterized. Operation of the sensor in the fast mode at 2 million frames per second has been achieved. Details on the camera/sensor operation are presented together with first experimental results.
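The ‘fast’ mode described above amounts to a per-pixel ring buffer that is frozen when a trigger arrives. A minimal software model of that behaviour is sketched below, assuming a generic frame source and trigger predicate (both hypothetical); only the 180-frame depth is taken from the abstract.

```python
from collections import deque

FRAMES_IN_PIXEL = 180          # per-pixel storage cells (from the abstract)

def capture_fast_mode(frame_source, trigger):
    """Model of the fast mode: keep the most recent 180 frames in a
    circular buffer and freeze the buffer when the trigger fires."""
    ring = deque(maxlen=FRAMES_IN_PIXEL)   # oldest frame is overwritten
    for frame in frame_source:
        ring.append(frame)
        if trigger(frame):
            return list(ring)              # 180 consecutive frames ending at the trigger
    return list(ring)

# Hypothetical usage: synthetic frames, trigger on a particular event
frames = capture_fast_mode((i for i in range(1000)), lambda f: f == 500)
assert len(frames) == 180 and frames[-1] == 500
```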
Back-side-illuminated image sensor with burst capturing speed of 5.2 Tpixel per second
T. Arai, J. Yonai, T. Hayashida, et al.
We have developed a back-side-illuminated image sensor with a burst capturing speed of 5.2 Tpixels per second. Its sensitivity was 252 V/lux·s (12.7 times that of a front-side-illuminated image sensor) in an evaluation. The sensitivity of a camera system was 2,000 lux F90. The increased sensitivity resulted from optical and time aperture ratios of 100% and from a higher optical utilization ratio. The ultrahigh-speed shooting resulted from the use of an in-situ storage image sensor. Reducing the wiring resistance and dividing the image area into eight blocks increased the maximum frame rate to 16.7 million frames per second. The total pixel count was 760 horizontally and 411 vertically. The product of the pixel count and maximum frame rate is often used as a figure of merit for high-speed imaging devices; in this case, 312,360 multiplied by 16.7 million yields 5.2 Tpixels per second. This burst capturing speed is the highest achieved by high-speed imaging devices to date.
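The quoted figure of merit follows directly from the stated pixel count and frame rate; a quick arithmetic check (illustrative only):

```python
# Pixel-throughput check for the figures quoted above
pixels = 760 * 411                 # 312,360 pixels per frame
frame_rate = 16.7e6                # frames per second (burst)
throughput = pixels * frame_rate   # pixels per second
print(f"{pixels:,} px/frame x {frame_rate:.1e} fps = {throughput:.2e} px/s")
# -> 312,360 px/frame x 1.7e+07 fps = 5.22e+12 px/s, i.e. ~5.2 Tpixels per second
```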
Smart Sensors
A custom CMOS imager for multi-beam laser scanning microscopy and an improvement of scanning speed
Multi-beam laser scanning confocal microscopy with a 256 × 256-pixel custom CMOS imager providing a focal-plane pinhole effect, in which no rotating disk is required, is demonstrated. A specimen is illuminated by 32 × 32 diffraction-limited light spots whose wavelength and pitch are 532 nm and 8.4 μm, respectively. The spot array is generated by a microlens array, which is scanned by a two-dimensional piezo actuator in synchronization with the scanning of the image sensor. The frame rate of the prototype is 0.17 Hz, which is limited by the actuator. The confocal effect has been confirmed by comparing the axial resolution in the confocal imaging mode with that of the normal imaging mode. The axial resolution in the confocal mode, measured as the full width at half maximum (FWHM) for a planar mirror, was 8.9 μm, which shows that confocality has been achieved with the proposed CMOS image sensor. The focal-plane pinhole effect in confocal microscopy with the proposed CMOS imager has been demonstrated at a low frame rate. An improvement of the scanning speed and a CMOS imager with photo-sensitivity modulation pixels suitable for high-speed scanning are also discussed.
An ultra fast, ultra compact sensor for diffuse wave spectroscopy
K. Barjean, D. Ettori, E. Tinet, et al.
Diffuse Correlation Spectroscopy (DCS) is based on the temporal correlations of the speckle pattern from light that has diffused through a biological medium. Measurements must be made on a small coherence area of the size of a speckle grain. Summing independent measurements increases the SNR as the square root of the number of detectors. We present a two-dimensional CMOS pixel detector array specially designed for this task, with parallel in-pixel demodulation and temporal correlation computation. Optical signals can be processed at a rate higher than 10,000 samples per second with demodulation frequencies in the MHz range.
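The SNR scaling mentioned above follows from summing independent speckle measurements; a trivial sketch, with a hypothetical detector count, is:

```python
import math

def dcs_snr(snr_single, n_detectors):
    """SNR gain from summing n independent speckle measurements:
    the signal adds linearly while uncorrelated noise adds in quadrature,
    so the SNR grows as sqrt(n)."""
    return snr_single * math.sqrt(n_detectors)

# Hypothetical example: a 32 x 32 block of independent pixels gives a 32x improvement
print(dcs_snr(1.0, 1024))   # -> 32.0
```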
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
Jungsoon Shin, Byongmin Kang, Keechang Lee, et al.
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to keep the pixel from saturating. The subtraction results are then accumulated N times, yielding a final image without background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
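A minimal numerical sketch of the split-and-accumulate idea described above, assuming an idealized pixel with a hard full-well limit; the signal level, well capacity, and N are hypothetical, and the background-subtraction step itself is omitted:

```python
FULL_WELL = 1000            # hypothetical pixel saturation level (electrons)

def integrate(signal_rate, time, full_well=FULL_WELL):
    """Ideal pixel: collected charge clips at the full-well capacity."""
    return min(signal_rate * time, full_well)

def split_and_accumulate(signal_rate, total_time, n_sub):
    """Split the integration into n_sub pieces, read and subtract the charge
    after each piece, and accumulate the partial results off-pixel."""
    accumulated = 0.0
    for _ in range(n_sub):
        accumulated += integrate(signal_rate, total_time / n_sub)
    return accumulated

rate, t = 300.0, 10.0        # would give 3000 e- and saturate in a single integration
print(integrate(rate, t))                  # -> 1000 (clipped at the full well)
print(split_and_accumulate(rate, t, 4))    # -> 3000.0 (no sub-integration saturates)
```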
High performance 7.4-micron interline transfer CCD platform for applied imaging markets
Douglas A. Carpenter, James A. DiBella, Robert Kaser, et al.
Technology developed for a 5.5 μm pixel interline transfer CCD family has been incorporated into a new family of high-performance 7.4 μm pixel CCDs, providing significant improvements in several key performance parameters compared to both the 5.5 μm family and the previous generation of 7.4 μm pixel products. Smear in the new platform has been reduced to -115 dB, and frame rate has been doubled relative to the previous generation of 7.4 μm pixel products. Dynamic range in normal operation has been improved to 70 dB, and the platform supports a new extended dynamic range mode which provides 82 dB when binning 2 × 2. The new family leverages the package and pin-out configurations used in the 5.5 μm pixel family, allowing easy integration into existing camera designs.
High-Performance Sensors
A 33M-pixel wide color gamut image capturing system using four CMOS image sensors at 120 Hz
Takuji Soeno, Kohei Omura, Takayuki Yamashita, et al.
We are currently researching a next-generation broadcasting system called Super Hi-Vision (SHV). We have proposed the following video parameters for SHV: 33-million-pixel (33M-pixel) resolution, 120-Hz frame frequency, and a wide color gamut system. In order to capture SHV images, we investigate a 33M-pixel image-capturing system operable at a frame frequency of 120 Hz. The system consists of four CMOS image sensors that can not only output 33M pixels at a 60-Hz frame frequency but also output half of the 33M pixels (either the odd or the even lines) at a 120-Hz frame frequency. Two image sensors are used for the green channel (G1 and G2), and the other two are assigned to the red and blue channels. The G1 sensor outputs the odd lines, while the G2 sensor outputs the even lines; the combination of G1 and G2 yields 33M-pixel green images. The red and blue sensors scan odd lines and even lines, respectively, and the unscanned lines are then interpolated in the vertical direction. In addition, we design a prism for wide color gamut reproducibility. We develop a prototype and evaluate its resolution, image lag, and color reproducibility. The performance of the proposed system is found to be satisfactory for capturing 33M-pixel images at 120 Hz.
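The line arrangement described above can be sketched as follows, assuming simple neighbour averaging for the vertical interpolation of the red and blue channels; the array shapes and odd/even indexing convention are illustrative, not the authors' implementation.

```python
import numpy as np

def combine_green(g1_odd, g2_even):
    """Interleave G1 (odd lines) and G2 (even lines) into a full-height
    green frame. Lines are counted from 1, so odd lines occupy 0-based
    rows 0, 2, 4, ... and even lines rows 1, 3, 5, ..."""
    h, w = g1_odd.shape[0] + g2_even.shape[0], g1_odd.shape[1]
    full = np.empty((h, w), dtype=g1_odd.dtype)
    full[0::2] = g1_odd
    full[1::2] = g2_even
    return full

def fill_even_lines(odd_lines):
    """Vertically interpolate the unscanned even lines of the red channel
    by averaging the scanned odd lines above and below (the blue channel,
    which scans even lines, is handled analogously)."""
    h, w = 2 * odd_lines.shape[0], odd_lines.shape[1]
    full = np.zeros((h, w), dtype=float)
    full[0::2] = odd_lines
    below = np.vstack([odd_lines[1:], odd_lines[-1:]])  # repeat the last line at the edge
    full[1::2] = 0.5 * (odd_lines + below)
    return full
```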
A 3Mpixel ROIC with 10um pixel pitch and 120Hz frame rate digital output
Elad Ilan, Niv Shiloah, Shimon Elkind, et al.
A 1920x1536 matrix ROIC (Readout IC) for a 10x10 μm² P-on-N InSb photodiode array is reported. The ROIC features several conversion gain options implemented at the pixel level. A 2-by-2 pixel binning feature is also implemented at the pixel level, improving SNR and enabling a fourfold higher frame rate. A new column ADC is designed for low noise and low power consumption while reaching a 95 kSps sampling rate. Since 3840 column ADCs are integrated on chip, the total conversion rate is over 360 Mpxl/sec. The ROIC achieves a 120 Hz frame rate at the full format, with a power consumption of less than 400 mW. A high-speed digital video interface has been developed to output the required data bandwidth at a reasonable pin count.
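The column-ADC figures quoted above can be checked with simple arithmetic; the comparison against the full-format 120 Hz requirement is our own back-of-envelope addition:

```python
# Sanity check of the column-ADC throughput figures quoted above
n_adcs = 3840                      # column ADCs on chip
adc_rate = 95e3                    # samples per second per ADC (95 kSps)
available = n_adcs * adc_rate      # ~3.65e8 samples/s -> "over 360 Mpxl/sec"
needed = 1920 * 1536 * 120         # full 3-Mpixel format at 120 Hz -> ~3.54e8 pxl/s
print(f"available {available/1e6:.0f} Msps, needed {needed/1e6:.0f} Mpxl/s")
```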
Dynamic capability of sensors with nonlinear pixels utilized by security cameras
We have previously proposed a framework containing a typical security camera use case and have discussed how well this is handled by linear image sensors with various characteristics. The findings were visualized graphically, using a simple camera simulator generating images under well-defined conditions. In order to successfully render low-contrast objects together with large intra-scene variations in illuminance, the sensor requirements must include a high dynamic range combined with a comparably high signal-to-noise ratio. In this paper we reuse the framework and extend the discussion by including also sensors with non-linear pixel responses. The obvious benefit of a non-linear pixel is that it generally can cope with a higher scene dynamic range and that in most cases the exposure control can be relaxed. Known drawbacks are, for example, that the noise level can be fairly high. More specifically, the spatial noise levels are high due to variable pixel-to-pixel characteristics and lack of on-chip corrections, like correlated double sampling. In this paper we ignore the spatial noise, since some of the related issues have been addressed recently. Instead we focus on the temporal noise and dynamic resolution issues involved in non-linear imaging on a system level. Since the requirements are defined by our selected use case, and since we have defined a visual framework for analysis, it is straightforward to compare our findings with the results for linear image sensors. As in the previous paper, the image simulations are based on sensor data obtained from our own measurements.
Noise and Characterization
Empirical formula for rates of hot pixel defects based on pixel size, sensor area, and ISO
Glenn H. Chapman, Rohit Thomas, Zahava Koren, et al.
Experimentally, image sensor measurements show a continuous development of in-field permanent hot pixel defects, increasing in number over time. In our tests we accumulated data on defects in cameras ranging from large-area (>300 sq mm) DSLRs, through medium-sized (~40 sq mm) point-and-shoot cameras, to small (20 sq mm) cell phone cameras. The results show that the rate of defects depends on the technology (APS or CCD) and on design parameters such as imager area, pixel size (from 1.5 to 7 um), and gain (from ISO 100 to 1600). Comparing different sensor sizes with similar pixel sizes has shown that defect rates scale linearly with sensor area, suggesting the metric of defects/year/sq mm, which we call defect density. A search was made to model this defect density as a function of the two parameters pixel size and ISO. The best empirical fit was obtained by a power-law curve. For CCD imagers, the defect densities are proportional to the pixel size to the power of -2.25 times the ISO to the power of 0.69. For APS (CMOS) sensors, the defect densities are proportional to the pixel size to the power of -3.07 times the ISO to the power of 0.5. Extending our empirical formula to include ISO allows us to predict the expected defect development rate for a wide set of sensor parameters.
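The reported power laws can be written as a small helper; the proportionality constant is not given in the abstract and is a placeholder, so only relative comparisons are meaningful:

```python
def defect_density(pixel_size_um, iso, sensor="CCD", k=1.0):
    """Empirical power-law scaling of hot-pixel defect density
    (defects / year / sq mm) reported above. The proportionality
    constant k is NOT given in the abstract and is a placeholder."""
    if sensor == "CCD":
        return k * pixel_size_um ** -2.25 * iso ** 0.69
    return k * pixel_size_um ** -3.07 * iso ** 0.5   # APS (CMOS)

# Relative prediction: a 1.5 um, ISO 800 CMOS sensor vs a 7 um, ISO 100 CMOS sensor
ratio = defect_density(1.5, 800, "APS") / defect_density(7.0, 100, "APS")
print(f"~{ratio:.0f}x higher defect density per sq mm")
```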
A statistical evaluation of low frequency noise of in-pixel source follower-equivalent transistors with various channel types and body bias
R. Kuroda, A. Yonezawa, A. Teramoto, et al.
Both static and low frequency temporal noise characteristics were statistically evaluated for in-pixel source follower-equivalent transistors with various channel types and body bias conditions. The evaluated transistor types were surface channel (SC) and buried channel (BC) transistors with or without isolated wells. The gate width/length of the evaluated transistors was 0.32/0.32 μm/μm and the gate oxide thickness was 7.6 nm. The BC transistors without an isolated well exhibit a noise distribution with a much lower noise level and a steeper slope compared to the SC transistors. For the BC transistors with isolated wells and without body bias, the noise level increased compared to the BC transistors with body bias. It has been confirmed that the amplitude of random telegraph noise correlates with the subthreshold swing factor (SS) for both BC and SC transistors. The increase of the noise level of BC transistors without body bias is due to the increase of the SS originating from a stronger short channel effect.
New analog readout architecture for low noise CMOS image sensors using column-parallel forward noise-canceling circuitry
Tsung-Ling Li, Yasuyuki Goda, Shunichi Wakashima, et al.
This paper presents a new analog readout architecture for low-noise CMOS image sensors. A forward noise-canceling circuit is introduced in our readout architecture to provide sharper noise filtering. The new readout architecture consists of a column high-gain amplifier with correlated double sampling (CDS), a column forward noise-canceling circuit, and column sample-and-hold circuits. Through the high-gain amplifier together with the forward noise-canceling circuit, this readout architecture effectively reduces random noise of the in-pixel source follower and the column amplifier, as well as temporal line noise from power supplies and pulse lines. A prototype 400(H) x 250(V) CMOS image sensor using the new readout architecture has been fabricated in a 0.18 μm 1-Poly 3-Metal CMOS technology with pinned photodiodes. Both the pixel pitch and the column circuit pitch are 4.5 μm. The input-referred noise of the new readout architecture is 37 μVrms, which is a 23 % reduction compared to the conventional readout architecture. The input-referred noise of the pixel with the new readout architecture is 72 μVrms, a 24 % reduction compared to the pixel with the conventional readout architecture.
A novel pixel design with hybrid type isolation scheme for low dark current in CMOS image sensor
Sung Ho Choi, Yi Tae Kim, Min Seok Oh, et al.
A new isolation scheme for CMOS image sensor pixels is proposed and its improved dark current performance is reported. It is well known that shallow trench isolation (STI) is one of the major sources of dark current in imager pixels due to interfacial defects at the STI/Si interface. On that account, an STI-free structure over the whole pixel area was previously reported for reducing dark current. As the pixel pitch shrinks, however, it becomes increasingly difficult to isolate in-pixel transistors electrically without STI. In this work, we implemented a hybrid-type isolation scheme that removes the STI around the photodiode to suppress dark current while retaining the STI near the transistors to guarantee their electrical isolation within the pixel. The dark current was significantly reduced by removing the STI around the photodiode while maintaining normal operation of the in-pixel transistors.
Technological Improvements
Continuous fabrication technology for improving resolution in RGB-stacked organic image sensor
Toshikatsu Sakai, Hokuto Seo, Satoshi Aihara, et al.
With the goal of developing a compact, high-resolution color camera, we have been studying a novel image sensor with three stacked organic photoconductive films: each film is sensitive to only one of the primary color components, and each has a signal readout circuit. In this type of image sensor, the acceptable focal depth is roughly estimated to be shorter than about 20 μm when the pixel pitch of the sensor is several μm. To reduce the total thickness of the stack-type sensor, a continuous fabrication technology that entails stacking all layers continuously from the bottom to the top of the sensor is necessary. In the continuously stacked sensor, the three organic layers separated by interlayer insulators are formed close to each other on a single glass substrate. In this paper, we describe the elemental technologies for the continuous fabrication of a stack-type organic image sensor, namely improving the heat resistance of the organic films and decreasing the fabrication temperature of the interlayer insulators and signal readout circuits. A 150°C heat-resistant organic photoconductive film can be obtained by using organic materials possessing high glass-transition temperatures, and low-temperature fabrication of the interlayer insulator can be accomplished with metal oxides deposited by atomic layer deposition (ALD) at 150°C. The amorphous In-Ga-Zn-O thin-film transistors (TFTs) are fabricated at a maximum temperature of 150°C by using an Al2O3 gate insulator deposited by ALD and a post-treatment. The resulting TFTs have good transfer characteristics. A continuously stacked organic image sensor can be fabricated by integrating these technologies.
Biological tissue identification using a multispectral imaging system
Céline Delporte, Sylvie Sautrot, Mohamed Ben Chouikha, et al.
A multispectral imaging system enabling biological tissue identification and differentiation is presented. The spectral radiance factor β(λ) cube was measured for four tissue types (beef muscle, pork muscle, turkey muscle, and beef liver) present in the same scene. Three methods for tissue identification are proposed and their relevance evaluated. The first method correlates the scene spectral radiance factor with tissue database characteristics; it gives detection rates ranging from 63.5 % to 85 %. The second method correlates the derivatives of the scene spectral radiance factor with a database of tissue β(λ) derivatives; it is more efficient than the first one, giving detection rates ranging from 79 % to 89 % with over-detection rates smaller than 0.2 %. The third method uses the biological tissue spectral signature; it enhances contrast in order to allow tissue differentiation and identification.
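A minimal sketch of the first two identification methods (correlation of measured spectra, or of their derivatives, against a tissue database), assuming per-pixel spectra stored as NumPy arrays; the database and class names are placeholders, not the authors' data:

```python
import numpy as np

def identify_tissue(beta_measured, database):
    """Return the tissue whose reference beta(lambda) correlates best
    with the measured spectral radiance factor of a pixel."""
    scores = {name: np.corrcoef(beta_measured, ref)[0, 1]
              for name, ref in database.items()}
    return max(scores, key=scores.get)

def identify_tissue_derivative(beta_measured, database):
    """Variant of the second method: correlate the first derivatives
    of the spectra instead of the spectra themselves."""
    return identify_tissue(np.diff(beta_measured),
                           {name: np.diff(ref) for name, ref in database.items()})
```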
A CMOS image sensor using floating capacitor load readout operation
S. Wakashima, Y. Goda, T.L. Li, et al.
In this paper, a CMOS image sensor using a floating capacitor load readout operation is discussed. The floating capacitor load readout operation is used during pixel signal readout and has two features: 1) the in-pixel driver transistor drives the load capacitor without current sources, and 2) the parasitic capacitance of the vertical pixel output signal line is used as a sample-and-hold capacitor. This operation produces three advantages: a smaller chip size, lower power consumption, and lower output noise than conventional CMOS image sensors. A prototype CMOS image sensor has been produced using a 0.18 μm 1-Poly 3-Metal CMOS process technology with pinned photodiodes. The chip size is 2.5 mmH x 2.5 mmV, the pixel size is 4.5 μmH x 4.5 μmV, and the number of pixels is 400H x 300V. This image sensor consists of only a pixel array, vertical and horizontal shift registers, column source followers whose height is as low as that of a few pixels, and output buffers. The area of the peripheral circuits is reduced by 90.2 % compared with a conventional CMOS image sensor. The power consumption in the pixel array is reduced by 96.9 %; even when the power consumption of the column source followers is included, it is reduced by 39.0 %. With the introduction of buried-channel transistors as in-pixel driver transistors, the dark random noise of the pixels in the floating capacitor load readout CMOS image sensor is 168 μVrms. The noise of a conventional image sensor is 466 μVrms; therefore, a noise reduction of 63.8 % was achieved.
A UV Si-photodiode with almost 100% internal Q.E. and high transmittance on-chip multilayer dielectric stack
Y. Koda, R. Kuroda, T. Nakazawa, et al.
In this work, by optimizing the structure and thickness of an on-chip multilayer dielectric stack consisting of SiO2 and low-extinction-coefficient Si3N4, combined with a high UV-light-sensitivity photodiode technology, both high external Q.E. and high stability under UV light were successfully obtained. By changing the structure of the on-chip multilayer dielectric stack and the film thicknesses, a photodiode with high external Q.E. in the desired UV region was obtained.
Applications
High sensitivity analysis of speckle patterns: a technological challenge for biomedical optics
J.-M. Tualle, K. Barjean, E. Tinet, et al.
Diffuse light in tissue can be a very interesting tool for medical diagnosis, especially if one considers the fluctuations of the speckle pattern. Of course, speckle analysis suffers from the low spatial coherence of speckle patterns, and multi-pixel detection is required in order to increase the signal-to-noise ratio. There is therefore a need for a setup with high sensitivity, capable of extracting a signal from noise by averaging over a large number of pixels, as the signal can be lower than the photon level for one image and one frame. Furthermore, such processing has to be done at a very high acquisition rate. “Smart-pixel” arrays can represent a major breakthrough in this field.
Gesture recognition on smart cameras
Aziz Dziri, Stephane Chevobbe, Mehdi Darouich
Gesture recognition is a feature of human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several gesture recognition methods have been developed in recent years. However, most real-time methods are designed to run on a personal computer with substantial computing resources and memory. In this paper, we analyze relevant methods from the literature in order to investigate the ability of smart cameras to execute gesture recognition algorithms. We elaborate two hand-gesture recognition pipelines: the first is based on invariant-moment extraction and the second on fingertip detection. The hand detection method used in both pipelines is based on skin-color segmentation. The results show that the un-optimized versions of the invariant-moment and fingertip-detection methods can reach 10 fps on an embedded processor and use about 200 kB of memory.
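A rough sketch of the first pipeline (skin-colour segmentation followed by invariant-moment extraction), assuming OpenCV 4.x and a hypothetical HSV skin-colour range; the thresholds and the downstream classifier are not the authors' parameters:

```python
import cv2
import numpy as np

# Hypothetical HSV skin-colour range; the paper's segmentation parameters are not given
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def hand_features(bgr_frame):
    """Segment skin-coloured pixels, keep the largest blob (assumed to be
    the hand), and return its seven Hu invariant moments."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    return cv2.HuMoments(cv2.moments(hand)).flatten()

# The Hu-moment vector would then be fed to a classifier (e.g. nearest neighbour)
# trained on reference gestures; the fingertip-detection pipeline instead analyses
# the hand contour's convexity defects.
```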
3DS-colorimeter based on a mobile phone camera for industrial applications
Jari Miettinen, J. Birgitta Martinkauppi, Pekka Suopajärvi
Colour gives an essential finishing touch to many products. Consumers find it an important factor, for example, when selecting doors, furniture, parquet, and coated metal products. Currently, colour evaluation is often carried out by looking at the product. Since people’s memory for an exact colour is poor, this method often produces unsatisfactory results in industrial quality control. In this paper, we discuss how to solve this problem by using colour measurement technology for mobile phones equipped with a suitable accessory. Mobile phones provide a suitable monitor platform even for laymen, as people are increasingly using their mobile devices for entertainment, communication, and business, making them a familiar device to use. Our 3DS-colorimeter is a new, handheld, low-cost consumer/industrial-level prototype combining a colorimeter feature and a 3D surface measurement feature. We briefly describe its colorimeter features and demonstrate its performance in measurement repeatability and colorimetric accuracy. As an application example, we show its usefulness for monitoring the colour appearance of painted doors. This study indicates that the 3DS-colorimeter is applicable to industrial quality control.
A single lens with no moving parts for rapid high-resolution 3D image capture
Dan Gray, Hongquiang Chen, Joseph Czechowski, et al.
There are many visual inspection and sensing applications where both a high-resolution image and a depth map of the imaged object are desirable at high speed. Presently available methods to capture 3D data (stereo cameras and structured illumination) are limited in speed, complexity, and transverse resolution. Additionally, these techniques rely on a separated baseline for triangulation, precluding their use in confined spaces. Typically, off-the-shelf lenses are used, and performance in resolution, field of view, and depth of field is sacrificed in order to achieve a useful balance. Here we present a novel lens system with high resolution and a wide field of view for rapid 3D image capture. The design achieves this using a single lens with no moving parts. A depth-from-defocus algorithm is implemented to reconstruct 3D object point clouds, which are matched with a fused image to create a 3D rendered view.
Measurement and description method for image stabilization performance of digital cameras
Norihiro Aoki, Hiroya Kusaka, Hiroyuki Otsuka
Image stabilization is widely acknowledged as an automated function of digital cameras. However, because unified methods for measuring stabilization performance had not been developed, the Camera and Imaging Products Association (CIPA) has standardized the measurement and description methods for image stabilization performance.
Interconnected network of cameras
Mahdad Hosseini Kamal, Hossein Afshari, Yusuf Leblebici, et al.
Real-time application development for multi-camera systems is a great challenge. Synchronization and the large data rates of the cameras add to the complexity of these systems, and the complexity increases further with the number of cameras. The customary approach to implementing such systems is centralized: all raw streams from the cameras are first stored and then processed for the target application. An alternative approach is to embed smart cameras in these systems instead of ordinary cameras with limited or no processing capability. Smart cameras with intra- and inter-camera processing capability and programmability at the software and hardware levels offer the right platform for distributed and parallel processing in real-time multi-camera applications. Inter-camera processing requires the interconnection of smart cameras in a network arrangement. A novel hardware emulation platform is introduced to demonstrate the concept of an interconnected network of cameras. A methodology for constructing and analyzing the camera interconnection network is demonstrated. A sample application is developed and demonstrated.
Interactive Paper Session
Creation of North-East Indian face database for human face identification
Kankan Saha, Priya Saha, Mrinal K. Bhowmik, et al.
Due to various factors such as illumination, expression, and pose variation, the human face appears different on different occasions. Benchmark face images are required to determine the efficiency of different face recognition algorithms. This paper presents a comprehensive study of the available 2D face databases and also introduces the creation of a visual face database, the North-East Indian (NEI) Face Database, which is under development in the Biometrics Laboratory of Tripura University, India. It contains high-quality face images of 292 individuals of different tribal and non-tribal people of Mongolian origin, collected from the North-Eastern states of India. The database contains four different types of illumination variation, eight different expressions, and faces wearing glasses, and each of these variations is captured concurrently from five different angles using five CMOS sensor cameras in a controlled indoor environment to provide pose variation. Three different resolutions are used for capturing the database images. Some baseline face recognition algorithms have also been tested on the NEI face database using a Support Vector Machine (SVM) classifier, and their results may be used as control performance scores by other researchers.
Optical characterization parameters by study and comparison of subwavelength patterns for color filtering and multispectral purpose
J. Matanga, Y. Lacroute, P. Gouton, et al.
In the present work, we have analyzed the optical parameters of thin Ge, Ag, and Au layers prepared by lithography and used as the front contact and absorber in a CMOS device structure. The experimental data concern the reflectance R and transmittance T in the visible and near-infrared regions. The interpretation of these results is based on the method of analysis of R and T developed by Tomlin and on the Mueller numerical method for solving the nonlinear equations given by the Abelès method [1]. After comparing all of these methods, we chose to use Lorenz-Mie scattering theory [2]. The simulation results led us to choose gold to realize our pattern.
Characterization of a solid state air corona charging device
Michael Young, Baomin Xu, Steve Buhler, et al.
Two new solid-state devices that produce an atmospheric air corona discharge for generating and depositing a layer of static charge for xerographic imaging have been fabricated and characterized. One type had a parallel-plate capacitive structure and the other an interdigitated capacitive structure. It was determined that the interdigitated capacitive structure performed better than the parallel-plate capacitive structure in terms of reduced power consumption, charging current stability, and device reliability. Several metal electrode materials were investigated, and gold electrodes performed best. The air corona’s light emission peaks were measured to lie in the 350 nm to 400 nm range. Ozone by-product generation of up to ~13 ppm was detected for an active surface area of 5 cm². Charge deposition onto an imaging drum surface with a significant charging current density of 1.6×10⁻⁴ A/cm² has been successfully demonstrated.