Proceedings Volume 8299

Digital Photography VIII

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 29 December 2011
Contents: 7 Sessions, 29 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2012
Volume Number: 8299

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions:
  • Front Matter: Volume 8299
  • Sensors and Optics
  • Image Enhancement
  • Image Quality and Mobile Imaging I: Joint Session with Conference 8293
  • Image Quality and Mobile Imaging II: Joint Session with Conference 8293
  • Multispectral
  • Interactive Paper Session
Front Matter: Volume 8299
Front Matter: Volume 8299
This PDF file contains the front matter associated with SPIE Proceedings Volume 8299, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Sensors and Optics
An objective protocol for comparing the noise performance of silver halide film and digital sensor
Digital sensors have clearly conquered the photography mass market. However, some photographers with very high expectations still use silver halide film. Are they merely nostalgics reluctant to adopt new technology, or is there more than meets the eye? The answer is not so simple if we note that, at the end of the golden age, films were commonly scanned. Nowadays film users have adopted digital technology and scan their film to take advantage of digital processing afterwards. Therefore, it is legitimate to evaluate silver halide film "with a digital eye", under the assumption that processing can be applied as for a digital camera. This article describes in detail the operations needed to treat the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the autocorrelation of the noise (related to film grain), and the sampling of the digital sensor (related to the Bayer filter array). We also describe the protocol that was set up, from shooting to scanning. We then present and interpret the results for sensor response, signal-to-noise ratio, and dynamic range.
Sensor defect probability estimation and yield
Sensor yield is directly related to the probability of defective pixel occurrence and the screening criteria. Assuming a spatially independent distribution of single-pixel defects, effective on-the-fly correction of single-pixel defects in a color plane, and effective correction of two-pixel defects in a color plane (couplets) through a defect map, sensor yield can be computed from the occurrence of three adjacent defective pixels in a color plane (triplets). Closed-form equations are derived for calculating the probability of occurrence of couplets and triplets as a function of the probability of a single pixel being defective. If a maximum of one triplet is allowed in a 5-megapixel sensor, then to obtain a 98% yield, the probability of a pixel being defective (p) must not exceed 1.3E-3 (6500 defective pixels). For an 8-megapixel sensor, the corresponding requirement is p < 1.1E-3 (8900 defective pixels). Numerical simulation experiments have confirmed the accuracy of the derived equations.
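Since the closed-form equations are not reproduced in the abstract, the sketch below checks the same quantity numerically: a Monte Carlo yield estimate under the stated assumptions (i.i.d. single-pixel defects, a sensor failing screening once more than one triplet of adjacent defects appears in a color plane). The plane size, horizontal-only adjacency, and trial count are illustrative assumptions, not the paper's model.

```python
import numpy as np

def estimated_yield(p, plane_shape=(1000, 1000), trials=200, max_triplets=1, seed=0):
    """Fraction of simulated color planes with at most `max_triplets`
    runs of three horizontally adjacent defective pixels."""
    rng = np.random.default_rng(seed)
    passed = 0
    for _ in range(trials):
        defects = rng.random(plane_shape) < p          # i.i.d. defect map
        triplets = defects[:, :-2] & defects[:, 1:-1] & defects[:, 2:]
        passed += triplets.sum() <= max_triplets
    return passed / trials

print(estimated_yield(1.3e-3))
```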
Optimum spectral sensitivity functions for single sensor color imaging
Zahra Sadeghipoor, Yue M. Lu, Sabine Süsstrunk
A cost-effective and convenient approach for color imaging is to use a single sensor and mount a color filter array (CFA) in front of it, such that at each spatial position the scene information in only one color channel is captured. To estimate the missing colors at each pixel, a demosaicing algorithm is applied to the CFA samples. Besides the filter arrangement and the demosaicing method, the spectral sensitivity functions of the CFA filters considerably affect the quality of the demosaiced image. In this paper, we propose an algorithm to compute the optimum spectral sensitivities of filters in the single-sensor imager. The proposed algorithm solves a constrained optimization problem to find optimum spectral sensitivities and the corresponding linear demosaicing method. An important constraint for this problem is the smoothness of spectral sensitivities, which is imposed by modeling these functions as a linear combination of several smooth kernels. Simulation results verify the effectiveness of the proposed algorithm in finding optimal spectral sensitivity functions, which outperform measured camera sensitivity functions.
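One concrete way to realize the smoothness constraint described above is to parameterize each sensitivity as a linear combination of smooth kernels, so any weight vector yields a smooth curve. In this sketch the kernel count, placement, and widths are illustrative assumptions, and the target curve is a toy stand-in for a measured sensitivity.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)      # nm grid
centers = np.linspace(400, 700, 8)         # assumed kernel centers
width = 40.0                               # assumed kernel width (nm)
K = np.exp(-0.5 * ((wavelengths[:, None] - centers[None, :]) / width) ** 2)

def sensitivity(weights):
    """Sensitivity implied by a kernel-weight vector (smooth by construction)."""
    return K @ weights

# Example: least-squares fit of the kernel weights to a target sensitivity.
target = np.exp(-0.5 * ((wavelengths - 550) / 30.0) ** 2)   # toy "green" channel
w, *_ = np.linalg.lstsq(K, target, rcond=None)
print(np.max(np.abs(sensitivity(w) - target)))
```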
A method for the evaluation of wide dynamic range cameras
Ping Wah Wong, Yu Hua Lu
We propose a multi-component metric for the evaluation of digital still or video cameras under wide dynamic range (WDR) scenes. The method is based on a single image capture using a specifically designed WDR test chart and light box. Test patterns on the WDR test chart include gray ramps, color patches, arrays of gray patches, white bars, and a relatively dark gray background. The WDR test chart is professionally made using three layers of transparencies to produce a contrast ratio of approximately 110 dB for WDR testing. A light box is designed to provide a uniform surface with a light level of about 80,000 to 100,000 lux, which is typical of a sunny outdoor scene. From a captured image, nine image quality component scores are calculated: number of resolvable gray steps, dynamic range, linearity of tone response, grayness of the gray ramp, number of distinguishable color patches, smearing resistance, edge contrast, grid clarity, and weighted signal-to-noise ratio. A composite score is calculated from the nine component scores to reflect the comprehensive image quality of cameras under WDR scenes. Experimental results have demonstrated that the multi-component metric corresponds very well to subjective evaluation of the wide dynamic range behavior of cameras.
Active pixels of transverse field detector based on a charge preamplifier
G. Langfelder, C. Buffa, A. F. Longoni, et al.
The Transverse Field Detector (TFD), a filter-less and tunable color sensitive pixel, is based on the generation of specific electric field configurations within a depleted silicon volume. Each field configuration determines a set of three or more spectral responses that can be used for direct color acquisition at each pixel position. In order to avoid unpredictable changes of the electric field configuration during a single image capture, a specific active pixel (AP) has been designed. In this AP the dark- and photo-generated charge is not integrated directly on the junction capacitance, but, for each color, it is integrated on the feedback capacitance of a single-transistor charge preamplifier. The AP further includes a bias transistor, a reset transistor, and a follower. In this work the design of such a pixel is discussed, and the experimental results obtained on a 2x2 matrix of these active pixels are analyzed in terms of spectral response, linearity, noise, dynamic range, and repeatability.
Digital focusing and refocusing with thin multi-aperture cameras
Alexander Oberdörster, Andreas Brückner, Frank Wippermann, et al.
For small camera modules in consumer applications, such as mobile phones or webcams, size and cost are important constraints. An autofocus system increases both size and cost and can degrade optical performance through misalignment. Therefore, a monolithic optical system with a fixed focus is preferable for these applications. On the other hand, the optical system of the camera has to exhibit a very large depth of field, as it is expected to deliver sharp images for all typical working distances. The depth of field of a camera system can be increased by using a larger F-number, but this is undesirable for light-sensitivity reasons. Alternatively, it can be increased by reducing the focal length. Multi-aperture systems use multiple optical channels, each with a smaller focal length than a comparable single-aperture system. Accordingly, each of the channels has a large depth of field. However, as the channels are displaced laterally, parallax becomes noticeable for close objects. Therefore, the channel images have to be shifted accordingly when recombining them into a complete image. We demonstrate an algorithm that compensates for parallax as well as chromatic aberration and geometric distortion. We present a very flat camera system without moving parts that is capable of taking photos and video over a wide range of distances. On the demonstration system, the object distance can be adjusted in real time from 4 mm to infinity. The focus position can be selected during capture or after the images have been taken.
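As a rough illustration of the parallax compensation step, the sketch below shifts each channel image by the disparity implied by a pinhole model (disparity = focal length in pixels x baseline / object distance) before averaging. The shift-and-average recombination, integer shifts, and parameter names are simplifying assumptions; the paper's algorithm also corrects chromatic aberration and distortion.

```python
import numpy as np

def recombine(channels, baselines, focal_px, distance):
    """Shift-and-average recombination for a chosen object distance.
    channels: list of 2D channel images; baselines: per-channel (bx, by)
    lateral offsets in the same unit as `distance`; focal_px: focal
    length expressed in pixels."""
    out = np.zeros_like(channels[0], dtype=float)
    for img, (bx, by) in zip(channels, baselines):
        dx = int(round(focal_px * bx / distance))   # pinhole disparity, pixels
        dy = int(round(focal_px * by / distance))
        out += np.roll(img, shift=(-dy, -dx), axis=(0, 1))
    return out / len(channels)
```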
The multifocus plenoptic camera
Todor Georgiev, Andrew Lumsdaine
The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (one of the important applications), the depth of field needs to be large, but there are fundamental optical limitations to this. Our solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
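For context, the basic rendering step of a focused plenoptic camera can be sketched as cropping a central patch from each microimage and tiling the patches; the patch size acts as the refocusing parameter (and, in the multifocus design, would be chosen per microlens type). The pitch and patch sizes here are illustrative assumptions.

```python
import numpy as np

def render(raw, pitch, patch):
    """Tile central crops of each microimage into one focal-plane rendering.
    raw: 2D sensor image; pitch: microimage size in pixels; patch <= pitch."""
    ny, nx = raw.shape[0] // pitch, raw.shape[1] // pitch
    off = (pitch - patch) // 2
    rows = []
    for j in range(ny):
        row = [raw[j*pitch+off : j*pitch+off+patch, i*pitch+off : i*pitch+off+patch]
               for i in range(nx)]
        rows.append(np.hstack(row))
    return np.vstack(rows)

# e.g. render(raw, pitch=16, patch=10) renders one choice of focal plane
```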
Spatial analysis of discrete plenoptic sampling
Andrew Lumsdaine, Todor G. Georgiev, Georgi Chunev
Plenoptic cameras are intended to fully capture the light rays in a scene. Using this information, optical elements can be applied to a scene computationally rather than physically, allowing an infinite variety of pictures to be rendered after the fact from the same plenoptic data. Practical plenoptic cameras necessarily capture discrete samples of the plenoptic function, which, together with the overall camera design, can constrain the variety and quality of rendered images. In this paper we specifically analyze the nature of the discrete data that plenoptic cameras capture, in a manner that unifies the traditional and focused plenoptic camera designs. We use the optical properties of plenoptic cameras to derive the geometry of discrete plenoptic function capture. Based on this geometry, we derive expressions for the expected resolution from a captured plenoptic function. Our analysis allows us to define the "focused plenoptic condition," a necessary condition in the optical design that distinguishes the traditional plenoptic camera from the focused plenoptic camera.
Design framework for a spectral mask for a plenoptic camera
Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs capture directional ray information and enable applications such as digital refocusing, rotation, or depth estimation. Only a few address capturing the spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, the spectral dimension of the plenoptic function can be sampled. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial for spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we simulate a spectrally-coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimization of the spectral mask for a few sample applications.
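A wave-propagation analysis of the kind mentioned typically chains free-space propagation steps with lens and mask models. Below is a minimal angular-spectrum propagation step, the standard building block for such simulations; the grid spacing, wavelength, and the decision to drop evanescent components are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def propagate(u0, wavelength, dx, z):
    """Angular-spectrum propagation of a complex field u0 over distance z."""
    k = 2.0 * np.pi / wavelength
    fy = np.fft.fftfreq(u0.shape[0], dx)
    fx = np.fft.fftfreq(u0.shape[1], dx)
    FX, FY = np.meshgrid(fx, fy)
    kz2 = k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2
    H = np.exp(1j * z * np.sqrt(np.maximum(kz2, 0.0)))  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```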
Image Enhancement
Detection thresholds of structured noise in the presence of shot noise
An observer study was run to determine the detection thresholds of several representative examples of column fixed pattern noise, in the presence of varying levels of shot noise, which is known to mask structured noise. The data obtained were fit well at relevant shot noise levels by a simple model based on signal detection theory. Individual metrics of fixed pattern noise and shot noise, used in the masking equation, were computed from one dimensional integrations involving the capture noise power spectra (mapped to CIELAB space); the modulation transfer function of the display; the display pixel pitch; the viewing distance; and the S-CIELAB luminance contrast sensitivity function. The results of this work can be used to predict detection thresholds that can be added to photon transfer curves for the purpose of determining whether fixed pattern noise will be visible.
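The metric structure sketched in the abstract (a 1D integration of a noise power spectrum against a contrast sensitivity function) might look roughly as follows. The CSF used here is the classic Mannos-Sakrison curve as a stand-in for the S-CIELAB luminance CSF, and the pooling (root of the CSF-weighted spectrum sum) is an assumed detection-theory form, not the paper's exact equation.

```python
import numpy as np

def csf(f_cpd):
    """Mannos-Sakrison luminance CSF, f in cycles/degree (stand-in)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def fpn_visibility(column_profile, pixels_per_degree):
    """Score a 1D column fixed-pattern profile by its CSF-weighted spectrum."""
    n = len(column_profile)
    nps = np.abs(np.fft.rfft(column_profile - column_profile.mean())) ** 2 / n
    f = np.fft.rfftfreq(n) * pixels_per_degree   # cycles/degree at the eye
    return np.sqrt(np.sum(nps * csf(f) ** 2))
```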
Reduced-reference image quality assessment based on statistics of edge patterns
Yuting Chen, Wufeng Xue, Xuanqin Mou
Recently, research on objective Image Quality Assessment (IQA) has gained much attention due to its wide application prospects. Among these methods, Reduced-Reference (RR) approaches estimate the perceptual quality of distorted images using partial information from the reference images. This paper proposes a novel universal RR-IQA metric based on the statistics of edge patterns. First, the binary edge maps of the reference and distorted images are created by the LoG operator and zero-crossing detection. From these, 15 groups of typical edge patterns are extracted, and their statistical distributions are calculated for the reference and distorted images respectively. The proposed RR-IQA metric is obtained by computing the L1 Minkowski distance between the two distributions. We have evaluated this metric on six publicly accessible subjective IQA databases. Experiments show that the proposed metric, built on typical edge patterns, outperforms other methods in terms of data volume, accuracy, and consistency with human perception. Our work thus provides a new perspective on IQA metric design.
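A rough sketch of this pipeline follows: LoG filtering, zero-crossing edge maps, local binary edge-pattern histograms, and an L1 distance between them. The grouping into the paper's 15 pattern classes is not specified in the abstract, so raw 3x3 pattern counts are used here as an illustrative stand-in; the LoG scale is also an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def edge_map(img, sigma=1.5):
    """Binary edge map from LoG zero crossings (right/bottom neighbor sign changes)."""
    log = gaussian_laplace(img.astype(float), sigma)
    zc = np.zeros(log.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    return zc

def pattern_histogram(edges):
    """Normalized histogram of 3x3 binary edge patterns (9-bit codes)."""
    e = edges.astype(np.int64)
    code = np.zeros((edges.shape[0] - 2, edges.shape[1] - 2), dtype=np.int64)
    bit = 0
    for dy in range(3):
        for dx in range(3):
            code += e[dy:dy + code.shape[0], dx:dx + code.shape[1]] << bit
            bit += 1
    h = np.bincount(code.ravel(), minlength=512).astype(float)
    return h / h.sum()

def rr_distance(ref, dist):
    return np.abs(pattern_histogram(edge_map(ref)) - pattern_histogram(edge_map(dist))).sum()
```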
Joint chromatic aberration correction and demosaicking
Mritunjay Singh, Tripurari Singh
Chromatic aberration of lenses is becoming increasingly visible with the rise of sensor resolution, and methods to correct it algorithmically are becoming increasingly common in commercial systems. A popular class of algorithms undoes the geometric distortions after demosaicking. Since most demosaickers require high-frequency correlation of primary colors to work effectively, the result is artifact-ridden, as chromatic aberration destroys this correlation. The other existing approach, undistorting primary color images before demosaicking, requires resampling of sub-sampled primary color images and is prone to aliasing. Furthermore, this approach cannot be applied to panchromatic CFAs. We propose a joint demosaicking and chromatic aberration correction algorithm that is applicable to both panchromatic and primary color CFAs and suffers from none of the above problems. Our algorithm treats the mosaicing process as a linear transform that is invertible if luminance and chrominance are appropriately bandlimited. We develop and incorporate chromatic aberration corrections into this model of the mosaicing process without altering its linearity or invertibility. This correction works both for space-variant linear filter demosaicking and for the more aggressive compressive sensing reconstruction.
Optimal defocus estimates from individual images for autofocusing a digital camera
Johannes Burge, Wilson S. Geisler
Recently, we developed a method for optimally estimating focus error given a set of natural scenes, a wave-optics model of the lens system, a sensor array, and a specification of measurement noise. The method is based on first principles and can be tailored to any vision system for which these properties can be characterized. Here, the method is used to estimate defocus in local areas of images (64x64 pixels) formed in a Nikon D700 digital camera fitted with a 50 mm Sigma prime lens. Performance is excellent. Defocus magnitude and sign can be estimated with high precision and accuracy over a wide range. The method takes an integrative approach that accounts for natural scene statistics and capitalizes on (but does not depend exclusively on) chromatic aberrations. Although chromatic aberrations are greatly reduced in achromatic lenses, we show that there are sufficient residual chromatic aberrations in a high-quality prime lens for our method to achieve good performance. Our method has the advantages of both phase-detection and contrast-measurement autofocus techniques, without their disadvantages. Like phase detection, the method provides point estimates of defocus (magnitude and sign), but unlike phase detection, it does not require specialized hardware. Like contrast measurement, the method is image-based and can operate in "Live View" mode, but unlike contrast measurement, it does not require an iterative search for best focus. The proposed approach could be used to develop improved autofocus algorithms for digital imaging and video systems.
Quality versus color saturation and noise
A softcopy quality ruler study involving 12 scenes and 34 observers was performed to quantify the dependence of quality on color saturation, in the absence of noise, with saturation measured using Imatest software. It was found that quality falls off symmetrically with deviation of color saturation from the preferred value of about 110%, with a 20% change in saturation reducing quality by about two just noticeable differences (JNDs). Optimization of noise versus color saturation was investigated using (1) the aforementioned transform of color saturation to JNDs of quality; (2) a previously published objective metric and JND transform for isotropic noise; and (3) the multivariate formalism for combining JNDs from independent attributes into overall JNDs of quality. As noise increases and the signal-to-noise ratio (SNR) decreases, the optimal color saturation decreases from the 110% position, so that there is less noise amplification by the color correction matrix. A quality contour plot is presented, showing a region of plausible color saturation values as a function of SNR for a representative use case. One example of a reasonable strategy is to provide 80% color saturation at SNR = 5, 90% at SNR = 10, 100% at SNR = 20, and 110% at SNR = 50 and above.
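The shape of this trade-off can be sketched numerically: a quadratic JND penalty around 110% saturation calibrated to the study's two-JNDs-per-20% figure, a hypothetical noise penalty that grows with saturation (matrix gain) and falls with SNR, and a simple Minkowski combination standing in for the multivariate formalism. The noise term and pooling exponent are assumptions; only the saturation falloff is taken from the abstract.

```python
import numpy as np

def saturation_jnds(sat_pct):
    """Symmetric quality loss around 110%: ~2 JNDs per 20% deviation."""
    return 2.0 * ((sat_pct - 110.0) / 20.0) ** 2

def noise_jnds(snr, sat_pct):
    """Hypothetical: noise JNDs scale with saturation gain and inversely with SNR."""
    return (sat_pct / 100.0) * 20.0 / snr

def total_jnds(sat_pct, snr, n=2.0):
    """Minkowski pooling as a stand-in for the multivariate formalism."""
    return (saturation_jnds(sat_pct) ** n + noise_jnds(snr, sat_pct) ** n) ** (1.0 / n)

for snr in (5, 10, 20, 50):
    sats = np.arange(60, 131)
    best = sats[np.argmin([total_jnds(s, snr) for s in sats])]
    print(snr, best)   # optimal saturation rises toward 110% as SNR improves
```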
Bio-inspired framework for automatic image quality enhancement
Andrea Ceresi, Francesca Gasparini, Fabrizio Marini, et al.
We propose a bio-inspired framework for automatic image quality enhancement. Restoration algorithms usually have fixed parameters whose values are not easy to set and depend on image content. In this study, we show that it is possible to correlate no-reference visual quality values to specific parameter settings such that the quality of an image can be effectively enhanced through the restoration algorithm. We chose JPEG blockiness distortion as a case study. As the restoration algorithm, we used either a bilateral filter or a total variation denoising detexturer. Experimental results on the LIVE database demonstrate that better visual quality is achieved with the optimized parameters than with the algorithms' default parameters, over the entire range of compression.
An efficient multiple exposure image fusion in JPEG domain
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
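A much-simplified, single-pass sketch of the fusion idea is given below, operating on 8x8 luma blocks as stand-ins for JPEG macroblocks: short-exposure blocks are boosted through a sigmoid and blended with long-exposure blocks weighted against saturation. The block size, gains, and weighting are illustrative assumptions; the paper's method works on JPEG data directly.

```python
import numpy as np

def fuse_blocks(short_exp, long_exp, gain=4.0):
    """Blend a boosted short exposure into a long exposure, block by block."""
    out = np.empty(long_exp.shape, dtype=float)
    for y in range(0, long_exp.shape[0], 8):
        for x in range(0, long_exp.shape[1], 8):
            s = short_exp[y:y+8, x:x+8].astype(float)
            l = long_exp[y:y+8, x:x+8].astype(float)
            boosted = 255.0 / (1.0 + np.exp(-(gain * s - 128.0) / 32.0))  # sigmoidal boost
            w = np.clip((l.mean() - 192.0) / 63.0, 0.0, 1.0)  # weight toward short exp when long exp saturates
            out[y:y+8, x:x+8] = w * boosted + (1.0 - w) * l
    return out
```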
A controllable anti-aliasing filter for digital film cameras
In this paper, the theoretical foundation and practical implementation of a controllable anti-aliasing filter for digital film cameras are presented. A prototype of an optical anti-aliasing filter that is based on moving a parallel optical window was designed and built to demonstrate the ability to control the spatial frequency response of an acquisition system. During the image exposure, four spring-preloaded voice coils rapidly change the pitch and yaw of the parallel window, resulting in a displacement of the image content that is projected onto the sensor. The image content displacement during the exposure alters the frequency response of the scene that is captured by the sensor. Specifically, during the exposure time, a carefully controlled movement of the parallel optical window results in a circular trajectory of the image content that is projected onto the sensor. By increasing or decreasing the radius of the circular trajectory, the spatial cut-off frequency of the system is dynamically modified. In addition to the circular path, this paper gives theoretical justification for and demonstrates the use of more complex trajectories, such as double circular, elliptical, and one-dimensional rectangular trajectories. These trajectories improve the suppression of aliased components in the acquired image.
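The effect of the circular trajectory on the frequency response has a compact closed form: a full circular sweep of radius r produces a ring-shaped blur kernel whose MTF is the Bessel function J0(2*pi*r*f), so increasing r pulls the first null (the effective cutoff) toward lower spatial frequencies. The snippet below evaluates this; the radii and frequency range are illustrative.

```python
import numpy as np
from scipy.special import j0

f = np.linspace(0.0, 0.5, 6)   # spatial frequency, cycles/pixel
for r in (0.5, 1.0, 2.0):      # trajectory radius in pixels (illustrative)
    # MTF of a ring PSF of radius r: |J0(2*pi*r*f)|
    print(r, np.round(np.abs(j0(2 * np.pi * r * f)), 3))
```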
Image Quality and Mobile Imaging I: Joint Session with Conference 8293
Rethinking camera user interfaces
Stephen Brewster, Christopher McAdam, James McDonald, et al.
Digital cameras and camera phones are now very widely used, but some issues affect their use and the quality of the images captured. Many of these issues are due to problems of interaction or feedback from the camera. Modern smartphones have a wide range of sensors, rich feedback mechanisms, and substantial processing power. We have developed and evaluated a range of new interaction techniques for cameras and camera phones that improve the picture-taking process and allow people to take better pictures the first time.
Image Quality and Mobile Imaging II: Joint Session with Conference 8293
On the performances of computer vision algorithms on mobile platforms
S. Battiato, G. M. Farinella, E. Messina, et al.
Computer vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is growing interest in computer vision algorithms able to work on mobile platforms (e.g., camera phones, point-and-shoot cameras, etc.). Indeed, bringing computer vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic computer vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been performed to compare the performance of the mobile platforms involved: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
Multispectral
A novel adaptive compression method for hyperspectral images by using EDT and particle swarm optimization
Pedram Ghamisi, Lalit Kumar
Hyperspectral sensors generate useful information about climate and the Earth's surface in numerous contiguous narrow spectral bands and are widely used in resource management, agriculture, environmental monitoring, etc. Compression of hyperspectral data helps in long-term storage and transmission systems. Lossless compression is preferred for high-detail data such as hyperspectral data. Given the high redundancy between neighboring spectral bands and the aim of achieving a higher compression ratio, adaptive coding methods are well suited to hyperspectral data. This paper introduces two new compression methods. One of them is adaptive and powerful for the compression of hyperspectral data: it separates bands with different characteristics using the histogram and Binary Particle Swarm Optimization (BPSO) and compresses each group in a different manner. The proposed methods improve on the compression ratio of the JPEG standards and save storage space and transmission bandwidth. The proposed methods are applied to different test cases, and the results are evaluated and compared with other compression methods, such as lossless JPEG and JPEG2000.
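For reference, a compact binary PSO of the kind invoked above is sketched below, used to split bands into two groups. The velocity/sigmoid update is the standard BPSO scheme; the fitness function (separating bands by a simple per-band statistic) and all constants are illustrative stand-ins for the paper's histogram-based criterion.

```python
import numpy as np

def bpso(band_stats, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary PSO returning a 0/1 group assignment per band.
    band_stats: 1D np.array of one statistic per band (stand-in fitness input)."""
    rng = np.random.default_rng(seed)
    n = len(band_stats)
    x = rng.integers(0, 2, size=(n_particles, n)).astype(float)
    v = np.zeros_like(x)

    def fitness(mask):
        a, b = band_stats[mask == 1], band_stats[mask == 0]
        if len(a) == 0 or len(b) == 0:
            return -np.inf
        return abs(a.mean() - b.mean())   # between-group separation

    pbest = x.copy()
    pfit = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pfit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(float)  # sigmoid sampling
        fit = np.array([fitness(p) for p in x])
        improved = fit > pfit
        pbest[improved], pfit[improved] = x[improved], fit[improved]
        gbest = pbest[np.argmax(pfit)].copy()
    return gbest

print(bpso(np.linspace(0.0, 1.0, 30) ** 2))
```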
Spectral sensitivity evaluation considering color constancy
Digital cameras are used under a wide spectrum of illuminants and should adjust the white point according to the illuminant in use (white balance correction). White balance correction makes a "white" object reproduce as "white", but the reproduction of chromatic objects will not necessarily be appropriate (color constancy error). Three types of sensor models were used for the simulation. The error was reduced by a primary conversion that made the overlaps and widths of the channels smaller. Accordingly, two new metrics that evaluate the overlaps and the widths were defined and used to optimize the conversion to a color space suitable for white balancing, and it was shown that the color constancy errors were reduced. It was also shown that the color constancy error was small for the sensor model whose overlaps between channels were small and whose channel widths were narrow. Narrower widths and smaller overlaps of the RGB channels gave less accurate colorimetric reproduction but less noisy images. For consumer digital cameras, narrower widths and smaller overlaps of the RGB channels are suitable because they give less noisy images and consistent color reproduction with simple white balance processing.
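The two metrics can be sketched as simple functionals of sampled sensitivity curves; the paper's exact definitions may differ, so the formulas below (equivalent rectangular width for "width", normalized shared area for "overlap") and the toy Gaussian channels are assumptions.

```python
import numpy as np

lam = np.arange(400.0, 701.0, 10.0)   # wavelength grid, nm

def channel_width(s, dlam=10.0):
    """Equivalent rectangular width of a sensitivity curve, in nm."""
    return s.sum() * dlam / s.max()

def channel_overlap(s1, s2):
    """Normalized shared area between two sensitivity curves (0 = disjoint)."""
    return np.minimum(s1, s2).sum() / np.sqrt(s1.sum() * s2.sum())

r = np.exp(-0.5 * ((lam - 610.0) / 35.0) ** 2)   # toy R and G channels
g = np.exp(-0.5 * ((lam - 540.0) / 35.0) ** 2)
print(round(channel_width(g), 1), round(channel_overlap(r, g), 3))
```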
Multispectral demosaicking using guided filter
Yusuke Monno, Masayuki Tanaka, Masatoshi Okutomi
Multispectral imaging is in high demand for precise color reproduction and for various computer vision applications. Multispectral imaging with a multispectral color filter array (MCFA), which can be considered a multispectral extension of commonly used consumer RGB cameras, could be a simple, low-cost, and practical system. A challenge of multispectral imaging with the MCFA is multispectral demosaicking, because each spectral component of the MCFA is severely undersampled. In this paper, we propose a novel multispectral demosaicking algorithm using a guided filter. The guided filter was recently proposed as an excellent structure-preserving filter. It requires a so-called guide image, and a key issue is how to obtain an effective one. In our proposed algorithm, we generate the guide image from the most densely sampled spectral component in the MCFA. The other spectral components are then interpolated by the guided filter. Experimental results demonstrate that our proposed algorithm outperforms other existing demosaicking algorithms both visually and quantitatively.
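Since the guided filter itself is a published algorithm (He et al.), a compact gray-guide version is sketched below; the radius and epsilon are illustrative, and the surrounding demosaicking pipeline (guide generation from the densest band, interpolation of the sparse bands) is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Filter `src` so its local structure follows `guide` (He et al. 2010)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I        # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p       # local guide/source covariance
    a = cov_Ip / (var_I + eps)               # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```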
An LED-based lighting system for acquiring multispectral scenes
Manu Parmar, Steven Lansel, Joyce Farrell
The availability of multispectral scene data makes it possible to simulate a complete imaging pipeline for digital cameras, beginning with a physically accurate radiometric description of the original scene followed by optical transformations to irradiance signals, models for sensor transduction, and image processing for display. Certain scenes with animate subjects, e.g., humans, pets, etc., are of particular interest to consumer camera manufacturers because of their ubiquity in common images, and the importance of maintaining colorimetric fidelity for skin. Typical multispectral acquisition methods rely on techniques that use multiple acquisitions of a scene with a number of different optical filters or illuminants. Such schemes require long acquisition times and are best suited for static scenes. In scenes where animate objects are present, movement leads to problems with registration and methods with shorter acquisition times are needed. To address the need for shorter image acquisition times, we developed a multispectral imaging system that captures multiple acquisitions during a rapid sequence of differently colored LED lights. In this paper, we describe the design of the LED-based lighting system and report results of our experiments capturing scenes with human subjects.
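Assuming a linear camera response, each capture under one LED is an inner product of the unknown reflectance with the product of the sensor sensitivity and that LED's spectrum, so per-pixel reflectance can be recovered by regularized least squares. All spectra, the LED count, and the regularization weight below are illustrative stand-ins, not the paper's measured data.

```python
import numpy as np

lam = np.arange(400, 701, 10)
led_peaks = np.linspace(420, 680, 8)                          # 8 assumed LED colors
leds = np.exp(-0.5 * ((lam[None, :] - led_peaks[:, None]) / 20.0) ** 2)
sensor = np.exp(-0.5 * ((lam - 550) / 80.0) ** 2)             # broadband sensitivity
A = leds * sensor                                             # n_led x n_wavelength system

true_r = 0.5 + 0.4 * np.sin(lam / 40.0)                       # toy reflectance
y = A @ true_r                                                # simulated captures
reg = 1e-2 * np.eye(len(lam))
r_hat = np.linalg.solve(A.T @ A + reg, A.T @ y)               # Tikhonov-regularized recovery
print(np.round(np.corrcoef(true_r, r_hat)[0, 1], 3))
```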
Interactive Paper Session
Fast in-plane translation and rotation estimation for multi-image registration
Xiaoyun Jiang, Haiyin Wang
This paper considers the planar motions of a camera, that is, the rotation and the horizontal and vertical translations in the image plane. The proposed approach, based on projections in both the Cartesian and polar coordinate systems, can estimate the three parameters comparably quickly with simple calculations. Potential applications cover motion deblurring, noise reduction, super-resolution, image fusion, high dynamic range image processing, EDOF, 3D imaging, and other techniques that require global or local registration.
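The projection idea can be sketched for the translation part as follows: row and column sums reduce 2D correlation to two 1D correlations, done here with FFTs (rotation would correlate angular projections analogously). Circular boundary behavior and the toy test image are simplifying assumptions.

```python
import numpy as np

def shift_1d(a, b):
    """Offset at the peak of the circular cross-correlation of two 1D profiles."""
    c = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    k = int(np.argmax(c))
    return k if k <= len(a) // 2 else k - len(a)

def translation(img1, img2):
    """Estimate (dy, dx) such that img2 ~ img1 shifted by (dy, dx)."""
    dy = shift_1d(img2.sum(axis=1), img1.sum(axis=1))   # row-sum projections
    dx = shift_1d(img2.sum(axis=0), img1.sum(axis=0))   # column-sum projections
    return dy, dx

a = np.random.rand(64, 64)
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(translation(a, b))   # ~ (3, -5)
```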
Multispectral filter wheel cameras: modeling aberrations for filters in front of lens
Julie Klein, Til Aach
Aberrations occur in multispectral cameras featuring filter wheels because color filters with different optical properties are present in the ray path. To ensure an exact compensation of these aberrations, a mathematical model of the distortions has to be developed and its parameters calculated from measured data. Such a model already exists for optical filters placed between the sensor and the lens, but not for bandpass filters placed in front of the lens. In this configuration, the rays are first distorted by the filters and then by the lens. In this paper, we derive a model for aberrations caused by filters placed in front of the lens in multispectral cameras. We compare this model with distortions obtained in simulations as well as with distortions measured during real multispectral acquisitions. In both cases, the difference between modeled and measured aberrations remains low, which corroborates the physical model. Multispectral acquisitions with filters placed between the sensor and the lens or in front of the lens are compared: the latter exhibit smaller distortions, and the aberrations in both images can be compensated using the same algorithm.
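A worked instance of the core geometric effect behind such models: a ray crossing a plane-parallel filter of thickness t and refractive index n at incidence angle theta is displaced laterally by d = t*sin(theta)*(1 - cos(theta)/sqrt(n^2 - sin^2(theta))), so filters with different indices shift rays differently. The values below are illustrative; the paper's full model also includes the lens.

```python
import numpy as np

def lateral_shift(t_mm, n, theta_deg):
    """Lateral ray displacement through a plane-parallel plate, in mm."""
    th = np.radians(theta_deg)
    return t_mm * np.sin(th) * (1.0 - np.cos(th) / np.sqrt(n**2 - np.sin(th)**2))

for n in (1.45, 1.52):   # two filter glasses -> slightly different distortions
    print(n, [round(lateral_shift(2.0, n, a), 4) for a in (5, 15, 30)])
```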
Correcting saturated pixels in images
This paper proposes a novel method to correct saturated pixels in images. The method is based on the YCbCr color space and separately corrects the chrominance and the luminance of saturated pixels. Dynamic thresholds are adopted to identify saturated pixels, i.e., the thresholds differ between images and between color channels. Our method can therefore correct not only RAW images but also processed images. Once the saturated pixels are identified, they fall into three kinds: 1-channel, 2-channel, and 3-channel saturated pixels, denoted Ω1, Ω2, and Ω3 respectively. Different strategies are applied to these three kinds of regions. The color of saturated pixels in Ω1 is corrected according to both their original color and the color of their neighborhood, while the color of saturated pixels in Ω2 and Ω3 is corrected only according to the color of their neighborhood. The luminance of saturated pixels is corrected using the model proposed in this paper. Experimental results show that our method is effective in correcting saturated pixels in both RAW and processed images.
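The region classification step might be sketched as below: per-channel dynamic thresholds (here a quantile of each channel, which is an assumption; the paper does not state how its thresholds are derived) split saturated pixels into the one-, two-, and three-channel sets Ω1 to Ω3. The correction itself (chrominance/luminance propagation from neighbors) is omitted.

```python
import numpy as np

def saturation_regions(rgb, q=0.995):
    """Classify pixels of an HxWx3 image by how many channels are saturated."""
    thresh = np.quantile(rgb.reshape(-1, 3), q, axis=0)   # per-channel, per-image
    sat = rgb >= thresh                                   # HxWx3 boolean
    count = sat.sum(axis=2)                               # saturated channels per pixel
    return {k: count == k for k in (1, 2, 3)}             # masks for Omega1..Omega3
```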
Real-time, multidirectional 2D fast wavelet transform and its denoised sharpening application
B. J. Baek, T. C. Kim
This paper presents a real-time multi-directional 2D fast wavelet transform. As a real-time implementation of the wavelet transform, small pixel windows in raster scan order are used sequentially as inputs to the transform instead of a full image frame. This approach matches recent image sensors that output pixels in line-by-line scanning order. The proposed method has a high degree of locality, enabling a low-latency real-time implementation suitable for cost-effective mobile applications, and it uses a multi-directional decomposition that reduces the directional artifacts often occurring in conventional two-directional separable wavelet decompositions. As a possible application of the proposed wavelet, a denoised sharpening algorithm was devised; the results presented show improvement of the directional and blocking artifact problems.
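Line-based, low-latency wavelet implementations of this kind are typically built from lifting steps that need only a few neighboring samples at a time. Below is a single 1D CDF 5/3 lifting step (float version, circular boundary, even-length input) as a minimal illustration; the paper's multi-directional 2D scheme is not reproduced.

```python
import numpy as np

def lift_53(x):
    """One CDF 5/3 lifting step; len(x) must be even."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    d = odd - 0.5 * (even + np.roll(even, -1))   # predict: detail coefficients
    s = even + 0.25 * (np.roll(d, 1) + d)        # update: approximation coefficients
    return s, d

s, d = lift_53(np.arange(16.0))
print(s, d)
```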
Color transfer using semantic image annotation
In this work we present an automatic local color transfer method based on semantic image annotation. With this annotation, images are segmented into homogeneous regions assigned to seven different classes (sky, vegetation, snow, water, ground, street, and sand). Our method automatically transfers the color distribution between regions of the source and target images annotated with the same class (for example, the class "sky"). The amount of color transfer can be controlled by tuning a single parameter. Experimental results show that our local color transfer is usually more visually pleasing than a global approach.
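A minimal per-region color transfer of this kind is sketched below: pixels of a given class in the target are shifted to match the mean and standard deviation of the same class in the source, with a strength parameter playing the role of the paper's single tuning knob. Per-channel mean/std matching is a common stand-in here, not necessarily the paper's transfer model, and the masks are assumed to come from the semantic annotation.

```python
import numpy as np

def transfer_region(target, source, t_mask, s_mask, strength=1.0):
    """Match the color statistics of one annotated region to the source's."""
    out = target.astype(float).copy()
    for c in range(3):
        t = out[..., c][t_mask]
        s = source[..., c][s_mask].astype(float)
        matched = (t - t.mean()) / (t.std() + 1e-6) * s.std() + s.mean()
        out[..., c][t_mask] = (1 - strength) * t + strength * matched
    return np.clip(out, 0, 255)
```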