Proceedings Volume 6494

Image Quality and System Performance IV

Luke C. Cui, Yoichi Miyake
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 28 January 2007
Contents: 11 Sessions, 29 Papers, 0 Presentations
Conference: Electronic Imaging 2007
Volume Number: 6494

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 6494
  • System Measurement and Modeling: Subjective
  • System Measurement and Modeling: Web-based
  • System Measurement and Modeling: Objective I
  • System Measurement and Modeling: Objective II
  • System Measurement and Modeling: Newer Technologies
  • Image Quality Standards I
  • Image Quality Standards II
  • Image Quality Attributes: Measurement and Modeling
  • Image Quality Attributes: Unique Defects
  • Poster Session
Front Matter: Volume 6494
Front Matter: Volume 6494
This PDF file contains the front matter associated with SPIE Proceedings Volume 6494, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
System Measurement and Modeling: Subjective
Measuring user experience in digital gaming: theoretical and methodological issues
Jari Takatalo, Jukka Häkkinen, Jyrki Kaistinen, et al.
There are innumerable concepts, terms and definitions for user experience. Few of them have a solid empirical foundation. In trying to understand user experience in interactive technologies such as computer games and virtual environments, reliable and valid concepts are needed for measuring relevant user reactions and experiences. Here we present our approach to creating theoretically and methodologically sound methods for quantifying the rich user experience in different digital environments. Our approach is based on the idea that the experience received from content presented with a specific technology is always the result of a complex psychological interpretation process, whose components should be understood. The main aim of our approach is to grasp the complex and multivariate nature of the experience and make it measurable. We present our two basic measurement frameworks, which have been developed and tested on a large data set (n=2182). The 15 measurement scales extracted from these models are applied to digital gaming with a head-mounted display and a table-top display. The results show how it is possible to map relations between the experience, technology variables and the background of the user (e.g., gender). This approach can help to optimize, for example, the content for specific viewing devices or viewing situations.
Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach
We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is especially valuable when the induced quality changes are multidimensional.
Image quality difference modelling of a mobile display
A series of psychophysical experiments using the paired comparison method was performed to investigate various visual attributes affecting the image quality of a mobile display. An image quality difference model was developed that shows high correlation with the visual results. The results showed that Naturalness and Clearness are the most significant attributes among the perceptions. A colour quality difference model based on image statistics was also constructed, and it was found that colour difference and colour naturalness are important attributes for predicting image colour quality difference.
Threshold value for acceptable video quality using signal-to-noise ratio
Noise decreases video quality considerably, particularly in dark environments. In a video clip, noise can be seen as an unwanted spatial or temporal variation in pixel values. The objective of the study was to find a threshold value for the signal-to-noise ratio (SNR) at which the video quality is perceived to be good enough. Different illumination levels for video shooting were studied using both subjective and objective (SNR measurement) methodologies. Five camcorders were selected to cover different sensor technologies, recording formats and price categories. The test material for the subjective test was recorded in an environment simulator, where it was possible to adjust lighting levels. A double staircase test was used as the subjective test method. The test videos for objective measurements were recorded using an ISO 15739 based environment. A correlation was found between the objective and subjective measurements, i.e., between measured SNR and perceived quality. Good enough video quality was reached between SNR values of 15.3 dB and 17.2 dB. With 3CCD and super HAD-CCD technologies, video quality was brighter and less noisy, and the SNR was better in low light conditions compared to the quality with conventional CCDs.
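As an illustration of the objective side of such a study, the following minimal Python sketch computes the basic SNR in decibels for a nominally uniform patch. It is only the textbook signal-to-noise definition, not the full ISO 15739 procedure, which additionally prescribes OECF linearization and a fixed measurement geometry; the patch values below are synthetic placeholders.

import numpy as np

def patch_snr_db(patch: np.ndarray) -> float:
    """SNR of a nominally uniform patch: mean signal over its standard deviation, in dB."""
    signal = patch.mean()
    noise = patch.std(ddof=1)
    return 20.0 * np.log10(signal / noise)

# Example: a synthetic noisy gray patch
rng = np.random.default_rng(0)
patch = 118.0 + rng.normal(scale=18.0, size=(64, 64))
print(f"SNR = {patch_snr_db(patch):.1f} dB")  # lands near the 15-17 dB threshold region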
System Measurement and Modeling: Web-based
Color differences without probit analysis
Color science generally considers color differences from the standpoint of distance metrics. These distance metrics are typically experimental and are based on many paired comparisons and probit analysis. The predominant focus is on the derivation of a uniform metric that is optimized for small color differences around the just-noticeable difference limit. Increasingly sophisticated mathematical modeling is then used to fit a range of laboratory data sets. While this work has yielded invaluable industrial applications, it has perhaps left certain aspects of color differences underexplored. For example, how do non-experts typically describe color differences? What are the natural language characteristics of descriptions of color difference? This paper considers color differences specifically from the nominal or linguistic perspective.
Web-based versus controlled environment psychophysics experiments
Silvia Zuffi, Paolo Scala, Carla Brambilla, et al.
A recent trend in psychophysics experiments related to image quality is to perform the experiments on the World Wide Web with a large number of observers instead of in a laboratory under controlled conditions. This method assumes that the large number of participants involved in a Web investigation "averages out" the parameters that the experiments would require to keep fixed in the same experiment performed, following a traditional approach, under controlled conditions. In this paper we present the results of two experiments we have conducted to assess the minimum value of color contrast to ensure readability. The first experiment was performed in a controlled environment, the second on the Web. The result emerging from the statistical data analysis is that the Web experiment yields the same conclusions as the experiment done in the laboratory.
System Measurement and Modeling: Objective I
Video quality assessment using M-SVD
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure has a good correlation with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
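To make the singular-value idea concrete, the sketch below computes a block-wise distance between the singular values of a reference frame and a distorted frame (a per-frame distortion map in the spirit of the "graphical" measure) and pools it into a single number. The 8x8 block size and the median-based pooling are assumptions for illustration, not necessarily the authors' exact formulation.

import numpy as np

def block_svd_distance(ref: np.ndarray, dist: np.ndarray, b: int = 8) -> np.ndarray:
    """Per-block Euclidean distance between singular values of reference and distorted frames."""
    h, w = ref.shape
    rows, cols = h // b, w // b
    dmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            rb = ref[i*b:(i+1)*b, j*b:(j+1)*b].astype(float)
            db = dist[i*b:(i+1)*b, j*b:(j+1)*b].astype(float)
            s_r = np.linalg.svd(rb, compute_uv=False)
            s_d = np.linalg.svd(db, compute_uv=False)
            dmap[i, j] = np.sqrt(np.sum((s_r - s_d) ** 2))
    return dmap  # distortion map for one frame

def numerical_score(dmap: np.ndarray) -> float:
    """Single number per frame: spread of block distances around their median."""
    return float(np.mean(np.abs(dmap - np.median(dmap))))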
Method of estimating perceived video quality for mobile multimedia application based on full reference framework
Osamu Sugimoto, Shigeyuki Sakazawa, Atsushi Koike
The authors study a method of estimating perceived picture quality for multimedia applications based on the full reference framework. Since multimedia applications usually have less capability in display and communication channels than television broadcasting, an objective quality model should be developed considering a low-resolution, low-framerate video format and low-bit-rate video coding that applies a high compression ratio. The proposed method therefore applies the blockiness of the picture, time variance of MSE and temporal PSNR degradation as indices of objective picture quality. Computer simulation shows that the proposed method can estimate perceived picture quality at a correlation coefficient of above 0.94.
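Two of the three indices named above, per-frame MSE/PSNR and the temporal variance of MSE, can be sketched as follows. The blockiness index and the way the authors combine the indices into a final quality score are not reproduced here; the "worst drop" proxy is an added assumption.

import numpy as np

def frame_mse(ref_frames: np.ndarray, deg_frames: np.ndarray) -> np.ndarray:
    """Per-frame MSE for grayscale sequences shaped (T, H, W)."""
    diff = ref_frames.astype(float) - deg_frames.astype(float)
    return (diff ** 2).mean(axis=(1, 2))

def temporal_indices(ref_frames: np.ndarray, deg_frames: np.ndarray, peak: float = 255.0) -> dict:
    mse = frame_mse(ref_frames, deg_frames)
    psnr = 10.0 * np.log10(peak ** 2 / np.maximum(mse, 1e-12))
    return {
        "mean_psnr": float(psnr.mean()),
        "mse_time_variance": float(mse.var()),            # temporal instability of the error
        "psnr_worst_drop": float(psnr.max() - psnr.min()),  # crude temporal degradation proxy
    }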
System Measurement and Modeling: Objective II
Performance evaluation of digital still camera image processing pipelines
Although its lens and image sensor fundamentally limit a digital still camera's imaging performance, image processing can significantly improve the perceived quality of the output images. A well-designed processing pipeline achieves a good balance between the available processing power and the image yield (the fraction of images that meet a minimum quality criterion). This paper describes the use of subjective and objective measurements to establish a methodology for evaluating the image quality of processing pipelines. The test suite contains images both of analytical test targets for objective measurements, and of scenes for subjective evaluations that cover the photospace for the intended application. Objective image quality metrics correlating with perceived sharpness, noise, and color reproduction were used to evaluate the analytical images. An image quality model estimated the loss in image quality for each metric, and the individual metrics were combined to estimate the overall image quality. The model was trained with the subjective image quality data. The test images were processed through different pipelines, and the overall objective and subjective data was assessed to identify those image quality metrics that exhibit significant correlation with the perception of image quality. This methodology offers designers guidelines for effectively optimizing image quality.
Image quality and automatic color equalization
M. Chambah, A. Rizzi, C. Saint Jean
In the professional movie field, image quality is mainly judged visually. In fact, experts and technicians judge and determine the quality of the film images during the calibration (post-production) process. As a consequence, the quality of a restored movie is also estimated subjectively by experts [26,27]. On the other hand, objective quality metrics do not necessarily correlate well with perceived quality [28]. Moreover, some measures assume that there exists a reference in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference to compare to. That is why subjective evaluation has been the most used and most efficient approach up to now. But subjective assessment is expensive and time consuming, and hence does not respond to the economic requirements of the field [29,25]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. The ACE method, for Automatic Color Equalization [1,2], is an algorithm for unsupervised enhancement of digital images. Like our visual system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment efficaciously. In this paper we present the use of ACE as the basis of a reference-free image quality metric. ACE output is an estimate of our visual perception of a scene. The assumption, tested in other papers [3,4], is that enhancing images with ACE toward the way our visual system would perceive them increases their overall perceived quality. The basic idea proposed in this paper is that ACE output can differ from the input more or less according to the visual quality of the input image. In other words, an image appears good if it is near to the visual appearance we (estimate to) have of it. Conversely, bad-quality images will need "more filtering". Tests and results are presented.
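The shape of such a reference-free metric can be sketched as follows. ACE itself is not implemented here; a simple global histogram equalization is used as a clearly labeled stand-in for the "filtering" step, so the sketch only illustrates the structure of the idea, quality as the inverse of how much an appearance-driven enhancement changes the image.

import numpy as np

def placeholder_enhance(gray: np.ndarray) -> np.ndarray:
    """Stand-in enhancement (NOT ACE): global histogram equalization of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    return (cdf[gray] * 255.0).astype(np.uint8)

def reference_free_score(gray: np.ndarray) -> float:
    """Higher when the image is already close to its enhanced appearance, i.e., needs little 'filtering'."""
    enhanced = placeholder_enhance(gray)
    mean_change = np.abs(enhanced.astype(float) - gray.astype(float)).mean()
    return 1.0 / (1.0 + mean_change)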
A unified framework for physical print quality
Ahmed Eid, Brian Cooper, Ed Rippetoe
In this paper we present a unified framework for physical print quality. This framework includes a design for a testbed, testing methodologies and quality measures of physical print characteristics. An automatic belt-fed flatbed scanning system is calibrated to acquire L* data for a wide range of flat field imagery. Testing methodologies based on wavelet pre-processing and spectral/statistical analysis are designed. We apply the proposed framework to three common printing artifacts: banding, jitter, and streaking. Since these artifacts are directional, wavelet based approaches are used to extract one artifact at a time and filter out other artifacts. Banding is characterized as a medium-to-low frequency, vertical periodic variation down the page. The same definition is applied to the jitter artifact, except that the jitter signal is characterized as a high-frequency signal above the banding frequency range. However, streaking is characterized as a horizontal aperiodic variation in the high-to-medium frequency range. Wavelets at different levels are applied to the input images in different directions to extract each artifact within specified frequency bands. Following wavelet reconstruction, images are converted into 1-D signals describing the artifact under concern. Accurate spectral analysis using a DFT with a Blackman-Harris windowing technique is used to extract the power (strength) of the periodic signals (banding and jitter). Since streaking is an aperiodic signal, a statistical measure is used to quantify the streaking strength. Experiments on 100 print samples scanned at 600 dpi from 10 different printers show high correlation (75% to 88%) between the ranking of these samples by the proposed methodologies and experts' visual ranking.
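The 1-D spectral-analysis step for a periodic artifact such as banding can be sketched as follows. The wavelet decomposition used to isolate each artifact's direction and frequency band is omitted, and the banding frequency limits here are placeholders rather than the paper's values.

import numpy as np
from scipy.signal.windows import blackmanharris

def banding_power(lstar: np.ndarray, dpi: float = 600.0, band: tuple = (0.1, 5.0)) -> float:
    """Peak spectral power of the vertical periodic variation down the page (band in cycles/inch)."""
    profile = lstar.mean(axis=1)                 # collapse each row -> 1-D signal down the page
    profile = profile - profile.mean()
    win = blackmanharris(profile.size)           # window before the DFT to reduce leakage
    spectrum = np.abs(np.fft.rfft(profile * win)) ** 2
    freqs = np.fft.rfftfreq(profile.size, d=1.0 / dpi)   # cycles per inch
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not np.any(in_band):
        return 0.0
    return float(spectrum[in_band].max())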
System Measurement and Modeling: Newer Technologies
Measurement-based objective metric for printer resolution
Jun Hasegawa, Tae-Yoon Hwang, Hyun-Cheol Kim, et al.
There are many possible definitions of printer resolution because it can include various aspects of the exact reproduction of graphic data. This study proposes a metric to evaluate the resolution quality of a laser printer in an objective way with a scanner system. First, various classic methods for describing resolution quality are surveyed and their weak points are shown. Second, the properties needed for a method to realize the ideal evaluation system are shown. Third, the calculation method for the printer resolution metric is described in detail. Finally, the experimental results of this method demonstrate the validity of the proposed metric.
Information distance-based selective feature clarity measure for iris recognition
Iris recognition systems have been shown in testing to be the most accurate biometric systems. However, poor quality images greatly affect the accuracy of iris recognition systems. Many factors can affect the quality of an iris image, such as blurriness, resolution, image contrast, iris occlusion, and iris deformation, but blurriness is one of the most significant problems for iris image acquisition. In this paper, we propose a new method to measure the blurriness of an iris image, called the information distance-based selective feature clarity measure. Unlike other approaches, the proposed method automatically selects the portions of the iris with the most changing patterns and measures their level of blurriness based on their frequency characteristics. A Log-Gabor wavelet is used to capture the features of the selected portions. By comparing the information loss from the original features to blurred versions of the same features, the algorithm decides the clarity of the original iris image. Preliminary experimental results show that this method is effective.
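A minimal sketch of the building blocks named above, a radial log-Gabor response and a blur-sensitive energy comparison, is given below. The filter parameters, the patch-selection step and the actual information-distance computation are assumptions or omissions here, not the authors' implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def log_gabor_energy(patch: np.ndarray, f0: float = 0.12, sigma_ratio: float = 0.55) -> float:
    """Energy of the patch after radial log-Gabor band-pass filtering in the frequency domain."""
    h, w = patch.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                            # avoid log(0); the DC term is zeroed below
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0
    response = np.fft.ifft2(np.fft.fft2(patch.astype(float)) * lg)
    return float(np.sum(np.abs(response) ** 2))

def clarity_proxy(patch: np.ndarray) -> float:
    """> 1 when the patch carries more band-pass energy than a deliberately blurred copy, i.e., it is sharp."""
    blurred = gaussian_filter(patch.astype(float), sigma=2.0)
    return log_gabor_energy(patch) / max(log_gabor_energy(blurred), 1e-12)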
Image Quality Standards I
Driving color management into the office
In much the same way that the automobile industry develops new technologies in racing cars and then brings them to a broader market for commercial and consumer vehicles, CIE Division 8 is trying to spread color management from the graphic arts market into the broader office and home markets. In both areas, the professional environment is characterized by highly motivated, highly trained practitioners who see their activity as an end in itself and have access to expensive technology, state of the art measurement and calibration equipment, and an environment that, if not as sedate as a research laboratory, is controlled and well-understood. In contrast, the broader market features users who have relatively little training at the imaging tasks and see them as a means to an end, which is where their real attention is focused. These users have mass-market equipment and little or no equipment for measurement and calibration. They use their tools (cars or imaging equipment) in a variety of environments under highly unpredictable conditions. The challenge to the automobile and imaging engineering communities is to design practical solutions to work in these real world environments that are less demanding in terms of strict performance, but more demanding in terms of flexibility and robustness.

In the graphic arts, we have standards that tell us how to perform comparisons between printed images (hardcopy) and images displayed on a screen (softcopy). The users are told to use sequential binocular comparisons using memory matching, where they first adapt completely to one viewing condition, study one image, and then adapt to the other viewing condition and compare the second image against their memory of the first. This provides a nicely controlled environment where the observer's state of adaptation is easy to calculate. Unfortunately, in the office and home markets, users insist on comparing the softcopy and hardcopy side by side, and rapidly switching their gaze between the two images. In such a situation, it is much harder to say what the observer's adaptation is. There are two phenomena involved. First is mixed chromatic adaptation, where the white point to which the observer is adapted is a mixture of the white points of the display and the hardcopy. Related to this is what is known as incomplete chromatic adaptation, where the media white point does not appear perfectly white to the observer. The report from TC8-04 provides equations that extend CIECAM02 to account for mixed and incomplete chromatic adaptation.

Those who work in the graphic arts know that if you want to make critical color judgments, you need a controlled lighting environment. Most color management systems are designed around a white point of D50, even for displays. This white point is the default setting for most spectrophotometers, for example. In the office environment, the situation is much less clear. There are many different lamps used in office lighting with many different white points. There have been no broad, multi-national studies of office lighting conditions, so we cannot say what typical office lighting is like, or even if there are any conditions that could be called "typical." TC8-10 is designing such a study. We intend to look at the spectral power distributions and illumination levels found in areas of offices where people tend to look at images. Once we have gathered this data, we will analyze it to see if any trends can be found. There may be similarities within geographic regions, job categories, or seasonal variations that would be useful to know.

CIE Division 8 members hope that by applying the research results from our technical committees, color engineers will be able to help their customers get pleasing results in the uncalibrated, unmeasured, unpredictable environment that is the office workplace.
Appearance can be deceiving: using appearance models in color imaging
As color imaging has evolved through the years, our toolset for understanding has similarly evolved. Research in color difference equations and uniform color spaces spawned tools such as CIELAB, which has had tremendous success over the years. Research on chromatic adaptation and other appearance phenomena then extended CIELAB to form the basis of color appearance models, such as CIECAM02. Color difference equations such as CIEDE2000 evolved to reconcile weaknesses in areas of the CIELAB space. Similarly, models such as S-CIELAB were developed to predict more spatially complex color difference calculations between images. Research in all of these fields is still going strong, and there seems to be a trend towards unification of some of the tools, such as calculating color differences in a color appearance space. Along such lines, image appearance models have been developed that attempt to combine all of the above models and metrics into one common framework. The goal is to allow color imaging researchers to pick and choose the appropriate modeling toolset for their needs. Along these lines, the iCAM image appearance model framework was developed to study a variety of color imaging problems. These include image difference and image quality evaluations as well as gamut mapping and high-dynamic-range (HDR) rendering. It is important to stress that iCAM was not designed to be a complete color imaging solution, but rather a starting point for unifying models of color appearance, color difference, and spatial vision. As such, the choice of model components is highly dependent on the problem being addressed. For example, with CIELAB it is clearly evident that it is not necessary to use the associated color difference equations to have great success as a device-independent color space. Likewise, it may not be necessary to use the spatial filtering components of an image appearance model when performing image rendering. This paper attempts to shed some light on some of the confusion involved in selecting the desired components for color imaging research. The use of image appearance type models for calculating image differences, like S-CIELAB and those recommended by CIE TC8-02, will be discussed. Similarly, the use of image appearance for HDR applications, as studied by CIE TC8-08, will also be examined. As with any large project, the easiest way to success is in understanding and selecting the right tool for the job.
Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products
There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO standards for resolution, well established for consumer digital still cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.
Differential gloss quality scale experiment update: an appearance-based image quality standard initiative (INCITS W1.1)
Yee S. Ng, Chunghui Kuo, Eric Maggard, et al.
Surface characteristics of a printed sample command a parallel group of visual attributes determining perceived image quality beyond color, and they manifest themselves through various perceived gloss features such as differential gloss, gloss granularity, gloss mottle, etc. Extending beyond the scope of ISO 19799, which covers a limited range of gloss levels and printing technologies, the objective of this study is to derive an appearance-based differential gloss quality scale ranging from very low to very high gloss levels, composed of various printing technology/substrate combinations. Three psychophysical experiment procedures were proposed, including the quality ruler method, pair comparison, and interval scaling with two anchor stimuli; the pair comparison procedure was subsequently dropped because of concerns about experiment complexity and data consistency after a preliminary trial study. In this paper, we compare the obtained average quality scale after mapping to the sharpness quality ruler with the average perceived differential gloss via the interval scale. Our numerical analysis indicates a general inverse relationship between perceived image quality and the gloss variation in an image.
Image Quality Standards II
Recent progress in the development of INCITS W1.1: appearance-based image quality standards for printers
Theodore Bouk, Edul N. Dalal, Kevin D. Donohue, et al.
In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard.(1),(2) The resulting W1.1 project is based on a proposal(4) that perceived image quality can be described by a small set of broad-based attributes. There are currently five ad hoc teams, each working towards the development of standards for evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress of the teams addressing the attributes of Macro-Uniformity, Color Rendition, Text and Line Quality and Micro-Uniformity.
Scanners for analytic print measurement: the devil in the details
Eric K. Zeise, Don Williams, Peter D. Burns, et al.
Inexpensive and easy-to-use linear and area-array scanners have frequently been used as substitutes for colorimeters and densitometers in low-frequency (i.e., large area) hard copy image measurement. Increasingly, scanners are also being used for high spatial frequency, image microstructure measurements, which were previously reserved for high-performance microdensitometers. In this paper we address characteristics of flatbed reflection scanners in the evaluation of print uniformity, geometric distortion and geometric repeatability, and the influence of scanner MTF and noise on analytic measurements. Suggestions are made for the specification and evaluation of scanners to be used in the print image quality standards that are being developed.
Image Quality Attributes: Measurement and Modeling
Paper roughness and the color gamut of color laser images
J. S. Arney, Michelle Spampata, Susan Farnand, et al.
Common experience indicates the quality of a printed image depends on the choice of the paper used in the printing process. In the current report, we have used a recently developed device called a micro-goniophotometer to examine toner on a variety of substrates fused to varying degrees. The results indicate that the relationship between the printed color gamut and the topography of the substrate paper is a simple one for a color electrophotographic process. If the toner is fused completely to an equilibrium state with the substrate paper, then the toner conforms to the overall topographic features of the substrate. For rougher papers, the steeper topographic features are smoothed out by the toner. The maximum achievable color gamut is limited by the topographic smoothness of the resulting fused surface. Of course, achieving a fully fused surface at a competitive printing rate with a minimum of power consumption is not always feasible. However, the only significant factor found to limit the maximum state of fusing and the ultimate achievable color gamut is the smoothness of the paper.
Investigation of two methods to quantify noise in digital images based on the perception of the human eye
Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E) [1] does not quantify noise in a way that matches the perception of the human eye, two alternative methods that may be appropriate for quantifying noise perception in a physiological manner have been investigated:
  • the visual noise measurement model proposed by Hung et al. [2] (as described in the informative annex of ISO 15739:2002 [1]), which tries to simulate the process of human vision by using an opponent colour space and contrast sensitivity functions, and uses the CIE L*u*v* 1976 colour space to determine a so-called visual noise value;
  • the S-CIELab model with the CIEDE2000 colour difference proposed by Fairchild et al. [3], which simulates human vision in approximately the same way as Hung et al. [2] but then performs an image comparison based on CIEDE2000.
With a psychophysical experiment based on the just noticeable difference (JND), threshold images could be defined with which the two approaches mentioned above were tested. The assumption is that if a method is valid, the different threshold images should receive the same 'noise value'. The visual noise measurement model results in similar visual noise values for all the threshold images; the method is reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can be used on images with spatial content as well. The S-CIELab model also results in similar colour difference values for the set of threshold images, but with some limitations: for images which contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
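Both approaches share the same broad structure, which can be caricatured as follows: transform to an opponent-like space, low-pass filter each channel as a crude stand-in for the contrast sensitivity functions, and pool the remaining fluctuation into a single value. The opponent transform, filter widths and channel weights below are placeholders, not the coefficients of the ISO 15739 annex or of S-CIELab.

import numpy as np
from scipy.ndimage import gaussian_filter

def visual_noise_proxy(rgb_patch: np.ndarray, weights=(1.0, 0.5, 0.25)) -> float:
    """rgb_patch: float array (H, W, 3) of a nominally uniform patch, values in [0, 1]."""
    r, g, b = rgb_patch[..., 0], rgb_patch[..., 1], rgb_patch[..., 2]
    opponent = np.stack([(r + g + b) / 3.0,      # crude luminance channel
                         r - g,                  # crude red-green channel
                         (r + g) / 2.0 - b])     # crude blue-yellow channel
    # Chromatic channels are blurred more than luminance (their assumed CSF bandwidth is lower).
    sigmas = (1.0, 3.0, 3.0)
    score = 0.0
    for chan, sig, w in zip(opponent, sigmas, weights):
        filtered = gaussian_filter(chan, sigma=sig)
        score += w * filtered.std()
    return float(score)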
Effective pictorial information capacity as an image quality metric
The measurement of the MTF of JPEG 6b and similar compression systems has remained challenging due to their nonlinear and non-stationary nature. Previous work has shown that it is possible to estimate the effective MTF of the system by calculating an 'average' MTF using a noise-based technique. This measurement essentially provides an approximation of the linear portion of the MTF and has been argued to be representative of the global pictorial effect of the compression system. This paper presents work that calculates an effective line spread function for the compression system by utilizing the derived MTFs for JPEG 6b. These LSFs are then combined with estimates of the noise in the compression system to yield an estimate of the Effective Pictorial Information Capacity (EPIC) of the system. Further modifications are made to the calculations to allow for the size and viewing distances of the images, yielding a simple image quality metric. The quality metric is compared with previous data generated by Ford using Barten's Square Root Integral with Noise and Jacobson and Topfer's Perceived Information Capacity. The metric is further tested against subjective results, derived using categorical scaling methods, for a number of scenes subjected to various amounts of photographic-type distortion. Despite its simplicity, EPIC is shown to correlate with results from subjective experimentation. Further improvements are also considered.
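The information-capacity idea that EPIC builds on can be sketched as a Shannon-style integral of log2(1 + signal-to-noise) over spatial frequency, with the signal spectrum shaped by the system's effective MTF. The scene spectrum, the noise model and the viewing-distance scaling that EPIC adds are placeholders or omissions in the sketch below.

import numpy as np

def information_capacity(mtf, noise_power, signal_power, f_max: float = 10.0, n: int = 512) -> float:
    """mtf, noise_power, signal_power: callables of radial spatial frequency (e.g., cycles/mm)."""
    f = np.linspace(1e-3, f_max, n)
    snr = (mtf(f) ** 2) * signal_power(f) / noise_power(f)
    integrand = 2.0 * np.pi * f * np.log2(1.0 + snr)   # isotropic integration over 2-D frequency
    return float(np.sum(integrand) * (f[1] - f[0]))    # bits per unit area (rectangle rule)

# Example with simple placeholder models
capacity = information_capacity(
    mtf=lambda f: np.exp(-f / 4.0),                    # assumed smoothly falling effective MTF
    noise_power=lambda f: np.full_like(f, 1e-3),       # flat noise power spectrum
    signal_power=lambda f: 1.0 / (1.0 + f ** 2),       # 1/f^2-like scene spectrum
)
print(f"capacity = {capacity:.1f} bits per unit area")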
Objective video quality assessment method for evaluating effects of freeze distortion in arbitrary video scenes
Keishiro Watanabe, Jun Okamoto, Takaaki Kurita
With the development of the broadband IP networks, video distribution, streaming, and communication services over networks are rapidly becoming common. To provide these services appropriately, we must design, monitor, and manage their quality based on subjective video quality. Therefore, we need an objective quality assessment method that enables video quality to be evaluated from video characteristics easily and quickly. We have already proposed an objective video quality assessment method for freeze distortion in network video services by considering the perceptual characteristics of short freeze distortion. In this previous method, we derived the objective video quality under the assumption that all freeze lengths in the video are constant. However, it was impossible to derive objective video quality when arbitrary freeze lengths occur because of network degradation in real services. We propose an objective video quality assessment method for arbitrary freeze lengths by extending the previous method and confirm that it can estimate subjective video quality with good accuracy for real applications. The correlation coefficient between subjective and objective quality is 0.94.
Image Quality Attributes: Unique Defects
Comparison of vision-based algorithms for hiding defective sub-pixels
The potential use of vision-based algorithms for hiding defective display pixels is quite appealing. Two prior approaches utilized either the point spread function (PSF) or contrast sensitivity functions to represent effects of the human visual system. A third approach proposed in this paper includes a simple model of human visual masking characteristics to improve theoretical defect hiding effectiveness. A visual experiment indicated all three methods provided significant improvement over uncompensated sub-pixel defects across all color patches and images tested. The masking-based method and an empirically optimized PSF method were more effective due to the masking-type patterns generated. Hiding effectiveness was linearly related to the inverse of the lightness error generated by a defect. For moderate lightness errors, both the PSF and masking-based methods completely hid the sub-pixel defects, with decreasing effectiveness for larger lightness errors. Similar results were found for images and corresponding color patches, though some dependency on the image content was observed for two of the five images. With the addition of a simple visual masking effects model, the iCAM Image Difference Model was found to predict the general performance trends of the three methods with reasonable accuracy.
Scanner motion error detection and correction
Chengwu Cui
By design, a typical desktop scanner scans a document line by line. Therefore, in addition to scanner lens distortion, motion distortion can also be a problem if not controlled well. In particular, when a paper document is fed via an automatic document feeder at high speed, such motion errors can be large and may cause unpleasant visible artifacts in the form of jaggy oblique edges, unevenly compressed horizontal lines, unpleasant moiré patterns, local color misregistration, etc. In this paper, we report a method to measure and characterize scanner motion errors. Motion errors are categorized into two types: slow drift errors and sudden change errors. We investigate their respective impacts on perceived image quality. We further report the principle and method to correct such errors via software.
Poster Session
Accurate and cost-effective MTF measurement system for lens modules of digital cameras
For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of digital cameras compared to that of conventional cameras (with photographic film). For example, the effect of diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm for computing the MTF. Finally, an investigation of the measurement accuracy relative to various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
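The core modulation-ratio computation can be sketched as follows: if the SLM presents a sinusoidal target of known input modulation at a given spatial frequency, the system MTF at that frequency is the measured Michelson modulation of the captured image divided by the input modulation. The frequency sweep and the separation of the lens MTF from the sensor MTF are not shown, and the numbers below are placeholders.

import numpy as np

def michelson_modulation(profile: np.ndarray) -> float:
    """Michelson contrast of a 1-D intensity profile across the sinusoidal target."""
    i_max, i_min = profile.max(), profile.min()
    return float((i_max - i_min) / (i_max + i_min))

def mtf_at_frequency(captured_profile: np.ndarray, input_modulation: float) -> float:
    """MTF at one spatial frequency: measured output modulation over known input modulation."""
    return michelson_modulation(captured_profile) / input_modulation

# Example: a synthetic captured profile whose modulation has dropped from 0.8 to 0.4
x = np.linspace(0, 2 * np.pi * 8, 1024)
captured = 0.5 + 0.2 * np.sin(x)
print(mtf_at_frequency(captured, input_modulation=0.8))  # ~0.5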
Quality improvement by selective regional slice coding implementation in H.264/AVC
In this paper, we propose a new selective regional slice coding method for H.264 that can improve the quality of decoded video. The proposed method achieves better performance than conventional coding methods, and it also preserves shapes in a picture better by removing artifacts adjacent to macroblock edges. In this paper, flexible macroblock ordering (FMO) is used for transmitting the slices. FMO is used for slice coding in packet-loss environments. We propose a modification to improve slice coding in the sequence. The experimental results of the proposed method show improvements in both quality and error robustness. In our experiments, the implemented system improves quality by 0.11 dB to 1.52 dB compared to non-slice coding.
Quality evaluation of the halftone by halftoning algorithm-based methods and adaptive method
Xiaoxia Wan, Dehong Xie, Jinglin Xu
Digital halftoning algorithm is a operation, converting the captured con-tone images to the corresponding binary images supported by most output devices, which makes the tow kinds of images similar as possible. In order to evaluate the halftoning algorithms and the corresponding halftones, a criterion must be needed. In the literature, MSE (the Mean Square Error), SNR (Signal to Noise Ratio) and WSNR(Weight Signal to Noise Ratio) were often used to evaluate the common con-tone images and the halftones. But these methods do not suit to evaluating the quality of the halftones because of the special properties of the halftones by different halftoning algorithms and limitation of assumption of these methods themselves according to many researches. So a series of halftonig algorithm-based methods are proposed, which adapt to the special properties of halftoning algorithms. All of those methods were not adaptive. In the last part of this paper, an adaptive method was propose to evaluate the halftoning algorithms and the corresponding halftones, which is based on the statistical features of the residual image between the original image and the corresponding halftone on the retinal of human eye.