Proceedings Volume 4663

Color Imaging: Device-Independent Color, Color Hardcopy, and Applications VII


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 28 December 2001
Contents: 10 Sessions, 39 Papers, 0 Presentations
Conference: Electronic Imaging 2002
Volume Number: 4663

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Spectral Imaging I
  • Spectral Imaging II
  • Color Management
  • Color Perception
  • Image Processing
  • Color Reproduction
  • Displays
  • Color Management
  • Halftoning I
  • Halftoning II
  • Poster Session
Spectral Imaging I
Innovative method for spectral-based printer characterization
The paper presents an innovative approach to the spectral-based characterization of ink-jet color printers. Our objective was to design a color separation procedure based on a spectral model of the printer, managed here as an RGB device. The printer was a four-ink device, and we assumed that the driver always replaced the black completely when converting from RGB to CMYK amounts of ink. The color separation procedure, which estimates the RGB values given a reflectance spectrum, is based on the inversion of the Yule-Nielsen modified Neugebauer model. To improve the performance of the direct Neugebauer model in computing the reflectance spectrum of the print, given the amounts of ink, we designed a method that exploits the results of the numerical inversion of the Neugebauer model to estimate a correction of the amount of black ink computed on RGB values. This correction can be considered a first step in optimization of the Neugebauer model; it accounts for ink-trapping and the lack of knowledge on how the black is actually replaced by the printer driver.
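For readers unfamiliar with the model named above, the Yule-Nielsen modified spectral Neugebauer equation is reproduced below in its standard textbook form; the notation is the conventional one and is not taken verbatim from the paper.

```latex
% Yule-Nielsen modified spectral Neugebauer model (standard form)
% R(\lambda):   predicted reflectance of the print
% R_i(\lambda): measured reflectances of the Neugebauer primaries (paper and ink overprints)
% w_i:          fractional area coverages (Demichel equations) derived from the ink amounts
% n:            empirical Yule-Nielsen factor accounting for optical dot gain
R(\lambda) = \left( \sum_i w_i \, R_i(\lambda)^{1/n} \right)^{n}
```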
Spectral Imaging II
Spectrum recovery from colorimetric data for color reproductions
Colorimetric data can be readily computed from measured spectral data; however, as illustrated by metameric pairs, the mapping from spectral data to colorimetric values is many-to-one and therefore typically not invertible. In this paper, we investigate inversions of the spectrum-to-colorimetry mapping when the input is constrained to a single color reproduction medium. Under this constraint, accurate recovery of spectral data from colorimetric data is demonstrated for a number of different color reproduction processes. Applications of the spectrum reconstruction process are discussed and demonstrated through examples.
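As a rough illustration of the kind of constrained inversion discussed above (a sketch under stated assumptions, not the authors' method): if the reflectances of a single reproduction medium are assumed to lie near a low-dimensional linear subspace, a spectrum can be recovered by solving a small linear system for the basis weights that reproduce the measured tristimulus values.

```python
import numpy as np

# Minimal sketch: single-medium spectral recovery from CIE XYZ via a PCA basis.
# Function names, dimensions, and the 3-vector basis are illustrative assumptions.
def fit_basis(training_reflectances, n_basis=3):
    """training_reflectances: array of shape (num_samples, num_wavelengths)."""
    mean = training_reflectances.mean(axis=0)
    _, _, vt = np.linalg.svd(training_reflectances - mean, full_matrices=False)
    return mean, vt[:n_basis]                      # mean spectrum and basis vectors

def recover_spectrum(xyz, mean, basis, cmf, illuminant):
    """Recover a reflectance spectrum from CIE XYZ for one medium.
    cmf: (3, num_wavelengths) color matching functions; illuminant: (num_wavelengths,).
    Normalization constants are omitted: xyz is assumed computed with the same scaling."""
    A = cmf @ np.diag(illuminant) @ basis.T        # XYZ is linear in the basis weights
    b = cmf @ (illuminant * mean)                  # contribution of the mean spectrum
    w = np.linalg.solve(A, np.asarray(xyz) - b)    # exact solve when n_basis == 3
    return np.clip(mean + basis.T @ w, 0.0, None)  # clip to physically plausible values
```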
Color image reproduction based on multispectral and multiprimary imaging: experimental evaluation
Masahiro Yamaguchi, Taishi Teraji, Kenro Ohsawa, et al.
Multispectral imaging is a significant technology for the acquisition and display of accurate color information. Natural color reproduction under arbitrary illumination becomes possible using spectral information about both the image and the illumination. In addition, multiprimary color displays, i.e., displays using more than three primary colors, have also been developed for reproducing an expanded color gamut and for discounting observer metamerism. In this paper, we present the concept of multispectral data interchange for natural color reproduction, and experimental results using a 16-band multispectral camera and a 6-primary color display. In the experiment, the accuracy of color reproduction is evaluated in CIE ΔE*ab for both the image capture and display systems. The average and maximum ΔE*ab were 1.0 and 2.1 for the 16-band multispectral camera system, using the 24 Macbeth color patches. For the six-primary color projection display, the average and maximum ΔE*ab were 1.3 and 2.7 with 30 test colors inside the display gamut. Moreover, color reproduction results with different spectral distributions but the same CIE tristimulus values are visually compared, and it is confirmed that the 6-primary display gives improved agreement between the original and reproduced colors.
Recording and rendering for art paintings based on multiband data
Shoji Tominaga, Norihiro Tanaka, Toshinori Matsumoto
The present paper proposes a method for recording and rendering art paintings using only the spectral reflectance data of the object surfaces. A multiband camera system with six spectral channels of fixed wavelength bands is used for spectral imaging. No range finder is used for measuring the surface shape. We show that it is possible to render realistic images of the object for different directions of illumination without using 3D shape data. First, a method for estimating the spectral reflectance of the body reflection component of a rough surface is described. Next, a method is proposed for practical image rendering. The method is based on interpolation among images reproduced for known illumination directions. The color signal for an arbitrary illumination direction is estimated from the color signals observed for three illumination directions. As a result, the image of an art painting illuminated from any direction is rendered using the reflectance data obtained for three illumination directions. We present algorithms for estimating the surface-spectral reflectances of an object and rendering its image for any lighting conditions. An experiment using an oil painting is carried out to demonstrate the feasibility of the proposed method.
Color Management
Color: an exosomatic organ?
Jaap van Brakel, Barbara Saunders
According to the dominant view in cognitive science, in particular in its more popularized versions, color sensings or perceptions are located in a 'quality space'. This space has three dimensions: hue (the chromatic aspect of color), saturation (the 'intensity' of hue), and brightness. This space is structured further via a small number of primitive hues or landmark colors, usually four (red, yellow, green, blue) or six (if white and black are included). It has also been suggested that there are eleven semantic universals - the six colors previously mentioned plus orange, pink, brown, purple, and grey. Scientific evidence for these widely accepted theories is at best minimal, based on sloppy methodology and at worst non-existent. Against the standard view, it is argued that color might better be regarded as the outcome of a social-historical developmental trajectory in which there is mutual shaping of philosophical presuppositions, scientific theories, experimental practices, technological tools, industrial products, rhetorical frameworks, and their intercalated and recursive interactions with the practices of daily life. That is: color, the domain of color, is the outcome of interactive processes of scientific, instrumental, industrial, and everyday lifeworlds. That is: color might better be called an exosomatic organ, a second nature.
Color Perception
Factors affecting lightness partitioning
Three simultaneous equisection or partitioning experiments were conducted to further investigate the perception of lightness. Previous results derived a series of curves and a function relating the lightness of a uniform background with the perceived lightness of a stimulus. These results were specific to a CRT in a dark surround. This paper extends the previous results by testing three other factors. The first experiment varied the stimulus spacing, the second varied the background uniformity, and the third varied the ambient illumination. The results show that the crispening effect and simultaneous contrast are reduced by changing either the stimulus spacing or the background uniformity. The result of changing the ambient illumination is consistent with the surround effect incorporated into various color appearance models.
Refinement of a model for predicting perceived brightness
In this paper, an updated version of a previously proposed model for the prediction of perceived brightness is presented. The model is not only applicable to simple spatial configurations but also to complex scenes and relies entirely on physical and colorimetric data. These are derived from a complex description of the entire scene which eliminates the need for a priori knowledge like the popular reference white concept and others. The model includes an extensive preprocessing stage consisting of a central projection to transform the scene description into a pixel-oriented image, a simple pixel classification to identify the stimulus region and extensive histogram calculations to extract quantiles as characteristic features. Based on the quantiles, which form the output of the preprocessing stage and represent the distribution of luminance levels within the scene, a map has been implemented to calculate a value characterizing the perceived brightness. The development of the model structure was inspired by a series of haploscopic brightness matching experiments, whose experimental data were also used to train and test the model. The results are quite encouraging because the differences between experimental and model-predicted brightness values rarely exceed the range of the natural inter-observer deviations.
3D histograms in color image reproduction
Pei-Li Sun, Jan Morovic
Previous work on color reproduction has shown that all existing solutions perform in ways that are image dependent. A series of experiments has therefore been carried out to systematically study the influence of a range of image characteristics on color reproduction, and it was previously shown that neither image gamuts nor one-dimensional color histograms play an important role. The aim of the present paper is to investigate whether the 3D histogram of an image's colors is an image characteristic that significantly influences the performance of gamut mapping algorithms (GMAs). As this is done with the help of sets of artificial images, where each member of the set has different content but the same 3D color histogram, their use in studying color reproduction is also discussed in this paper. The results of a psychophysical experiment evaluating the influence of 3D color histograms on color reproduction are presented and analyzed in detail. These results show clearly that a significant proportion (approximately 2/3 to 3/4) of the variation in GMA performance caused by differences in image characteristics is due to 3D color histograms.
Image Processing
Simple segmentation algorithm for mixed raster contents image representation
Zhigang Fan, Ming Xu
Mixed Raster Contents (MRC) is a powerful image representation concept in achieving high compression ratios while maintaining high reconstructed image quality. By decomposing an image into different planes, each plane of the raster data can be compressed by different coding schemes according to its individual attributes. This paper presents a simple segmentation algorithm for implementing the MRC model. The segmentation is performed block by block. Each image block is segmented according to its local statistics as well as its surrounding context. The latter is determined by a Markov model.
Color document analysis
The amount of documents in electronic formats has increased dramatically because of the ease of information sharing. Hence, it is highly desirable to design an efficient document compression technique. In this paper, a divide-and-conquer technique is proposed to classify a local region into uni-level, bi-level, and multi-level classes. As a result, different compression approaches can be applied to suitable areas to increase compression efficiency. The color sigma filtering technique is adopted as a preprocessing stage to facilitate the subsequent segmentation and cluster validation processes. Experimental results demonstrate that this technique successfully dichotomizes a color document into regions with similar characteristics.
Picture/graphics classification using texture features
Natural pictures differ from synthetic graphics in many aspects, both in terms of visual perception and image statistics. As a result, image processing algorithms often behave differently on these two types of images. Classifying images and processing them using the method best suited to the image type may yield optimal results. In this paper, we propose to use image smoothness features to determine whether a scanned image was originally a synthetic graphic or a natural picture. Synthetic graphics are usually very smooth. In contrast, natural pictures are often noisier and texture rich. A classifier can therefore be built based on a measurement of the texture energy.
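A minimal sketch of the kind of smoothness-based picture/graphics discrimination described above; the block size, energy measure, and threshold here are illustrative assumptions, not the paper's values.

```python
import numpy as np

def texture_energy(gray, block=16):
    """Mean local variance over non-overlapping blocks of a grayscale image in [0, 1]."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3)).mean()

def classify_picture_vs_graphic(gray, threshold=1e-3):
    """Illustrative rule: very smooth images (low texture energy) -> synthetic graphic."""
    return "graphic" if texture_energy(gray) < threshold else "picture"
```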
Rate-distortion-based segmentation for MRC compression
Effective document compression algorithms require that scanned document images first be segmented into regions such as text, pictures, and background. In this paper, we present a document compression algorithm based on the 3-layer (foreground/mask/background) MRC (mixed raster content) model. This compression algorithm first segments a scanned document image into different classes. Then, each class is transformed into the 3-layer MRC model differently according to the properties of that class. Finally, the foreground and background layers are compressed using JPEG with customized quantization tables. The mask layer is compressed using JBIG2. The segmentation is optimized in the rate-distortion sense for the 3-layer MRC representation. It works in a closed-loop fashion by applying each transformation to each region of the document and then selecting the method that yields the best rate-distortion trade-off. The proposed segmentation algorithm can not only achieve a better rate-distortion trade-off, but also produce more robust segmentations by eliminating those misclassifications that can cause severe artifacts. At similar bit rates, our MRC compression with the rate-distortion based segmentation achieves much higher subjective quality than state-of-the-art compression algorithms such as JPEG and JPEG-2000.
Recent progress in automatic digital restoration of color motion pictures
Majed Chambah, Bernard Besserer, Pierre Courtellemont
Motion pictures represent a precious cultural heritage; however, the chemical support on which they are recorded becomes unstable with time unless they are stored at low temperatures. Some defects affecting color movies, such as bleaching, are beyond the reach of photochemical restoration, so digital restoration is indispensable. We propose an original automatic technique for faded image correction. Bleaching damages one or two chromatic layers, giving a drab image with poor saturation and an overall color cast. Our automatic fading correction technique consists of reviving the colors of the image (color enhancement) and then balancing them.
Color Reproduction
Color gamut reduction techniques for printing with custom inks
Sylvain M. Chosson, Roger David Hersch
Printing with custom inks is of interest both for artistic purposes and for printing security documents such as banknotes. However, in order to create designs with only a few custom inks, a general-purpose high-quality gamut reduction technique is needed. Most existing gamut mapping techniques map an input gamut such as the gamut of a CRT display into the gamut of an output device such as a CMYK printer. In the present contribution, we are interested in printing with up to three custom inks, which in the general case define a rather narrow color gamut compared with the gamut of standard CMYK printers. The proposed color gamut reduction techniques should work for any combination of custom inks and have a smooth and predictable behavior. When the black ink is available, the lightness levels present in the original image remain nearly identical. Original colors with hues outside the target gamut are projected onto the gray axis. Original colors with hues inside the target gamut are rendered as faithfully as possible. When the black ink is not available, we map the gray axis G into a colored curve G' connecting in the 3D color space the paper white and the darkest available color formed by the superposition of the 3 inks. The mapped gray axis curve G'(a) is given by the Neugebauer equations when enforcing an equal amount a of custom inks c1, c2 and c3. Original lightness values are mapped onto lightness values along that curve. After lightness mapping, hue and saturation mappings are carried out. When the target gamut does not incorporate the gray axis, we divide it into two volumes, one on the desaturated side of the mapped gray axis curve G' and the other on the saturated side of the G' curve. Colors whose hues are not part of the target color gamut are mapped to colors located on the desaturated side of the G' curve. Colors within the set of printable hues remain within the target color gamut and retain as much as possible their original hue and saturation.
Measurement-based printer models with reduced number of parameters
Kenneth R. Crounse
Printer modeling can be used to inform halftoning algorithms and improve image quality. Unfortunately, obtaining accurate and useful printer parameters can be difficult. One general and powerful approach introduced previously is to take average reflectance measurements of various printed patterns and, from these measurements, attempt to infer the reflectance contribution of all possible printed neighborhoods. To keep the number of model parameters small when using this approach, the size of the neighborhood should be kept to a minimum and all appropriate symmetries taken into account. Here these ideas are generalized to show how to determine the optimal neighborhood of arbitrary shape and size given some knowledge about the region of dot influence. Another difficulty with the measurement-based approach has been noted previously: even when using a large number of test patterns for measurement, the approach does not yield enough linearly independent equations to uniquely determine the individual contributions. We show that this problem is fundamental for most neighborhood types, regardless of the number, size, or type of patterns printed and measured. Previous efforts have attempted to solve this problem by imposing constraints on the solution space and solving an optimization problem. We introduce a new type of partial ordering on the solution to further tighten the constraints. Finally, we show that for applications which require estimation of the tone-reproduction curve, it is not actually necessary to determine a full solution for the model parameters, but only to estimate the achievable solution subspace. Methods are also being explored to further reduce the number of parameters needed.
How to ensure consistent color quality in inkjet proofing
Stefan Livens, Marc F. Mahy, Dirk Vansteenkiste
We investigate the factors that determine the consistency of color output on digital proofing systems based on inkjet technology. Because of the multitude of factors involved, only a global solution can prove effective. We develop a complete solution consisting of two main modules. The calibration module contains the tools needed to bring a proofer into a standard condition, for which a predefined tonal response can be guaranteed. The calibration encompasses ink limitation and linearisation. It uses visual quantities, which is most sensible for proofing applications. In the case of multi-density inks, the ink mixing is also calibrated in order to remain visually optimal. A convenient procedure is proposed for defining tonal responses, taking into account gamut and bleeding information. The verification module enables the user to monitor the behaviour of the proofer output. It points out problems and also prompts the user to perform suitable actions in order to restore the quality. The intelligence of the module lies in its ability to decide when and how to intervene. This gives the user a practical system that makes it possible to ensure consistency with minimal effort.
Determining the source of unknown CMYK images
This paper presents a method for automatically extracting information about the source of a CMYK file by analyzing the image data. Several features are analyzed. These include an estimation of the type of undercolor removal and gray component replacement used to generate the image, and measures of image saturation and luminance. Together, these features provide a reasonable indication of the device for which the image was intended. While it is difficult to provide the exact source of an arbitrary file, the intention is to identify, with some degree of confidence, a probable class of devices for which the image was prepared (e.g. offset vs. laser vs. inkjet, etc.) so that the burden on the user to make this determination is reduced. Experiments performed to classify CMYK images from xerographic vs offset sources show promising results.
CMYK transformation with black preservation in color management system
A color calibration process and system for CMYK printing with black preservation is described in this paper. This approach is used to create a 4-D lookup table (LUT) offline for a closed-loop workflow, or a DeviceLink ICC profile in real time by a smart color management module (CMM) for an ICC color management workflow. The output K' in the 4-D LUT or the DeviceLink profile is determined by the lightness or density mapping between the input K and the output K' and by the black usage of the output printer, and K' is proportional to K. The calibration process is: 1) convert the input CMYK (e.g. SWOP CMYK) into a device-independent color space (e.g. CIECAM97s Jab, CIE L*a*b*, or MLab); 2) perform gamut mapping; and 3) convert the in-gamut device-independent color into the output CMYK color space. A major difference between this approach and existing methods is that the input K is carried to the second and third steps to determine the amount of output K. Therefore the input K information is not lost during the color transformation. This approach can be applied to both closed-loop color architectures and ICC color management.
Modeling a CMYK printer as an RGB printer
James Zhixin Chang, John C. Dalrymple
In CMYK output devices, many colors are reproducible by more than one combination of CMYK colorants. When a CMYK device is modeled as an RGB device, each RGB combination must produce a unique CMYK combination. Under color removal (UCR) and gray component replacement (GCR) techniques have been traditionally used to calculate these unique combinations. These techniques are simple to implement, but cannot fully utilize the color gamut possible with the additional K colorant. Other brute-force techniques that search the entire CMYK signal space for desirable combinations produce good results but are unsuitable for real-time implementation. In this paper, we introduce a flexible computational structure for converting RGB or CMY signals to CMYK signals. This structure, which can be viewed as an extension of the traditional UCR and GCR techniques, uses multiple sets of 1-D CMYK lookup tables (LUTs) to control the CMYK colorant usage. The LUTs are strategically placed on the center diagonal and boundaries of the input signal cube. By properly designing these LUTs, we obtain a model for RGB-to-CMYK conversion that utilizes most of the available CMYK gamut and also corrects certain non-ideal device behaviors, such as hue shifts along lines from pure colors to black or white.
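For context, the traditional UCR/GCR computation that the paper's LUT-based structure generalizes can be sketched as follows; the single-parameter form below is a textbook simplification, not the authors' multi-LUT design.

```python
def cmy_to_cmyk_gcr(c, m, y, gcr=1.0):
    """Classic gray component replacement: replace a fraction `gcr` of the common
    gray component min(C, M, Y) with black ink. All values are in [0, 1]."""
    k = gcr * min(c, m, y)          # amount of black ink substituted for gray
    return c - k, m - k, y - k, k   # reduced CMY plus the generated K
```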
Displays
Comparative evaluation of color characterization and gamut of LCD versus CRTs
Liquid-crystal displays (LCDs) and cathode-ray tubes (CRTs) are compared with regard to color calibration and color gamut. The applicability of common display calibration models to LCDs and CRTs is experimentally tested. Color-calibration accuracy, ease of calibration, and achievable color gamut are evaluated for the displays. An offset, matrix, and tone-response correction model is found to be suitable for color calibration of LCDs for most applications. The model, however, results in larger calibration error for LCDs than for CRTs, and, unlike for CRTs, a power-law tone-response correction is unsuitable for LCDs. A very significant color variation is seen with change in viewing angle for the prototype LCD employed in the study. The LCD provides a significantly larger color gamut under typical viewing conditions than CRTs, primarily due to higher luminance.
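A minimal sketch of the offset/matrix/tone-response-correction characterization model mentioned above, assuming per-channel 1-D tone curves, a 3x3 primary matrix, and an additive black-level offset; parameter values and names are placeholders.

```python
import numpy as np

def display_rgb_to_xyz(rgb, tone_curves, primary_matrix, black_offset):
    """Forward characterization: digital RGB (each in [0, 1]) -> CIE XYZ.
    tone_curves: three callables mapping digital counts to linear radiometric scalars;
    primary_matrix: (3, 3) matrix whose columns are the XYZ of the primaries at full drive;
    black_offset: XYZ of the display black (flare / light leakage)."""
    linear = np.array([curve(v) for curve, v in zip(tone_curves, rgb)])
    return primary_matrix @ linear + black_offset

# Example tone curve: a CRT-like power law (the paper reports that this form is
# unsuitable for LCDs, where measured per-channel curves should be used instead).
gamma_22 = lambda v: v ** 2.2
crt_like_curves = (gamma_22, gamma_22, gamma_22)
```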
Color characterization issues for TFTLCD displays
Gabriel G. Marcu, Wei Chen, Kok Chen, et al.
This paper describes a few issues related to TFTLCD display color rendition and characterization. The paper points out specific aspects of color rendition that differentiate traditional self-luminous CRT devices from transmissive TFTLCD devices. Two TFTLCD technologies are discussed, TN (Twisted Nematic) and IPS (In-Plane Switching), with and without dual-domain improvement. The paper discusses specific aspects of color rendering for the TN and IPS technologies: the display primaries, color gamut and brightness, the transfer function, the white point, some viewing-angle issues, and the color model. The paper explains why TFTLCD displays offer a perceptually larger gamut than CRT displays, even though their gamut triangle in the CIE chromaticity diagram is smaller than that of a typical CRT. The paper also explains the hue shift of the primaries with input voltage and the variation of luminance with viewing angle, both encountered in some TN TFTLCD displays. The paper confirms that, despite their advantages of high brightness, high contrast, high sharpness, and virtually no geometric image distortion, TFTLCD devices have, with few exceptions, not yet surpassed CRT devices in terms of color capability: CRTs offer a larger chromatic gamut and no color variation with viewing angle.
Color Management
Network color management system using a cluster dividing method
Mutsuko Nichogi, Katsuhiro Kanamori
Recently, the color reproduction of real objects has become more and more important in fields such as telemedicine and internet shopping. To reproduce an object's color under various conditions, the surface spectral reflectance has to be estimated. In this paper we present a novel way to estimate it using a conventional 3-band digital camera. Precise estimation of spectra from a 3-band image is usually very difficult, since metameric blacks exist and a simple camera model is not suitable. To improve the estimation accuracy, we propose dividing the color space into clusters and estimating spectra using different model parameters in each cluster. Clusters are set corresponding to the major objects in the camera images. Next, the estimated spectral image is reproduced on a monitor. When the luminance and color temperature of the specified viewing illuminant and the monitor are different, the observer can hardly perceive the object's real color. Therefore the image is converted using CIECAM97s. This paper shows the results of a simulation using an enlarged image of a human mouth, assuming remote consultation with a dental clinic. In this system, a dentist can perceive the real color of a patient's gums and teeth on the monitor.
Primary preservation in ICC color management system
The saturation rendering intent in the ICC color management system has never been practiced successfully. One of the reasons is that primary matching is expected for this rendering intent, but the current ICC workflow makes this impossible. In this paper, three approaches to achieving primary matching for printing applications are presented. We start by building a printer ICC profile that converts selected first and/or secondary primaries of the default monitor RGB color space to the corresponding printer primaries. If a source monitor RGB color space is different from the default RGB color space, a simple approach is to apply the default RGB color space for the saturation rendering intent for primary preservation. This is because primary matching is more important than color accuracy for the saturation intent, and different source monitor RGB color spaces are not very different. For more accurate color transformation while still preserving the primary matching, the source monitor RGB color space is adjusted so that the selected primaries are fully adapted to those of the default RGB color space, and the adaptation decreases gradually as the hue of a source color moves away from the selected primaries. In a smart CMM environment, primary matching can be achieved by hue rotation and gamut adaptation during real-time linking.
Appearance match between hardcopy and softcopy using lightness rescaling with black point adaptation
Kiyotaka Nakabayashi, Mark D. Fairchild
When we view a softcopy image on a CRT display, the typical CRT white point color temperature is 9300K and the standard ambient light is 5000K. In this case, the black on the CRT screen is lightened, and its chromaticity is far from the achromatic axis because of the reflection of ambient light on the CRT display. Also, when viewing hardcopy in the same environment as the CRT monitor, the chromaticity of the printer black is far from the achromatic axis. If such a printer were the source device and the CRT monitor the destination device, dark printer colors could not be reproduced on the destination device because, for dark colors, the gamut of the destination is smaller than the gamut of the source. Thus, lightness compensation is needed to reproduce dark colors on the destination device. Three methods were considered: (1) a simple lightness compression method, (2) a complete black point adaptation method that consists of mapping to the CRT black, and (3) an incomplete black point adaptation method that is a compromise between methods (1) and (2). Visual experiments were performed to investigate these methods. The results indicated that the appropriate black point adaptation ratio is located between the softcopy and hardcopy black points.
Halftoning I
Stochastic dithering and watermarking
Cesar L. Nino, Gonzalo R. Arce
A new screen construction technique is introduced, described by means of a maximum-likelihood function serving as the constraint of an iterative optimization procedure. This stochastic screen function is spatially based and is defined using the density function of a Markov point process. Further, a new high-capacity data embedding technique that exploits the theory available for digital halftoning design is also introduced. This halftoning and watermarking scheme can be used in authentication and integrity verification applications as well as for general data embedding, where digital signatures could be embedded into halftoned photos in a variety of identification documents. Being image independent, this new technique allows for imperceptible embedding such that no a priori knowledge is needed for recovery of the inserted data.
Halftone screen encoding methods
A halftone screen encoding method provides a means for seamlessly tiling a digital halftone screen to cover the whole image plane. The encoding method has a major effect on the performance of digital halftoning. Three encoding methods - the Holladay algorithm, the PostScript Type 10 halftone dictionary, and single-square encoding - are reviewed. We derive the relationships and develop conversion mechanisms between them. Finally, we compare these encoding methods with respect to implementation complexity and memory cost. The advantages and disadvantages of these methods are discussed.
Compression of screened halftone image using block prediction
Binary image compression is different from contone image compression. Binary image compression ratio varies greatly with halftoning algorithm as well as image type. Most binary compression methods cannot efficiently compress images halftoned using frequency modulation (FM) screening or error diffusion. The blue noise characteristic of the output pattern makes all run-length based compression algorithms ineffective. In this paper, we describe a method that combines prior information about the halftone screen used in the halftone process with local statistics to improve the prediction of the FM screened halftone image. The binary image is first broken into sub-blocks and the mean of each block is calculated. This block mean and the halftone screen are used to generate a predicted image. A residual image, the difference between the predicted image and the original halftone image, can be constructed by performing an exclusive OR between the original image and the predicted image. Since there is strong correlation between this predicted pattern and the original halftone image, the residual image consists mainly of zeros. The residual image can then be compressed with run-length encoding algorithms. We applied this method to a number of test images with both photo and text content; the compression ratio is improved by up to a factor of 10 as compared to a standard run-length encoding algorithm.
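A hedged sketch of the prediction step described above, assuming the threshold screen used in the halftoning process is known and tiled over the image; the block size and the run-length coder that would follow are placeholders, not the paper's exact choices.

```python
import numpy as np

def predict_and_residual(halftone, screen, block=8):
    """halftone: 0/1 integer array; screen: threshold array in [0, 1] of the same shape
    (the halftone screen tiled over the image). Returns predicted halftone and residual."""
    h, w = halftone.shape
    predicted = np.zeros_like(halftone)
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = halftone[i:i + block, j:j + block]
            mean = blk.mean()                                   # estimated gray level of the block
            # Re-halftone the estimated gray level with the known screen to predict the block.
            predicted[i:i + block, j:j + block] = mean > screen[i:i + block, j:j + block]
    residual = np.bitwise_xor(halftone, predicted)              # mostly zeros when prediction fits
    return predicted, residual                                  # residual is then run-length coded
```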
High-speed multilevel halftoning hardware
Michael Thomas Brady, Charles H. Morris III, Joan L. Mitchell
A high speed multilevel color printer required custom halftoning hardware. For this multitone environment we revised traditional halftone thresholding (i.e. turning the output from no intensity to full intensity). Unfortunately, intermediate values did not print large areas reliably. Legacy image data files existed that were already halftoned. To correct these problems, binary halftone methods were modified to produce multi-bit outputs. This was accomplished by using threshold matrices to determine when to allow printing. The input minus the threshold value was used to index into a lookup table to select the output intensity. Design of the downloadable threshold matrices solved the print consistency problem. The custom hardware ensured that a zero input value did not print and a maximum value printed as a saturated output. These solutions were implemented using custom high-speed logic capable of outputting 66 MegaPels/sec.
Multilevel screen design using direct binary search
Screening is an efficient halftoning algorithm that is easy to implement. With multilevel devices, there is a potential to improve the overall image quality by using multilevel screening, which allows us to choose among multiple native tones at each addressable pixel. In this paper, we propose a methodology for multilevel screen design using Direct Binary Search. We refer to one period of the screen as a multitone cell. We define a multitone schedule, which for each absorptance level specifies the fraction of each native tone used in the multitone cell. Traditional multitoning uses only one native tone in smooth areas corresponding to absorptance values near the native tones, an approach which introduces contouring artifacts. To reduce contouring, we employ schedules that use more than one native tone at each absorptance level. Based on the multitone schedule, multitone patterns are designed level-by-level by adding native tones under the stacking constraint. At each level, the spatial arrangement of the native tones is determined by a modified DBS search. We explore several different multitone schedules that illustrate the image quality tradeoffs in multitone screen design.
Halftoning II
New methods for digital halftoning and inverse halftoning
Murat Mese, Palghat P. Vaidyanathan
Halftoning is the rendition of continuous-tone pictures on bi-level displays. Here we first review some of the halftoning algorithms which have a direct bearing on our paper and then describe some of the more recent advances in the field. Dot diffusion halftoning has the advantage of pixel-level parallelism, unlike the popular error diffusion halftoning method. We first review the dot diffusion algorithm and describe a recent method to improve its image quality by taking advantage of the Human Visual System function. Then we discuss the inverse halftoning problem: the reconstruction of a continuous-tone image from its halftone. We briefly review the methods for inverse halftoning, and discuss the advantages of a recent algorithm, namely, the look-up table (LUT) method. This method is extremely fast and achieves image quality comparable to that of the best known methods. It can be applied to any halftoning scheme. We then introduce LUT-based halftoning and tree-structured LUT (TLUT) halftoning. We demonstrate how halftone image quality in between that of error diffusion and Direct Binary Search (DBS) can be achieved depending on the size of the tree structure in the TLUT algorithm, while keeping the complexity of the algorithm much lower than that of DBS.
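A minimal sketch of the LUT inverse-halftoning idea described above, assuming a small 3x3 binary neighborhood whose pattern indexes a table of mean contone values learned from training pairs; the neighborhood shape and the fallback for unseen patterns are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1)]

def pattern_index(halftone, r, c):
    """Pack the 3x3 binary neighborhood around (r, c) into an integer table index."""
    idx = 0
    for dr, dc in OFFSETS:
        idx = (idx << 1) | int(halftone[r + dr, c + dc])
    return idx

def build_lut(halftones, contones):
    """Average the original contone value observed for each binary neighborhood pattern."""
    sums = np.zeros(2 ** len(OFFSETS))
    counts = np.zeros(2 ** len(OFFSETS))
    for ht, ct in zip(halftones, contones):
        for r in range(1, ht.shape[0] - 1):
            for c in range(1, ht.shape[1] - 1):
                i = pattern_index(ht, r, c)
                sums[i] += ct[r, c]
                counts[i] += 1
    # Unseen patterns fall back to mid-gray (an illustrative choice).
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)
```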
Error diffusion with blue-noise properties for midtones
Pierre-Marc Jodoin, Victor Ostromoukhov
In this contribution, a new error-diffusion algorithm is presented, which is specially suited for intensity levels close to 0.5. The algorithm is based on the variable-coefficient approach presented at SIGGRAPH 2001. The main difference with respect to the latter consists of the objective function that is used in the optimization process. We consider visual artifacts to be anomalies (holes or extra black pixels) in an almost regular structure such as a chessboard. Our goal is to achieve blue-noise spectral characteristics in the distribution of such anomalies. Special attention is paid to the shape of the anomalies, in order to avoid very common artifacts. The algorithm produces fairly good results for visualization on displays where the dot gain of individual pixels is not large.
Fast error diffusion
New implementations of error diffusion algorithms are proposed. These methods replace the complex real-time computations with a series of table lookups and a summation. They lower the computational cost and increase the processing speed. Implementation details are given in this paper. These implementations make level-dependent error diffusion possible, in which an optimal error filter is used for each range of tone levels. Other advantages of these methods are also discussed.
Tone-dependent error diffusion
We present an enhanced error diffusion halftoning algorithm for which the filter weights and the quantizer thresholds vary depending on input pixel value. The weights and thresholds are optimized based on a human visual system model. Based on an analysis of the edge behavior, a tone dependent threshold is designed to reduce edge effects and start-up delay. We also propose an error diffusion system with parallel scan that uses variable weight locations to reduce worms.
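A hedged sketch of error diffusion with tone-dependent weights and thresholds, as described above; the weight and threshold tables here are placeholders (the paper optimizes them against a human visual system model), and the raster scan order is an assumption.

```python
import numpy as np

def tone_dependent_error_diffusion(img, weights_for_tone, threshold_for_tone):
    """img: grayscale in [0, 1]. weights_for_tone(t) -> dict of (dr, dc): weight summing to 1;
    threshold_for_tone(t) -> scalar threshold. Both are looked up per input pixel value."""
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    h, w = work.shape
    for r in range(h):
        for c in range(w):
            tone = img[r, c]                       # lookups are keyed on the *input* value
            out[r, c] = 1.0 if work[r, c] >= threshold_for_tone(tone) else 0.0
            err = work[r, c] - out[r, c]
            for (dr, dc), wt in weights_for_tone(tone).items():
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    work[rr, cc] += err * wt       # diffuse the quantization error forward
    return out

# Placeholder tables: Floyd-Steinberg weights and a mid-gray threshold for every tone.
fs_weights = lambda t: {(0, 1): 7/16, (1, -1): 3/16, (1, 0): 5/16, (1, 1): 1/16}
mid_threshold = lambda t: 0.5
```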
AM/FM halftoning: a method for digital halftoning through simultaneous modulation of dot size and dot placement
Conventionally, digital halftoning is accomplished by either changing the size of printed dots or changing the relative density of dots on the page. These two approaches are analogous to amplitude modulation (AM) and frequency modulation (FM) in communications. A typical AM halftoning method, such as clustered-dot screening, has very low computational requirements and good print stability. However, it suffers from low spatial resolution and Moire artifacts. Alternatively, popular FM halftoning methods, such as error diffusion, can achieve high spatial resolution and are free of Moire artifacts but lack the print stability required for electrophotographic printing. In this paper, we present a new class of halftoning algorithms that simultaneously modulate both the size and density of printed dots. We call this new class of algorithms AM/FM halftoning. The major advantages of AM/FM halftoning are: (1) better stability in shadow areas than dispersed-dot methods, through the formation of larger dot clusters; (2) better Moire resistance than clustered-dot screens, through irregular dot placement; and (3) the ability to systematically optimize dot size and density to produce the best possible print quality at each gray level. A specific implementation of AM/FM halftoning is developed for use with electrophotographic printers having pulse width modulation (PWM) technology. We present results using dot size and dot density curves obtained through measurement-based optimization, and demonstrate that AM/FM halftoning achieves high spatial resolution, smooth halftone textures, good printing stability, and Moire resistance.
Poster Session
Thermo autochrome printer TPH heating compensation method
Yen-Hsing Wu, Hong-Ju Tsai
In a color thermo-autochrome (TA) printer, a thermal head is used in contact with thermo-sensitive paper for the purpose of recording a full-color image on it. The image is formed by sequentially recording three color layers on the paper. In developing the color of each thermo-sensitive layer, an amount of heating energy called the bias heating energy is required to initiate the coloring process. By adding an additional amount of heating energy, called the image heating energy, the color can be produced with the desired density. However, the colors of the three layers are difficult to adjust to obtain good print quality, since the heating energy accumulates. In what follows, methods to determine the heating energy of each color layer and to compensate for the accumulated heating energy are proposed. The experimental results show that the color quality of TA printing can be significantly improved accordingly.
Fundamental study on electromechanics of particles for printing technology
Hiroyuki Kawamoto, Nobuyuki Nakayama
The following basic research on the electromechanics of particles is being carried out in our laboratory, because it is a basis of digital printing technology: 1) Experimental, numerical, and theoretical investigations have been conducted on the statics of magnetic bead chains in a magnetic field. Chains formed on a solenoid coil were observed, and chain lengths and slant angles were measured. Stable configurations of chains were discussed theoretically in terms of potential energy minimization. Numerical simulations were also performed using the Distinct Element Method, considering magnetic interaction forces, and the results were compared with the experimental results. 2) The dynamics of the magnetic bead chain has also been investigated. Chains were vibrated by sine-wave excitation and an impact testing method, and the resonance frequency was deduced. The experimental results were confirmed by theoretical consideration and by numerical calculation with the Distinct Element Method. 3) A technique to transport dielectric particles is being developed utilizing a traveling electrostatic field. A fundamental study is being carried out.
Analysis of capacitance sensitivity distributions and image reconstruction in electrical capacitance tomography
Deyun Chen, Guibin Zheng, Xiaoyang Yu, et al.
This paper describes a tomographic method based on an 8-electrode capacitance sensor. It discusses the application of the finite element method in electrical capacitance tomography, and a finite element model of the 8-electrode capacitance sensor is established. Capacitance sensitivity distributions can be analyzed with this method. Satisfactory images can be reconstructed by using the capacitance sensitivity distributions as a priori information. This provides powerful support for further application research.
Parallel error diffusion
In this paper, we give a brief introduction to parallel error diffusion and classify various approaches into three classes - partitioned-area, concurrent, and inter-area error diffusion. These approaches, including intra-dot diffusion, space-filling curve traversal, Fibonacci-like sequences, multi-center dot diffusion, dispersed-dot diffusion, concurrent processing, neural networks, and inter-dot diffusion, are discussed and examples are given. Comparisons are made with other deterministic error filters (e.g. Floyd-Steinberg, Schroeder, Stucki, and Shiau-Fan) and with the corresponding clustered-dot and dispersed-dot ordered dithers. We also provide several extensions to the existing techniques. The conditions for parallel line error diffusion of horizontal, vertical, and diagonal lines are identified, and new parallel error diffusions are developed.
Development of goniophotometric imaging system for recording reflectance spectra of 3D objects
Kazutaka Tonsho, Y Akao, Norimichi Tsumura, et al.
In recent years, there has been a demand for systems to capture 3D archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an internet or virtual museum via the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multispectral camera and a 3D digitizer. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under 7 different illumination angles. The 5-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles, and viewpoints are rendered using OpenGL with the 3D shape and the gonio-photometric properties.