Proceedings Volume 3648

Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts IV


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 22 December 1998
Contents: 13 Sessions, 59 Papers, 0 Presentations
Conference: Electronic Imaging '99 (1999)
Volume Number: 3648

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Modeling for Hardcopy
  • Spectral Imaging and Cameras
  • Gamut Mapping Algorithms
  • Color Mapping Algorithms
  • Workflow
  • Colorimetry I
  • Colorimetry II
  • Color Quantization and Compression
  • Internet Imaging
  • Tone Reproduction and Image Quality
  • Halftoning I
  • Halftoning II
  • Modeling for Hardcopy
  • Posters
Modeling for Hardcopy
Robust and fast numerical color separation for mathematical printer models
Werner Praefcke
Halftone color printers can be modeled mathematically rather accurately using the Neugebauer equations or the Yule-Nielsen equations. However, color separation, which is the inverse of the printer model, is not an easy task. The most appropriate methods for obtaining the printer driving parameters for a given color comprise lookup tables, iterative inversion of the model, or an approximation. To avoid the memory requirements of a lookup table, we investigate the suitability of the latter two approaches for inverting the Neugebauer and Yule-Nielsen models in three- and four-color printing. The iterative approach is based on a Newton gradient scheme; the approximate approach is a refinement of the very simple rule CMY = 1 - RGB. We compare both approaches to the optimally achievable results and show that they are only slightly inferior.
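The inversion described above can be sketched as a forward Neugebauer model with Demichel area weights, inverted by Newton iteration started from the crude CMY = 1 - RGB guess. The primaries below are illustrative stand-ins, not measured printer data.

```python
import numpy as np

# Illustrative Neugebauer primaries for a 3-ink (CMY) printer: linear-RGB
# reflectances of the 8 overprint combinations (made-up stand-in values).
PRIMARIES = {
    (0, 0, 0): np.array([1.00, 1.00, 1.00]),  # bare paper
    (1, 0, 0): np.array([0.05, 0.60, 0.90]),  # cyan
    (0, 1, 0): np.array([0.85, 0.10, 0.70]),  # magenta
    (0, 0, 1): np.array([0.90, 0.85, 0.10]),  # yellow
    (1, 1, 0): np.array([0.04, 0.07, 0.60]),  # cyan + magenta
    (1, 0, 1): np.array([0.04, 0.50, 0.08]),  # cyan + yellow
    (0, 1, 1): np.array([0.75, 0.08, 0.06]),  # magenta + yellow
    (1, 1, 1): np.array([0.03, 0.04, 0.05]),  # three-ink overprint
}

def neugebauer(a):
    """Forward Neugebauer model: ink coverages a = (c, m, y) -> predicted RGB."""
    c, m, y = a
    rgb = np.zeros(3)
    for (ic, im, iy), p in PRIMARIES.items():
        # Demichel weight: statistical area fraction of this overprint combination
        w = (c if ic else 1 - c) * (m if im else 1 - m) * (y if iy else 1 - y)
        rgb += w * p
    return rgb

def separate(target_rgb, iters=20):
    """Invert the model by Newton iteration from the crude guess CMY = 1 - RGB."""
    target = np.asarray(target_rgb, float)
    a = np.clip(1.0 - target, 0.0, 1.0)
    eps = 1e-6
    for _ in range(iters):
        f = neugebauer(a) - target
        J = np.empty((3, 3))        # numerical Jacobian of the forward model
        for j in range(3):
            da = a.copy()
            da[j] += eps
            J[:, j] = (neugebauer(da) - neugebauer(a)) / eps
        a = np.clip(a - np.linalg.solve(J, f), 0.0, 1.0)
    return a
```

Round-tripping a known coverage, e.g. `separate(neugebauer((0.3, 0.5, 0.2)))`, recovers the coverages to within the Newton tolerance.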
Prediction of colorimetric measurements in newspaper printing using neural networks
Hansjoerg Kuenzli, Anja Noser, Marcel Loher, et al.
For optimum quality control in newspaper printing, patches printed with different combinations of CMYK should be colorimetrically measured. However, control strips with a large number of color patches cannot be included in the layout of newspapers because of the additional space they need. Moreover, such control strips require a considerable amount of time to measure. To overcome these problems it is preferable to print and analyze only a few color patches and to extract as much information as possible from them. A method using neural networks has been developed to predict color values of two- and three-color overprints from those of the primary inks, as well as color values of the primary inks from the two- and three-color overprints. A feature of the method is that spectra are predicted, from which colorimetric and densitometric values are derived. The accuracy of the predicted CIELAB values of the primaries from those of the overprints is typically within ΔE*ab = 1 for the two-color overprints and within ΔE*ab = 2 for the three-color overprints.
Duplicating the fine art reproduction process: the technology used for guerilla ink-jet printing
Accurate, automatic color reproduction is the goal of much of color technology. However, there is also a need to improve reproduction along only the luminous, or gray, axis. Quadtone reproduction takes advantage of the four device CMYK color planes to provide greater gray-scale depth within the limitations of 8-bit-per-channel bandwidth. 'Quadtone' refers to photos reproduced using four tones of the same colorant: the printed imposition of four carefully selected shades of ink that results in a greater number of densities. Guerilla printing is a collection of algorithms using the CMYK channels to simulate traditional photography on an inkjet printer. It increases density values, defines detail, and produces near continuous-tone screens.
Spectral Imaging and Cameras
Spectral imaging by a multichannel camera
A set of multichannel camera systems and algorithms is described for recovering both the surface spectral-reflectance function and the illuminant spectral-power distribution from spectral imaging data. We show a camera system with six spectral channels of fixed wavelength bands, built from a monochrome CCD camera, six different color filters, and a personal computer. The dynamic range of the camera is extended for sensing the high intensity levels of highlights. We assume that the object surface in a scene, an inhomogeneous dielectric material, is described by the dichromatic reflection model. The process for estimating the spectral information is composed of several steps: (1) finite-dimensional linear model representation of wavelength functions, (2) illuminant estimation, (3) data normalization and image segmentation, and (4) reflectance estimation. The reliability of the camera system and the algorithms is demonstrated in an experiment. Finally, a new type of system using liquid crystal filters is briefly introduced.
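The finite-dimensional linear model step can be sketched as a small least-squares recovery: with a 3-basis model for reflectance, the six channel responses overdetermine the basis weights. The sensitivities, illuminant, and basis below are stand-ins, not the paper's measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 31                       # wavelength samples, e.g. 400-700 nm in 10 nm steps
S = rng.random((6, N))       # stand-in sensitivities of the six camera channels
E = np.ones(N)               # stand-in equal-energy illuminant

# Finite-dimensional linear model: reflectance = B @ w (3 stand-in basis vectors)
lam = np.linspace(0.0, 1.0, N)
B = np.stack([np.ones(N), lam, lam ** 2], axis=1)    # shape (N, 3)

def capture(r):
    """Simulate the six-channel camera: response = sensitivities @ (illuminant * reflectance)."""
    return S @ (E * r)

def estimate_reflectance(v):
    """Recover reflectance under the linear model: least-squares solve of
    S @ diag(E) @ B @ w = v, then reconstruct r = B @ w."""
    A = S @ (E[:, None] * B)             # 6 x 3 system matrix
    w, *_ = np.linalg.lstsq(A, v, rcond=None)
    return B @ w
```

For reflectances that lie in the span of the basis, the recovery is exact; real reflectances incur a model error that the choice of basis controls.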
Characterization of novel three- and six-channel color moire free sensors
Patrick G. Herzog, Dietmar Knipp, Helmut Stiebig, et al.
This paper describes a new type of multichannel color sensor with the special property that all channels of a pixel sit at the same spatial location, one on top of the other. This arrangement is accomplished by stacking three amorphous thin-film detectors on a glass substrate. It has the advantage of avoiding the color moiré effect, which produces large color errors when objects of high spatial frequency are captured with a conventional multichannel sensor array. The new technique enables the design of a three-channel sensor as well as a six-channel sensor. In the latter case, color is captured in two 'shots' by changing the bias voltages. The colorimetric characterization of the sensors is presented, including multiple polynomial regression for both tristimulus and spectral reconstruction, and the smoothing inverse for spectral reconstruction. The results obtained with different types of regression polynomials, different sensors, and different characterization methods are compared. They show that the three-channel color moiré free sensors produce good accuracy, while the six-channel sensor's performance is striking.
New block-matching-based color interpolation algorithm
In an electronic color image capturing device using a single CCD or CMOS sensor, the color information is usually acquired as three sub-sampled color planes, Red (R), Green (G) and Blue (B). Full-resolution color is subsequently generated from this sub-sampled image using a suitable 'color interpolation' methodology. The color accuracy and appearance of the image are significantly affected by the color interpolation algorithm used to generate the full-resolution color image. In this paper, we present a new block-matching-based algorithm for color interpolation. The computational complexity of this algorithm is very low and hence suitable for real-time implementation in a portable image capture device, e.g. a digital camera. The proposed algorithm produces similar or better quality color images compared to most of the known color interpolation algorithms in the literature. We present a comparison of the performance of the proposed algorithm with median interpolation and bilinear interpolation, which are commonly used in practice.
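As a point of reference, the bilinear interpolation mentioned above can be sketched for the green plane of an RGGB Bayer mosaic; this is the common baseline method, not the paper's block-matching algorithm.

```python
import numpy as np

def bilinear_green(raw):
    """Bilinear interpolation of the green plane of an RGGB Bayer mosaic.

    In RGGB, green samples sit where (row + col) is odd; at red/blue sites the
    missing green value is the average of the available 4-neighbors.
    """
    h, w = raw.shape
    g = raw.astype(float).copy()
    for r in range(h):
        for c in range(w):
            if (r + c) % 2 == 0:                      # red or blue site
                nbrs = [raw[rr, cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < h and 0 <= cc < w]
                g[r, c] = sum(nbrs) / len(nbrs)
    return g
```

On a smooth horizontal ramp the interior estimates are exact, which is why bilinear interpolation fails mainly near edges and fine texture, the cases block matching targets.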
Gamut Mapping Algorithms
Color gamut measurements and mapping: the role of color spaces
Before embarking on an extensive set of color gamut mapping experiments, we were distracted by wanting to understand uniform color spaces. CIE L*a*b* is almost universally used as a colorimetric uniform color space: it is found most often in the literature and is used by many instrument makers as colorimetric output. A wide variety of papers imply small errors in L*a*b*. Nevertheless, the sense of these articles is that the errors are small local perturbations about an average response that is essentially correct. Many papers describe small improvements using modified equations. This paper presents measurements and literature data that show surprisingly large discrepancies between CIE L*a*b* and isotropic, observation-based color spaces such as Munsell.
Three-dimensional gamut mapping using various color difference formulae and color spaces
Masahiko Ito, Naoya Katoh
Gamut mapping is a technique for transforming out-of-gamut colors to the inside of the output device's gamut. Effective mapping algorithms are essential for WYSIWYG color reproduction. In this paper, 3D gamut mapping using various color difference formulae and color spaces is considered. Visual experiments were performed to evaluate which combination of color difference formula and color space for gamut mapping was most preferred for five images. The color difference formulae used in the experiments were ΔE*ab, ΔE*uv, ΔE94, ΔECMC, ΔEBFD, and ΔEwt. The color spaces used in the experiments were CIELAB, CIELUV, CIECAM97s, IPT and NC-IIIC. A clipping method was used that maps all out-of-gamut colors to the surface of the gamut; no change was made to colors inside the gamut. It was found that gamut mapping using ΔE94, ΔECMC, and ΔEwt was effective in the CIELAB color space. For mapping images containing a large portion of blue colors, ΔEBFD and ΔE*uv were found to be more effective. ΔE*ab was least preferred for all images. With respect to color spaces, gamut mapping performed in the CIELUV color space was superior to all other color spaces for the blue region. We conclude that ΔE94-LUV and ΔEBFD-LAB are the most useful combinations of color difference formula and color space for gamut mapping, if a single combination is to be applied universally.
Image lightness rescaling using sigmoidal contrast enhancement functions
In color gamut mapping of pictorial images, the lightness rendition of the mapped images plays a major role in the quality of the final image. For color gamut mapping tasks where the goal is to produce a match to the original scene, it is important to maintain the perceived lightness contrast of the original image. Typical lightness remapping functions such as linear compression, soft compression, and hard clipping reduce the lightness contrast of the input image. Sigmoidal remapping functions were utilized to overcome the natural loss in perceived lightness contrast that results when an image from a full dynamic range device is scaled into the limited dynamic range of a destination device. These functions were tuned to the particular lightness characteristics of the images used and the selected dynamic ranges. The sigmoidal remapping functions were selected based on an empirical contrast enhancement model developed from the results of a psychophysical adjustment experiment. The results of this study showed that it is possible to maintain the perceived lightness contrast of images by using sigmoidal contrast enhancement functions to selectively rescale them from a source device with a full dynamic range into a destination device with a limited dynamic range.
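A sigmoidal remapping of the kind described can be sketched with a normalized logistic function. The inflection point and slope below are illustrative defaults; the paper tunes them to each image's lightness statistics and the dynamic ranges involved.

```python
import math

def sigmoid_rescale(L, L_min_out, L_max_out, x0=50.0, k=0.08):
    """Map lightness L* in [0, 100] into [L_min_out, L_max_out] with a
    normalized logistic curve. x0 (inflection) and k (slope) are illustrative
    parameters, tuned per image in the paper's approach."""
    s = 1.0 / (1.0 + math.exp(-k * (L - x0)))
    s0 = 1.0 / (1.0 + math.exp(-k * (0.0 - x0)))      # curve value at L = 0
    s1 = 1.0 / (1.0 + math.exp(-k * (100.0 - x0)))    # curve value at L = 100
    t = (s - s0) / (s1 - s0)                          # normalized to [0, 1]
    return L_min_out + t * (L_max_out - L_min_out)
```

Unlike linear compression, the midtone slope of this curve exceeds the average slope, which is what preserves perceived contrast at the cost of shadow and highlight detail.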
Categorical color mapping for gamut mapping: II. Using block average image
Hideto Motomura
Several improvements to categorical color mapping are introduced in this paper. Categorical color mapping chooses the mapping point that best preserves the categorical relationship between a source device and a destination device. The first improvement is that the Simplex iteration technique was applied to categorical color mapping in order to search for the best mapping point in the destination space. The second improvement is that categorical color mapping is designed in the CIELAB space. In addition, a block-average image was used as the color patches for categorical color naming, leading to a new gamut mapping strategy that attempts image gamut mapping instead of device gamut mapping. These two improvements and the new approach are applied to gamut mapping from a CRT to a printer.
Using VRML to communicate device gamuts
VRML, the Virtual Reality Modeling Language, offers a powerful tool for rendering and distributing device gamuts in a 3D format. The format is an open standard and is supported by multiple plug-ins for web browsers. VRML also allows interactive exploration of the gamuts, including unusual views such as the interior of the gamut. In addition, standard camera viewpoints can be used to view the gamut from a subset of useful fixed vantage points. This paper covers basic gamut rendering using VRML, including the use of a sufficient surface sampling rate. Some options to enhance the gamut rendering are also discussed, and an example comparing three device gamuts is presented.
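A minimal version of such a gamut rendering can be sketched by emitting a VRML97 PointSet with one colored point per gamut sample; a full gamut surface rendering would use an IndexedFaceSet instead. The function name and argument layout are illustrative.

```python
def gamut_points_vrml(rgb_colors, coords):
    """Emit a minimal VRML97 scene: a PointSet with one colored point per
    gamut sample. `coords` are 3D color-space positions (e.g. L*, a*, b*),
    `rgb_colors` the corresponding display colors in [0, 1]."""
    pts = ", ".join("%.3f %.3f %.3f" % tuple(p) for p in coords)
    cols = ", ".join("%.3f %.3f %.3f" % tuple(c) for c in rgb_colors)
    return ("#VRML V2.0 utf8\n"
            "Shape { geometry PointSet {\n"
            "  coord Coordinate { point [ %s ] }\n"
            "  color Color { color [ %s ] }\n"
            "} }\n" % (pts, cols))
```

Writing the returned string to a `.wrl` file produces a scene any VRML97-capable browser plug-in can orbit and inspect interactively.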
Color Mapping Algorithms
Metrication for color imaging devices
Color devices have become commonplace to many users including those at home. The color revolution began about 1990 with the ready availability of color displays and affordable color printers.
Object-to-object color mapping by image segmentation
Hiroaki Kotera, Hung-Shing Chen, Tetsuro Morimoto
An object-to-object color mapping strategy that depends on the image's color content is proposed. A pictorial color image is segmented into different object areas with clustered color distributions. Euclidean or Mahalanobis color distance measures, and a Bayesian decision rule based on the maximum likelihood principle, are introduced for the image segmentation. After segmentation, the pixels of each segment are projected onto principal component space by the Hotelling transform, and color mappings are performed so that the principal components match between the individual objects of the original and printed images. Experimental results on automatic color correction for inkjet prints are reported. The paper discusses the color correction effects of PCA matching and the reproduction errors in relation to the segmentation methods.
Photographic prints from digital still cameras
The expansion of digital still cameras into the consumer market has provided opportunities for photographic amateurs to enjoy new digital imaging. Fuji Photo Film Co., Ltd. started the Fuji Digital Imaging Service (F-DI) in 1997. The F-DI DSC print service is one of its features, providing photographic prints from the data obtained by digital still cameras. Below, the concept of the F-DI DSC print service is described and the algorithm adopted for printing is discussed. Since digital still cameras are designed on the assumption that images will be displayed on CRT monitors, new technical developments were necessary for this particular service. New algorithms, including a scene color balance algorithm and a density correction algorithm, are described. The conditions necessary to obtain excellent prints are also explained.
Interpolation errors and ripple artifacts of 3D lookup table method for nonlinear color conversions
This paper describes the theoretical estimation of errors caused by 3D table interpolation for nonlinear color conversions. The interpolation accuracy is affected by three factors: the geometric interpolation technique, the distance d between the lattice points in the 3D table, and the degree of nonlinearity of the color conversion. This paper focuses on the nonlinearity. Two types of error, conventional error and ripple artifact error, are defined, and a gray gradation is used as the input image. Four types of nonlinear color conversions, categorized as ripple type or non-ripple type, are tested using the trilinear interpolation technique. The error decreases as d decreases; however, local gamma conversion shows a very large ripple that does not decrease with d.
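The setup described, a 3D LUT sampled on a lattice and evaluated by trilinear interpolation against the exact nonlinear conversion, can be sketched as follows. A simple global gamma stands in for the conversions tested in the paper.

```python
import numpy as np

def gamma_convert(rgb, g=2.2):
    """The nonlinear conversion being approximated (a simple global gamma here)."""
    return np.asarray(rgb, float) ** g

def build_lut(n, f):
    """Sample f on an n x n x n lattice over the RGB unit cube."""
    axis = np.linspace(0.0, 1.0, n)
    lut = np.empty((n, n, n, 3))
    for i, r in enumerate(axis):
        for j, gr in enumerate(axis):
            for k, b in enumerate(axis):
                lut[i, j, k] = f((r, gr, b))
    return axis, lut

def trilinear(axis, lut, rgb):
    """Standard trilinear interpolation inside the lattice cell containing rgb."""
    n = len(axis)
    pos = np.asarray(rgb) * (n - 1)
    idx = np.minimum(pos.astype(int), n - 2)          # lower cell corner
    t = pos - idx                                     # fractional position
    out = np.zeros(3)
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((t[0] if di else 1 - t[0]) *
                     (t[1] if dj else 1 - t[1]) *
                     (t[2] if dk else 1 - t[2]))
                out += w * lut[idx[0] + di, idx[1] + dj, idx[2] + dk]
    return out
```

Evaluating a gray ramp against the exact conversion shows the conventional error shrinking as the lattice is refined, exactly the behavior the paper quantifies (and which ripple-type conversions violate).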
Workflow
Digital graphic networks
The graphic arts industry is increasingly reliant on telecommunications for the transfer of digital data for media production. There are, however, many other aspects of the business process between customers and suppliers that are suited to network-based interaction. The transaction between customer and producer can be separated into four data streams: briefing, content creation, production, and approval. Each of these data streams has specific requirements, which lead to a matrix of needs for the different parties in the transaction. Proposals were made for meeting these needs through a network service dedicated to the graphic arts. British Telecom and Scitex are currently offering a service based on these proposals, known as Vio. A network service of this kind can be extended to include a range of other services and third-party interactions, such as automated transfer of media production objects from third-party content providers. The opportunities for users and third-party developers to build custom network applications, and interfaces between the network and internal production processes and monitoring systems, using open IP-based methods are described.
Regulating workflow speed through product architecture: experiences from the Swedish magazine industry
Christopher Rosenqvist, Mats Lindgren
Coordination of content providers, producers and distributors has become increasingly important in the magazine industry when working with short life-cycle products. It is therefore interesting to study the product development and manufacturing process of magazines because of its short lead times. Production difficulties in the workflow process are often consequences of the product architecture. The aim of this research was to establish a concept for magazine architecture that could lead to substantial lead-time reduction and improved product quality. In this research we studied magazine production at Sweden's major magazine producer, from both the customer side and the manufacturing side. Monthly magazines with long-term records of workload and production workflow were selected. The analysis was based on production data from the graphic arts service providers and from the editorial staff, combined with open-ended interviews. The analysis shows that a rearranged product architecture can increase both productivity and quality. It is clear that the product architecture of magazines regulates the workflow speed. Changing the product architecture might be difficult due to old traditions and different cultures within the editorial and manufacturing units. A solution to these problems should be sought through the common efforts of both sides.
Night of the living color: horror scenarios in color management land
Johan M. Lammens
ICC-based color management is becoming increasingly feasible, and it is picking up support from all the major high-end design and pre-press applications as well as hardware manufacturers. In addition, the new sRGB standard is emerging as a way to do 'color management for the masses' and is also being supported by many leading manufacturers. While serious technical issues certainly remain to be addressed for both ICC and sRGB color management, it seems that the main problem users face today is how to integrate all components of their workflow into a seamless system, and how to configure each component to work well with all the others. This paper takes a brief look at the history of color management from a workflow perspective and attempts to show how composing and configuring a chain of color conversions can become a terrific nightmare. Some of the many ways to get the wrong results are briefly illustrated, as well as a few ways to get the right results. Finally, some technical recommendations are offered for improving the situation from a user's point of view.
Restoration and enhancement of images captured by a digital camera
Chris Tuijn, Wim Cliquet
The success of digital cameras in the consumer market can no longer be ignored. An obvious advantage over conventional photography is that the images taken by a digital camera are readily available. Another important advantage is that there is no additional cost in taking pictures, since the images are recorded on magnetic media that can be erased and re-used.
Role of JTAG2 in coordinating standards for imaging technology
In the modern world of imaging technology, standards are developed by many different groups, each with specific applications in mind. While some of these groups are part of the accredited standards community (ISO, IEC, CIE, ITU, etc.), others are industrial organizations or consortia.
Colorimetry I
Development of multiband color imaging systems for recordings of art paintings
Yoichi Miyake, Yasuaki Yokoyama, Norimichi Tsumura, et al.
We have developed a multiband color imaging system within the framework of the IPA project 'New Color Management System Based on Human Perception.' The system consists of a multiband CCD camera, a computer, a CRT, and a projection-type monitor. Five band images were taken by a CCD camera with 2048 X 2048 pixels. Reflectance spectra of the object were estimated by principal component analysis and the Wiener estimation method. The spectral transmittance of the separation filter was optimized by a simulated annealing method based on Wiener estimation. A color adaptation model was introduced for accurate color reproduction of paintings based on human perception. In this paper, the estimation of spectral reflectance of paintings, the developed multiband camera, and various software for image composition and control of the multiband camera are introduced and demonstrated.
LED-based spectrophotometric instrument
Michael J. Vrhel
The performance of an LED-based, dual-beam spectrophotometer is discussed. The difficulty with producing an LED-based instrument in the past has been the limited choice of LEDs, particularly in the blue/green region. Recent advances in LED technology have made such a device possible. The instrument discussed uses commercially available LEDs and other off-the-shelf electronic components, resulting in a low-cost, durable device. A mathematical model of the device is constructed, and the sources of deviations from this model are discussed.
Calibration method for radiometric and wavelength calibration of a spectrometer
A new calibration target, or Certified Reference Material (CRM), has been designed that uses violet, orange, green and cyan dyes on cotton paper. This paper type was chosen because it has a relatively flat spectral response from 400 nm to 700 nm and good keeping properties. These specific dyes were chosen because the difference signals between the orange, cyan, green and violet dyes have characteristics that allow the calibration of an instrument. The ratio between the difference readings is a direct function of the center wavelength of a given spectral band. Therefore, the radiometric and spectral calibration can be determined simultaneously from the physical properties of the reference materials.
Stray darkness: a new error or a previously known error recast?
David L. Spooner
Stray light is a commonly known error in spectrophotometer measurements. In that context, it refers to the instrument response to light passed by the monochromator or other spectrally selective element that is outside the desired bandpass. 'Stray darkness' is a term coined by Michael Goodwin of the Eastman Kodak Company Corporate Metrology Center. He used this term to explain unusual results observed in a measurement study he conducted, in which one of the instruments behaved in a way that gave results just the opposite of what would be expected if stray light were present. Goodwin made measurements of commonly used color reference items with instruments made by two manufacturers. He found that differences between measurements of a Macbeth ColorChecker chart made with the two instruments were relatively small. However, measurements of a set of BCRA tiles often disagreed significantly. These measurement differences were confirmed by making a similar set of measurements using similar instruments. The differences were found to be due, in large measure, to the interaction of one of the measuring instruments with translucent samples.
Color constancy effects measurement of the Retinex theory
Daniele Marini, Alessandro Rizzi, Caterina Carati
Understanding chromatic adaptation is a necessary step in solving the color constancy problem for a variety of application purposes. Retinex theory explains chromatic adaptation, as well as other color illusions, on visual perception principles. Based on this theory, we have derived an algorithm to solve the color constancy problem and to simulate chromatic adaptation. The evaluation of the result depends on the kind of application considered. Since our purpose is to contribute to the problem of color rendering on computer system displays for photorealistic image synthesis, we have devised a specific test approach. A virtual 'Mondrian' patchwork has been created by applying a rendering algorithm with a photorealistic light model to generate images under different light sources. Trichromatic values of the computer-generated patches are the input data for the Retinex algorithm, which computes new color-corrected patches. The Euclidean distance in CIELAB space between the original and the Retinex color-corrected trichromatic values has been calculated, showing that the Retinex computational model is very well suited to solving the color constancy problem without any information on the illuminant spectral distribution.
Colorimetry II
Prediction and compensation of white point shift in CRT displays resulting from screen aging
Richard D. Cappels, Thai Q. La
A previously unpublished model of color CRT display screen degradation as a function of electron beam exposure, derived from Pfahnl's work but including a correction for a second degradation mechanism, is presented, along with a description of its application in a self-calibrating display to minimize accuracy drift as the display ages.
Spectroradiometric characterization of the spectral linearity of a conventional digital camera
Francisco Martinez-Verdu, Jaume Pujol, A. Bouzada, et al.
We propose an experimental and theoretical-mathematical methodology to characterize, by spectroradiometric methods, the spectral linearity of a conventional digital still camera. As the first step in this characterization, we have performed a test to verify the reciprocity law, starting from the experimental and theoretical-mathematical characterization of the total spectral dynamic range of the relation between exposure and RGB digital output data. The results obtained show that the reciprocity law is not exactly verified in digital photography.
Relationship between uncertainty in reflectance factor data and computed CIELAB values: some intuitive tools
The use of colorimetric data is increasing exponentially in all areas of imaging technology. This is being driven both by the availability of moderately priced instrumentation that makes acquisition of this type of data economically feasible, and by the increase in computing power that allows images to be processed and transformed using colorimetric tools. Too often the people using colorimetric data, to create color management profiles or other transforms, have little experience in either making spectral reflectance measurements or in computing the colorimetric parameters based on these measurements. In addition, although many people working in graphic arts or photography have an intuitive understanding of process variation expressed in terms of density, few have extrapolated this understanding into colorimetry. This paper discusses typical levels of uncertainty in the measurement of spectral reflectance factor and the associated uncertainty in CIELAB values computed from these data. It also provides some typical relationships between status density and colorimetry. Included are some generalized relationships that can be used as intuitive guides to assist users in evaluating the significance of colorimetric differences reported in terms of CIE ΔE.
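The ΔE*ab figure underlying such uncertainty analyses is the Euclidean distance in CIELAB, and a small Monte Carlo sketch shows how per-coordinate measurement noise turns into a typical color difference. The i.i.d. Gaussian noise model here is a crude stand-in for instrument repeatability, not the paper's uncertainty model.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def monte_carlo_delta_e(lab, sigma, n=20000, seed=0):
    """Mean ΔE*ab produced by i.i.d. Gaussian noise of s.d. `sigma` on each
    CIELAB coordinate (a crude stand-in for instrument repeatability)."""
    lab = np.asarray(lab, float)
    rng = np.random.default_rng(seed)
    noisy = lab + rng.normal(0.0, sigma, size=(n, 3))
    return float(np.mean(np.linalg.norm(noisy - lab, axis=1)))
```

With sigma = 1 on each axis the mean difference comes out near 1.6 ΔE, noticeably larger than the per-axis noise, which is one reason small reported ΔE differences deserve scrutiny.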
Graphic arts color standards update: 1999
With file formats for graphic arts data exchange in place and being rapidly implemented, increased emphasis is being placed on standards that help define the meaning of the image data being exchanged. Standards relating to color data definition, therefore, play a dominant role in both the US and international graphic arts standards activities. There is a clear understanding of the key role that printing process definition standards, and metrology standards, play in helping define stable process conditions to which color characterization data can be related. In addition, it has been generally accepted that, for data exchange and colorimetric profile definition, only a limited number of printing conditions needs to be defined. The color management process can be separated such that printing aims and individual printing press profiles can be handled as separate issues and not confounded together. The current status of work in support of this new perspective will be summarized. The existing portfolio of standards will also be reviewed, including those standards that have been published to define color measurement and computation requirements, scanner input characterization targets, four-color output characterization, and graphic arts applications of both transmission and reflection densitometry. An update will also be provided on the continuing work on standards relating to ink testing, reference ink color specifications, and printing process definition.
Color Quantization and Compression
Optimal display of true color image with color-distortion consideration
Long-Wen Chang, Ching-Yang Wang, Jia-Lun Yang
Most color output devices use the frame buffer architecture. Although a high-speed computer with large memory can manipulate true color images, some applications in computer games and on the Internet still need to use K colors. Here, we propose a fast method of color image quantization based on color-distortion theory. It offers the best tradeoff between the number of quantization colors and the distortion. To exploit the properties of the human visual system, the distortion is also measured in a luminance-chrominance color space. Simulation shows that the proposed algorithm produces good quantized color images and is much faster than the K-d tree algorithm.
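As a point of reference, palette selection by plain k-means, a common baseline rather than the color-distortion method proposed here, can be sketched as:

```python
import numpy as np

def kmeans_palette(pixels, k, iters=10):
    """Select a k-color palette by plain k-means (a baseline sketch, not the
    paper's algorithm). Initial centers are pixels spread along luminance."""
    pixels = np.asarray(pixels, float)
    order = np.argsort(pixels.sum(axis=1))            # rough luminance ordering
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]]
    for _ in range(iters):
        # assign every pixel to its nearest palette color
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each palette color to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels
```

The deterministic luminance-spread initialization avoids the degenerate starts that random seeding can produce on images with few dominant colors.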
Color representation using scalar chrominance
Maciej Bartkowiak, Marek Domanski
The paper deals with a color image and video representation that consists of two components instead of three. The first component is luminance, defined in the usual way. The second component is a scalar chrominance obtained from the two usual chrominance components. The mapping of two chrominance components onto one scalar chrominance is a vector quantization task, which can be performed quite efficiently using a binary split algorithm. Experimental results for color images in CIF and QCIF resolution show that application of a codebook with 20-60 different scalar chrominances leads to a representation that is mostly indistinguishable from the original images. Since the sets of scalar chrominance values are small, scalar chrominance samples need only short binary representations. Moreover, scalar chrominance can be subsampled as chrominance usually is. The mapping of the set of chrominance value pairs onto a subset of integer numbers defines an order in the codebook. This order is also an important issue, since it influences the spectrum of the scalar chrominance picture and has a substantial impact on further compression and processing of scalar chrominance.
Trellis-coded color quantization of images
Zixiang Xiong, Jian Qiao Huang, Xiaolin Wu
We examine color quantization of images using trellis coded quantization (TCQ). Together with a simple halftoning scheme, an eight-bit trellis coded color quantizer reproduces images that are visually indistinguishable from the 24-bit originals. The proposed algorithm can be viewed as a predictive trellis coded color quantization scheme. It is universal in the sense that no training or look-up table is needed. The complexity of TCQ is linear with respect to image size, making trellis coded color quantization suitable for interactive graphics and a window-based display environment.
CIEL*a*b*-based near-lossless compression of prepress images
Koen N.A. Denecker, Steven Van Assche, Peter De Neve, et al.
Lossless image compression algorithms used in the prepress workflow suffer from the disadvantage that only moderate compression ratios can be achieved. Most lossy compression schemes achieve much higher compression ratios, but there is no easy way to limit the differences they introduce. Near-lossless image compression schemes are based on lossless techniques, but they give an opportunity to put constraints on the unavoidable pixel loss. The constraints are usually expressed in terms of differences within the individual CMYK separations, and this error criterion does not match the human visual system. In this paper, we present a near-lossless image compression scheme which aims at limiting the pixel difference as observed by the human visual system. It uses the subjectively equidistant CIEL*a*b* space to express allowable color differences. Since the CMYK to CIEL*a*b* transform maps a 4D space onto a 3D space, singularities would occur, resulting in a loss of the gray component replacement information; therefore an additional dimension is added. The error quantization is based on an estimated linearization of the CIEL*a*b* transform and on the singular value decomposition of the resulting Jacobian matrix. Experimental results on some representative CMYK test images show that the visual image quality is improved and that higher compression ratios can be achieved before the visual difference is detected by a human observer.
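The allowable color difference in such a scheme is typically expressed with a CIELAB difference formula. As a point of reference (not code from the paper), the CIE 1976 Delta E*ab between two L*a*b* triples is simply the Euclidean distance:

```python
def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference Delta E*ab between two CIEL*a*b* triples.

    A near-lossless criterion of the kind described above bounds a
    difference of this form (the authors also use the more perceptually
    uniform Delta E*94, which is not reproduced here).
    """
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5
```

For example, `delta_e_ab((50, 0, 0), (50, 3, 4))` evaluates to 5.0.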
Wavelet coding suited for printer raster images
Ricardo L. de Queiroz
This paper presents a wavelet-based technique for compression of printer raster images. The coder does not require buffering the whole image and partially accounts for visual losses due to the imaging process by (sigma)-filtering high-pass subbands. All wavelet coefficients are generated, filtered, quantized and entropy-coded sequentially, one block at a time. The coder is suitable for pipeline processing and, for typical images, shows performance improvements over other schemes in the same class of complexity.
Applying the Hamiltonian algorithm to optimize JPEG quantization tables
Kazutaka Hirata, Jun-ichi Yamada, Kazumasa Shinjo
Optimizing JPEG quantization tables is expected to reduce the cost of transmitting and storing JPEG images. The Hamiltonian algorithm is an optimization method that can simultaneously optimize numerous parameters. JPEG quantization tables consist of 128 variables. We propose applying the Hamiltonian algorithm to the optimization of JPEG quantization tables. Quantization tables optimized by the Hamiltonian algorithm can reduce the rate and improve the quality of a decoded JPEG image. This report describes the configuration of a JPEG quantization table optimizing system and experimental results of using it.
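For context, the common baseline that such per-entry optimization replaces is a single global scaling of fixed base tables. A sketch of that conventional scaling, following the widely used IJG/libjpeg quality convention (an assumption for illustration only; the paper instead optimizes each of the 128 entries directly):

```python
def scale_quant_table(base, quality):
    """Scale a base JPEG quantization table by a quality setting in 1..100,
    following the IJG/libjpeg convention. quality=50 leaves the base table
    unchanged; higher quality shrinks the divisors (finer quantization).
    """
    q = max(1, min(100, quality))
    scale = 5000 // q if q < 50 else 200 - 2 * q
    # each entry is scaled, rounded, and clamped to the baseline range 1..255
    return [max(1, min(255, (v * scale + 50) // 100)) for v in base]
```

At quality 50 the table is returned as-is; at quality 100 every divisor collapses to 1 (near-lossless quantization).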
Internet Imaging
JIP: Java image processing on the Internet
Dongyan Wang, Bo Lin, Jun Zhang
In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics and physics, or to other areas such as employee training and pay-per-use software.
Region-of-interest-based progressive transmission of gray-scale images across the Internet
Boris Rogge, Ignace L. Lemahieu, Wilfried R. Philips, et al.
On the Internet, the transmission time of large images is still an important issue. In order to reduce transmission time, this paper introduces an efficient method to send 8-bit greyscale images across the Internet. The method allows progressive transmission up to lossless reconstruction. It also allows the user to select a region of interest. This method is particularly useful when image quality and transmission speed are two desired properties. The method uses TCP/IP as the transport protocol.
Implementation of online video-on-demand system over the Internet
Joon-Hyeon Jeon, Heung-Kyu Lee
Over low-bit-rate networks such as the Internet, a video service suffers from video presentation delay and degradation of QoS because video data cannot be transmitted within the required time. Data compression and transmission techniques have been researched to solve these problems. In this paper, we present a new real-time VoD service system for the Internet. This VoD system, based on the client/server architecture, has been developed using the ITU-T H.263 video codec, which offers a high compression ratio and good video quality, and an 8.3 kbps TrueSpeech audio codec. Our system can provide a real-time video service on user demand in the Web browser, and supports a tool to generate the video streams stored at the video server. In this system, to solve the problems of real-time VoD service over the Internet, we propose new techniques for buffer control to guarantee audio/video QoS, synchronization between audio and video, and a video summary service based on scene change detection.
Design and implementation of a SMIL player
Jue Xia, Simon Shim, Ying Wang, et al.
Synchronized Multimedia Integration Language (SMIL) is a recommendation developed by the Synchronized Multimedia Working Group of the World Wide Web Consortium. SMIL is a simple and standard way to specify a timeline-based synchronized multimedia presentation over the Internet. It is a declarative authoring language based on the Extensible Markup Language (XML), which is used to define language-specific data types and tags. A SMIL player schedules the presentation described in a SMIL file, and retrieves media objects on the Web using the URLs given in the file. The SMIL file is a plain text file and can be edited using a simple text editor. We present approaches to implementing a SMIL player within desktop system constraints. The reference implementation is a Java applet, which follows a platform-neutral programming paradigm and makes use of the Java Media Framework. The applet can run in any mainstream web browser.
Tone Reproduction and Image Quality
More about the factors determining color print quality
The proliferation of color printers in the computer world today has made a huge number of prints available to everyone at a good quality/cost ratio. This paper emphasizes the most important factors that can improve print quality in color prints.
Automatic processing of color images for user preference
Reiner Eschbach, William Fuss
The constant increase in computer power has led to the demand to incorporate color images into more and more everyday applications, from text documents to identification cards. Even more important than the increased use of color images is that this increase occurs predominantly in situations where the user either is no expert in color imaging, or where a single expert has to handle extremely large numbers of images. In both situations, it is important that the majority of image processing tasks, such as adjusting exposure, contrast, color and sharpness, are done in an automatic fashion.
Enhancement of improperly exposed photographic color negatives
Genevieve Dardier, Jon Yngve Hardeberg, Hans Brettel
This paper addresses digital techniques used to automatically correct color photographs whose range of transmittance densities is too large for visually acceptable image reproduction. The first step consists of calibrating the image acquisition devices, and results are shown for two different models. Then, we present a method inspired by photographic techniques that uses modified binary masks to enhance negatives of excessively high contrast. This method can be applied in an industrial environment such as photographic mini-laboratories.
Designing method for tone and color reproducibility of digital still cameras and digital prints
Hitoshi Yamashita
A private standard and procedure for designing the tone and color reproducibility of digital still cameras are described. We defined a 'hub' color space which interconnects input devices and output devices. As the images taken by digital still cameras are usually displayed 'as is' on CRTs and often exchanged as digital image files, the color space employed is compliant with the international standard Recommendation ITU-R BT.709. We also defined a procedure for designing the image processing parameters of digital still cameras to achieve proper exposure level, tone reproduction curve, color hue and saturation. The basis of this procedure is to reproduce subject colors correctly in the hub color space. Every one of our digital still camera products is designed and evaluated following this procedure to minimize the differences in image characteristics between products. Objective, colorimetric evaluation is performed by analyzing image data shot in a studio where the illuminants are strictly controlled. Subjective evaluation is performed by taking pictures under many different conditions and printing them on a photographic color printer. Also described are some analyses of the image characteristics of digital still cameras when their images are printed on silver halide paper. Some limitations and difficulties in producing 'photographic' prints from 'CRT'-oriented images are presented and discussed.
Nonlinear resampling for both moire suppression and edge preservation
Dimitri Van De Ville, Koen N.A. Denecker, Wilfried R. Philips, et al.
Moire formation is often a major problem in printing applications. These artifacts introduce new low-frequency components which are very disturbing. Some printing techniques, e.g. gravure printing, are very sensitive to moire. The halftoning scheme used for gravure printing can basically be seen as a 2D non-isotropic subsampling process. The moire problem is much more important in gravure printing than in conventional digital halftoning, since the freedom in constructing halftone dots is much more limited by the physical constraints of the engraving mechanism.
White is green
Hal Glicksman
Green is the center of the visible spectrum and the hue to which we are most sensitive. In RGB color, green is 60 percent of white. When we look through a prism at a white square, as Goethe did, we see white between yellow and cyan, just where green appears in the spectrum of Newton. Additional arguments were published previously and appear at www.csulb.edu/-percept, along with the Percept color chart of the hue/value relationships. A new argument, derived from the perception of leaves, is presented here. The Percept color chart transformed into a color wheel is also presented.
Halftoning I
Evolution of error diffusion
As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm - to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
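For readers new to the algorithm, the core of the original Floyd-Steinberg scheme whose history the paper traces can be sketched in a few lines. This is a minimal pure-Python illustration with the classic 7/16, 3/16, 5/16, 1/16 weights; serpentine scanning and the other refinements the paper surveys are omitted:

```python
def floyd_steinberg(img, levels=2):
    """Error-diffuse a grayscale image (list of rows, values 0..255)
    to `levels` output levels using the Floyd-Steinberg weights."""
    h, w = len(img), len(img[0])
    # work on a float copy so the diffused error is not truncated
    buf = [[float(v) for v in row] for row in img]
    out = [[0] * w for _ in range(h)]
    step = 255.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = min(255.0, max(0.0, round(old / step) * step))  # nearest level
            out[y][x] = int(new)
            err = old - new
            # distribute the quantization error to unprocessed neighbors
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

On a constant mid-gray input the output alternates black and white pixels whose local average tracks the input, which is the property all the later variants try to preserve while suppressing the characteristic "worm" textures.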
Color diffusion: error diffusion for color halftones
Doron Shaked, Nur Arad, Andrew E. Fitzhugh, et al.
Error Diffusion is a high-performance halftoning method in which quantization errors are diffused to 'future' pixels. Originally intended for grayscale images, it is traditionally extended to color images by error-diffusing each of the three color planes independently. In this paper we show that augmenting the Error Diffusion paradigm with a simple design rule based on certain characteristics of human color perception results in a novel color halftoning algorithm named color diffusion. The output of color diffusion is of considerably higher quality than that of separable error diffusion. The algorithm presented requires no additional memory and entails a reasonable increase in run-time.
Semi-vector error diffusion for color images
Color error diffusion can be classified into two types, namely vector error diffusion and scalar error diffusion, according to the underlying quantization methods. Compared to scalar error diffusion, vector error diffusion is superior in image quality. However, it requires significantly more computation, and can introduce artifacts due to accumulation of the errors in output device space. In this paper, we propose a new quantization algorithm for CMY color error diffusion. The algorithm, which we call semi-vector quantization, has a low computational complexity and a high stability similar to scalar error diffusion, but yields superb-quality images close to those generated by vector error diffusion.
Evaluation of digital halftone images by vector error diffusion
Masahiro Kouzaki, Tetsuya Itoh, Takayuki Kawaguchi, et al.
The vector error diffusion (VED) method is applied to produce digital halftone images on a 600 dpi electrophotographic printer. The objective image quality of the obtained images is evaluated and analyzed. As a result, it became clear that in the color reproduction of halftone images by the VED method there are large color differences between the target color and the printed color, typically in the mid-tone colors. We consider this to be due to printer properties, including dot gain. It was also clear that the color noise of the VED method is larger than that of the conventional scalar error diffusion method in some patches. It was remarkable that nonuniform patterns are generated by the VED method.
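A minimal sketch of the vector error diffusion being evaluated: each pixel is vector-quantized to the nearest printable corner color and the full 3-D error vector is diffused. Floyd-Steinberg weights and Euclidean distance in device RGB are assumptions for illustration; the printer-specific behavior (dot gain, electrophotographic rendering) studied in the paper is not modeled:

```python
def vector_error_diffusion(img):
    """Vector error diffusion of an RGB image (rows of (r,g,b) tuples,
    0..255) onto the eight corner colors of the device cube."""
    corners = [(r, g, b) for r in (0, 255) for g in (0, 255) for b in (0, 255)]
    h, w = len(img), len(img[0])
    buf = [[list(map(float, px)) for px in row] for row in img]
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            px = buf[y][x]
            # vector quantization: nearest corner in Euclidean distance
            best = min(corners, key=lambda c: sum((p - q) ** 2
                                                  for p, q in zip(px, c)))
            out[y][x] = best
            err = [p - q for p, q in zip(px, best)]
            # diffuse the 3-D error with Floyd-Steinberg weights
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    for k in range(3):
                        buf[ny][nx][k] += err[k] * wgt
    return out
```

Quantizing the color as a single vector, rather than plane by plane, is what distinguishes VED from scalar error diffusion; it is also why errors accumulate in device space, the source of the mid-tone deviations reported above.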
Boundary artifacts reduction in vector error diffusion
Color error diffusion can be classified into two types, namely vector error diffusion and scalar error diffusion, according to the underlying quantization methods. Compared to scalar error diffusion, vector error diffusion is typically superior in overall image quality. However, it may occasionally introduce boundary artifacts: a color band, with a width of a few pixels to tens of pixels, may appear along edges. These artifacts are referred to in the literature as slow response, for leading edges, and smear, for trailing edges. In this paper, we present a simple yet effective method for reducing the boundary artifacts.
Optimal parallel error diffusion dithering
Panagiotis Takis Metaxas
Error diffusion dithering is a technique that is used to represent a grayscale image on a printer, a computer monitor or other bi-level displays. For a number of years it was believed that error diffusion algorithms cannot be parallelized. In this paper we present a simple parallel error-diffusion algorithm that can be easily implemented on parallel computers that contain linear arrays of processing elements. It can also be implemented easily in specialized hardware. Among the advantages of our algorithm are its low implementation cost, its scalability, and its ability to benefit from standard fault-tolerance techniques.
Halftoning II
Stochastic clustered-dot dithering
Victor Ostromoukhov, Roger David Hersch
A new technique for building stochastic clustered-dot screens is proposed. A large dither matrix comprising thousands of stochastically laid out screen dots is constructed by first laying out the screen dot centers. Screen dot centers are obtained by placing discrete disks of a chosen radius at free cell locations while traversing the dither array cells according to either a discretely rotated Hilbert space-filling curve or a random space-filling curve. After Delaunay triangulation of the screen dot centers, the maximal surface of each screen dot is computed and iso-intensity regions are created. This iso-intensity map is converted to an anti-aliased grayscale image, i.e. to an array of preliminary threshold values. These threshold values are renumbered to obtain the threshold values of the final dither threshold array. By changing the disk radius, the screen dot size can be adapted to the characteristics of particular printing devices. Larger screen dots may improve the tone reproduction of printers having significant dot gain.
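Once such a dither threshold array has been built, applying it is ordinary ordered dithering: the matrix is tiled over the image and each pixel is compared against its threshold. The sketch below is generic and works for any threshold array; the tiny 2x2 Bayer-style matrix in the usage note is only a stand-in for a real stochastic clustered-dot screen of thousands of cells:

```python
def dither_with_matrix(img, tmatrix):
    """Halftone a grayscale image (rows of 0..255 values) by tiling a
    dither threshold matrix over it: pixel on iff value > threshold."""
    mh, mw = len(tmatrix), len(tmatrix[0])
    return [[255 if img[y][x] > tmatrix[y % mh][x % mw] else 0
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

For example, with the 2x2 matrix `[[32, 160], [224, 96]]` a constant mid-gray input produces a 50% checkerboard. Because the threshold comparison is independent per pixel, this step is trivially parallel, which is one practical advantage of dither arrays over error diffusion.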
Blue noise dither matrix design parameters and image quality of halftone prints
Appasaheb N. Madiwale, Kevin E. Spaulding
Blue noise dither halftoning methods have been found to produce images with pleasing visual characteristics. Results similar to those generated by error-diffusion algorithms can be obtained using an image processing algorithm that is comparatively much simpler to implement. The blue noise dither matrix design method used in this study is based on the minimization of a visual cost function. The visual cost function combines the frequency spectrum of the spatial modulation of the halftone pattern with the frequency response of the human visual system to define a visual cost metric. A sequential optimization approach using stochastic annealing is used. The design parameters associated with this method are viewing distance, print resolution, size of the dither matrix, and form of the visual cost function. The effect of these design parameters on the resulting image quality of halftone prints is the topic of this paper. Blue noise dither matrices were designed using a variety of viewing distances for a 200 dpi printing system. Test images were generated and the prints were visually examined for texture artifacts. A preferred viewing distance parameter value of 10-20 inches was indicated. The effects of the dither matrix size and the form of the visual cost function will be reported in the future.
Segmentation and automatic descreening of scanned documents
Alejandro Jaimes, Frederick C. Mintzer, A. Ravishankar Rao, et al.
One of the major challenges in scanning and printing documents in a digital library is preserving the quality of the documents and in particular of the images they contain. When photographs are offset-printed, a process of screening usually takes place. During screening, a continuous-tone image is converted into a bi-level image by applying a screen to replace each color in the original image. When high-resolution scanning of screened images is performed, it is very common to observe in the digital version of the document the screen patterns used during the original printing. In addition, when printing the digital document, moire effects tend to appear because printing requires halftoning. In order to automatically suppress these moire patterns, it is necessary to detect the image areas of the document and remove the screen pattern present in those areas. In this paper, we present efficient and robust techniques to segment a grayscale document into halftone image areas, detect the presence and frequency of screen patterns in halftone areas, and suppress the detected screens. We present novel techniques to perform fast segmentation based on (alpha)-crossings, detection of screen frequencies using a fast accumulator function, and suppression of detected screens by low-pass filtering.
Low-memory low-complexity inverse dithering
Shiufun Cheung, Robert A. Ulichney
Dithering an image decreases the pixel bit depth and reduces the storage space required in the image buffer while largely preserving the perceptual quality. In some applications it is desirable to reconstruct the original image; that is, to restore the dithered image to its original bit depth for further processing or display. In this paper, we present a new color inverse dithering system designed for low-cost implementation. Our algorithm is based on edge-sensitive adaptive low-pass filtering. In order to prevent excessive blurring from low-pass filtering, the system uses edge detection methods so that the filters are applied only to regions of constant color or gray level in the original image. One such method exploits the fact that pixel values are only one level away from each other in a constant-color region of a dithered image. Another method exploits a priori knowledge of the dither masks. By limiting the number of possible filters, and by restricting the region of support of the filters to a single image line, tremendous implementation advantages can be gained. Our prototype system uses a set of five filters, including a pair that are asymmetric about the origin specifically for application to object edges. In our implementation, the need for multipliers is eliminated by using bit replication for up-multiplication, and by using lookup tables with relatively small numbers of entries for filtering. We have found that our inverse dithering system can restore a dithered image to its original form to a significant degree. It is especially effective for graphics and synthetic images.
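The bit-replication trick mentioned above is standard and easy to illustrate: replicating an n-bit code downward through an m-bit word approximates multiplication by (2^m - 1)/(2^n - 1) with shifts and ORs only, and maps 0 to 0 and the maximum code to the maximum output exactly. This is a generic sketch, not the prototype's filter pipeline:

```python
def bit_replicate(v, n, m=8):
    """Expand an n-bit value v to m bits by replicating its bit pattern,
    a multiplier-free substitute for scaling by (2**m - 1)/(2**n - 1)."""
    out = 0
    bits = m
    while bits > 0:
        shift = bits - n
        # shift the pattern into place; the last copy may be truncated
        out |= (v << shift) if shift >= 0 else (v >> -shift)
        bits -= n
    return out
```

For example, the 3-bit code 0b101 expands to 0b10110110 = 182, and 0b111 expands to 255 exactly.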
Modeling for Hardcopy
Expanded Neugebauer model for printer color formation
A model to predict colorimetric values for color printers is presented. The Neugebauer narrow-band color mixing model was applied with modifications. While sixteen primaries are used for the four-color printing process in the Neugebauer model, we used two data sets in our model, one with eighty-one CMYK primaries and the other with one hundred twenty-five CMY primaries. Two Yule-Nielsen factors were applied to optimize the CMYK set and the CMY set separately. The Yule-Nielsen factors were optimized by minimizing (Delta)E*L*a*b* or (Delta)E*94. The Neugebauer colorimetric quality factor (CQF) was applied as a weighting function to optimize dot areas. By optimizing primaries and applying the CQF weighting function, the average color error and the maximum color error decrease significantly.
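For reference, the underlying Yule-Nielsen modified Neugebauer prediction for a single reflectance channel can be sketched as follows. This uses the classic eight CMY primaries with Demichel dot-area weights; the primary reflectances `R_prim` are illustrative placeholders, not the expanded 81- or 125-primary data sets described above:

```python
def yule_nielsen_neugebauer(c, m, y, R_prim, n=2.0):
    """Predict one reflectance channel with the Yule-Nielsen modified
    Neugebauer model for a CMY print.

    c, m, y : fractional dot areas in [0, 1]
    R_prim  : reflectances of the 8 Neugebauer primaries, keyed by
              (c_on, m_on, y_on); R_prim[(0, 0, 0)] is paper white
    n       : Yule-Nielsen factor (n = 1 recovers the plain model)
    """
    R = 0.0
    for ci in (0, 1):
        for mi in (0, 1):
            for yi in (0, 1):
                # Demichel weight: ink layers assumed statistically independent
                w = ((c if ci else 1 - c) *
                     (m if mi else 1 - m) *
                     (y if yi else 1 - y))
                R += w * R_prim[(ci, mi, yi)] ** (1.0 / n)
    return R ** n
```

At the corners of the dot-area cube the prediction reduces to the corresponding measured primary, which is a quick sanity check on any implementation.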
Posters
Expanded nonlinear order dithering and modified error diffusion for an ink-jet color printer
Chae-Soo Lee, Cheol-Hee Lee, Yang-Woo Park, et al.
New methods are proposed for printing a full-resolution image on a limited output device. These methods comprise expanded nonlinear ordered dithering (ENOD) and modified error diffusion (MED). ENOD benefits from simple processing that reduces the computational time of the ordered dither, and from an aperiodic and uncorrelated structure that avoids the low-frequency graininess of blue noise masking. The proposed ENOD also uses a nonlinear function that can accommodate the overlapping of neighboring dots. The MED adjusts the diffusion of a quantization error according to the characteristics of the input image. Consequently, the proposed algorithm can produce high-quality images while using low-cost color devices.
Gamut mapping using variable anchor points
Chae-Soo Lee, Kyeong-Man Kim, Eung-Joo Lee, et al.
Recently, a variety of imaging devices have come into use to represent electronic color images; the reproduced color, however, differs from the original color because of differences in the colors the devices can produce. The range of producible colors offered by a device is referred to as its gamut. In this paper, a gamut-mapping algorithm (GMA) is proposed that can maintain device-independent color. Categorized as a parametric GMA, this algorithm utilizes variable anchor points both to reduce sudden color changes at the gamut boundary of the printer and to maintain a uniform color change during the mapping process. Accordingly, the proposed algorithm can reproduce high-quality images with low-cost color devices.
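The anchor-point idea can be illustrated with a generic clipping sketch: an out-of-gamut color is moved along the segment toward its anchor until it enters the destination gamut. The `in_gamut` predicate and the fixed anchor below are illustrative stand-ins; the contribution described above is precisely that the anchor varies with the color being mapped:

```python
def clip_toward_anchor(color, anchor, in_gamut, iters=30):
    """Map an out-of-gamut color toward an anchor point until it crosses
    the gamut boundary, by bisection along the segment (assumes the gamut
    is star-shaped around the anchor so the crossing is unique)."""
    if in_gamut(color):
        return color
    lo, hi = 0.0, 1.0  # interpolation parameter: 0 = color, 1 = anchor
    for _ in range(iters):
        mid = (lo + hi) / 2
        cand = tuple(c + mid * (a - c) for c, a in zip(color, anchor))
        if in_gamut(cand):
            hi = mid   # inside the gamut: move back toward the original
        else:
            lo = mid   # still outside: move further toward the anchor
    return tuple(c + hi * (a - c) for c, a in zip(color, anchor))
```

With a spherical test gamut of radius 50 around the anchor (50, 0, 0), the color (50, 80, 0) maps to (50, 50, 0) on the boundary, as expected.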
Measurement and control of color image quality
Eric Schneider, Kate Johnson, David Wolin
Color hardcopy output is subject to many of the same image quality concerns as monochrome hardcopy output. Line and dot quality, uniformity, halftone quality, and the presence of bands, spots or deletions are just a few of the attributes shared by both color and monochrome output. Although the measurement of color requires the use of specialized instrumentation, the techniques used to assess color-dependent image quality attributes of color hardcopy output are based on many of the same techniques as those used in monochrome image quality quantification. In this paper we present several different aspects of color quality assessment in both R and D and production environments, as well as several examples of color quality measurements similar to those currently being used at Hewlett-Packard to characterize color devices and to verify system performance. We then discuss some important considerations for choosing appropriate color quality measurement equipment for use in either R and D or production environments. Finally, we discuss the critical relationship between objective measurements and human perception.