Proceedings Volume 3814

Mathematics of Data/Image Coding, Compression, and Encryption II

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 16 December 1999
Contents: 4 Sessions, 14 Papers, 0 Presentations
Conference: SPIE's International Symposium on Optical Science, Engineering, and Instrumentation 1999
Volume Number: 3814

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Encoding and Encryption
  • Noise and Error in Transmission and Compression
  • Image Compression
  • Algorithm Mapping and Evaluation
Encoding and Encryption
Adaptive algorithm for generating optimal bases for digital images
David Dreisigmeyer, Michael J. Kirby
We propose an adaptive basis algorithm which transforms the Karhunen-Loeve (KL) procedure (which optimizes the mean-square error) into a new method that minimizes the maximum error of the projected data. This idea is extended and applied to a noisy KL procedure. It is seen that significant dimensionality reduction may be obtained via this adaptive procedure. Four numerical experiments are presented using a data set consisting of digital images of human faces.
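As a point of reference, the sketch below (Python, assuming NumPy) shows the baseline KL step such a method starts from: computing a KL (principal component) basis from vectorized images and measuring both the mean-square and the maximum projection error. The adaptive reweighting that minimizes the maximum error is not reproduced; data sizes and names are illustrative.

    # Baseline Karhunen-Loeve (PCA) basis and projection error (illustrative sizes).
    # The paper's adaptive, max-error-minimizing step is not reproduced here.
    import numpy as np

    def kl_basis(X, d):
        """X: (n_samples, n_pixels) data matrix; return mean and top-d KL basis."""
        mean = X.mean(axis=0)
        # SVD of the centered data gives the KL (principal) directions.
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:d]

    def project(X, mean, basis):
        """Project onto the KL subspace and reconstruct."""
        return (X - mean) @ basis.T @ basis + mean

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64 * 64))          # stand-in for vectorized face images
    mean, basis = kl_basis(X, d=20)
    errs = np.linalg.norm(X - project(X, mean, basis), axis=1)
    print("mean-square error:", float(np.mean(errs ** 2)))
    print("maximum error    :", float(errs.max()))   # the quantity an adaptive method would target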
Truncated Baker transformation and its extension to image encryption
Masaki Miyamoto, Kiyoshi Tanaka, Tatsuo Sugimura
This paper presents a new truncated Baker transformation with finite precision and extends it to an efficient image encryption scheme. The truncated Baker transformation uses the quantization error, which is always produced by the contraction mechanism in the mapping process, as a secret key. The original dynamics of the Baker transformation are globally preserved, but a random-level rotation operator is incorporated between two neighboring elements in the mapping domain in order to keep the same precision. Such perturbations are local and small in each mapping; however, as the mapping process goes on, they gradually accumulate and affect the whole dynamics. Consequently, the generated binary sequences (the dynamics of the elements) have statistically good ergodicity, mixing, and chaotic properties. The extended image encryption scheme efficiently shuffles the input gray-level image, making it difficult for a third party to recover the original image from the ciphered data without knowing the proper secret key.
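For readers unfamiliar with the underlying map, the following sketch (assuming NumPy) implements one iteration of a plain discretized Baker transformation as a pixel permutation on an N x N image. The paper's truncated, finite-precision variant, its quantization-error key, and the local rotation operator are not reproduced here.

    # One iteration of a discretized Baker transformation on an N x N image (N even),
    # applied as a pixel permutation. This is only the classical map, not the paper's
    # truncated variant with its quantization-error key.
    import numpy as np

    def baker_map(img):
        n = img.shape[0]
        assert img.shape == (n, n) and n % 2 == 0
        out = np.empty_like(img)
        for y in range(n):
            for x in range(n):
                if x < n // 2:        # left half: stretch along x, squeeze along y
                    out[y // 2, 2 * x + (y % 2)] = img[y, x]
                else:                 # right half: likewise, into the lower rows
                    out[n // 2 + y // 2, 2 * (x - n // 2) + (y % 2)] = img[y, x]
        return out

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    shuffled = img.copy()
    for _ in range(5):                # repeated mapping mixes the pixels globally
        shuffled = baker_map(shuffled)
    # the map is a permutation, so the pixel histogram is preserved
    assert np.array_equal(np.sort(img, axis=None), np.sort(shuffled, axis=None))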
Information hiding using random sequences
Jang-Hwan Kim, Kyu-Tae Kim, Eun-Soo Kim
During the past few years a variety of techniques have emerged to hide specific information within multimedia data for copyright protection, tamper-proofing, and secret communication. The information-hiding schemes proposed so far use either digital signal processing software or hardware, so they inevitably have a problem in applications such as automatic copyright control systems, which need a fast data-extraction scheme. In this paper, we show that the newly proposed optical correlator-based information hiding system has an advantage in that sense. In this scheme it is possible to simultaneously extract all the data hidden in one stego image, and furthermore it is also possible to simultaneously extract all the data hidden in several stego images, using optical correlators such as the matched spatial filter and the joint transform correlator.
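The digital sketch below illustrates the operation such an optical correlator performs: a random-sequence mark hidden additively in an image is detected by FFT-based cross-correlation (a matched-filter stand-in). Embedding strength and sizes are illustrative assumptions, not the authors' system.

    # Detecting an additively hidden random-sequence mark by FFT-based cross-
    # correlation, a digital stand-in for the matched spatial filter used optically.
    import numpy as np

    rng = np.random.default_rng(2)
    cover = rng.normal(size=(128, 128))               # stand-in for a cover image
    mark = rng.choice([-1.0, 1.0], size=(128, 128))   # secret random-sequence mark
    stego = cover + 0.1 * mark                        # additively hidden information

    def correlate(a, b):
        """Circular cross-correlation via the FFT (what a correlator computes)."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

    print("correlation peak with the mark   :", correlate(stego, mark).max())
    print("correlation peak without the mark:", correlate(cover, mark).max())  # much lower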
Noise and Error in Transmission and Compression
Transmission of digital chaotic and information-bearing signals in optical communication systems
A new proposal for secure communications in a system is reported. The basis is the use of synchronized digital chaotic systems, sending the information signal added to an initial chaos. The received signal is analyzed by another chaos generator located at the receiver and, by a Boolean logic function of the chaotic and received signals, the original information is recovered. One of the most important features of this system is that the bandwidth needed by the system remains the same with and without chaos.
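A toy digital illustration of the recovery principle is sketched below: a receiver holding an identically seeded chaos generator recovers the message bits through a Boolean (here XOR) operation. The logistic map, the key value, and the XOR choice are illustrative assumptions; the optical system and the actual chaotic generator of the paper are not modeled.

    # Toy version of the recovery principle: the receiver regenerates the same chaotic
    # bit stream from a shared initial condition and undoes the mixing with XOR.
    import numpy as np

    def chaotic_bits(x0, n, r=3.99):
        """Binary sequence from a thresholded logistic map (illustrative generator)."""
        x, bits = x0, []
        for _ in range(n):
            x = r * x * (1.0 - x)
            bits.append(1 if x > 0.5 else 0)
        return np.array(bits, dtype=np.uint8)

    rng = np.random.default_rng(3)
    message = rng.integers(0, 2, size=64, dtype=np.uint8)

    key = 0.123456789                                           # shared initial condition
    transmitted = message ^ chaotic_bits(key, message.size)     # information mixed with chaos
    recovered = transmitted ^ chaotic_bits(key, message.size)   # receiver-side chaos undoes it
    assert np.array_equal(recovered, message)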
Unequal error protection for H.263 video over indoor DECT channel
Several techniques have been proposed to limit the effect of error propagation in video sequences coded at a very low bit rate. The best performance is achieved by combined FEC and ARQ coding strategies. However, retransmission of corrupted data frames introduces additional delay, which may be critical either for real-time bidirectional communications or when the round-trip delay of data frames is high. In such cases, only an FEC strategy is feasible. Fully reliable protection of the H.263 stream would produce a significant increase in the overall transmission bit rate. In this paper, an unequal error protection (UEP) FEC coding strategy is proposed. The proposed technique operates by protecting only the most important bits of an H.263-coded video with periodically INTRA-refreshed GOBs. ARQ techniques are not considered, to avoid delays and simplify the receiver structure. Experimental tests are carried out by simulating a video transmission over a DECT channel in an indoor environment. The results, in terms of PSNR and overall bit rate, prove the effectiveness of the proposed UEP FEC coding.
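The sketch below illustrates the unequal-error-protection idea in its simplest form: only the bits flagged as important receive FEC (here a 3x repetition code chosen purely for illustration), and their post-channel error rate is compared with that of the unprotected bits. The paper's actual FEC code, the H.263 syntax partitioning, and the DECT channel model are not reproduced.

    # Unequal error protection in its simplest form: only the important bits get FEC
    # (a 3x repetition code here, purely for illustration); the rest are sent as-is.
    import numpy as np

    rng = np.random.default_rng(4)

    def protect(bits, important):
        return np.concatenate([np.repeat(b, 3) if imp else np.array([b])
                               for b, imp in zip(bits, important)])

    def recover(coded, important):
        out, i = [], 0
        for imp in important:
            if imp:                                   # majority vote over the 3 copies
                out.append(1 if coded[i:i + 3].sum() >= 2 else 0)
                i += 3
            else:
                out.append(coded[i])
                i += 1
        return np.array(out, dtype=np.uint8)

    bits = rng.integers(0, 2, size=1000, dtype=np.uint8)
    important = np.arange(bits.size) % 10 == 0        # say 10% of the stream is critical
    coded = protect(bits, important)
    noisy = coded ^ (rng.random(coded.size) < 0.02).astype(np.uint8)   # 2% bit-flip channel
    errors = recover(noisy, important) != bits
    print("error rate on protected bits  :", float(errors[important].mean()))
    print("error rate on unprotected bits:", float(errors[~important].mean()))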
Image Compression
Method for JPEG standard progressive operation mode definition script construction and evaluation
Julian Minguillon, Jaume Pujol
In this paper we present a novel method for constructing and evaluating JPEG standard progressive operation mode definition scripts. Our method allows the user to construct and evaluate several definition scripts, at reduced cost, without having to encode and decode a given image to test their validity.
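As background, a progressive-mode definition script in the JPEG standard is essentially a list of scan specifications giving the participating components, the spectral band (Ss..Se), and the successive-approximation bit positions (Ah, Al). The sketch below represents such a script as a data structure with a few of the standard's consistency checks; the paper's construction and evaluation method itself is not reproduced, and the example script is an illustrative fragment (chroma AC scans omitted).

    # A progressive-mode definition script as a list of scan specifications, with a
    # few of the consistency checks the JPEG standard imposes.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Scan:
        components: List[int]   # component ids taking part in this scan
        ss: int                 # first DCT coefficient of the spectral band
        se: int                 # last DCT coefficient of the spectral band
        ah: int                 # successive-approximation high bit position
        al: int                 # successive-approximation low bit position

    def check_script(scans: List[Scan]) -> None:
        for s in scans:
            assert 0 <= s.ss <= s.se <= 63, "spectral band must lie within 0..63"
            if s.ss == 0:
                assert s.se == 0, "DC and AC coefficients cannot share a progressive scan"
            else:
                assert len(s.components) == 1, "an AC scan carries a single component"

    script = [                                         # spectral selection + refinement
        Scan([0, 1, 2], ss=0, se=0,  ah=0, al=1),      # DC of all components, first pass
        Scan([0],       ss=1, se=5,  ah=0, al=2),      # low-frequency AC of component 0
        Scan([0],       ss=6, se=63, ah=0, al=2),      # remaining AC of component 0
        Scan([0, 1, 2], ss=0, se=0,  ah=1, al=0),      # DC refinement pass
        Scan([0],       ss=1, se=5,  ah=2, al=1),      # AC refinement, low band
        Scan([0],       ss=6, se=63, ah=2, al=1),      # AC refinement, high band
    ]
    check_script(script)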
EBLAST: efficient high-compression image transformation: I. Background and theory
In this paper, the first of a two-part series, a high-compression image transformation (called EBLAST, for Enhanced Blurring, Local Averaging, and Thresholding) is presented that facilitates image transmission along low-bandwidth channels such as acoustic modems, and exhibits low mean-squared error with spatially uniform reconstruction error. Although initially applied to tactical communication problems, this innovative transform can be used in commercial applications such as videotelephony. EBLAST's compression ratio (CR), which usually ranges from 100:1 to as high as 250:1 on underwater imagery, can in some cases be increased by follow-on Huffman encoding to yield a CR approaching 300:1 with no additional information loss. Additionally, in previous work the authors showed that restriction of the source image to typical targets of interest, followed by compression, could increase the CR to 20,000:1 or greater. In such cases, background information is characterized, then discarded prior to compression. In the decompression step, the background is approximately reconstructed from statistical parameters, which can provide verisimilitude for human target cueing applications.
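The sketch below strings together generic versions of the three operations EBLAST is named for (blurring, local averaging, thresholding) to show where the compression comes from; it is only an illustration of those steps under assumed parameters, not the published EBLAST transform.

    # Generic blur -> blockwise local averaging -> threshold pipeline, illustrating the
    # operations EBLAST is named for; this is not the published transform.
    import numpy as np

    def blur(img, k=3):
        """Separable box blur (moving average along rows, then columns)."""
        kern = np.ones(k) / k
        img = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, img)

    def block_average(img, b=8):
        """Replace each b x b block by its mean (the source of the compression)."""
        h, w = img.shape
        return img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).mean(axis=(1, 3))

    def threshold(blocks, t):
        """Zero out block means below a threshold."""
        return np.where(blocks >= t, blocks, 0.0)

    rng = np.random.default_rng(5)
    img = rng.random((128, 128))
    coded = threshold(block_average(blur(img), b=8), t=0.4)
    print("input samples:", img.size, " coded samples:", coded.size)   # 64:1 here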
Trends in lossless image compression: adaptive vs. classified prediction and context modeling for entropy coding
This paper discusses the most recent trends in the reversible intraframe compression of grayscale images. With reference to a spatial DPCM scheme, prediction, either linear or nonlinear, may be accomplished in a space-varying fashion following two main strategies: adaptive, i.e., with predictors recalculated at each pixel position, and classified, in which image blocks or pixels are preliminarily labeled into a number of statistical classes, for which optimum MMSE predictors are calculated. A trade-off between the above two strategies is proposed. It relies on a classified linear-regression prediction obtained through fuzzy techniques, followed by context-based modeling of the resulting prediction errors, to enhance entropy coding. The present scheme is a reworking of a fuzzy encoder previously presented by the authors. Now, predictors, instead of pixel intensity patterns, are fuzzy-clustered to find optimized MMSE prediction classes, and a novel membership function measuring the fitness of prediction is adopted. A thorough performance comparison with the most advanced methods in the literature highlights advantages, and drawbacks as well, of the fuzzy approach.
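For orientation, the sketch below shows the plain spatial-DPCM baseline such schemes improve on: a single fixed linear predictor and the empirical entropy of its prediction errors. The fuzzy-clustered classified predictors and the context modeling of the paper are not reproduced; the synthetic test image is an assumption.

    # Spatial DPCM with one fixed linear ("plane") predictor and the zeroth-order
    # entropy of its prediction errors.
    import numpy as np

    def prediction_errors(img):
        img = img.astype(np.int32)
        cur, west = img[1:, 1:], img[1:, :-1]
        north, nw = img[:-1, 1:], img[:-1, :-1]
        return cur - (west + north - nw)          # classic plane predictor W + N - NW

    def entropy(values):
        """Empirical zeroth-order entropy in bits per sample."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(6)
    x, y = np.meshgrid(np.arange(256), np.arange(256))
    img = np.clip((x + y) // 2 + rng.integers(0, 4, size=(256, 256)), 0, 255)

    print("entropy of raw pixels       :", entropy(img), "bits/pixel")
    print("entropy of prediction errors:", entropy(prediction_errors(img)), "bits/pixel")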
Algorithm Mapping and Evaluation
Mapping of image compression transforms to reconfigurable processors: simulation and analysis
This paper summarizes techniques for mapping blockwise compression transforms such as EBLAST, JPEG, Visual Pattern Image Coding, and vector quantization to reconfigurable architectures such as field-programmable gate arrays. The distinguishing feature of this study is that computational precision is varied among different operations within a given transform. For example, we found that a four-bit block sum is sufficient to compute EBLAST accurately using encoding blocks as large as 10 × 10 pixels. In contrast, image-template convolutions constituent to EBLAST compression and decompression require seven- to eight-bit fixed-point operations.
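The kind of precision study described above can be mimicked in software, as in the sketch below: a blockwise quantity (here 10 x 10 block means) is quantized to several fixed-point word lengths and compared against full precision. Bit widths, block size, and test data are illustrative and do not reproduce the paper's FPGA results.

    # Trading fixed-point word length against numerical error for a blockwise quantity.
    import numpy as np

    def quantize(x, bits):
        """Uniform fixed-point quantization of values in [0, 1) to `bits` bits."""
        levels = 2 ** bits
        return np.floor(x * levels) / levels

    rng = np.random.default_rng(7)
    img = rng.random((100, 100))                              # values in [0, 1)
    blocks = img.reshape(10, 10, 10, 10).mean(axis=(1, 3))    # 10 x 10 block means

    for bits in (4, 6, 8, 12):
        err = np.abs(quantize(blocks, bits) - blocks).max()
        print(f"{bits:2d}-bit block means: max abs error = {err:.5f}")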
Performance analysis of tabular nearest-neighbor encoding for joint image compression and ATR: I. Background and theory
In this series of two papers, a high-level overview of the Tabular Nearest Neighbor Encoding (TNE) algorithm is presented. The performance of TNE is analyzed using training images having different size, statistical properties, and noise level than the source image. TNE is compared with several published algorithms such as visual pattern image coding, JPEG, and EBLAST. The latter is a relatively new, high-compression image transform whose compression ratio (CR) of approximately 200:1 can be consistently achieved with low MSE. Analysis focuses on the ability of TNE to provide low to moderate compression ratios at high computational efficiency on small- to large-format text and surveillance images.
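The sketch below shows only the generic operation the name refers to: encoding source-image blocks by the index of their nearest neighbor in a table drawn from a training image, with decompression as a table lookup. It is not the published TNE algorithm, and the block size and image sizes are illustrative.

    # Generic nearest-neighbour block encoding against a table drawn from a training
    # image; decompression is a table lookup.
    import numpy as np

    def to_blocks(img, b):
        h, w = img.shape
        return (img[:h - h % b, :w - w % b]
                .reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b))

    rng = np.random.default_rng(8)
    table = to_blocks(rng.random((64, 64)), b=4)        # table from a training image
    blocks = to_blocks(rng.random((64, 64)), b=4)       # blocks of the source image

    d2 = ((blocks[:, None, :] - table[None, :, :]) ** 2).sum(axis=2)
    indices = d2.argmin(axis=1)                         # compressed representation
    decoded = table[indices]                            # decompression: table lookup
    print("blocks:", len(blocks), " table size:", len(table),
          " MSE:", float(((decoded - blocks) ** 2).mean()))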
MTF as a quality measure for compressed images transmitted over computer networks
Ofer Hadar, Adrian Stern, Merav Huber, et al.
One result of the recent advances in different components of imaging systems technology is that these systems have become more resolution-limited and less noise-limited. The most useful tool for characterizing resolution-limited systems is the Modulation Transfer Function (MTF). The goal of this work is to use the MTF as an image quality measure for images compressed with the JPEG (Joint Photographic Experts Group) algorithm and for MPEG (Moving Picture Experts Group) compressed video streams transmitted through a lossy packet network. Although we realize that the MTF is not an ideal parameter with which to measure image quality after compression and transmission, because these are not linear shift-invariant processes, we examine the conditions under which it can be used as an approximate criterion for image quality. The advantage of using the MTF of the compression algorithm is that it can be easily combined with the overall MTF of the imaging system.
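A toy version of such an MTF measurement is sketched below: the MTF is estimated as the ratio of output to input modulation of sinusoidal test patterns, with a Gaussian blur standing in for the compression and transmission chain (which is not modeled here).

    # Estimating an MTF as the ratio of output to input modulation of sinusoidal test
    # patterns; a Gaussian blur stands in for the compression/transmission chain.
    import numpy as np

    def modulation(signal):
        """Michelson contrast: (max - min) / (max + min)."""
        return (signal.max() - signal.min()) / (signal.max() + signal.min())

    def degrade(signal, sigma=2.0):
        """Stand-in degradation: Gaussian blur along the line."""
        x = np.arange(-4 * sigma, 4 * sigma + 1)
        kern = np.exp(-x ** 2 / (2 * sigma ** 2))
        return np.convolve(signal, kern / kern.sum(), mode="same")

    n = 1024
    for cycles in (4, 16, 32, 64):
        pattern = 0.5 + 0.4 * np.sin(2 * np.pi * cycles * np.arange(n) / n)
        mtf = modulation(degrade(pattern)) / modulation(pattern)
        print(f"{cycles:3d} cycles/line: MTF = {mtf:.3f}")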
Comparison of wavelet and Karhunen-Loeve transforms in video compression applications
Yurij S. Musatenko, Olexandr M. Soloveyko, Vitalij N. Kurashov
In the paper we present a comparison of three advanced techniques for video compression: 3D Embedded Zerotree Wavelet (EZW) coding, the recently suggested Optimal Image Coding using the Karhunen-Loeve (KL) transform (OICKL), and a new video compression algorithm based on the 3D EZW coding scheme but using the KL transform for frame decorrelation (3D-EZWKL). It is shown that the OICKL technique provides the best performance, and that using the KL transform with the 3D-EZW coding scheme gives better results than the 3D-EZW algorithm alone.
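The sketch below isolates the ingredient shared by OICKL and 3D-EZWKL: decorrelating a short frame sequence with a KL transform along the temporal axis. The EZW coding stage and the actual test sequences are not shown; the synthetic frames are an assumption.

    # Decorrelating a short frame sequence with a KL transform along the temporal axis.
    import numpy as np

    rng = np.random.default_rng(9)
    T, H, W = 8, 64, 64
    base = rng.random((H, W))
    frames = np.stack([base + 0.05 * rng.random((H, W)) for _ in range(T)])

    X = frames.reshape(T, -1)                       # one row per frame
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]                   # T x T inter-frame covariance
    _, eigvecs = np.linalg.eigh(cov)

    coeffs = eigvecs.T @ Xc                         # KL-transformed (decorrelated) frames
    energy = (coeffs ** 2).sum(axis=1)
    print("energy fraction in the strongest KL frame:", float(energy.max() / energy.sum()))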
Noise and Error in Transmission and Compression
Results using an alternative approach to channel equalization using a pattern classification strategy
Frank M. Caimi, Gamal A. Hassan
This paper describes the approach and preliminary results obtained from the use of pattern recognition techniques to select a set of 'best choice' equalizer coefficients and to decode a signal sequence directly. The method does not rely on the application of any adaptive algorithm for estimation of the equalizer coefficients during the actual data transmission or reception. Expectations are that performance benefits may be gained in those cases where adaptive algorithms fail to select the optimal filter coefficients due to computational complexity or other factors.
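A minimal pattern-classification decoder in this spirit is sketched below: short windows of received samples are classified against per-bit class means learned from a training sequence, with no adaptive equalization during reception. The channel taps, window length, and nearest-mean classifier are illustrative assumptions, not the authors' method.

    # Decoding bits from a mildly dispersive channel by nearest-mean classification of
    # short received-sample windows, with no adaptive equalization during reception.
    import numpy as np

    rng = np.random.default_rng(11)
    channel = np.array([1.0, 0.3, 0.1])             # toy inter-symbol interference
    L = 3                                           # classification window length

    def transmit(bits):
        symbols = 2.0 * bits - 1.0                  # BPSK mapping
        return np.convolve(symbols, channel)[:len(bits)] + 0.05 * rng.normal(size=len(bits))

    def windows(rx):
        return np.stack([rx[i:i + L] for i in range(len(rx) - L + 1)])

    train_bits = rng.integers(0, 2, size=2000)
    W = windows(transmit(train_bits))
    labels = train_bits[:len(W)]                    # each window is labeled by its first bit
    centroids = np.stack([W[labels == b].mean(axis=0) for b in (0, 1)])

    test_bits = rng.integers(0, 2, size=2000)
    Wt = windows(transmit(test_bits))
    decided = ((Wt[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    print("bit error rate:", float((decided != test_bits[:len(Wt)]).mean()))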
Algorithm Mapping and Evaluation
Performance analysis of tabular nearest-neighbor encoding for joint image compression and ATR: II. Results and analysis
Vector quantization (VQ) is a well-established signal and image compression transform that exhibits several drawbacks. First, the VQ codebook generation process tends to be computationally costly, and can be prohibitive for high- fidelity compression in adaptive real-time applications. Second, codebook search complexity varies as a function of image statistics, codebook formation technique, and prespecified matching error. For large codebooks, search overhead can be prohibitive for VQ compression having stringent constraints on matching error. A third disadvantage of VQ is codebook size, which can be reduced at the cost of fidelity of reproduction in the decompressed image. Such issues were discussed in Part 1 of this series of two papers.
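To make the codebook-generation cost concrete, the sketch below builds a small VQ codebook with k-means (the core of the generalized Lloyd/LBG procedure) and encodes image blocks against it; the exhaustive nearest-neighbor search in both steps is the source of the complexity discussed above. Sizes are illustrative.

    # A small VQ codebook built with k-means (the core of the generalized Lloyd / LBG
    # procedure) and blockwise encoding against it.
    import numpy as np

    def to_blocks(img, b=4):
        h, w = img.shape
        return (img[:h - h % b, :w - w % b]
                .reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b))

    def nearest(blocks, code):
        return ((blocks[:, None, :] - code[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)

    def kmeans_codebook(blocks, k=32, iters=10, seed=0):
        rng = np.random.default_rng(seed)
        code = blocks[rng.choice(len(blocks), size=k, replace=False)].copy()
        for _ in range(iters):
            labels = nearest(blocks, code)            # the costly exhaustive search
            for j in range(k):                        # move codewords to their cell means
                members = blocks[labels == j]
                if len(members):
                    code[j] = members.mean(axis=0)
        return code

    rng = np.random.default_rng(10)
    blocks = to_blocks(rng.random((64, 64)))
    codebook = kmeans_codebook(blocks)
    labels = nearest(blocks, codebook)
    print("codebook:", len(codebook), " blocks:", len(blocks),
          " MSE:", float(((codebook[labels] - blocks) ** 2).mean()))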