Proceedings Volume 0207

Applications of Digital Image Processing III


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 28 December 1979
Contents: 1 Session, 41 Papers, 0 Presentations
Conference: 23rd Annual Technical Symposium 1979
Volume Number: 0207

Table of Contents

All Papers
Iterative Method Applied To Image Reconstruction And To Computer-Generated Holograms
J. R. Fienup
This paper discusses an iterative computer method that can be used to solve a number of problems in optics. This method can be applied to two types of problems: (1) synthesis of a Fourier transform pair having desirable properties in both domains, and (2) reconstruction of an object when only partial information is available in any one domain. Illustrating the first type of problem, the method is applied to spectrum shaping for computer-generated holograms to reduce quantization noise. A problem of the second type is the reconstruction of astronomical objects from stellar speckle interferometer data. The solution of the latter problem will allow a great increase in resolution over what is ordinarily obtainable through a large telescope limited by atmospheric turbulence. Experimental results are shown. Other applications are mentioned briefly.
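The iterative method described here belongs to the Gerchberg-Saxton/error-reduction family. As a rough illustration (not the paper's code; the function and variable names below are mine), one can alternate between imposing a measured Fourier magnitude and object-domain constraints such as a known support and non-negativity:

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Fienup-style error-reduction sketch: alternate between enforcing the
    measured Fourier magnitude and the object-domain constraints (support
    mask plus non-negativity)."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_mag.shape) * support        # random start inside support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))     # impose measured magnitude
        g = np.real(np.fft.ifft2(G))
        g = np.where(support & (g > 0), g, 0.0)        # support + non-negativity
    return g
```

A useful property of error reduction is that the Fourier-domain error is non-increasing from iteration to iteration, which the test below checks.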
Image Restoration Using A Norm Of Maximum Information
B. Roy Frieden
Image interpreters often express the desire to extract a "maximum of information" from a given picture. We have devised a new norm of restoration that, in fact, realizes this aim. The image data are forced to contain a maximum of information about the object, through variation of the object estimate. This maximum information (MI) norm restores the ideal object which, had it existed, would have maximized the throughput of information from object to image planes. Or, the object estimate achieves the "channel capacity" of the image-forming medium. The following simple model for image formation is used. The imaging system is regarded as a transducer of photon position, from x in the object plane to y in the image plane. Then the conditional probability p(y|x) is just s(y-x), the PSF for the imagery, plus an unknown noise probability law n(y) independent of x (signal) for those transitions to y that are due to noise. The average information per photon transition x → y may then be calculated, using the correspondence of probability law p(x) with the object and p(y) with the image. When the image law p(y) is constrained to equal the data, the only set of unknowns remaining is the object, which may be varied to maximize the information. Restorations by this method are compared with corresponding ones by maximum entropy and show some advantage over the latter.
Environmental Change Detection In Digitally Registered Aerial Photographs
Werner Frei, Tsutomu Shibata, Gerold C. Huth
Digital image matching permits the analysis of aerial photographs for subtle changes that are not visible to the unaided eye. These changes can be portrayed in pictorial form - a "change image" - which provides a cost-effective early indicator of impending environmental problems. The digital image matching problems encountered in low-altitude aerial photographs are studied here, and examples are shown of this method applied to environmental assessment studies.
Wavefront Sensing By Phase Retrieval
Robert A. Gonsalves, Robert Chidlaw
Phase retrieval implies extraction of a wavefront θ(f) at one spatial plane based on the intensity p(x) in a conjugate plane. For example, θ(f) might be the phase distortion at the entrance pupil of an imaging system when a distant point source is imaged through a turbulent atmosphere; p(x) is the real, non-negative point spread function measured in the image plane. In this paper we describe the mathematics of the technique and show computer simulations.
Linear Regression Formula For Dimensionality Reduction
D. F. Mix, R. A. Jones, F. Rahbar
Linear regression is a powerful procedure with wide areas of application. We show in this paper that a very fruitful application is dimensionality reduction in pattern recognition. The dimensionality reduction is accomplished by preselecting cluster centers in the range and using regression techniques to derive the transformation. Experimental results are presented that compare this procedure to the Karhunen-Loeve procedure for several data sets.
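A minimal sketch of the general idea, under my own assumptions about the setup (each sample's regression target is its class's preselected cluster center in the low-dimensional range; `regression_reduction` is a hypothetical name, not the authors'):

```python
import numpy as np

def regression_reduction(X, labels, centers):
    """Fit W minimizing ||X W - T||^2, where row i of T is the preselected
    low-dimensional cluster center of sample i's class. W then serves as the
    dimensionality-reducing transformation for any new sample."""
    T = centers[labels]                          # (n, k) regression targets
    W, *_ = np.linalg.lstsq(X, T, rcond=None)
    return W                                     # (d, k) projection matrix
```

After fitting, `X @ W` maps d-dimensional samples into the k-dimensional range where the cluster centers live.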
Heuristic Approach For Video Image Enhancement
James K. Chan, Curtis L. May, Andrew M. Chao
Video image enhancement through adaptive noise filtering and edge sharpening is presented. The basic concept behind this technique is that, given some form of image segmentation, noise filtering can be performed in nearly uniform regions and edge sharpening only near edges. The resulting algorithm is nonlinear and adaptive. It adapts globally to the input SNR and locally to the gradient magnitude. Implementation is quite simple. Performance is nonlinear and depends on the SNR of the original image. The effective video signal-to-noise ratio can be improved with minimal observable contouring, degradation in spatial resolution, or other artifacts.
Effect Of Image Aliasing On Image Alignment To Subpixel Accuracy
S. M. Jaffey, M. W. Millman
Image aliasing (undersampling) can cause significant errors in attempts to align two small images to subpixel accuracy. This problem arises in applications such as focal plane stabilization, target detection, angular velocity updates to inertial navigation, and image resolution improvement, in which a small number of detectors is preferable from a cost standpoint. This paper compares the sensitivity of four registration algorithms to a sequence of increasingly aliased images, ranging in size from 8 x 8 to 32 x 32 pixels. The algorithms are: minimum sum of differences on interpolated images (MSD), normalized cross-correlation (NCC), phase correlation (PC), and normalized mean absolute difference (NMAD). The results show that the MSD and NCC methods are least sensitive to aliasing. Attempts to make the NMAD and PC methods more robust against aliasing are also discussed. The main conclusion is that aliasing should be considered when choosing the system modulation transfer function and the number of detectors.
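Of the four algorithms compared, phase correlation is the easiest to sketch. The following is a generic integer-shift phase-correlation routine (not the authors' implementation, which also addresses subpixel interpolation):

```python
import numpy as np

def phase_correlation(a, b):
    """Return (dy, dx) such that np.roll(a, (dy, dx), axis=(0, 1)) best
    matches b, found as the peak of the inverse FFT of the normalized
    cross-power spectrum."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12                      # keep phase only
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                    # wrap large shifts to negative
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Because only the phase of the cross-power spectrum is retained, the correlation surface is a sharp delta at the true displacement for a pure translation.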
Digital And Optical Methods Of Image Differencing For Track Assembly
Robert B. Asher, Leonard Wald, Donald Hines, et al.
The problem of target identification and track assembly from successive image frames from a satellite-based infrared mosaic detector is considered. The wide variety of digital and electronic algorithms for bulk filtering, target identification, and track assembly is described. Optical pattern recognition techniques are also described.
Fan-To Parallel-Beam Conversion In CAT By Rubber Sheet Transformation
G. W. Wecksung, R. P. Kruger, R. A. Morris
A technique for converting fan-beam projections to parallel-beam projections for use in computed tomography is presented. The problem is approached by use of a rubber sheet transformation. Since the data is discretized, an interpolation step is necessary. For densely sampled data this approach appears satisfactory and a significant reduction in photon noise is observable in computer simulations.
Nonlinear Restoration Of Filtered Images With Poisson Noise
C. M. Lo, A. A. Sawchuk
A model for photon resolved low light level image signals detected by a counting array is developed. Those signals are impaired by signal dependent Poisson noise and linear blurring. An optimal restoration filter based on maximizing the a posteriori probability density (MAP) is developed. A suboptimal overlap-save sectioning method using a Newton-Raphson iterative procedure is used for the solution of the high dimensionality nonlinear estimation equations for any type of space-variant and invariant linear blur. An accurate image model with a nonstationary mean and stationary variance is used to provide a priori information for the MAP restoration filter. Finally, a comparison between the MAP filter and a linear space-invariant minimum mean-square error (LMMSE) filter is made.
Speedup Of Radiographic Inspection Algorithms Using An Array Processor
W. G. Eppler, O. Firschein, G. McCulley, et al.
Algorithms for the automatic detection of defects, such as cracks and cavities in radiographs of artillery shells have been previously described. An array processor mechanization of these algorithms is now described that allows a 200 x 300 pixel array to be analyzed in less than 10 seconds. The algorithms were restructured to take advantage of the vector orientation of the array processor.
Real-Time Digital Image Filtering And Shading Correction
Roger C. Dahlberg
A system has been designed to filter a television image in real-time or near real-time for the purpose of enhancing high spatial frequencies and attenuating low spatial frequencies. The system makes innovative use of a two-dimensional linear filter, coded memories, and high speed digital multipliers to provide powerful linear and homomorphic image filtration. The architecture is useful in correcting and compensating linear and multiplicative shading, a common artifact in television images. The filter can also be used to extract the very low spatial frequencies generally associated with illumination. Real time filtration helps overcome the dynamic range limitations of most television systems by redistributing the image power spectrum for optimum viewing of edge information.
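The homomorphic part of such a filter is straightforward to sketch in software (the system described here is hardware, of course; the Gaussian high-pass shape and cutoff below are my illustrative assumptions): the logarithm converts multiplicative shading into an additive low-frequency term, a frequency-domain high-pass suppresses it, and the exponential returns to the intensity domain.

```python
import numpy as np

def homomorphic_highpass(img, cutoff=4.0, eps=1e-6):
    """Homomorphic filtering sketch: log -> frequency-domain Gaussian
    high-pass -> exp, suppressing multiplicative low-frequency shading."""
    L = np.log(img + eps)
    F = np.fft.fft2(L)
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None] * ny       # integer spatial frequencies
    fx = np.fft.fftfreq(nx)[None, :] * nx
    H = 1.0 - np.exp(-(fy**2 + fx**2) / (2 * cutoff**2))   # high-pass
    return np.exp(np.real(np.fft.ifft2(F * H)))
```

On an image formed as reflectance times a smooth illumination ramp, the filter leaves the high-frequency reflectance pattern while flattening the shading.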
Comparison Of Digital Image Filters And A Hybrid Smoother
H. A. Titus, J. L. Pereira
In the recent past considerable attention has been devoted to the application of Kalman filtering to smoothing out observation noise in image data. Optimal two-dimensional Kalman filtering algorithms require large amounts of storage and computation. Thus, the study of suboptimum estimators that require less computation is of importance. A comparison of some suboptimum image filters against the optimum non-recursive interpolator is accomplished. A new semi-causal (hybrid) filter is proposed that compensates the suboptimality of a simple two-dimensional recursive filter by means of an optimal combination of its estimate and a few non-causal observations.
New Technique For Blind Deconvolution
S. C. Pohlig, J. S. Lim, A. V. Oppenheim, et al.
Frequently an image may be blurred by a point spread function whose details are not known exactly. In such a case it is necessary to estimate the point spread function before deconvolving the blurred image. This paper presents a new technique for estimating a zero phase blurring function when its optical transfer function is smooth. The estimate is obtained by smoothing the spectral magnitude of the image and comparing it to an average magnitude that is also smoothed. The average magnitude is obtained by averaging over an ensemble of similar images. The estimation can be extended to degradations such as a defocused lens by thresholding the estimated magnitude to obtain zero crossings and adjusting the phase accordingly. In particular, this technique can be applied to a circularly symmetric Gaussian or a defocused lens with a circular aperture.
Automatic Change Detection For Synthetic Aperture Radar Images
Janmin Keng
The detection of changes between two images is of interest in a wide range of applications. An important example is side-looking Synthetic Aperture Radar (SAR) imagery taken at different times. A method called Symbolic Matching with Confidence Evaluation is proposed to perform automatic change detection for SAR images. The results of the preliminary experiments have been very promising and will be presented.
Data Handling Recording System
Andrew R. Pirich
A variety of new and improved sensors are evolving from advanced development programs, such as ESSWACS (Electronic Solid-State Wide-Angle Camera System), LOREORS (Long-Range Electro/Optic Reconnaissance System), and second-generation FLIR infrared systems. The time has come to combine these and other sensor capabilities into a tactical reconnaissance operation that includes an effective real-time capability. A general approach to real-time reconnaissance employs several airborne sensors and includes both airborne and ground data management devices and procedures. Automatic (digital) data processing will help minimize the amount of irrelevant data presented to human observers. The human observer represents the final and essential filtering agent required to reduce the information rate to a level suitable for dissemination over data links for rapid (real-time) access by tactical commanders.
Real-Time Image Enhancement Using 3 x 3 Pixel Neighborhood Operator Functions
Joseph E. Hall, James D. Awtrey
A new type of silicon charge coupled device (CCD) imager which provides nine simultaneous video outputs representing a 3 x 3 pixel block that scans the imaging array has been used to emphasize edges and fine detail in various images. The device can also compensate for nonuniform scene illumination. Experimental results indicate that the device can be used to combine real-time analog image processing with subsequent digital processing to form a powerful image acquisition and processing system.
Median Masking Technique For The Enhancement Of Digital Images
Robert T. Gray, Dennis G. McCaughey, Bobby R. Hunt
A nonlinear masking technique has been developed which characterizes digital images by local measures of the median and the median absolute deviation (MAD). Space-variant enhancement is elicited by modifying the local MAD as calculated over a moving window in the original image. The method is found to be effective in edge enhancement and noise cleaning operations.
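A crude software sketch of the idea (window size and gain are my illustrative parameters, not the authors'): scaling each pixel's deviation from the local median scales the local MAD by the same factor, so a gain below one cleans impulsive noise while a gain above one enhances edges and detail.

```python
import numpy as np

def mad_enhance(img, win=3, gain=2.0):
    """Median-masking sketch: replace each pixel by
    local_median + gain * (pixel - local_median), which multiplies the
    local median absolute deviation by `gain`."""
    pad = win // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            med = np.median(p[i:i + win, j:j + win])
            out[i, j] = med + gain * (img[i, j] - med)
    return out
```

Flat regions pass through unchanged (the pixel equals its local median there), so the operation is naturally space-variant.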
The LAE 980: A Multipurpose Digital Image Processing System
P. Colin, A. Hourani, B. Keith, et al.
This paper describes a multi-purpose image-processing system. The system was designed for different applications, for example, medical image processing (thermographic imaging, computer-assisted diagnosis, etc.), remote sensing (multispectral analysis and classification, thermal mapping of rivers, etc.), and electron microscope image processing (T.E.M., noise filtering, pattern recognition, geometrical measurements). The system can be connected on-line to any kind of input and output image peripheral. The peripherals currently used are: TV camera, thermographic camera, flying-spot scanner, flying-spot film recorder, mechanical scanner coupled to an optical processor, refreshed B&W and color displays, graphic tablet, magnetic tape, and disc. The user does not need an in-depth knowledge of the whole system: the IMAGE 4 software package takes over the housekeeping functions and permits easy FORTRAN programming, whereas the LATIN interactive program package enables anyone without computer knowledge to use the system. In the conclusion, a comparison is made with the major image-processing systems and software packages published in the literature. The appendix gives illustrations of the previously mentioned applications.
Fast Median Filter Implementation
Gregory J. Wolfe, James L. Mannos
The Tukey median filter is widely used in image processing for applications ranging from noise reduction to dropped line replacement. However, implementation of the median filter on a general-purpose computer tends to be computationally very time-consuming. This paper describes a new median filter implementation suitable for use on the video-rate "pipeline processors" provided by several commercially-available image display systems. The execution speed of the new implementation is faster than the best software implementations, depending on the median filter window size, by up to an order of magnitude. It is also independent of the image dimensions up to a 512x512 pixel size.
Applications Of Programmable Logic Arrays To Video Rate Image Processing
Mitchell W. Millman
Programmable Logic Arrays (PLA's) are digital electronic devices capable of performing complex logic functions at very high rates - up to 20 million operations per second. They are available in many configurations including several types that can be field programmed using simple and inexpensive equipment. They are thus ideal devices for implementing several types of video rate image processing algorithms, particularly those algorithms that involve a high degree of adaptability or binary decision making. This paper describes the technology and operation of PLA's and details several representative image processing applications, including an adaptive differential signal compression algorithm, a gradient generator, and an edge continuity detector.
Digital Television Parts Measurement System
Eugene V. Price, Rolan Vickery
An interactive teaching program has been developed to allow an operator to define critical measurements in a manufactured part, both at the keyboard and with the joystick, from an image-processing terminal. The digitized input from a television scanner is smoothed and edges are located by routines set up and called from the teaching program. When acceptable results are returned, the parameters are incorporated in an automatic production measurement set referenced to a particular part. New parts can be defined and added to the system with brief training sessions, or repetitive measurements can be made of one type of part for production quality control.
Multiple Window Display Techniques In Computed Tomography
W. J. Nalesnik, J. B. Shields
A simple technique has been developed to simultaneously display regions of a CT image which have large differences in CT numbers, such as lung and soft tissue. The CT image is considered to be the sum of two unimodal distributions of CT numbers and the CT numbers associated with one region are mapped into the other using a simple linear transformation. The significance of this technique is that it permits the entire CT image to be visualized with optimum contrast either on the CT display monitor or on a single photograph. Examples of a body section and a head section are presented.
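A minimal sketch of the mapping (the CT-number ranges below are illustrative guesses, not the paper's values): CT numbers in one unimodal range, e.g. lung, are linearly remapped into the other, e.g. soft tissue, so that a single display window covers both populations with good contrast.

```python
import numpy as np

def dual_window(ct, lung_range=(-1000, -300), tissue_range=(-100, 200)):
    """Map CT numbers in lung_range linearly onto tissue_range; values
    already in the tissue range pass through unchanged."""
    lo_l, hi_l = lung_range
    lo_t, hi_t = tissue_range
    out = ct.astype(float).copy()
    m = (ct >= lo_l) & (ct <= hi_l)
    out[m] = lo_t + (ct[m] - lo_l) * (hi_t - lo_t) / (hi_l - lo_l)
    return out
```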
Precision Computer Display Techniques In Nuclear Medicine
Brent S. Baxter
Providing the clinician with an accurate visual representation of digital nuclear medicine data requires 1) interpolation to fill in the intensity field between grid points, 2) correction for grayscale nonlinearities inherent in the display and film, and 3) sufficiently fine graylevel resolution to avoid generating artificial contours. Results from preliminary experiments using a precision computer display/film system have been encouraging, indicating improved image interpretation is frequently possible compared with both conventional analog scintigrams and commonly available computer displays. A wider range of count rate data was visible in the digital images, giving better identification of low count rate areas; display artifacts due to regularly spaced data samples were eliminated, as were contour artifacts caused by too few graylevels; and the relevant anatomy and/or pathology was frequently demonstrated with greater clarity. Clinical examples will be presented which illustrate the benefits to be gained by using these techniques.
Global Local Edge Coincidence Segmentation For Medical Images
J. J. Hwang, C. C. Lee, E. L. Hall
Object location in computed tomography images is a preliminary step required for automated measurements which may be useful in many diagnostic procedures. Most object-location image processing techniques are either globally based, such as histogram segmentation, or locally based, such as edge detection. The method described in this paper uses both local and global information for object location. The technique has been applied to the location of suspected tumors in CT lung and brain images. Sorting and merging steps are required for eliminating noise regions, but all suspected tumor regions have been located. Measurements such as boundary roughness or density statistics may also be made on the objects and used to identify suspicious regions for further study by the radiologists. Algorithms for chain-encoding the object boundaries and locating the vertices on the boundaries are also presented and compared. These methods are useful for shape analysis of the regions. The significance of this technique is that it demonstrates important additional capability which could be added to the software libraries of most CT systems.
Imaging Techniques Implemented On Pipeline Minis And An Array Processor With Application To Nerve Fiber Counting
Harold G. Rutherford, Gary K. Frykman, S. Andrew Yakush, et al.
A pipeline approach to processing digitized picture data using multiple minicomputers and an array processor is presented. Picture size can be up to 512 by 512 pixels. The implementation of many heuristic algorithms - such as edge detection, edge enhancement, convolution/correlation with a template, peak detection, fast Fourier transforms, filtering, summing two pictures, registration, histogram equalization, thresholding, and object counting - is described, and an application is made to a nerve fiber counting project. The goals of the medical project are to provide data to be used in determining optimal surgical repair of injured or severed nerves, as well as the time scale and percent recovery of function following surgical repair. A simple operating system is described that invokes specific routines in the needed order in designated processors. Three approaches are discussed to the problem of image enhancement, pattern recognition, and display of a picture consisting of multiple scenes or of an object captured in the form of multiple "slices" through the object.
Computer-Controlled Video Subtraction Procedures For Radiology
G. W. Seeley, M. P. Capp, H. D. Fisher, et al.
Over the past five years, our group at the Arizona Health Sciences Center has been developing a system for photoelectronic radiology. One of the projects in which we are involved is intravenous angiography, which Dr. Paul Capp reported on in Session 2 of Recent and Future Developments in Medical Imaging II. The purpose of this paper is to show some of the procedures of manipulation and measurements that have been developed to obtain better subtracted images.
Applications Of Digital Processing In Computed Radiography
S. R. Amtey, Charles Morgan, Gerald Cohen, et al.
Computed radiography (CR) is a recent development in diagnostic radiology which yields digital radiographs. Digital image enhancement of CR images in the form of smoothing the noise and enhancing the edges of anatomic boundaries has been used as a means to aid the physician in extracting clinical information from the radiograph. Details of the smoothing and edge enhancing function are discussed along with potential diagnostic applications.
Boundary-Finding Scheme And The Problem Of Complex Object Localization In Computed Tomography Images
Peter G. Selfridge, Judith M. S. Prewitt
The problem of intelligent image processing by computer, especially the processing of medical images such as computed tomography scans, is examined in light of current image segmentation techniques. It is concluded that part of the problem lies in the lack of knowledge about how to guide low-level processes from higher-level goals. An iterative boundary-finding scheme is presented which may aid in this guidance, and results from using specific criteria in the general framework to locate kidneys in abdominal computed tomography scans are presented and discussed. The problem of complex object localization in images is discussed, and some avenues for further research are indicated.
Reduction Of Edge Position Uncertainty On Computed Tomographic (CT) Scans
Joseph A. Horton, Charles Kerber, John M. Herron, et al.
The perception of edges on computed tomographic (CT) scans appears easy but in fact is difficult. Such perception is important because it is necessary to make quantitative determinations. Diagnosis of such entities as spinal stenosis (narrowing of the spinal canal with encroachment on spinal cord and nerve roots) hinges upon an accurate knowledge of cross-sectional areas.
Dual-Mode Hybrid Compressor For Facsimile Images
Wen-Hsiung Chen, John L. Douglas, William K. Pratt, et al.
A dual mode facsimile data compression technique, called Combined Symbol Matching (CSM), is presented. The CSM technique possesses the advantages of symbol recognition and extended run-length coding methods. In operation, a symbol blocking operator isolates valid alphanumeric characters and document symbols. The first symbol encountered is placed in a library, and as each new symbol is detected, it is compared with each entry of the library. If the comparison is within a tolerance, the library identification code is transmitted along with the symbol location coordinates. Otherwise, the new symbol is placed in the library and its binary pattern is transmitted. A scoring system determines which elements of the library are to be replaced by new prototypes once the library is filled. Non-isolated symbols are left behind as a residue, and are coded by a two-dimensional run-length coding method. Simulation results are presented for CCITT standard documents. With text-predominant documents, the CSM compression ratio exceeds that obtained with the best run-length coding techniques by a factor of two or more, and is comparable for graphics-predominant documents.
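The residue coding stage relies on run-length coding. A minimal one-dimensional sketch for a single binary scan line (the paper's coder is two-dimensional and more elaborate), using the classic facsimile convention of starting with a white (0) run:

```python
def rle_encode(row):
    """Encode one binary scan line as a list of run lengths, beginning
    with a (possibly zero-length) white run."""
    runs, cur, n = [], 0, 0
    for bit in row:
        if bit == cur:
            n += 1
        else:
            runs.append(n)
            cur, n = bit, 1
    runs.append(n)
    return runs

def rle_decode(runs):
    """Invert rle_encode: expand run lengths back to a bit list."""
    out, cur = [], 0
    for n in runs:
        out.extend([cur] * n)
        cur ^= 1
    return out
```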
Adaptive Hybrid Coding Of Images
Ali Habibi
In the hybrid coding technique, the sampled data are divided into blocks of N x M samples. Next, each block is transformed to generate a one-dimensional transform of each line in the block. The transform coefficients are then processed by a block of DPCM encoders which decorrelate the data in the second dimension and quantize the decorrelated samples using appropriate quantizers. In this study an adaptive hybrid coding technique is proposed based on using a single quantizer (A/D converter) to quantize the transform coefficients and a variable-rate algorithm for coding the quantized coefficients. The accuracy of the A/D converter (number of bits per sample) determines the fidelity of the system. The buffer-control algorithm controls the accuracy of the A/D converter for each block, resulting in a fixed-rate encoder system. Experimental results have shown a stable buffer condition and reconstructed images with a higher fidelity than nonadaptive hybrid systems.
Walsh-Hadamard Transform/Differential Pulse Code Modulation (WHT/DPCM) Processing Of Intraframe Color Video
E. S. Kim, K. R. Rao
Hybrid processing of NTSC color images for achieving bandwidth compression is simulated. The processing involves combination of transform and predictive coding of intraframe video. Walsh-Hadamard transform (WHT) along each row followed by differential pulse code modulation (DPCM) along each column is implemented in (16 x 16) and (32 x 32) blocks. Also 2d-WHT of (4 x 4) blocks together with prediction of adjacent blocks is investigated. Based on the histogram of the difference signal, quantizers are optimized for minimum mean square error between the original and processed images. Variable bit allocation reflecting the variance of the error signal is adopted for maintaining a specified bit rate. The processing schemes are evaluated in terms of both subjective and objective criteria.
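The row-transform/column-DPCM structure common to this paper and the preceding one can be sketched as follows (the quantizer is omitted, so this toy chain is lossless; in the real scheme the quantizer sits between the two stages):

```python
import numpy as np

def wht(n):
    """Orthonormal n x n Walsh-Hadamard matrix (Sylvester construction,
    n a power of two). Symmetric and orthonormal, so it is its own inverse."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def hybrid_encode(block):
    """1-D WHT along each row, then DPCM (first differences) down each
    column of coefficients."""
    coef = block @ wht(block.shape[1])
    return np.diff(coef, axis=0, prepend=np.zeros((1, coef.shape[1])))

def hybrid_decode(diff):
    """Undo the DPCM with a cumulative sum, then invert the row WHT."""
    coef = np.cumsum(diff, axis=0)
    return coef @ wht(diff.shape[1])
```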
Practical Universal Noiseless Coding
Robert F. Rice
Discrete data sources arising from practical problems are generally characterized by only partially known and varying statistics. This paper provides the development and analysis of some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources. Specifically, algorithms are developed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms is obtained because most real world problems can be simply transformed into this form by appropriate preprocessing.
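Codes of the Golomb-Rice type are a standard concrete instance of this family of coders for sources with a known probability ordering (the paper's algorithms differ in detail); a minimal Rice coder with parameter k splits each non-negative value into a unary quotient and k binary remainder bits:

```python
def rice_encode(values, k):
    """Rice coding sketch: for each value, emit the quotient v >> k in
    unary (1s terminated by a 0), then the low k bits of v in binary."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                        # unary quotient
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]
    return bits

def rice_decode(bits, k):
    """Invert rice_encode."""
    values, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == 1:
            q += 1; i += 1
        i += 1                                       # skip terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[i]; i += 1
        values.append((q << k) | r)
    return values
```

Small values (the most probable symbols after reordering) get short codewords, which is what makes such codes efficient when only the probability ordering is known.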
Conditional Replenishment Using Motion Prediction
David N. Hein, Harry W. Jones Jr.
Conditional replenishment is an interframe video compression method that uses correlation in time to reduce video transmission rates. This method works by detecting and sending only the changing portions of the image and by having the receiver reuse the video data from the previous frame for the non-changing portions. The amount of compression that can be achieved through this technique depends to a large extent on the rate of change within the image, and can vary from 10 to 1 to less than 2 to 1. An additional 3 to 1 reduction in rate is obtained by the intraframe coding of data blocks using a two-dimensional variable-rate Hadamard transform coder. A further 2 to 1 rate reduction is achieved by using motion prediction. Motion prediction works by measuring the relative displacement of a subpicture from one frame to the next. The subpicture can then be transmitted by sending only the value of the two-dimensional displacement. Computer simulations have demonstrated that data rates of 2 to 4 megabits/second can be achieved while still retaining good fidelity in the image.
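Motion prediction of this kind reduces to block matching. A minimal exhaustive-search sketch (block size, search range, and the sum-of-absolute-differences criterion are illustrative choices, not necessarily the authors'):

```python
import numpy as np

def block_match(prev, cur, top, left, size=8, search=4):
    """Find the displacement (dy, dx), within +/-search pixels, of the
    size x size block at (top, left) in `cur` relative to `prev`, by
    minimizing the sum of absolute differences (SAD)."""
    blk = cur[top:top + size, left:left + size]
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > prev.shape[0] or x + size > prev.shape[1]:
                continue                              # candidate falls outside frame
            sad = np.abs(prev[y:y + size, x:x + size] - blk).sum()
            if sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv
```

Transmitting only `(dy, dx)` per subpicture, instead of the pixels themselves, is where the extra rate reduction comes from.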
Bandwidth Compression: Its Effect On Observer Performance
Joseph E. Swistak
A study was conducted which examined the effects of simultaneous spatial and temporal bandwidth compression on observer detection and recognition of military targets. Five levels of temporal (frame rate) and four levels of spatial (bits per pixel) reduction were co-varied in a factorially designed experiment. Of special interest was any interaction effect between the two main variables. A total of 48 observers were divided into four groups of 12. Each group was presented a single spatial reduction level at all five temporal reduction levels. Statistical analysis revealed no significant differences in subjects' detection or recognition performance due to changes in the temporal rate at which information was presented. Changes in the spatial levels (resolution) did have a significant effect on both detection and recognition performance. Although significant differences in subject performance were noted due to the interaction of the two main variables, in-depth analysis revealed the interaction effect to be anomalous. The single most critical element of bandwidth compression appears to be spatial.
Video Data Processor (VIDAP): A Real-Time Video Data Compressor
Donald J. Spencer
A monolithic digital Hadamard transform device is used to reduce complexity in a real-time video data compression system. The Video Data Processor (VIDAP) system is designed to permit variable frame rates, sampling resolutions, and compression ratios. Versatility is incorporated through a modular design which permits tailoring to meet a wide range of video data link applications. Emphasis has been placed on a low cost, low power design suitable for airborne systems.
Bounded-Error Coding Of Cosine Transformed Images
R. A. Gonsalves, A. Shea
Cosine transform coding captures the major features of an image at bit rates as low as 0.5 bits per pixel (BPP). However, because the coding is done in transform space, spatial edge information is lost and the images appear soft even at 3BPP. Spatial techniques such as DPCM with entropy encoding preserve edges but fail, ungracefully, at about 2BPP. In this paper we combine the two. The reconstruction from transform coding is compared with the original and the spatial error signal is quantized and encoded. The results are compared with conventional DPCM and cosine transform encoding.
Image Coders With Semi-Definite And Definite Decoders
T. G. Marshall Jr.
The special class of convolutional, or nonblock, coding systems which employ both FIR encoders and FIR decoders, or definite decoders, is proposed for real-time processing of sampled data. Fully definite multi-dimensional systems and partially definite systems, i.e., systems definite in some dimensions but not in others, are seen to exist. Since such a coding system is necessarily a multi-channel system, for signals of any dimension, some of the properties of multi-channel systems are considered as well as their applications. It is seen that convolutional coders incorporating definite decoders can do several desirable things normally associated only with block coders, such as noncausal coding; in addition, traditional applications of single-channel convolutional coders, such as linear prediction and estimation, are also possible in multi-channel systems. Color television image coding is seen to be a natural application for multi-channel coding because of the inherent separation of luminance and chrominance into separate channels. An important property of definite coding systems affecting the economy of 2- and 3-dimensional processing systems, which are used for bandwidth compression, is that the decoder uses only the compressed data, thereby significantly reducing the memory requirements for storage of data corresponding to previous lines and frames. Examples are presented of definite systems which separate color signals into their components, noncausal coders, coders which reduce the visibility of noise bursts, and linear predictors with feedback quantizers.
Two-Dimensional Transforms Of The Sampled NTSC Color Video Signal
S. J. Orfanidis, T. G. Marshall
Two-dimensional transforms of the chrominance components of the NTSC color video signal are studied. The effects of interlace and subcarrier modulation on the spatial frequency spectra are treated in detail. A two-dimensional FFT algorithm is proposed and shown to be more efficient than conventional ones.
Reduction Of Quantization Noise In Pulse Code Modulation (PCM) Image Coding
Jae S. Lim
A new technique to reduce the effect of quantization in PCM image coding is presented in this paper. The new technique consists of Roberts' pseudonoise technique followed by a noise reduction system. The technique by Roberts effectively transforms the signal dependent quantization noise to a signal independent additive random noise. The noise reduction system that follows reduces the additive random noise. Some examples are given to illustrate the performance of the new quantization noise reduction system.
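Roberts' pseudonoise technique itself is compact enough to sketch (the noise reduction system that follows it in this paper is omitted): a dither sequence known to both coder and decoder is added before uniform quantization and subtracted afterward, converting the structured, signal-dependent quantization error into signal-independent additive noise.

```python
import numpy as np

def roberts_quantize(img, step, seed=0):
    """Roberts' pseudonoise quantization sketch: add shared uniform dither
    before a uniform quantizer, subtract the same dither at the receiver."""
    rng = np.random.default_rng(seed)          # shared pseudorandom generator
    d = rng.uniform(-step / 2, step / 2, img.shape)
    q = np.round((img + d) / step) * step      # uniform quantizer
    return q - d                               # receiver removes the dither
```

The reconstruction error stays bounded by half the quantizer step, but unlike plain quantization it no longer takes the same value over constant regions, which is why contouring disappears.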