Proceedings Volume 0292

Processing of Images and Data from Optical Sensors

William H. Carter

Volume Details

Date Published: 7 December 1981
Contents: 1 Session, 34 Papers, 0 Presentations
Conference: 25th Annual Technical Symposium 1981
Volume Number: 0292

Table of Contents


All Papers
Partially Coherent Optical Processing Of Images
F. T. S. Yu
A relationship between spatial coherence and the source intensity distribution is presented. Since the required spatial coherence depends upon the image processing operation, a reduced spatial coherence may be used for the processing operation. The advantage of source encoding is that it relaxes the coherence requirement, enabling the processing operation to be carried out with an extended incoherent source. The constraint imposed by the temporal coherence requirement for partially coherent processing is discussed. Experimental demonstrations of image deblurring and image subtraction are also provided.
Optical Processing Of Photographic Images
F. T. S. Yu, J. L. Horner
Recent advances in coherent and incoherent optical information processing systems have brought communication and information theory into use for analyzing their performance. An optical information processing system can be analyzed with many of the same concepts of linear system theory (e.g., spatial impulse response, spatial frequency and spatial domain synthesis, etc.), and the photographic images to be processed can be regarded in the same manner as time signals (e.g., spatial frequency content, spatial amplitude and phase modulation, space-bandwidth product, etc.). Both coherent and incoherent optical processing systems can be treated as linear systems, and the processing operations can generally be described using communication theory concepts. Although coherent optical information processing operations have been used for performing complex amplitude operations, complex processing can also be performed with partially coherent or even white-light illumination. The importance of optical information processing operations, either coherent or incoherent, is due to the basic Fourier transform property of lenses. In this paper, we will discuss mostly the partially coherent systems because they are of more recent interest and possess, we feel, certain advantages over the traditional coherent optical processors. Experimental illustrations of the results are provided. In view of the broad area of optical image processing, we will confine ourselves to a few applications that we consider of general interest.
Intra-Class Infrared (IR) Tank Pattern Recognition Using Synthetic Discriminant Functions (SDFs)
Charles F. Hester, David Casasent
A hyperspace description of a matched spatial filter (MSF) correlator is advanced and used to develop a new MSF that is a synthetic discriminant function (SDF). The use of this filter to recognize objects independent of their aspect is described. Terminal homing recognition of tanks from IR imagery is taken as a case study. Bandpass filtering and a new maximum common information filter synthesis concept and issues such as filter energy are described and used to improve the system's intra-class pattern recognition ability and to overcome the intensity differences present in IR imagery. Experimental verification is also included.
Parameter Extraction By Holographic Filtering
J. L. Horner, H. J. Caulfield
Holographic pattern recognition has been plagued by excessive sensitivity to such parameters as rotation and scale. We show here how to use that sensitivity to make rapid, accurate estimations of those parameters. For the particular case studied, we made signal-magnitude-independent measurements of rotation angles from -15° to +15° using three matched filters at -15°, 0°, and +15° multiplexed onto a single hologram. This led to a worst-case accuracy (standard deviation) of 1.0° for intermediate angles. The composite filter allows the search to be made electronically in the output plane, thus avoiding any moving parts, i.e., object or matched filter.
Optical Reconstruction Of Phase Images
Peter F. Mueller
Incoherently illuminated phase objects produce no visible modulation. Partially coherent optical systems suitable for reconstructing visible images from phase objects are presented. Carrier modulated phase objects produced from bleached silver halide emulsions are described in terms of their complex amplitude transmittance and the retrieval process analyzed by means of Fourier transformations and spatial frequency filtering. A comparison is made between these and Zernike phase contrast methods. Parameters influencing grey scale reproduction and image polarity are discussed. Reflection retrieval of noncarrier modulated phase objects is also treated and experimental results shown for both transmission and reflection retrieval modes.
Novel Image Duplication Technique Utilizing Fourier Optics
Peter F. Mueller, David J. Cronin, George O. Reynolds
In the normal process of duplicating photographic images from original negatives, resolutions of 200 cycles/mm have been measured. We will present a new duplication technique which has the capability of reducing this resolution loss to a few percent at resolutions between 300 and 500 cycles/mm while simultaneously maintaining the continuous-tone quality and dynamic range of the original. The technique creates a modulated phase image duplicate which produces a high-resolution, continuous-tone, speckle-free image when observed through a white-light Fourier optical viewing system. Plastic phase relief images in 70mm formats were made by this technique. Resolution transfers of nearly 100% at 228 cycles/mm were measured in images having excellent continuous-tone quality. The dynamic range of the retrieved images is comparable with that achieved in conventional photographic duplicates. Diffraction efficiencies greater than 20% in one diffracted order of the viewing system were also measured.
Evaluation Of Optical And Digital Image Processing Techniques
Philip S. Considine, Bruce M. Radl
A comparative study of image processing techniques examines optical, hybrid and digital methods. System capabilities are evaluated with emphasis on text processing applications. Results show the strengths of each method. Optical processing is best for wide bandwidth parallel processing. Digital processing is the most versatile and interactive processing technique. Hybrid systems show strong potential dependent upon the development of improved image transducers. A discussion of a variety of image processing hardware accompanies the evaluation.
Iterative Design Of Pupil Functions For Bipolar Incoherent Spatial Filtering
Joseph N. Mait, William T. Rhodes
An iterative algorithm is described for the computer generation of complementary pupil functions to be used in hybrid incoherent optical systems for bipolar spatial filtering. Through repeated transformations between the Fourier and spatial domains, and the application of constraints in both, the pupil functions are constructed subject to the ultimate constraint that the difference of the associated PSFs result in a desired bipolar spatial impulse response. Minimization of component PSF bias is a major concern. The case of bandpass spatial filtering will be discussed in detail and preliminary results presented.
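As a rough illustration of this style of iterative design, the sketch below alternates between the pupil (Fourier) and PSF (spatial) domains, pushing each incoherent PSF toward one non-negative half of the desired bipolar response while bounding the pupil transmittance. It is only a minimal Gerchberg-Saxton-style sketch under assumed constraints (unit-bounded pupils, a bias term taken as the current PSF mean); all names are illustrative and it is not the authors' algorithm.

```python
import numpy as np

def design_pupils(h_desired, n_iter=200, seed=0):
    """Iteratively search for two pupils whose incoherent PSFs differ by a
    desired bipolar impulse response (illustrative sketch; h_desired is a
    square 2-D array)."""
    rng = np.random.default_rng(seed)
    N = h_desired.shape[0]
    h_plus = np.clip(h_desired, 0, None)      # positive part of the bipolar response
    h_minus = np.clip(-h_desired, 0, None)    # negative part
    P1 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    P2 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    for _ in range(n_iter):
        for P, target in ((P1, h_plus), (P2, h_minus)):
            p = np.fft.ifft2(P)               # coherent spread function
            psf = np.abs(p) ** 2              # incoherent (intensity) PSF
            bias = psf.mean()                 # component-PSF bias term (assumption)
            # Spatial-domain constraint: impose the target PSF, keep the phase of p.
            p_new = np.sqrt(np.clip(target + bias, 0, None)) * np.exp(1j * np.angle(p))
            P_new = np.fft.fft2(p_new)
            # Pupil-domain constraint: bounded transmittance |P| <= 1.
            mag = np.abs(P_new)
            P[...] = np.where(mag > 1, P_new / mag, P_new)
    return P1, P2
```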
White-Light Prefiltering For Real-Time Digital Image Transmission Of Still And Moving Color Video Images
G. Eichmann, R. Stirbl
Signal-dependent noise encountered when sampling video signals at low sampling rate, while using an adaptive video delta modulator both as a source encoder and a video digitizer, is combated by using a white-light reflective transform optical preprocessor. While the white-light preprocessor works on black and white images, its main advantage is that it can simultaneously process, using a single source, many color channels. Further, the relative lack of both spatial and temporal coherence aids in reducing speckle and interference noise effects. Experimental results on color video images, that have been preprocessed using a white-light optical transform preprocessor and digitally encoded by sampling with an adaptive delta modulator close to the Nyquist rate, are presented.
Temporally Adaptive Hybrid Optical/Digital Interframe Compression Scheme
H. N. Ito, B. R. Hunt
The feasibility of optical processing for interframe image compression has been demonstrated by a hybrid optical/digital IDPCM/frame-to-frame DPCM interframe compression architecture and its simulation [1]. However, the subjective image quality of the reconstructed 14th frame at 1.5 bits/pixel or 1.75 bits/pixel is not good enough to meet a broadcasting standard. Also, the transmission rates at a frame rate of 30 frames/sec with 256 x 256-line images are 2.9 Mbits/sec and 3.4 Mbits/sec, respectively, which are not quite competitive with the 1.5 Mbits/sec monochrome videotelephone system performance [2]. In this paper, in order to improve the hybrid system performance, another hybrid optical/digital adaptive IDPCM/conditional replenishment interframe compression architecture is proposed and simulated. Simulating 14 sequences of Walter Cronkite images with 256 x 256 lines and 6-bit intensity yields reconstructed images of excellent quality at 1.2 Mbits/sec for a frame rate of 30 frames/sec; the average compression ratio is 18.13 and the average bit rate is 0.67 bits/pixel.
Impulse And Edge Responses In Phase Imagery
J. Ojeda-Castañeda, L. R. Berriel-Valdos
The response of an optical system and its quadratic detector to an impulse in phase is expressed in terms of the amplitude PSF. This formulation is employed, first, to indicate explicitly the conditions under which a phase point is rendered visible and, second, to extend the treatment to phase discontinuities and obtain an edge response in phase imagery.
Progress In Digital Image Processing And Analysis During The 1970s
Azriel Rosenfeld
Highlights of progress in the processing and analysis of digital images during the past ten years are reviewed. The topics include digitization and coding; filtering, enhancement, and restoration; reconstruction from projections; hardware and software; feature detection; matching; segmentation; texture and shape analysis; and pattern recognition and scene analysis.
Digital Image Processing In Europe: Some Highlights
P. Chavel, J. F. Abramatic
Research in digital image processing has been actively carried out in most western European countries over the last ten years or so. A full overview of the work done would not be possible in the framework of this communication, so we shall mainly highlight some results obtained in the most active domains of digital image processing. Among these are image coding, filtering techniques, and image understanding. Applications to such domains as satellite imagery, medical imaging, and image broadcasting will be presented.
Image Processing And Analysis Of Saturn's Rings
Gary M. Yagi, Paul L. Jepsen, Glenn W. Garneau, et al.
Processing of Voyager image data of Saturn's rings at JPL's Image Processing Laboratory is described. A software system to navigate the flight images, to facilitate feature tracking, and to project the rings has been developed. This system has been used to measure ring radii and the velocities of the spoke features in the B-Ring. A projected ring movie to study the development of these spoke features has been generated. Finally, processing to facilitate comparison of the photometric properties of Saturn's rings at various phase angles is described.
Determination Of Planetary Photometric Functions
Joel Mosher, Jean J. Lorre
An essential step in the production of planetary mosaics, and an important scientific experiment in its own right, is the determination and removal of the photometric properties of the atmospheres and surfaces of planetary objects. This paper reviews the work done at the Image Processing Laboratory in this area in support of the Voyager Project.
Processing Infrared Images For Fire Management Applications
John R. Warren, William K. Pratt
The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/sec rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.
Surface Location In Scene Content Analysis
E. L. Hall, J. B. K. Tio, C. A. McPherson, et al.
The purpose of this paper is to describe techniques and algorithms for the location in three dimensions of planar and curved object surfaces using a computer vision approach. Stereo imaging techniques are demonstrated for planar object surface location using automatic segmentation, vertex location, and relational table matching. For curved surfaces, the location of corresponding points is very difficult. However, an example using a grid projection technique for the location of the surface of a curved cup is presented to illustrate a solution. This method consists of first obtaining the perspective transformation matrices from the images, then using these matrices to compute the three-dimensional locations of the grid points on the surface. These techniques may be used in object location for such applications as missile guidance, robotics, and medical diagnosis and treatment.
The Importance Of Being Positive
B. Roy Frieden
The most important advance in image restoration during the past decade was probably the realization that positivity-constrained solutions are a real advance over unconstrained solutions. Surprisingly, the positivity constraint simultaneously reduces spurious oscillation and enhances resolution. This will be shown by a simple graphical argument. The earliest workers in this exotic field realized that by enforcing positivity they were inducing higher-frequency oscillation into their outputs than is even present in the image data. More importantly, these oscillations were real and not artifacts; that is, super-resolution was being produced in real images for the first time. Some examples will be shown. The earliest methods for enforcing positivity were ad hoc, e.g., arbitrarily representing the restoration as the square of a function. Later, positivity was given a firm theoretical basis through the route of "maximum entropy," a concept which originated in the estimation of probability densities. A review of such methods will be given. Of late, positivity has also aided in producing real solutions to the "missing phase problem" of Labeyrie interferometry.
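As a rough illustration of the "square of a function" trick mentioned above, the sketch below writes the restored object as o = z², so any z yields a non-negative restoration, and fits z to blurred data by gradient descent. This is only a minimal sketch under assumed conditions (a known linear blur matrix H, a least-squares data fit, an ad hoc step size); the function and variable names are illustrative and it is not any particular author's method.

```python
import numpy as np

def restore_positive(y, H, n_iter=500, lr=0.5):
    """Least-squares restoration with positivity enforced by the
    parameterization o = z**2 (illustrative sketch only)."""
    # Initialize z from a clipped unconstrained least-squares solution.
    z = np.sqrt(np.clip(np.linalg.lstsq(H, y, rcond=None)[0], 1e-6, None))
    step = lr / (np.linalg.norm(H, 2) ** 2 + 1e-12)  # scale step by ||H||^2
    for _ in range(n_iter):
        o = z ** 2                    # restored object, positive by construction
        r = H @ o - y                 # residual against the blurred data
        grad = 2 * z * (H.T @ r)      # chain rule through o = z**2
        z -= step * grad
    return z ** 2
```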
Multisensor Image Registration: Experimental Verification
Yair Barniv, David Casasent
A general multisensor image processor and image model are described, and experimental verification of their use is provided. The pixel correlation coefficient is shown to be an adequate measure of the statistical independence of the gray levels in a multisensor image pair and to be of use as a guide in selecting various preprocessing operators. Experimental results obtained on various multisensor image data bases verify our theory quite well and have contributed considerable practical insight into the processing of imagery with random contrast reversals and intensity differences.
Real-Time Digital Processing Of Color Bronchoscopic Images
Heinz Hugli, Werner Frei
In this paper we present possibilities for improving conventional and fluorescence bronchoscopic diagnosis by means of digital image processing methods. After describing the particularities of the bronchoscopic imaging system on the one hand and the state of the art in real-time digital image processing on the other, we present a series of methods which improve bronchoscopic diagnosis. These methods are based on real-time processing of the bronchoscopic images in video form. Among them, spatial and temporal filtering have been implemented with good results. For further improvement of bronchoscopy in general and fluorescence bronchoscopy in particular, we propose two other methods: histogram equalization and the generation of an intrinsic fluorescence image.
Clutter Rejection For Infrared Surveillance Sensors
David H. Pollock
Surveillance sensors must operate in a signal environment which is dominated by clutter sources. These sources range from clouds, sun glint, and terrain features to manmade objects such as smoke stacks, furnaces, and other industrial representations. The task of an infrared surveillance sensor is to automatically distinguish objects of interest (targets) within the clutter. These targets may vary from aircraft and missiles to armored vehicles. Processing methods for clutter rejection have ranged from various thresholding techniques to spatial extent criteria to spectral discrimination. Temporal discrimination involves multiple-pulse processing requiring successive looks at the scene. Spatial discrimination includes electrical filtering and measurement of signal characteristics in the focal plane of the sensor. Spectral discrimination utilizes the signature characteristics of targets and clutter to recognize their uniqueness. For best results, all of these discriminants are applied collectively to the surveillance sensor's output through various techniques to filter out and reject the clutter-induced signals.
Simulation Of Clutter Rejection Signal Processing For Mid-Infrared Surveillance Systems
M. S. Longmire, A. F. Milton, E. H. Takken
With most wide field of view surveillance systems, extensive clutter rejection signal processing is needed to suppress false alarms. At long range the objects sought appear to be point sources, and spatial discrimination can be used to reject extended sources (clutter) and to ensure a low false alarm rate. In this work a variety of one-dimensional signal processing techniques have been evaluated using high resolution (0.15 mrad) noise data gathered with a scanning IR data collection sensor operating in the 4.0-4.8 μm spectral band. The recorded noise contains segments with strong clutter from back-lit clouds and segments from uniformly radiant sky with no clutter. These segments were separated in the simulation. Two ordinary bandpass filters and a spatial filter matched by a least-mean-square (LMS) technique to the detector output from a point source were evaluated. A digital computer was used to pass the noise data through algorithms representing the filters and through two types of threshold algorithms, one having a fixed threshold, the other a threshold that adapts to changes in the noise level. Except in one case, the LMS filter performed better than the bandpass filters with both threshold algorithms. No significant degradation in the performance of an adaptive threshold sensor (151 samples long) was observed due to the use of an LMS filter (7 samples long) at a false alarm rate of one in 1.6 x 10 pixels. A one-dimensional LMS filter and adaptive threshold sensor were shown to be an effective clutter rejection combination, at least for sensors with an NEI exceeding 1 x 10^-13 W/cm^2.
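For readers who want the flavor of the two processing stages described above, the sketch below derives short Wiener/LMS-style tap weights matched to an assumed point-source pulse in measured background noise and applies a sliding adaptive threshold. The 7-tap filter length and 151-sample threshold window echo the abstract; everything else (function names, the k-sigma rule, the pulse model) is an illustrative assumption rather than the authors' algorithm.

```python
import numpy as np

def lms_point_filter(pulse, noise, length=7):
    """Tap weights LMS-matched to an expected point-source pulse in the
    measured background noise (sketch; assumes `pulse` holds at least
    `length` samples of the expected detector response)."""
    acf = np.correlate(noise, noise, 'full') / len(noise)
    lags = acf[len(noise) - 1 : len(noise) - 1 + length]   # lags 0..length-1
    R = np.array([[lags[abs(i - j)] for j in range(length)]
                  for i in range(length)])                 # Toeplitz noise matrix
    return np.linalg.solve(R, pulse[:length])              # Wiener-style solution

def adaptive_threshold_detect(signal, window=151, k=5.0):
    """Flag samples exceeding k times a locally estimated noise level."""
    pad = window // 2
    padded = np.pad(signal, pad, mode='reflect')
    local_std = np.array([padded[i:i + window].std() for i in range(len(signal))])
    return signal > k * local_std
```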
Signal Processing For Staring Infrared Images
K. Chow, J. P. Rode
The signal from a staring infrared imager, initially in the charge domain, is characterized by low contrast and nonuniformities. To achieve high performance, the signal processor should have low noise and operate with background suppression. Fixed pattern noise compensation must also be available. Low-noise multiplexing can be achieved with charge coupled devices (CCDs). With a hybrid focal plane, background suppression at the Si CCD input can be achieved with charge partitioning, charge skimming, ac-coupling, or fill/spill gain reduction. Fixed pattern noises arising from detector variations, CCD input threshold variations, the charge transfer process, and other sources must be compensated electronically using optical calibration. A performance model of the input circuit taking into account the various noise sources will be discussed and compared with experimental data. Data on focal plane fixed pattern noises will be presented. Various techniques to implement the nonuniformity correction, especially in a tactical system, will also be discussed.
Real-Time Nonuniformity Correction For Focal Plane Arrays Using 12-Bit Digital Electronics
P. Mackey, F. R. Barone, N. A. Chu
The element-to-element variations caused by DC offset and responsivity non-uniformities of infrared focal plane arrays used in staring imaging systems must be reduced in order to achieve the desired performance. This paper discusses the design and performance of digital electronics for non-uniformity correction based on a two-temperature calibration technique. Unique features of the system include 12-bit dynamic range, compact size, a single arithmetic processor, and microsequencer control with several levels of pipelining to provide flexible operation. The current configuration operates with arrays as large as 64x64 at frame rates up to 60 Hz. Data will be presented on noise characteristics and the effectiveness of offset and responsivity corrections. The usefulness of this system for evaluating focal plane arrays is demonstrated using an infrared CID.
Correction Of Pixel Nonuniformities For Solid-State Imagers
T. R. Hsing
This paper evaluates a pixel nonuniformity correction technique based upon gain and offset compensation for CCD imagers. For hardware simplicity, a first-order linear model is examined. The statistical analysis of the uniformities is discussed. The original and corrected pictures of a gray-scale test target are used as an example to evaluate the final results. Major improvements in sensor uniformity and the resulting image quality can be achieved.
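A minimal sketch of the first-order (gain and offset) correction discussed in this and the preceding abstract, derived from two uniform-temperature calibration exposures. The function and variable names, the use of averaged calibration frames, and the lack of dead-pixel handling are assumptions for illustration, not a reproduction of either paper's electronics.

```python
import numpy as np

def two_point_nuc(frames_cold, frames_hot, t_cold, t_hot):
    """Per-pixel gain and offset from two uniform (blackbody) calibration
    exposures at temperatures t_cold and t_hot (illustrative sketch)."""
    cold = frames_cold.mean(axis=0)         # average calibration frame at T_cold
    hot = frames_hot.mean(axis=0)           # average calibration frame at T_hot
    gain = (t_hot - t_cold) / (hot - cold)  # per-pixel responsivity correction
    offset = t_cold - gain * cold           # per-pixel DC-offset correction
    return gain, offset

def correct(frame, gain, offset):
    """Apply the first-order linear model: corrected = gain * raw + offset."""
    return gain * frame + offset
```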
Spectral Discrimination For Long Range Search/Track Infrared Systems
Louis A. Williams Jr.
One of the areas where future improvements are anticipated in IR search/track systems is in longer range performance based on improved clutter rejection techniques. These long range search/track (LRST) IR systems will be needed in the future to counter advanced threats which are now in the early stages of development. In order to achieve longer range performance, system threshold sensitivity must be increased. Without improved clutter rejection techniques, increasing the system sensitivity will result in an unacceptable false alarm rate due to background clutter. The problem specifically addressed in this paper is clutter rejection techniques for LRST systems used to detect advanced threats targeted on surface ships. The techniques are applicable to other scenarios with minor modifications. Spectral discrimination must be added to classical spatial and temporal discrimination in order to reduce the false alarm rate against sunglint and operate at a low threshold setting in heavy clutter regions. Both two and three color systems have been evaluated and the design of a two-color system studied in detail.
Automatic Classification Of Infrared Ship Imagery
Joseph J. Kovar, John Knecht, Darrell Chenoweth
The Naval Weapons Center (NWC) is currently developing automatic target classification systems for future surveillance and attack aircraft and missile seekers. Target classification has been identified as a critical operational capability which should be included on new Navy aircraft and missile developments or systems undergoing significant modifications. The objective for the Automatic Classification of Infrared Ship Imagery System is to provide the following new capabilities for surveillance and attack aircraft and antiship missiles: near real-time automatic classification of ships, day and night, at long standoff ranges with a wide-area-coverage imaging infrared sensor. The system applies classical pattern recognition technology to automatically classify ships using Forward Looking Infrared (FLIR) images. Automatic classification of infrared ship imagery is based on the extraction of features which uniquely describe the classes of ships. These features are used in conjunction with decision rules which are established during a training phase. Conventional classification techniques require labeled samples of all expected targets, threats and non-threats, for this training phase. To overcome the resulting need for the collection of an immense data base, NWC developed a Generalized Classifier which, in the training phase, requires signals only from the targets of interest, such as high-value combatant threats. In the testing phase, the signals from the combatants are classified, and signals from other ships, which are sufficiently different from the training data, are classified as "other" targets. This technique provides considerable savings in computer processing time, memory requirements, and data collection efforts. Since sufficient IIR images of the appropriate quality and quantity were not available for investigating automatic IIR ship classification, TV images of ship models were used for an initial feasibility demonstration. The initial investigation made use of the experience gained with preprocessing and classifying ROR and ISAR data. For this reason, the most expedient method was to collapse the 2-dimensional TV ship images onto the longitudinal axis by summing the amplitude data along the vertical ship axis. The resulting 128-point 1-dimensional profiles show the silhouette of the ship and bear an obvious similarity to the radar data. Based on that observation, a 128-point Fourier transform was computed, and the ten low-order squared amplitudes of the complex Fourier coefficients were then used as feature vectors for the Generalized Classifier. In contrast to the radar data, the size of TV or IIR images of ships changes as a function of range. It is therefore necessary to develop feature extraction algorithms which are scale invariant. The central moments, which have scale- and rotation-invariant properties, were therefore implemented. This method was suggested in 1962 by M. K. Hu (IRE Transactions on Information Theory). Using the moments alone resulted in unsatisfactory classification performance and indicated that edge enhancement was necessary and that the background needed to be rejected. The images were therefore processed with the Sobel nonlinear edge enhancement algorithm, which also has the desirable property that it works for images with low signal-to-noise ratios and poorly defined edges. Satisfactory results were obtained.
In another experiment, the feature vector was composed of the five lower-order invariant moments and the five lower-order squared magnitudes of the FFT coefficients, excluding the zero-frequency coefficient. This paper describes the data base and the processing and classification techniques, discusses the results, and addresses the topic of "Processing of Images and Data from Optical Sensors."
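The feature-extraction steps named in the abstract (collapsing the image onto the longitudinal axis, keeping low-order squared FFT amplitudes, Sobel edge enhancement, and normalized central moments) can be sketched as follows. This is an illustrative reconstruction rather than NWC's code; the DC-exclusion choice, the use of only the first two Hu invariants, and all function names are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

def profile_fft_features(image, n_coeff=10):
    """Collapse a ship image onto its longitudinal axis and keep the
    low-order squared FFT magnitudes (sketch; assumes the ship lies
    roughly horizontal in a 128-column crop)."""
    profile = image.sum(axis=0)                 # sum down the vertical ship axis
    spectrum = np.fft.fft(profile, n=128)       # 128-point transform
    return np.abs(spectrum[1:n_coeff + 1]) ** 2 # squared amplitudes, DC excluded

def hu_moment_features(image):
    """First two Hu-style invariant moments computed after Sobel edge
    enhancement (scale- and translation-invariant by construction)."""
    img = image.astype(float)
    edges = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    y, x = np.mgrid[:edges.shape[0], :edges.shape[1]]
    m00 = edges.sum()
    xc, yc = (x * edges).sum() / m00, (y * edges).sum() / m00
    def eta(p, q):                              # normalized central moment
        mu = (((x - xc) ** p) * ((y - yc) ** q) * edges).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])
```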
Autonomous Ship Classification By Moment Invariants
Budimir Zvolanek
An algorithm to classify ships from images generated by an infrared (IR) imaging sensor is described. The algorithm is based on decision-theoretic classification of Moment Invariant Functions (MIFs). The MIFs are computed from two-dimensional gray-level images to form a feature vector uniquely describing the ship. The MIF feature vector is classified by a Distance-Weighted k-Nearest Neighbor (D-W k-NN) decision rule to identify the ship type. A significant advantage of MIF feature extraction coupled with D-W k-NN classification is the invariance of the classification accuracies to ship/sensor orientation (aspect, depression, and roll angles) and range. The accuracy observed on a set of simulated IR test images indicates the good potential of the classifier algorithm for ship screening.
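The decision rule itself is simple enough to sketch: given a moment-invariant feature vector and a labeled training set, the k nearest neighbors vote with weights that decrease with distance. The inverse-distance weighting, the value of k, and the Euclidean metric below are assumptions for illustration; the abstract does not specify the paper's exact weighting scheme.

```python
import numpy as np

def dw_knn_classify(x, train_features, train_labels, k=5, eps=1e-9):
    """Distance-weighted k-NN decision rule for a moment-invariant feature
    vector x (sketch; train_features is an (n, d) array, train_labels an
    (n,) array of class names)."""
    d = np.linalg.norm(train_features - x, axis=1)   # distances to all training ships
    nearest = np.argsort(d)[:k]                      # indices of the k nearest
    weights = 1.0 / (d[nearest] + eps)               # closer neighbors count more
    votes = {}
    for lbl, w in zip(train_labels[nearest], weights):
        votes[lbl] = votes.get(lbl, 0.0) + w
    return max(votes, key=votes.get)                 # class with the largest weighted vote
```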
New Algorithm For Detection And Classification Of Targets
Martin Stern, William Driscoll
The problem of automatically recognizing targets in FLIR imagery has been limited by the inability of algorithms to accurately segment the target pixels from the clutter pixels. A new approach to target recognition is presented which uses a priori information about the targets of interest to aid in segmenting target pixels.
Building And Bridge Classification By Moment Invariants
John F. Gilmore, William W. Boyd
A classification algorithm for building and bridge structures, based on the invariant moments, is described. Development of the algorithm was motivated by the concept of a short-range cruise-missile-type weapon whose mission is to acquire and hand off high-value fixed targets to a terminal guidance tracker. In every case of applying the algorithm to actual IR images, the target classification score, even for a complex urban scene, has exceeded that of any competing image segment by 25 percent.
Detection Probability Of An Object Ranking System For An Imaging Missile Seeker
Joel McWilliams
A smart imaging sensor for a missile application is required not only to find objects of a certain class in the field of view, but also to choose one "best" object for further guidance. This paper describes an automatic object ranking system and derives the relations for the probability that the object ranked the highest is of the correct class. The ranking system consists of two phases. First, a screening phase locates several subimages that are approximately the right size and contrast. Second, a classifying and ranking phase measures several features about each subimage and ranks each subimage according to its probability of being a desired target or not. The detection analysis assumes that the image contains two classes of objects - the desired class, and another class which includes any other objects that the screener tends to pass. The screener always finds a fixed number of subimages, N, and order statistics are used to find the probability that k of the N are of the desired class. In the ranking phase, a likelihood ratio is calculated for each subimage using parameters learned from a volume of training imagery, and the subimages are ranked by likelihood ratio. The probability that the top-ranking object is of the correct class can be found, conditioned on the event that k of the N subimages are of the correct class. The performances of the two stages are then combined for the overall detection probability. The analyses allowed the number of subimages found by the screening phase, N, to be a parameter for investigation. It is shown that there is an optimum N for overall detection performance.
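The way the two stages combine can be written out directly: the overall detection probability is the sum over k of the probability that k of the N screener outputs are true targets times the probability that the top-ranked output is a true target given k. The sketch below uses a binomial model for the first factor and a toy k/N model for the second, purely as illustrative assumptions; the paper itself derives these terms from order statistics and learned likelihood ratios.

```python
from math import comb

def overall_detection_probability(N, p_target, p_top_given_k):
    """Combine screening and ranking stages. p_target is an assumed
    per-subimage probability of being a true target; p_top_given_k[k] is
    the probability the top-ranked subimage is a true target given that
    k of the N screener outputs are true targets."""
    p_detect = 0.0
    for k in range(N + 1):
        p_k = comb(N, k) * p_target**k * (1 - p_target)**(N - k)  # P(k of N)
        p_detect += p_k * p_top_given_k[k]
    return p_detect

# Illustrative numbers only: N = 5 screener outputs, and a simple ranker
# model that picks a true target with probability k / N when k are present.
N = 5
p_top = [k / N for k in range(N + 1)]
print(overall_detection_probability(N, p_target=0.6, p_top_given_k=p_top))
```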
Cultural Feature And Syntax Analysis For Automatic Acquisition
Lois Sauer, John Taskett
This paper describes a set of algorithms for application in autonomous acquisition of large targets. These targets are identified by means of a straightforward three-step process. First, the scene is segmented into regions of nearly uniform intensity. Using some simple metrics, each region is then classified into a feature class. Finally, the surrounding feature-classified regions (syntax) are examined to identify the target. Initially, bridges and dams were chosen as the target class, with the acquisition of roads and bodies of water as paths to the target. Additional efforts include using these paths as navigation aids and, optionally, locating objects of interest along the path to a larger target. Extensive simulation and study have been done over a wide variety of bridge and path imagery in training these algorithms. The results of the simulation are presented in support of this syntactic approach, and a brief analysis of the probable hardware implementation for these algorithms is included to emphasize their simplicity and practicality.
Noise Effects For Edge Operators
R. E. Nasburg, Marion Lineberry
Techniques and analyses for improving the signal-to-noise performance of edge detectors are presented. A general edge detection method is developed as a result of the noise analysis, and a wide class of edge detectors is shown to be insensitive to edge orientation. For this class, an optimal design with respect to noise statistics is found and a comparison made between many common edge operators. Edge and noise models characteristic of typical images are presented and used in the analysis of these edge detectors.
Evaluation Of Peak Location Algorithms With Subpixel Accuracy For Mosaic Focal Planes
J. Allen Cox
The ability to achieve subpixel peak location accuracy for point source targets in the cross-scan direction for scanning sensors and in both directions for staring sensors is examined systematically by means of Monte Carlo experiments. The performance of three peak location algorithms (simple centroid, extended centroid, polynomial least squares fit) is tested for sensitivity to system signal-to-noise ratio, detector deadspace, and blur spot size relative to detector size. A symmetrical, Gaussian intensity profile of the blur spot is used in all cases. Computational efficiency, in terms of the number of multiplies and adds required, was considered in selecting the algorithms to be compared. Overall, we found that the simple centroid algorithm provides the optimum performance, giving ~1/10 pixel accuracy for S/N = 10 and no deadspace. In addition, performance improves for the centroid algorithms and degrades for the least squares fit algorithm as the blur spot size increases. Increasing deadspace markedly degrades the performance of the centroid algorithms.
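A minimal sketch of the simple-centroid estimator that the study found to perform best: locate the brightest pixel, then take an intensity-weighted mean over a small window around it. The window half-width and the crude minimum-subtraction background handling are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def centroid_subpixel_peak(frame, half_width=2):
    """Simple-centroid estimate of a point-source peak location to
    subpixel accuracy (illustrative sketch)."""
    iy, ix = np.unravel_index(np.argmax(frame), frame.shape)   # brightest pixel
    y0, y1 = max(iy - half_width, 0), min(iy + half_width + 1, frame.shape[0])
    x0, x1 = max(ix - half_width, 0), min(ix + half_width + 1, frame.shape[1])
    window = frame[y0:y1, x0:x1].astype(float)
    window -= window.min()                                     # crude background removal
    yy, xx = np.mgrid[y0:y1, x0:x1]
    total = window.sum()
    return (yy * window).sum() / total, (xx * window).sum() / total
```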