Proceedings Volume 10990

Computational Imaging IV


Volume Details

Date Published: 26 July 2019
Contents: 8 Sessions, 18 Papers, 21 Presentations
Conference: SPIE Defense + Commercial Sensing 2019
Volume Number: 10990

Table of Contents

  • Front Matter: Volume 10990
  • Deep Learning for Imaging in Complex Media
  • Computational Microscopy
  • Novel Computational Imaging System Design I
  • Novel Computational Imaging System Design II
  • Novel Computational Imaging System Design III
  • X-ray Computational Imaging
  • Poster Session
Front Matter: Volume 10990
Front Matter: Volume 10990
This PDF file contains the front matter associated with SPIE Proceedings Volume 10990, including the title page, copyright information, table of contents, and author and conference committee lists.
Deep Learning for Imaging in Complex Media
Learning speckle correlations for imaging through scattering (Conference Presentation)
Light scattering in complex media is a pervasive problem across many areas, such as deep tissue imaging and imaging in degraded environments. Major progress has been made using the transmission matrix (TM) framework, which characterizes the "one-for-one" input-output relation of a fixed scattering medium as a linear shift-variant matrix. A major limitation of these existing approaches is their high susceptibility to model errors: the phase-sensitive TM is inherently intolerant to speckle decorrelations. Our goal here is to develop a highly scalable imaging-through-scattering framework that overcomes the existing limitations in susceptibility to speckle decorrelation and space-bandwidth product (SBP). The proposed model is built on a deep learning (DL) framework. To satisfy the desired statistical properties, we do not train a convolutional neural network (CNN) to learn the TM of a single scattering medium. Instead, we build a CNN that learns a "one-for-all" mapping by training on several scattering media with different microstructures but the same macroscopic parameters. Specifically, we show that a CNN trained on a few diffusers can capture the statistics of all diffusers having the same mean characteristics (e.g., grit). We then experimentally demonstrate that the CNN is able to "invert" speckles captured from entirely different diffusers to make high-quality object predictions. Our method significantly improves the system's information throughput and adaptability compared to existing approaches, by improving both the SBP and the robustness to speckle decorrelations.
Artificial neural networks for controlling light delivery through complex media (Conference Presentation)
Ivan Vishniakou, Daniele Faccio, Alex Turpin
Imaging and delivery of light in a controlled manner through complex media such as glass diffusers, biological tissue, or multimode optical fibers is limited by the scattering of light as it propagates through the material. Different methods based on spatial light modulators can be used to pre-shape the light beam to compensate for the scattering, including phase conjugation, hill-climbing algorithms, and the so-called transmission matrix approach. Here, we develop a machine learning approach for light delivery through complex media. Using pairs of binary intensity patterns and intensity measurements, we train artificial neural networks (ANNs) to provide the wavefront corrections necessary to shape the beam after the scatterer. Additionally, we show that ANNs pave the way towards finding a functional relationship between reflected and transmitted light through the scatterer, which can be used to deliver light in transmission using only reflected light. We expect that our approach, which shows the versatility of ANNs for light shaping, will open new doors towards efficiently and flexibly correcting for scattering, in particular by only using reflected light.
Contrast reductions due to sun scattering and transmittance for detection of objects through a local scattering cloud
Charles W. Bruce, Sharhabeel Alyones, Michael Granado, et al.
The ability to detect objects through a scattering cloud of limited extent, such as that from a grenade or a continuous dispersion during daylight hours, depends on both the transmittance and the contrast reduction caused by sunlight scattered into the detector from the same direction. In this paper, the two effects are defined so as to permit comparison of their relative magnitudes, and they are applied to the results of field measurements. Examples of the data and analytical forms are presented. Computing the result exactly would require the aerosol densities throughout the cloud as functions of time, which was not feasible; however, a simplified model assuming constant aerosol density was formed, and good agreement with the functional dependencies was found.
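The interplay the abstract describes can be sketched with the textbook contrast-reduction model: Beer-Lambert attenuation reduces the object-background radiance difference, while sun-scattered path radiance adds a common offset that further washes out contrast. This is an illustrative approximation, not the authors' exact field model, and all radiance values below are made-up numbers.

```python
import numpy as np

# Apparent Weber contrast of a target seen through a scattering cloud of
# optical depth tau, with additive path radiance L_scat (sunlight scattered
# toward the detector). Illustrative textbook form, not the paper's model.
def apparent_contrast(L_obj, L_bg, tau, L_scat):
    T = np.exp(-tau)                  # Beer-Lambert transmittance
    L_obj_app = T * L_obj + L_scat    # attenuated target radiance + path term
    L_bg_app = T * L_bg + L_scat      # attenuated background radiance + path term
    return (L_obj_app - L_bg_app) / L_bg_app

C0 = apparent_contrast(2.0, 1.0, 0.0, 0.0)   # no cloud: full inherent contrast
C1 = apparent_contrast(2.0, 1.0, 1.0, 0.5)   # cloud: both effects reduce contrast
```

Both mechanisms appear in the numerator and denominator: transmittance scales the radiance difference, while the scattered term inflates only the denominator, so each alone degrades contrast.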
A line-of-sight approach for non-line-of-sight imaging (Conference Presentation)
Standard imaging systems, such as cameras, radars, and lidars, are becoming a big part of our everyday life when it comes to detection, tracking, and recognition of targets that are in the direct line-of-sight (LOS) of the imaging system. Challenges, however, start to arise when objects are not in the system's LOS, typically when an occluder obstructs the imager's field of view. This regime is known as non-line-of-sight (NLOS) imaging, and it is approached in different ways depending on the imager's operating wavelength. We consider an optical imaging system; the literature offers different approaches from both a component and a recovery-algorithm point of view. In our optical setup, we assume a system comprising an ultra-fast laser and a single-photon avalanche diode (SPAD). The former is used to sequentially illuminate different points on a diffuse (relay) wall, causing the photons to scatter uniformly in all directions, including toward the target's location. The latter collects the scattered photons as a function of time. In post-processing, back-projection-based algorithms are employed to recover the target's image. Recent publications have focused on the quality of the results, as well as on potential algorithm improvements. Here we show results based on a novel theoretical approach (coined "phasor fields"), which suggests treating the NLOS imaging problem as a LOS one. The key feature is to consider the relay wall as a virtual sensor created by the different points illuminated on the wall. Results show the superiority of this method compared to standard approaches.
Computational Microscopy
Deep learning in computational microscopy
We propose to use deep convolutional neural networks (DCNNs) to perform 2D and 3D computational imaging. Specifically, we investigate three different applications. We first address the 3D inverse scattering problem by learning from a large number of paired training targets and speckle patterns. We also demonstrate a new DCNN architecture for Fourier ptychographic microscopy (FPM) reconstruction, which achieves high-resolution phase recovery with considerably less data than standard FPM. Finally, we employ DCNN models that predict in-focus 2D fluorescence microscopy images from blurred images captured at overfocused or underfocused planes.
Optimization of small target sensing in lensfree microscopes (Conference Presentation)
Zhen Xiong, Jeffrey E. Melzer, Jacob Garan, et al.
The ability to sense, count, and size microscopic and nanoscopic particles is important in air quality monitoring, biomedical diagnostics, and nanomaterials synthesis. Lensfree holographic microscopy is an attractive sensing platform due to its ultra-large field of view, compact form factor, and cost-effective components. Although submicron resolution has been previously demonstrated using lensfree holographic microscopy, detecting individual microscale and nanoscale objects can pose a challenge due to limited signal-to-noise ratio (SNR). Previously, we have used vapor-deposited nanoscale polymer lenses to boost the SNR in sensing experiments; however, this adds experimental complexity and is not compatible with all types of samples. Here we present a computational approach for boosting SNR in lensfree holographic microscopy. This approach optimizes a sparsity-promoting cost function in conjunction with a pixel superresolution method that synthesizes a high-resolution hologram from multiple low-resolution holograms captured at slightly different angles. The resulting high-resolution hologram can be computationally reconstructed to provide an in-focus image of the sample. We find that a sparsity-promoting cost function yields ~8 dB of improvement over conventional pixel superresolution approaches that involve cardinal neighbor regularization, provided that the surface coverage is below ~4%. The impacts of the sparsity-promoting cost function on image resolution and computational time will be presented, as well as a guide to which regularization parameters work best for given target sizes and coverage densities. These computational approaches can be used to extend the limit of detection of lensfree holographic microscopes in sensing applications.
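The core idea of a sparsity-promoting cost function can be illustrated in one dimension: recover a sparse "particle" signal from a blurred measurement by alternating a gradient step on the data term with soft-thresholding (the L1 prior), i.e., the ISTA iteration. This is a minimal numpy sketch of that generic principle, not the authors' holographic pipeline; the signal, kernel, and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground truth: a few isolated "particles" (~5% coverage)
n = 64
x_true = np.zeros(n)
x_true[[10, 30, 50]] = [1.0, 0.8, 0.6]

# Blur operator A: convolution with a Gaussian kernel (stand-in for the PSF)
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)]).T
y = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy blurred measurement

# ISTA: minimize 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(300):
    x = x - step * A.T @ (A @ x - y)             # gradient step on data term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
```

The soft-threshold step is what "promotes sparsity": small correlated background values are zeroed out while isolated particle peaks survive, which is consistent with the abstract's observation that the benefit holds only at low surface coverage.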
Quantitative phase imaging camera with a weak diffuser based on the transport of intensity equation
Linpeng Lu, Jiasong Sun, Jialin Zhang, et al.
We present an efficient quantitative phase imaging camera with a weak diffuser (QPICWD) based on the transport of intensity equation (TIE). The compact QPICWD measures object-induced phase delay under low-coherence quasi-monochromatic illumination by examining the deformation of the speckle intensity pattern. Analysing the speckle deformation with an ensemble average of the geometric flow, we can retrieve the high-resolution distortion field via the TIE. We present some applications of the proposed design, involving nondestructive optical testing of a microlens array with nanometric thickness and imaging of fixed and live unstained HeLa cells. Since the designed QPICWD requires no modification of the common bright-field microscope and no additional accessories, it may advance QPI as a widely useful tool for biological analysis at the subcellular level.
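In the simplest TIE setting, with approximately uniform intensity I0, the phase satisfies a Poisson equation, ∇²φ = -(k/I0)·∂I/∂z, which can be inverted with FFTs. The sketch below shows this standard uniform-intensity solver as a minimal illustration; the paper's speckle-based geometric-flow method is more sophisticated, and the forward-check parameters here are invented.

```python
import numpy as np

# Uniform-intensity TIE solver: laplacian(phi) = -(k / I0) * dI/dz,
# inverted with the FFT representation of the Laplacian.
def tie_phase(dIdz, k, I0, dx):
    n = dIdz.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    lap = -4.0 * np.pi**2 * (FX**2 + FY**2)   # Fourier symbol of the Laplacian
    lap[0, 0] = 1.0                            # avoid division by zero at DC
    phi_hat = np.fft.fft2(-(k / I0) * dIdz) / lap
    phi_hat[0, 0] = 0.0                        # phase is defined up to a constant
    return np.fft.ifft2(phi_hat).real

# Forward check: build dI/dz from a known phase, then invert it.
n = 32
X = np.arange(n)[:, None] * np.ones((1, n))
phi_true = np.cos(2 * np.pi * X / n)
dIdz = (1.0 / 1e7) * (2 * np.pi / n) ** 2 * phi_true   # = -(I0/k) * laplacian(phi)
phi_rec = tie_phase(dIdz, k=1e7, I0=1.0, dx=1.0)
```

Because the test phase is a pure Fourier mode, the FFT inversion recovers it essentially exactly; for real data the DC ambiguity and low-frequency noise amplification are the usual practical caveats.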
Theoretical analysis of diffraction imaging in Fourier ptychography microscopy
Shaohui Zhang, Yao Hu, Ying Wang, et al.
Fourier ptychography microscopy (FPM) is a recently developed computational imaging approach that surpasses the resolution barrier of a low numerical aperture (NA) imaging system. It is a powerful tool due to its ability to achieve super-resolution recovery of the complex sample function while also estimating pupil aberrations, LED misalignment, and more. However, recent studies have focused more on optimization algorithms and setups than on the theoretical background. Although some imaging laws for FPM have already been set forth, the formulas and laws are not fully defined, and there remains a gap between diffraction theory and Fourier optics. Therefore, there exists a need for comprehensive research on the physical and mathematical basis of FPM for future applications. Keeping this goal in mind, this manuscript utilizes scalar diffraction theory to rigorously study the relationship between the wavelength, propagation mode, and illumination direction of the incident wave, the sample structure information, and the direction of the output wave. The theoretical analysis of diffraction imaging in FPM provides a clear physical basis not only for FPM systems, but also for the ptychographic iterative engine (PIE) and other coherent diffraction imaging techniques and systems. It can help to locate sources of noise and thereby improve image quality in FPM techniques and systems.
Microscopic image enhancement based on Fourier ptychography technique
Ying Wang, Guocheng Zhou, Yao Hu, et al.
Image enhancement techniques are used to emphasize the overall or local characteristics of images and are widely used in aerospace and machine vision applications. However, most of these techniques are mathematical algorithms that operate on captured pictures rather than on the imaging process itself. Fourier ptychographic microscopy (FPM) is a recently developed computational imaging approach that stitches together, in Fourier space, low-resolution images acquired under different angles of illumination with the same intensity to produce a wide-field, high-resolution complex sample image. In this article, a theoretical model of the illumination intensity is proposed. Based on our model, the effect of uneven illumination intensity can be reduced significantly. Furthermore, the quality of the reconstructed image can be enhanced by adjusting the intensity of the illumination light corresponding to the high-frequency components of the original spectrum.
Novel Computational Imaging System Design I
Fully-flexible glass-air disordered fiber imaging through deep learning (Conference Presentation)
Sean Pang, Yangyang Sun, Jian Zhao, et al.
Computational imaging systems apply encoding at the physical layer of the imaging device, demonstrating superior performance in resolution, dynamic range, and acquisition speed compared to conventional point-to-point mapping imaging systems. However, accurate mathematical models are required for such systems, and calibration is a major concern for practical implementation. In this invited talk, we will discuss efforts to apply learning approaches to computational imaging systems at the Optical Imaging System Lab at the University of Central Florida. Specifically, the talk will focus on a demonstration of such an approach in fully flexible lensless fiber imaging.
Illumination pattern design with deep learning for single-shot Fourier ptychographic microscopy (Conference Presentation)
Vidya Ganapati
Fourier ptychographic microscopy allows for the collection of images with a high space-bandwidth product at the cost of temporal resolution. In Fourier ptychographic microscopy, the light source of a conventional widefield microscope is replaced with a light-emitting diode (LED) matrix, and multiple images are collected with different LED illumination patterns. From these images, a higher-resolution image can be computationally reconstructed without sacrificing field-of-view. We use deep learning to achieve single-shot imaging without sacrificing the space-bandwidth product, reducing the acquisition time in Fourier ptychographic microscopy by a factor of 69. In our deep learning approach, a training dataset of high-resolution images is used to jointly optimize a single LED illumination pattern with the parameters of a reconstruction algorithm. Our work paves the way for high-throughput imaging in biological studies.
Design and evaluation of task-specific compressive optical systems
Brian J. Redman, Gabriel C. Birch, Charles F. LaCasse, et al.
Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers; however, machine learning classification algorithms do not require the same data representation used by humans. In this work we investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and another architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and tradeoffs of these compressive imaging systems built for compressed classification of the MNIST data set. To evaluate the tradeoffs of the two architectures, we present radiometric and raytrace models for each system. Additionally, we investigate the impact of system aberrations on classification accuracy. We compare the performance of these systems over a range of compression ratios. Classification performance, radiometric throughput, and optical design manufacturability are discussed.
Fixed point simulation of compressed sensing and reconstruction
This work presents a fixed point simulation of Compressed Sensing (CS) and reconstruction for the super-resolution task using the Image Systems Engineering Toolbox (ISET). It shows that the performance of CS for super-resolution in a fixed point implementation is similar to that of a floating point implementation, with negligible loss in reconstruction quality. It also shows that CS super-resolution requires much less computational effort than CS using Gaussian random matrices. Additionally, it studies the effect of analog-to-digital converter (ADC) bit width and image sensor noise on reconstruction performance. CS super-resolution cuts the raw data bits generated by the image sensor by more than half, and converting the reconstruction algorithm to fixed point simplifies the hardware implementation by replacing expensive floating point computational units with faster, more energy-efficient fixed point units.
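The float-versus-fixed-point comparison the abstract describes can be illustrated on a single reconstruction primitive: run the same soft-thresholding step in floating point and in fixed point with a chosen number of fractional bits, and measure the discrepancy. This is a generic sketch of the methodology, not the paper's ISET pipeline; the 12-bit format and test data are invented for illustration.

```python
import numpy as np

# Quantize a float array to fixed point with `frac_bits` fractional bits.
def to_fixed(x, frac_bits=12):
    scale = 1 << frac_bits
    return np.round(np.asarray(x) * scale).astype(np.int64)

# Soft threshold computed entirely on fixed point (integer) values.
def fixed_soft_threshold(xq, lamq):
    return np.sign(xq) * np.maximum(np.abs(xq) - lamq, 0)

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
lam = 0.1
fb = 12

float_out = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
fixed_out = fixed_soft_threshold(to_fixed(x, fb), to_fixed(lam, fb)) / (1 << fb)

# Discrepancy is bounded by the quantization step, ~2**-frac_bits
err = np.max(np.abs(float_out - fixed_out))
```

Since soft thresholding is 1-Lipschitz, input and threshold quantization errors do not grow through the operation, which is one reason fixed point reconstruction can match floating point so closely.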
Novel Computational Imaging System Design II
Generating synthetic imagery of complex scenes from ideal synthetic source imagery via MSERs on entropy imagery
Synthetic imagery generation is not a new topic; however, it has reemerged as a major focus in recent years, in part due to the success achieved by modern machine learning methodologies, in particular deep learning. One reason these technologies have succeeded is the wealth of available training data. A majority of the available data are of generic objects or scenes. However, there are numerous applications in which data are neither readily available nor easily obtained in large quantities. In such scenarios, synthetic imagery is an appealing choice to address this shortcoming. While still faster than collecting real data, physics-based models tend to be computationally complex and require extensive computation time. This work investigates the use of reduced-order modeling (ROM) of relevant objects identified by a maximally stable extremal region (MSER) detector applied to the entropy image of simple, ideal, high-fidelity, physics-based synthetic images. Specifically, this work utilizes MSERs to identify pertinent objects to be placed within the simple scene via ROM to produce a more complex scene. This approach has the benefit of rapidly increasing both the complexity of simple, ideal, high-fidelity, physics-based scenes and the amount of synthetic imagery generated via random or statistically based placement of the objects throughout the scene.
PlumeNET: a convolutional neural network for plume classification in thermal imagery
The development of PlumeNet, a thermal-imagery-based classifier for aerosolized chemical and biological warfare agents, is detailed. PlumeNet is a convolutional neural network designed for the real-time classification of threat-like plumes against background clutter. The model weights were trained from the ground up using thermal imagery of simulant plumes recorded at various test events. The performance of different convolutional neural network architectures is compared. An analysis of the final model layers through activation mapping methods is performed to demystify how PlumeNet performs classification. The classification performance of PlumeNet during government-conducted open-release field testing at Dugway Proving Ground is detailed.
Tailored glass optics using 3D printing (Conference Presentation)
Rebecca Dylla-Spears, Nikola Dudukovic, Koroush Sasan, et al.
The capability to customize the structure or composition of an optical element gives designers access to previously unrealizable configurations that show promise for reducing costs, enhancing functionality, and improving the size, weight, and power of optical systems. Techniques for three-dimensional (3D) printing of glass have opened the door to novel glass optics with both unconventional structures and tailored composition. An overview of the state of the art in glass 3D printing will be presented. Particular emphasis will be placed on the direct ink writing (DIW) technique, in which specially formulated silica pastes are extruded through a nozzle and deposited in the geometry of interest, forming low-density green bodies. The green bodies are then converted to full-density, optically homogeneous glass by a series of heat treatments. The 3D-printed silica-based glass components have material and optical properties that rival conventionally prepared optical-grade fused silica. In addition, glass optics that contain tailored gradients in composition, such as gradient-index lenses, have been achieved by DIW by blending separate inks inline at the print nozzle and directly depositing the desired composition profile before forming the glass. Strategies are also being developed to reduce the time to development of new materials and structures.
Multispectral compressive sensing using a silicon-based PEC cell
Ji-Hoon Kang, Minjeong Kim, Jongtae Ahn, et al.
The properties of photoelectrochemical (PEC) cells have mainly been investigated with a focus on PEC hydrogen production. Because an anodic current begins to flow when a PEC cell is under illumination, and this current varies as a function of light intensity, PEC cells can be used as photodetectors. Unlike other image sensors, PEC cells can detect light while immersed in solution due to their photoelectrochemical properties. To verify the feasibility of using a silicon-based PEC cell as an image sensor, we demonstrated a single-pixel imaging system based on compressive sensing. Compressive sensing is an algorithm designed to recover signals from a small number of measurements, assuming that the signal of interest can be represented in a sparse way. In this study, we have demonstrated multispectral imaging using a silicon-based PEC cell with compressive sensing. The images were obtained in three primary colors (red, green, and blue). Due to its high photoresponse, stability, and unique ability to operate underwater, the silicon-based PEC cell is expected to be utilized in the future as a photodetector for various applications. We believe this study is a useful example of advanced development of optoelectronic systems based on PEC cells.
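The single-pixel measurement model behind such a system is simple to sketch: each reading is the inner product of the scene with one binary illumination mask, repeated per color channel. The toy below uses an idealized detector (no PEC physics) and a fully sampled mask set inverted by least squares; a true compressive setup would use far fewer masks and a sparsity-regularized solver. All sizes and data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Flattened 16x16 RGB scene: one column per color channel
n = 16 * 16
scene = rng.random((n, 3))

# Binary (DMD-style) measurement masks; fully sampled here for simplicity
m = n
Phi = rng.integers(0, 2, size=(m, n)).astype(float)

# Each row of y: one single-pixel reading per color channel
y = Phi @ scene

# With full sampling the scene is recovered by a linear solve;
# compressive sensing would replace this with an L1-regularized recovery.
recon = np.linalg.lstsq(Phi, y, rcond=None)[0]
```

Multispectral operation falls out of the same model: the masks are shared across channels and only the bucket readings differ per color.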
Tracking from a moving platform with the Dynamic Vision Sensor
Joseph Cox, Nicholas Morley
The Dynamic Vision Sensor (DVS) is an imaging sensor that processes the incident irradiance image and outputs temporal log irradiance changes in the image, such as those generated by moving target(s) and/or the moving sensor platform. From a static platform, this enables the DVS to cancel out background clutter and greatly decrease the sensor bandwidth required to track temporal changes in a scene. However, the sensor bandwidth advantage is lost when imaging a scene from a moving platform due to platform motion causing optical flow in the background. Imaging from a moving platform has been utilized in many recently reported applications of this sensor. However, this approach inherently outputs background clutter generated from optical flow, and as such this approach has limited spatio-temporal resolution and is of limited utility for target tracking applications. In this work we present a new approach to moving target tracking applications with the DVS. Essentially, we propose modifying the incident image to cancel out optical flow due to platform motion, thereby removing background clutter and recovering the bandwidth performance advantage of the DVS. We propose that such improved performance can be accomplished by integrating a hardware tracking and stabilization subsystem with the DVS. Representative simulation scenarios are used to quantify the performance of the proposed approach to clutter cancellation and improved sensor bandwidth.
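The DVS output model described above (events fired on temporal log-irradiance changes) can be sketched directly: a pixel emits an event when its log intensity drifts past a contrast threshold since its last event, so a static background is silent while motion produces events. This is a generic event-camera model, not the paper's stabilization subsystem; frames and threshold are invented.

```python
import numpy as np

# Emit (time, row, col, polarity) events when a pixel's log irradiance changes
# by more than `threshold` since that pixel's last event.
def dvs_events(frames, threshold=0.15):
    logI = np.log(frames + 1e-9)
    ref = logI[0].copy()               # per-pixel reference at last event
    events = []
    for t in range(1, len(frames)):
        diff = logI[t] - ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, y, x, int(np.sign(diff[y, x]))))
        ref[fired] = logI[t][fired]    # reset reference where events fired
    return events

# Static background plus one bright pixel sweeping along row 3:
# only the moving pixel generates events (an ON/OFF pair per step).
frames = np.full((4, 8, 8), 0.2)
for t in range(4):
    frames[t, 3, t + 2] = 1.0
events = dvs_events(frames)
```

This also makes the platform-motion problem concrete: if the whole scene translates between frames, nearly every pixel crosses the threshold, which is exactly the clutter the proposed optical-flow cancellation aims to remove.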
Novel Computational Imaging System Design III
Imaging reconstruction with engineered point spread functions (Conference Presentation)
A Point Spread Function (PSF) engineered imaging system provides functionality at the expense of image distortion. Deconvolution and other post-processing techniques may partially restore the image if the PSF is known. We compare how various phase mask functions (e.g., vortex, axicon, cubic, and circular harmonic) may functionally protect a sensor from a coherent beam, and we discuss the subsequent trade-off between peak irradiance and the integrated modulation transfer function (Strehl ratio). Both experimental and numerical examples demonstrate that the peak irradiance may be suppressed by orders of magnitude without intolerable loss of image fidelity. The design of an optimal phase mask that accomplishes this task is made difficult by the nonlinear relationship between peak irradiance and Strehl. Results from experimental and numerical optimization schemes like simulated annealing, differential evolution, and Nelder-Mead will be compared.
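The peak-irradiance suppression trade-off can be sketched numerically: apply an engineered phase (here a charge-1 vortex, one of the masks the abstract lists) to a circular pupil, compute the PSF with an FFT, and compare its peak to the unaberrated peak as a Strehl-like ratio. This is a minimal scalar-diffraction sketch with invented grid sizes, not the authors' experimental setup.

```python
import numpy as np

n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
pupil = (r <= n // 4).astype(float)          # circular aperture

def psf_peak(phase):
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf.max()

flat_peak = psf_peak(np.zeros_like(pupil))   # unaberrated reference
vortex_peak = psf_peak(np.arctan2(y, x))     # charge-1 vortex phase mask
strehl_like = vortex_peak / flat_peak        # < 1: peak irradiance suppressed
```

The vortex forces an on-axis null, redistributing energy into a ring; the ratio above quantifies how much the peak drops, which is the quantity traded against image fidelity in the abstract.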
A comparative investigation on the use of compressive sensing methods in computational ghost imaging
A large number of patterns is usually needed in computational ghost imaging (CGI). In this work, the possibilities for reducing the pattern number by integrating compressive sensing (CS) algorithms into the CGI process are systematically investigated. Based on different combinations of sampling patterns and image priors for the L1-norm regularization, different CS-based CGI approaches are proposed and implemented with the iterative shrinkage thresholding algorithm. These CS-CGI approaches are evaluated with various test scenes. Based on the quality of the reconstructed images and the robustness to measurement noise, a comparison between these approaches is drawn for different sampling ratios, noise levels, and image sizes.
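For context, the non-compressive baseline the CS variants improve upon is plain correlational CGI: the image is the covariance between the bucket detector signal and the illumination patterns, which is why so many patterns are needed. The sketch below shows that baseline on an invented 1D object; the paper's CS approaches replace this average with an L1-regularized solve using far fewer patterns.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simple transmissive object on a 256-pixel (flattened) grid
n = 16 * 16
obj = np.zeros(n)
obj[40:60] = 1.0

# Many random illumination patterns and the corresponding bucket values
m = 4000
P = rng.random((m, n))
y = P @ obj                                   # single-pixel (bucket) detector

# Correlational ghost image: cov(bucket signal, pattern) per pixel
g = (y - y.mean()) @ (P - P.mean(axis=0)) / m
```

Each pixel of `g` is proportional to the object transmittance plus statistical noise that shrinks only as 1/sqrt(m); compressive sensing exploits sparsity to get a clean image from far fewer than `m` patterns.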
Finite element modeling of pulse phase thermography of an approximate model of low velocity impact induced damage in carbon fiber reinforced polymer structures
In this work, we apply the finite element (FE) method to simulate an approximate model of low velocity impact induced damage. One important characteristic of low velocity impact damage is the presence of multiple defects located at different depths that overshadow one another, affecting the thermal diffusion and therefore the blind frequency and the temperature distribution on the surface; understanding this phenomenon is paramount in order to quantify the magnitude of sub-surface damage. In this paper, we create a representative geometry of a defect using the meshing code CUBIT and solve the finite element model in the ARIA thermal code in order to simulate the phase component of reflected thermal waves. The phase and thermal data collected from the FE solution on the surface above each defect are post-processed and linearly correlated, in conjunction with a two-point strategy, to provide information about the defect below the surface of interest. We also present a comparison with a single-defect representation, showing that a single-defect model does not accurately represent damage created by low velocity impact.
X-ray Computational Imaging
X-ray phase imaging and retrieval using structured illumination (Conference Presentation)
X-rays enable non-invasive and high-resolution imaging that has become central to medical diagnostics and security. While conventional x-ray imaging captures only strongly-attenuating materials like bone or metal, in recent years new x-ray modalities have been developed that can capture weakly-attenuating materials. In particular, variations in x-ray phase can reveal soft tissue structures like the lungs and incoherent scattering of x-rays can describe sub-pixel structures like fine powders. Most techniques that can capture these image modalities require precision optics to convert the phase and incoherent scattering effects to measurable variations in the image intensity, and use multiple exposures to separate the effects. Recent work has been able to extract the three modalities using a single exposure (enabling dynamic and low-dose imaging) and without the need for precision optics (reducing the cost of a set-up). This is possible using structured x-ray illumination and post-measurement computational analysis. The x-ray illumination can be either a grid-like periodic pattern [1,2], or a completely random pattern, such as the pattern produced when a piece of sandpaper is placed in the x-ray beam [3]. Local shifts in the illumination pattern result from x-ray phase variations, and a ‘blurring-out’ of the illumination pattern indicates the presence of sub-pixel structures that scatter the x-ray light. This talk will describe these new methods of x-ray imaging, touching on mathematical models that predict the wavefield behavior, methods of computational analysis and applications to biomedical research [4]. [1] K. Morgan, T. Petersen, M. Donnelley, et al., Optics Express 24 (2016). [2] K. Morgan, P. Modregger, S. Irvine, et al., Optics Letters, 38 (2013). [3] K. Morgan, D. Paganin and K. Siu, Applied Physics Letters 100 (2012). [4] K. Morgan, M. Donnelley, N. Farrow et al., AJRCCM 190 (2014).
Propagation-based and mesh-based x-ray quantitative phase imaging with conventional sources
The contrast in conventional x-ray imaging is generated by differential attenuation of x rays, which is generally very small in soft tissue. Phase imaging has been shown to improve contrast and signal to noise ratio (SNR) by factors of 100 or more. However, acquiring phase images typically requires a highly spatially coherent source (e.g. a 50 μm or smaller microfocus source or a synchrotron facility), or multiple images acquired with precisely aligned gratings. Here we demonstrate two phase imaging techniques compatible with conventional sources: polycapillary focusing optics to enhance source coherence and mesh-based structured illumination.
Hierarchical convolutional network for sparse-view X-ray CT reconstruction
We present a hierarchical image reconstruction algorithm for 3D phase tomography based on densely extracted features on a multi-band pyramid of a convolutional network. By implementing a layer-wise hierarchical machine learning network and combining different bands of information for image retrieval, a more efficient and adaptive learning strategy is established that enables accurate reconstruction with fewer training data and improved accuracy. In addition, the distinction of intensity and spectral bands in the feature training process enables bias correction for reconstruction under varied conditions. In particular, we demonstrate robust image reconstruction for a sparse-view phase tomography application, where spectrally biased phase diffraction and intensity-biased photon noise are both successfully corrected.
Deep neural networks for sparse-view filtered backprojection imaging
Cain Gantt, Yuanwei Jin, Enyue Lu
Though effective and computationally efficient algorithms have been developed, the commonly utilized filtered backprojection (FBP) approach to computed tomography (CT) reconstruction suffers from artifact production in sparse-view applications. Within the past few years, convolutional neural networks (CNNs) have been applied to enhance sparse-view reconstruction in CT imaging. Using a network trained on sparse-view FBP reconstructions, the artifacts introduced by undersampling the imaging space can be removed. In this paper, we investigate specific choices in the implementation of the CNN, including the network architecture, training parameters, and data preprocessing, to determine effects on the images produced by the network. Our proposed algorithm and implementation strategies improve upon the use of FBP algorithms alone by removing artifacts produced during sparse-view CT reconstruction.
Poster Session
Spatial image filtering using spline approximation in the case of impulse noise
D. Bezuglov, V. Voronin, S. Mishchenko
Recently, systems for real-time intelligent processing of television images have been developing intensively. Modern computational imaging systems face stringent accuracy requirements due to the high variability of the working scene, the heterogeneity of objects, and interference. The presence of impulse noise, caused by factors such as defects in the recording system and environmental influences, significantly complicates digital image processing under a priori unknown observation conditions. One trend in modern information technology is the development of highly efficient computational methods for image filtering. Most filtering methods and algorithms require a priori knowledge of the characteristics of the distorting interference; in practice, such information is usually missing or only approximate. Currently, one of the urgent tasks in image processing is object contour detection in the presence of various kinds of interference. This paper considers a new real-time algorithm for spatial image filtering using spline approximation in the case of impulse noise. Mathematical modeling of the proposed approaches was carried out, and their advantages were shown. Results of testing the proposed methods on a test image database, in comparison with known methods, are presented.
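As a point of comparison for the impulse-noise setting the abstract describes, the classical baseline is a median filter: impulse-corrupted pixels are order-statistic outliers and are largely rejected by a small sliding-window median. The sketch below is that standard baseline on an invented test image, not the authors' spline-approximation method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Smooth 32x32 test image (horizontal ramp)
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))

# Corrupt 5% of pixels with salt-and-pepper (0 or 1) impulse noise
noisy = img.copy()
mask = rng.random(img.shape) < 0.05
noisy[mask] = rng.integers(0, 2, size=mask.sum()).astype(float)

# 3x3 sliding-window median filter
pad = np.pad(noisy, 1, mode="edge")
windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
filtered = np.median(windows, axis=(2, 3))

mse_noisy = np.mean((noisy - img) ** 2)
mse_filtered = np.mean((filtered - img) ** 2)
```

The median's weakness, which spline-based and other model-driven filters aim to address, is that it also rounds off legitimate fine detail such as edges and thin contours.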