Proceedings Volume 8138

Wavelets and Sparsity XIV


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 8 September 2011
Contents: 20 Sessions, 55 Papers, 0 Presentations
Conference: SPIE Optical Engineering + Applications 2011
Volume Number: 8138

Table of Contents

  • Front Matter: Volume 8138
  • Keynote Session
  • Sparsity in Wavefields: Radar, Acoustic, Optics and EEG
  • Frame Theory and Applications I
  • Applications of Sparse Representations in Bioimaging
  • Frame Theory and Sparse Approximations I
  • Non-Conventional Imaging Methods and Sparsity
  • Frame Theory and Applications II
  • Sparsity in Wavefields: Geosciences
  • Sparse Representations in Multidimensions I
  • Bases, Frames, and Dictionaries: Designs for Sparsity I
  • Frame Theory and Sparse Approximations II
  • Sparsity and Face Recognition
  • Sparsity in Wavefields: Tomography and MRI
  • Sparse Sampling and Image Reconstruction in MRI I
  • Sparse Representations on MRI
  • Bases, Frames, and Dictionaries: Designs for Sparsity II
  • Novel Filter and Dictionary Designs
  • Sparse Sampling and Image Reconstruction in MRI II
  • Sparse Representations in Multidimensions II
Front Matter: Volume 8138
Front Matter: Volume 8138
This PDF file contains the front matter associated with SPIE Proceedings Volume 8138, including the Title Page, Copyright Information, Table of Contents, Introduction, and the Conference Committee listing.
Keynote Session
Sparse reconstruction of visual appearance for computer graphics and vision
Ravi Ramamoorthi
A broad range of problems in computer graphics rendering, appearance acquisition for graphics and vision, and imaging, involve sampling, reconstruction, and integration of high-dimensional (4D-8D) signals. For example, precomputation-based real-time rendering of glossy materials and intricate lighting effects like caustics can involve (pre)-computing the response of the scene to different light and viewing directions, which is often a 6D dataset. Similarly, image-based appearance acquisition of facial details, car paint, or glazed wood, requires us to take images from different light and view directions. Even offline rendering of visual effects like motion blur from a fast-moving car, or depth of field, involves high-dimensional sampling across time and lens aperture. The same problems are also common in computational imaging applications such as light field cameras. In the past few years, computer graphics and computer vision researchers have made significant progress in subsequent analysis and compact factored or multiresolution representations for some of these problems. However, the initial full dataset must almost always still be acquired or computed by brute force. This is often prohibitively expensive, taking hours to days of computation and acquisition time, as well as being a challenge for memory usage and storage. For example, on the order of 10,000 megapixel images are needed for a 1 degree sampling of lights and views for high-frequency materials. We argue that dramatically sparser sampling and reconstruction of these signals is possible, before the full dataset is acquired or simulated. Our key idea is to exploit the structure of the data that often lies in lower-frequency, sparse, or low-dimensional spaces. Our framework will apply to a diverse set of problems such as sparse reconstruction of light transport matrices for relighting, sheared sampling and denoising for offline shadow rendering, time-coherent compressive sampling for appearance acquisition, and new approaches to computational photography and imaging.
Sparsity in Wavefields: Radar, Acoustic, Optics and EEG
SAR moving target imaging in a sparsity-driven framework
N. Özben Önhon, Müjdat Çetin
In synthetic aperture radar (SAR) imaging, sparsity-driven imaging techniques have been shown to provide high resolution images with reduced sidelobes and reduced speckle, by allowing the incorporation of prior information about the scene into the problem. Just like many common SAR imaging methods, these techniques also assume the targets in the scene are stationary over the data collection interval. Here, we consider the problem of imaging in the presence of targets with unknown motion in the scene. Moving targets cause phase errors in the SAR data, and these errors lead to defocusing in the corresponding spatial region in the reconstructed image. We view phase errors resulting from target motion as errors on the observation model of a static scene. Based on these observations, we propose a method which not only benefits from the advantages of sparsity-driven imaging but also compensates for the errors arising from the moving targets. Considering that in SAR imaging the underlying scene usually admits a sparse representation, a nonquadratic regularization-based framework is used. The proposed method is based on minimization of a cost function which involves regularization terms imposing sparsity on the reflectivity field to be imaged, as well as on the spatial structure of the motion-related phase errors, reflecting the assumption that only a small percentage of the entire scene contains moving targets. Experimental results demonstrate the effectiveness of the proposed approach in reconstructing focused images of scenes containing multiple targets with unknown motion.
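As a rough sketch of the kind of objective described above (the exact parameterization of the phase errors is the paper's; the form below is only illustrative), the reconstruction can be posed as a joint minimization over the reflectivity field f and the motion-induced phase errors φ:

\[
(\hat{f},\hat{\phi}) \;=\; \arg\min_{f,\phi}\; \big\| y - C(\phi)\, f \big\|_2^2 \;+\; \lambda_1 \| f \|_1 \;+\; \lambda_2 \| \phi \|_1 ,
\]

where C(φ) is the SAR observation model corrected for the phase errors, and the two ℓ1 terms impose sparsity on the scene reflectivity and on the spatial support of the motion-related phase errors, respectively.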
Refractive index map reconstruction in optical deflectometry using total-variation regularization
Laurent Jacques, Adriana González, Emmanuel Foumouo, et al.
We study the resolution of an inverse problem arising in Optical Deflectometry: the reconstruction of the refractive index map of transparent materials from light deflection measurements under multiple orientations. This problem is solved by a standard convex optimization procedure aiming at minimizing the Total Variation of the map (as prior information) while matching the observed data (fidelity constraint). In this process, the forward measurement operator, mapping any index map to its deflectometric observation, is simplified in the Fourier domain according to a modified Projection-Slice theorem.
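In schematic form (simplified relative to the paper), the reconstruction solves a TV-regularized program with a data-fidelity constraint:

\[
\hat{n} \;=\; \arg\min_{n}\; \mathrm{TV}(n) \quad \text{subject to} \quad \| \mathcal{D}\, n - y \|_2 \le \epsilon ,
\]

where n is the refractive index map, \(\mathcal{D}\) is the deflectometric forward operator (simplified in the Fourier domain via the modified Projection-Slice theorem), and ε reflects the noise level.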
Compressively sampling the plenacoustic function
Rémi Mignot, Gilles Chardon, Laurent Daudet
Directly measuring the full set of acoustic impulse responses within a room would require an unreasonably large number of measurements. Considering that the acoustic wavefield is sparse in some dictionaries, Compressed Sensing allows the recovery of the full wavefield with a reduced set of measurements, but raises challenging computational and memory issues. Two practical algorithms are presented and compared: one that exploits the structured sparsity of the soundfield, with projections of the modes onto plane waves sharing the same wavenumber, and one that computes a sparse decomposition on a dictionary of independent plane waves with time/space variable separation.
Frame Theory and Applications I
Weighted-l1 minimization with multiple weighting sets
Hassan Mansour, Özgür Yilmaz
In this paper, we study the support recovery conditions of weighted ℓ1 minimization for signal reconstruction from compressed sensing measurements when multiple support estimate sets with different accuracy are available. We identify a class of signals for which the recovered vector from ℓ1 minimization provides an accurate support estimate. We then derive stability and robustness guarantees for the weighted ℓ1 minimization problem with more than one support estimate. We show that applying smaller weights to support estimates that enjoy higher accuracy improves the recovery conditions compared with the case of a single support estimate and the case with standard, i.e., non-weighted, ℓ1 minimization. Our theoretical results are supported by numerical simulations on synthetic signals and real audio signals.
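A minimal statement of the weighted ℓ1 program with several weighting sets (notation assumed here: disjoint support estimates T̃_1, ..., T̃_m with weights ω_1, ..., ω_m ∈ [0, 1]):

\[
\min_{x}\; \sum_{i} w_i |x_i| \quad \text{subject to} \quad \| A x - y \|_2 \le \epsilon ,
\qquad
w_i = \begin{cases} \omega_j, & i \in \tilde{T}_j, \\ 1, & \text{otherwise,} \end{cases}
\]

with smaller weights ω_j assigned to the support estimates believed to be more accurate.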
Deterministic matrices with the restricted isometry property
Matthew Fickus, Dustin G. Mixon
The state of the art in compressed sensing uses sensing matrices which satisfy the restricted isometry property (RIP). Unfortunately, the known deterministic RIP constructions fall short of the random constructions, which are only valid with high probability. In this paper, we consider certain deterministic constructions and compare different proof techniques that demonstrate RIP in the deterministic setting.
Non-orthogonal fusion frames
Jameson Cahill, Peter G. Casazza, Shidong Li
Fusion frames have become a major tool in the implementation of distributed systems. The effectiveness of fusion frame applications in distributed systems is reflected in the efficiency of the end fusion process. This requires the inversion of the fusion frame operator which is difficult or impossible in practice. What we want is for the fusion frame operator to be the identity. But in most applications, especially to sensor networks, this almost never occurs. We will solve this problem by introducing the notion of non-orthogonal fusion frames which have the property that in most cases we can turn a family of subspaces of a Hilbert space into a non-orthogonal fusion frame which has a fusion frame operator which is the identity.
Grassmannians in frame theory
Jameson Cahill, Shidong Li
The Grassmannian is the set of k-dimensional subspaces of an n-dimensional vector space. This set naturally parametrizes isomorphism classes of frames. We will present several results which exploit this fact. In particular, we will use the Plücker embedding of the Grassmannian to identify a class of frames which is optimal for the maximal number of erasures.
Applications of Sparse Representations in Bioimaging
Higher degree total variation (HDTV) algorithms for biomedical inverse problems
Yue Hu, Mathews Jacob
We introduce novel generalized regularization functionals to overcome the practical problems associated with the current total variation (TV) penalty. Specifically, we extend the TV scheme to higher order derivatives to improve the representation of smoothly varying image regions. In addition, we introduce a rotation-invariant anisotropic TV penalty to improve the regularity of the edge contours. The validation of the scheme demonstrates the significantly improved performance of the proposed methods in the context of compressed sensing and denoising.
Tissue quantification in photon-limited microendoscopy
Zachary T. Harmany, Jenna Mueller, Quincy Brown, et al.
This paper explores the use of Poisson sparse decomposition methods for computationally separating tumor nuclei from normal tissue structures in photon-limited microendoscopic images. Sparse decomposition tools are a natural fit for this application, with promising preliminary results. However, there are significant tradeoffs among the different algorithms used for Poisson sparse decomposition, which are described in detail and demonstrated via simulation.
Coherent source imaging and dynamic support tracking for inverse scattering using compressive MUSIC
Okkyun Lee, Jong Min Kim, Jaejoon Yoo, et al.
The goal of this paper is to develop novel algorithms for inverse scattering problems such as EEG/MEG, microwave imaging, and diffuse optical tomography. One of the main contributions of this paper is a class of novel non-iterative exact nonlinear inverse scattering theories for coherent source imaging and moving targets. Specifically, the new algorithms guarantee exact recovery under a very relaxed constraint on the number of sources and receivers, under which the conventional methods fail. Such a breakthrough was possible thanks to the recent theory of compressive MUSIC and its extension using a support correction criterion, where a partial support is estimated using conventional compressed sensing approaches, and the remaining support is then estimated using a novel generalized MUSIC criterion. Numerical results using coherent sources in EEG/MEG and dynamic targets confirm that the new algorithms outperform the conventional ones.
Numerical evaluation of subsampling effects on image reconstruction in compressed sensing microscopy
When undergoing reconstruction through a compressed sensing scheme, real microscopic images are affected by various artifacts that lead to detail loss. Here, we discuss how the sampling strategy and the subsampling rate affect the compressed sensing reconstruction, and how they should be determined according to a targeted accuracy level of the reconstruction. We investigate the relevance and limits of theoretical results through several numerical reconstructions of test images. We discuss the quantitative and qualitative artifacts that affect the reconstructed signal when reducing the number of measurements in the Fourier domain. We conclude by extending our results to real microscopic images.
Fresnelab: sparse representations of digital holograms
Digital holography plays an increasingly important role for biomedical imaging; it has particularly low invasiveness and allows quantitatively characterizing both amplitude and phase of propagating wave fronts. Fresnelets have been introduced as both a conceptual and practical tool to reconstruct digital holograms, simulate the propagation of monochromatic waves, or compress digital holograms. Propagating wavefronts that have a sparse representation in a traditional wavelet basis in their originating plane have a similarly sparse representation in propagation-distance-dependent Fresnelet bases. Although several applications have been reported, no implementation has been made widely available. Here we describe a Matlab-based Fresnelets toolbox that provides a set of user-friendly functions to implement the Fresnelet transform.
Frame Theory and Sparse Approximations I
Joint sparsity models for wideband array processing
Petros T. Boufounos, Paris Smaragdis, Bhiksha Raj
Recent work has demonstrated the power of sparse models and representations in signal processing applications and has provided the community with computational tools to use them. In this paper we explore the use of sparsity in localization and beamforming when capturing multiple broadband sources using a sensor array. Specifically, we reformulate the wideband signal acquisition as a joint/group sparsity problem in a combined frequency-space domain. Under this formulation the signal is sparse in the spatial domain but has common support in all frequencies. Using techniques from the model-based compressive sensing literature we demonstrate that it is possible to robustly capture, localize and often reconstruct multiple signals present in the scene.
Uniqueness conditions for low-rank matrix recovery
Low-rank matrix recovery addresses the problem of recovering an unknown low-rank matrix from few linear measurements. Nuclear-norm minimization is a tractable approach with a recent surge of strong theoretical backing. Analogous to the theory of compressed sensing, these results have required random measurements. For example, m ≥ Cnr Gaussian measurements are sufficient to recover any rank-r n × n matrix with high probability. In this paper we address the theoretical question of how many measurements are needed via any method whatsoever, tractable or not. We show that for a family of random measurement ensembles, m ≥ 4nr − 4r² measurements are sufficient to guarantee that no rank-2r matrix lies in the null space of the measurement operator with probability one. This is a necessary and sufficient condition to ensure uniform recovery of all rank-r matrices by rank minimization. Furthermore, this value of m precisely matches the dimension of the manifold of all rank-2r matrices. We also prove that for a fixed rank-r matrix, m ≥ 2nr − r² + 1 random measurements are enough to guarantee recovery using rank minimization. These results give a benchmark to which we may compare the efficacy of nuclear-norm minimization.
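The abstract contrasts these information-theoretic bounds with nuclear-norm minimization; a minimal proximal-gradient (singular value thresholding) sketch of the latter, with purely illustrative parameters, might look like this:

```python
import numpy as np

def svt_recover(A, y, n, tau=0.01, iters=500):
    """Proximal-gradient sketch for nuclear-norm regularized recovery of an
    n x n matrix from y = A vec(X).  Illustrative only: step size, tau, and
    iteration count all affect whether low-rank recovery actually occurs."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step for the quadratic term
    X = np.zeros((n, n))
    for _ in range(iters):
        r = A @ X.ravel() - y                       # residual of the linear measurements
        X = X - step * (A.T @ r).reshape(n, n)      # gradient step on the data fit
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau * step, 0.0)) @ Vt   # shrink singular values
    return X

# toy usage: rank-1 ground truth, Gaussian measurement ensemble
rng = np.random.default_rng(0)
n, m = 20, 240
X_true = np.outer(rng.standard_normal(n), rng.standard_normal(n))
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A @ X_true.ravel()
X_hat = svt_recover(A, y, n)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```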
Non-Conventional Imaging Methods and Sparsity
Diffuse imaging: replacing lenses and mirrors with omnitemporal cameras
Ahmed Kirmani, Haris Jeelani, Vahid Montazerhodjat, et al.
Conventional imaging uses steady-state illumination and light sensing with focusing optics; variations of the light field with time are not exploited. We develop a signal processing framework for estimating the reflectance f of a Lambertian planar surface in a known position using omnidirectional, time-varying illumination and unfocused, time-resolved sensing in place of traditional optical elements such as lenses and mirrors. Our model associates time sampling of the intensity of light incident at each sensor with a linear functional of f. The discrete-time samples are processed to obtain ℓ2-regularized estimates of f. Using non-impulsive, bandlimited light sources instead of impulsive illumination significantly improves signal-to-noise ratio (SNR) and reconstruction quality.
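A minimal sketch of the ℓ2-regularized (Tikhonov) estimate mentioned above, with A standing in for the discretized light-transport operator (a placeholder, not the paper's exact model):

```python
import numpy as np

def l2_regularized_estimate(A, y, lam=1e-2):
    """Ridge estimate f_hat = argmin_f ||y - A f||^2 + lam * ||f||^2,
    solved via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

Each row of A here plays the role of one discrete-time light-intensity sample, i.e., one linear functional of f in the abstract's model.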
Localization of point sources for systems governed by the wave equation
Zafer Dogan, Vagia Tsiminaki, Ivana Jovanovic, et al.
Analytic sensing has recently been proposed for source localization from boundary measurements using a generalization of the finite-rate-of-innovation framework. The method is tailored to the quasi-static electromagnetic approximation, which is commonly used in electroencephalography. In this work, we extend analytic sensing to physical systems that are governed by the wave equation; i.e., the sources emit signals that travel as waves through the volume and are measured at the boundary over time. This source localization problem is highly ill-posed (i.e., the unicity of the source distribution is not guaranteed) and additional assumptions about the sources are needed. We assume that the sources can be described with a finite number of parameters; in particular, we consider point sources that are characterized by their position and strength. This assumption makes the solution unique and turns the problem into parametric estimation. Following the framework of analytic sensing, we propose a two-step method. In the first step, we extend the reciprocity gap functional concept to wave-equation-based test functions; i.e., well-chosen test functions relate the boundary measurements to generalized measures that contain volumetric information about the sources within the domain. In the second step, again owing to the choice of the test functions, we apply the finite-rate-of-innovation principle; i.e., the generalized samples can be annihilated by a known filter, thus turning the non-linear source localization problem into an equivalent root-finding one. We demonstrate the feasibility of our technique for a 3-D spherical geometry. The performance of the reconstruction algorithm is evaluated in the presence of noise and compared with the theoretical limit given by Cramér-Rao lower bounds.
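The root-finding step can be illustrated with a generic finite-rate-of-innovation sketch (the exponential-sum sample model and parameter names below are assumptions for illustration, not the paper's exact generalized samples):

```python
import numpy as np

def annihilating_filter_roots(s, K):
    """Recover K innovation parameters from generalized samples assumed to
    follow s[n] = sum_k c_k * u_k**n.  Builds the Toeplitz system annihilated
    by a length-(K+1) filter h and returns the roots of h, which encode the
    source locations."""
    N = len(s)
    # rows: [s[n], s[n-1], ..., s[n-K]] for n = K .. N-1
    T = np.array([s[n - np.arange(K + 1)] for n in range(K, N)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                 # (numerical) null vector of T
    return np.roots(h)

# toy usage: two sources encoded as unit-modulus exponentials
u_true = np.exp(1j * np.array([0.7, 2.1]))
c = np.array([1.0, 0.5])
n = np.arange(16)
s = (c[None, :] * u_true[None, :] ** n[:, None]).sum(axis=1)
print(np.angle(annihilating_filter_roots(s, K=2)))   # close to 0.7 and 2.1 (in some order)
```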
Frame Theory and Applications II
Frame completions for optimally robust reconstruction
Matthew Fickus, Dustin G. Mixon, Miriam J. Poteet
In information fusion, one is often confronted with the following problem: given a preexisting set of measurements about an unknown quantity, what new measurements should one collect in order to accomplish a given fusion task with optimal accuracy and efficiency? We illustrate just how difficult this problem can become by considering one of its simpler forms: when the unknown quantity is a vector in a Hilbert space, the task itself is vector reconstruction, and the measurements are linear functionals, that is, inner products of the unknown vector with given measurement vectors. Such reconstruction problems are the subject of frame theory. Here, we can measure the quality of a given frame by the average reconstruction error induced by noisy measurements; the mean square error is known to be the trace of the inverse of the frame operator. We discuss preliminary results which help indicate how to add new vectors to a given frame in order to reduce this mean square error as much as possible.
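A small numerical illustration of this quality measure and of a greedy completion step (the candidate vectors and sizes are arbitrary choices for the example):

```python
import numpy as np

def mse_of_frame(F):
    """Mean square reconstruction error under white measurement noise:
    the trace of the inverse frame operator (F F^*)^{-1}, as in the abstract."""
    S = F @ F.conj().T
    return np.trace(np.linalg.inv(S)).real

def best_vector_to_add(F, candidates):
    """Greedy illustration: among candidate unit-norm measurement vectors,
    pick the one whose addition to the frame reduces the MSE the most."""
    scores = [mse_of_frame(np.column_stack([F, v])) for v in candidates]
    return candidates[int(np.argmin(scores))]

# toy usage in R^3 with 4 existing measurement vectors
rng = np.random.default_rng(1)
F = rng.standard_normal((3, 4))
cands = [v / np.linalg.norm(v) for v in rng.standard_normal((50, 3))]
v_new = best_vector_to_add(F, cands)
print(mse_of_frame(F), "->", mse_of_frame(np.column_stack([F, v_new])))
```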
Geometric optimization on spaces of finite frames
Nate Strawn
A finite (μ; S)-frame variety consists of the real or complex matrices F = [f1 ... fN] with frame operator FF* = S and satisfying ||fi|| = μi for all i = 1, ..., N. Here, S is a fixed Hermitian positive definite matrix and μ = [μ1, ..., μN] is a fixed list of lengths. These spaces generalize the well-known spaces of finite unit norm tight frames. We explore the local geometry of these spaces and develop geometric optimization algorithms based on the resulting insights.
Sparse dual frames in compressed sensing
Shidong Li, Tiebin Mi, Yulong Liu
A notion of sparse dual frames for a given non-exact frame is introduced. The sparse dual frame is motivated by a study of compressed sensing problems where signals are sparse with respect to a redundant and coherent dictionary (frame). A sparse-dual-based ℓ1-analysis is thereby proposed. We show that sparse dual frames are locally stable. An error bound ensuring correct signal recovery is obtained. More importantly, for very hard problems in compressed sensing with redundant dictionaries, where the known ℓ1-synthesis and ℓ1-analysis algorithms are completely unsuccessful, satisfactory solutions are obtained by the new sparse-dual-based approach together with an alternating iterative algorithm that we propose. Examples are provided.
An uncertainty principle for functions defined on graphs
Ameya Agaskar, Yue M. Lu
The classical uncertainty principle provides a fundamental tradeoff in the localization of a function in the time and frequency domains. In this paper we extend this classical result to functions defined on graphs. We justify the use of the graph Laplacian's eigenbasis as a surrogate for the Fourier basis for graphs, and define the notions of "spread" in the graph and spectral domains. We then establish an analogous uncertainty principle relating the two quantities, showing the degree to which a function can be simultaneously localized in the graph and spectral domains.
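A sketch of the two spreads for a toy graph, using one common set of definitions consistent with the abstract (the paper's exact normalization may differ):

```python
import numpy as np

def spectral_spread(L, f):
    """Spectral-domain spread of f on a graph with Laplacian L:
    f^T L f / ||f||^2 (sum of Laplacian eigenvalues weighted by |f_hat|^2)."""
    return float(f @ L @ f) / float(f @ f)

def graph_spread(dist_from_u0, f):
    """Graph-domain spread about a chosen vertex u0, using hop distances:
    sum_i d(u0, i)^2 f(i)^2 / ||f||^2."""
    return float((dist_from_u0 ** 2 * f ** 2).sum()) / float(f @ f)

# toy usage: a path graph on 6 vertices
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
dist = np.arange(n, dtype=float)               # hop distance from vertex 0
f = np.exp(-0.5 * dist)                        # a function concentrated near u0
print(graph_spread(dist, f), spectral_spread(L, f))
```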
A domain-knowledge-inspired mathematical framework for the description and classification of H&E stained histopathology images
Melody L. Massar, Ramamurthy Bhagavatula, John A. Ozolek, et al.
We present the current state of our work on a mathematical framework for the identification and delineation of histopathology images: local histograms and occlusion models. Local histograms are histograms computed over defined spatial neighborhoods whose purpose is to characterize an image locally. This unit of description is augmented by our occlusion models, which describe a methodology for image formation. In the context of this image formation model, the power of local histograms with respect to appropriate families of images will be shown through various proved statements about expected performance. We conclude by presenting a preliminary study to demonstrate the power of the framework in the context of histopathology image classification tasks that, while differing greatly in application, both originate from what is considered an appropriate class of images for this framework.
Sparsity in Wavefields: Geosciences
A modified, sparsity-promoting, Gauss-Newton algorithm for seismic waveform inversion
Felix J. Herrmann, Xiang Li, Aleksandr Y. Aravkin, et al.
Images obtained from seismic data are used by the oil and gas industry for geophysical exploration. Cutting-edge methods for transforming the data into interpretable images are moving away from linear approximations and high-frequency asymptotics towards Full Waveform Inversion (FWI), a nonlinear data-fitting procedure based on full data modeling using the wave equation. The size of the problem, the nonlinearity of the forward model, and the ill-posedness of the formulation all contribute to a pressing need for fast algorithms and novel regularization techniques to speed up and improve inversion results. In this paper, we design a modified Gauss-Newton algorithm to solve the PDE-constrained optimization problem using ideas from stochastic optimization and compressive sensing. More specifically, we replace the Gauss-Newton subproblems by randomly subsampled, ℓ1-regularized subproblems. This allows us to significantly reduce the computational cost of calculating the updates and to exploit the compressibility of wavefields in curvelets. We explain the relationships and connections between the new method and stochastic optimization and compressive sensing (CS), and demonstrate the efficacy of the new method on a large-scale synthetic seismic example.
Wavelets and wavelet-like transforms on the sphere and their application to geophysical data inversion
Frederik J. Simons, Ignace Loris, Eugene Brevdo, et al.
Many flexible parameterizations exist to represent data on the sphere. In addition to the venerable spherical harmonics, we have the Slepian basis, harmonic splines, wavelets and wavelet-like Slepian frames. In this paper we focus on the latter two: spherical wavelets developed for geophysical applications on the cubed sphere, and the Slepian "tree", a new construction that combines a quadratic concentration measure with wavelet-like multiresolution. We discuss the basic features of these mathematical tools, and illustrate their applicability in parameterizing large-scale global geophysical (inverse) problems.
Mismatch and resolution in compressive imaging
Albert Fannjiang, Wenjing Liao
Highly coherent sensing matrices arise in the discretization of continuum problems, such as radar and medical imaging, when the grid spacing is below the Rayleigh threshold, as well as in the use of highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provable performance guarantees for reconstructing sparse, widely separated objects independent of the redundancy, and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.
Sparse Representations in Multidimensions I
Coarse quantization with the fast digital shearlet transform
Bernhard G. Bodmann, Gitta Kutyniok, Xiaosheng Zhuang
The fast digital shearlet transform (FDST) was recently introduced as a means to analyze natural images efficiently, owing to the fact that those are typically governed by cartoon-like structures. In this paper, we introduce and discuss a first-order hybrid sigma-delta quantization algorithm for coarsely quantizing the shearlet coefficients generated by the FDST. Radial oversampling in the frequency domain together with our choice for the quantization helps suppress the reconstruction error in a similar way as first-order sigma-delta quantization for finite frames. We provide a theoretical bound for the reconstruction error and confirm numerically that the error is in accordance with this theoretical decay.
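For reference, a plain first-order sigma-delta quantizer (without the paper's hybrid modification or the radial oversampling of the FDST) can be sketched as follows:

```python
import numpy as np

def first_order_sigma_delta(coeffs, delta=0.25):
    """First-order sigma-delta quantization of a real coefficient sequence onto
    the grid delta*Z.  A generic scheme, shown only to illustrate the idea of
    coarse quantization with noise shaping."""
    q = np.empty_like(coeffs)
    u = 0.0                                        # internal state (running error)
    for n, x in enumerate(coeffs):
        q[n] = delta * np.round((u + x) / delta)   # quantize state + input
        u = u + x - q[n]                           # update the error state
    return q

# toy usage
x = np.sin(np.linspace(0, 3, 32))
xq = first_order_sigma_delta(x, delta=0.5)
print("coarse levels used:", np.unique(xq))
```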
Discrete shearlet transform: faithful digitization concept and its applications
Wang-Q Lim
Over the past years, various representation systems which sparsely approximate functions governed by anisotropic features, such as edges in images, have been proposed. Alongside the theoretical development of these systems, algorithmic realizations of the associated transforms were provided. However, one of the most common shortcomings of these frameworks is the lack of a unified treatment of the continuum and digital world, i.e., allowing a digital theory to be a natural digitization of the continuum theory. Shearlets were introduced as a means to sparsely encode anisotropic singularities of multivariate data while providing a unified treatment of the continuous and digital realm. In this paper, we introduce a discrete framework which allows a faithful digitization of the continuum domain shearlet transform based on compactly supported shearlets. Finally, we show numerical experiments demonstrating the potential of the discrete shearlet transform in several image processing applications.
Bases, Frames, and Dictionaries: Designs for Sparsity I
On accelerated hard thresholding methods for sparse approximation
Volkan Cevher
We propose and analyze acceleration schemes for hard thresholding methods with applications to sparse approximation in linear inverse systems. Our acceleration schemes fuse combinatorial, sparse projection algorithms with convex optimization algebra to provide computationally efficient and robust sparse recovery methods. We compare and contrast the (dis)advantages of the proposed schemes with the state-of-the-art, not only within hard thresholding methods, but also within convex sparse recovery algorithms.
Deblurring of Poissonian images using BM3D frames
Aram Danielyan, Vladimir Katkovnik, Karen Egiazarian
We propose a novel deblurring algorithm for Poissonian images. The algorithm uses data adaptive BM3D-frames for sparse image modeling. Reconstruction is formulated as a generalized Nash equilibrium problem, seeking a balance between the data fit and the complexity of the solution. Simulated experiments demonstrate numerical and visual superiority of the proposed algorithm over the current state-of-the-art methods.
Frame Theory and Sparse Approximations II
Spectral tetris fusion frame constructions
Peter G. Casazza, Matthew Fickus, Andreas Heinecke, et al.
A fusion frame is a collection of subspaces in a Hilbert space, generalizing the idea of a frame for signal representation. A tool to construct fusion frames is the spectral tetris algorithm, a flexible and elementary method to construct unit norm frames with a given frame operator having all of its eigenvalues greater than or equal to two. We discuss how spectral tetris can be used to construct fusion frames with prescribed eigenvalues for its fusion frame operator and with prescribed dimensions for its subspaces.
Stable signal recovery from the roots of the short-time Fourier transform
Bernhard G. Bodmann, Christopher L. Liner
This paper presents a method to recover a bandlimited signal, up to an overall multiplicative constant, from the roots of its short-time Fourier transform. We assume that only finitely many sample values are non-zero. To generate the number of roots needed for recovery, we use a type of aliasing, a time-frequency quasi-periodization of the transform. We investigate the stability of the recovery algorithm under perturbations of the signal, in particular under low-pass filtering, and verify the stability results with numerical experiments. In these experiments we implement a deconvolution strategy for sparse bandlimited signals, whose non-zero sample values are interspersed with vanishing ones. The recovery from roots of such signals is insensitive to the effect of random echoes. In addition, we study the effect of aliasing by the time-frequency quasi-periodization on such sparse signals. If the signal is convolved with white noise, then the number of roots generated with the quasi-periodized short-time Fourier transform can be adjusted to be proportional to the number of non-vanishing samples to give recoverability with overwhelming probability.
Analysis of data separation and recovery problems using clustered sparsity
Emily J. King, Gitta Kutyniok, Xiaosheng Zhuang
Data often have two or more fundamental components, like cartoon-like and textured elements in images; point, filament, and sheet clusters in astronomical data; and tonal and transient layers in audio signals. For many applications, separating these components is of interest. Another issue in data analysis is that of incomplete data, for example a photograph with scratches or seismic data collected with fewer than necessary sensors. There exists a unified approach to solving these problems which is minimizing the ℓ1 norm of the analysis coefficients with respect to particular frame(s). This approach using the concept of clustered sparsity leads to similar theoretical bounds and results, which are presented here. Furthermore, necessary conditions for the frames to lead to sufficiently good solutions are also shown.
Sparsity and Face Recognition
An experimental comparison of online object-tracking algorithms
Qing Wang, Feng Chen, Wenli Xu, et al.
This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of efforts, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods, including the IVT, VRT, FragT, BoostT, SemiT, BeSemiT, L1T, MILT, VTD, and TLD algorithms, on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strengths and weaknesses of these algorithms.
Sparsity in Wavefields: Tomography and MRI
Use of learned dictionaries in tomographic reconstruction
Vincent Etter, Ivana Jovanovic, Martin Vetterli
We study the use and impact of a dictionary in a tomographic reconstruction setup. First, we build two different dictionaries: one using a set of basis functions (the Discrete Cosine Transform), and the other learned using patches extracted from training images similar to the image that we would like to reconstruct. We use K-SVD as the learning algorithm. These dictionaries being local, we convert them to global dictionaries, ready to be applied to whole images, by generating all possible shifts of each atom across the image. During the reconstruction, we minimize the reconstruction error by performing a gradient descent on the image representation in the dictionary space. Our experiments show promising results, allowing us to eliminate standard artifacts in the tomographic reconstruction and to reduce the number of measurements required for the inversion. However, the quality of the results depends on the convergence of the learning process and on the parameters of the dictionaries (number of atoms, convergence criterion, atom size, etc.). The exact influence of each of these remains to be studied.
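A minimal sketch of the reconstruction step described above, minimizing the data misfit over the dictionary representation by plain gradient descent (R and D are placeholders for the projection operator and the global dictionary, which the paper builds from DCT atoms or K-SVD-learned patches):

```python
import numpy as np

def reconstruct_in_dictionary(R, D, y, step=1e-3, iters=200):
    """Minimize ||R D a - y||^2 over the representation a by gradient descent,
    then return the image D a.  Step size and iteration count are illustrative."""
    A = R @ D
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a -= step * (A.T @ (A @ a - y))   # gradient of the quadratic data fit
    return D @ a
```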
On structured sparsity and selected applications in tomographic imaging
This work explores the potential of structure encoding in sparse tomographic reconstructions. We encode spatial structure with Markov Random Field (MRF) models and employ it within Magnetic Resonance Imaging (MRI) and Quantitative Microwave Tomography. We thereby also illustrate different ways of MRF modelling: as a discrete, binary field imposed on hidden labels, and as a continuous model imposed on the observable field. In the case of MRI, the analyzed approach is a straightforward extension of sparse MRI methods and is related to the so-called LaMP (Lattice Matching Pursuit) algorithm, but with a number of differences. In the case of Microwave Tomography, we give another interpretation of structured sparsity using a rather different, but also effective, approach. Thorough experiments demonstrate clear advantages of MRF-based structure encoding in both cases and strongly motivate further development.
Some proximal methods for CBCT and PET tomography
S. Anthoine, J. F. Aujol, Y. Boursier, et al.
The reconstruction of images obtained via Cone Beam Computerized Tomography (CBCT) and Positron Emission Tomography (PET) scanners is an ill-posed inverse problem. One needs to address carefully the problem of inverting the mathematical operators involved. Recent advances in optimization have yielded efficient algorithms to solve very general classes of inverse problems via the minimization of non-differentiable convex functions. We show that such models are well suited to solve the CBCT and PET reconstruction problems. On the one hand, they can incorporate directly the physics of new acquisition devices, free of dark noise; on the other hand, they can take into account the specificity of pure Poisson noise. We propose various fast numerical schemes to recover the original data and compare them to state-of-the-art algorithms on simulated data. We study more specifically how different contrasts and resolutions may be resolved as the level of noise and/or the number of projections of the acquired sinograms decreases. We conclude that the proposed algorithms compare favorably with well-established methods in tomography.
Sampling theorems and compressive sensing on the sphere
Jason D. McEwen, Gilles Puy, Jean-Philippe Thiran, et al.
We discuss a novel sampling theorem on the sphere developed by McEwen & Wiaux recently through an association between the sphere and the torus. To represent a band-limited signal exactly, this new sampling theorem requires less than half the number of samples of other equiangular sampling theorems on the sphere, such as the canonical Driscoll & Healy sampling theorem. A reduction in the number of samples required to represent a band-limited signal on the sphere has important implications for compressive sensing, both in terms of the dimensionality and sparsity of signals. We illustrate the impact of this property with an inpainting problem on the sphere, where we show superior reconstruction performance when adopting the new sampling theorem.
Compressed sensing in k-space: from magnetic resonance imaging and synthetic aperture radar
Mike E. Davies, Chaoran Du, Shaun I. Kelly, et al.
We consider two imaging applications of compressed sensing where the acquired data correspond to samples in the Fourier domain (a.k.a. k-space). The first one is magnetic resonance imaging (MRI), which has been one of the standard examples in the compressed sensing literature. The second one is synthetic aperture radar (SAR). We consider the practical issues of applying compressed sensing ideas in these two applications, noting that the physical processes involved in these two sensing modalities are very different. We consider the issues of appropriate image models and sampling strategies, dealing with noise, and the need for calibration.
Sparse Sampling and Image Reconstruction in MRI I
Compressive sensing MRI with complex sparsification
Compressive Sensing Magnetic Resonance Imaging (CS-MRI) has developed rapidly during the last several years. To reconstruct an image from incomplete data using compressive sensing, the image has to be sparse or transformable to a sparse representation. The gradient operator associated with total variation (TV) and the discrete wavelet transform (DWT) are two commonly used sparsifying transforms in CS-MRI. Since the data acquired in MRI are complex, these transforms are usually applied to the real and the imaginary parts of the image independently. In this paper, we explore the application of the complex wavelet transform (CWT) as a more effective sparsifying transform for CS-MRI. Specifically, the dual-tree complex wavelet transform (DT-CWT), a CWT previously used for real or complex image compression, is integrated with a compressive sensing reconstruction algorithm. We test the new method using both simulated and in-vivo MRI data, and compare the results with those of DWT and TV; the comparison shows that the new method can achieve better sparsity and reduced reconstruction errors in CS-MRI.
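For orientation, a toy ISTA-style CS-MRI reconstruction from undersampled k-space is sketched below; it soft-thresholds the image itself rather than TV, DWT, or DT-CWT coefficients, so it is a simplified stand-in for the sparsifying transforms compared in the paper:

```python
import numpy as np

def cs_mri_ista(y, mask, lam=0.02, iters=100):
    """Toy ISTA reconstruction from masked k-space samples y (same shape as
    the 0/1 sampling mask), assuming the image itself is the sparse object."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(iters):
        # gradient step on ||mask * FFT(x) - y||^2 (orthonormal FFT, unit step)
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - np.fft.ifft2(mask * resid, norm="ortho")
        # proximal step: complex (magnitude) soft-thresholding
        mag = np.abs(x)
        x = x / np.maximum(mag, 1e-12) * np.maximum(mag - lam, 0.0)
    return x
```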
Sparse Representations on MRI
Paradigm-free mapping with morphological component analysis: getting most out of fMRI data
César Caballero Gaudes, Dimitri Van De Ville, Natalia Petridou, et al.
Functional magnetic resonance imaging (fMRI) is a non-invasive imaging technique that maps the brain's response to neuronal activity based on the blood oxygenation level dependent (BOLD) effect. This work proposes a novel method for fMRI data analysis that enables the decomposition of the fMRI signal into its sources based on morphological descriptors. Beyond traditional fMRI hypothesis-based or blind data-driven exploratory approaches, this method allows the detection of BOLD responses without prior timing information. It is based on the deconvolution of the neuronal-related haemodynamic component of the fMRI signal with paradigm free mapping, and also furnishes estimates of the movement-related effects, instrumental drifts, and physiological fluctuations. Our algorithm is based on an overcomplete representation of the fMRI voxel time series with an additive linear model that is recovered by means of an ℓ1-norm regularized least-squares estimator and an adapted block coordinate relaxation procedure. The performance of the technique is evaluated with simulated data and real experimental data acquired at 3T.
Regularizing GRAPPA using simultaneous sparsity to recover de-noised images
Daniel S. Weller, Jonathan R. Polimeni, Leo Grady, et al.
To enable further acceleration of magnetic resonance (MR) imaging, compressed sensing (CS) is combined with GRAPPA, a parallel imaging method, to reconstruct images from highly undersampled data with significantly improved RMSE compared to reconstructions using GRAPPA alone. This novel combination of GRAPPA and CS regularizes the GRAPPA kernel computation step using a simultaneous sparsity penalty function of the coil images. This approach can be implemented by formulating the problem as the joint optimization of the least squares fit of the kernel to the ACS lines and the sparsity of the images generated using GRAPPA with the kernel.
Bases, Frames, and Dictionaries: Designs for Sparsity II
Learned dictionaries for sparse image representation: properties and results
Karl Skretting, Kjersti Engan
Sparse representation of images using learned dictionaries has been shown to work well for applications like image denoising, inpainting, image compression, etc. In this paper, dictionary properties are reviewed from a theoretical approach, and experimental results for learned dictionaries are presented. The main dictionary properties are the upper and lower frame (dictionary) bounds, and (mutual) coherence properties based on the angle between dictionary atoms. Both ℓ0 sparsity and ℓ1 sparsity are considered by using a matching pursuit method, order recursive matching pursuit (ORMP), and a basis pursuit method, i.e., LARS or Lasso. For dictionary learning the following methods are considered: iterative least squares (ILS-DLA or MOD), recursive least squares (RLS-DLA), K-SVD, and online dictionary learning (ODL). Finally, it is shown how these properties relate to an image compression example.
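The two families of properties reviewed in the abstract are easy to compute for a given dictionary; a minimal sketch (the random dictionary is used purely as an example):

```python
import numpy as np

def dictionary_properties(D):
    """Lower/upper frame bounds and mutual coherence of a dictionary D whose
    columns are the (unit-norm) atoms."""
    s = np.linalg.svd(D, compute_uv=False)
    A, B = s[-1] ** 2, s[0] ** 2        # frame bounds: extreme eigenvalues of D D^T
    G = np.abs(D.T @ D)                 # Gram matrix magnitudes (cosines of atom angles)
    np.fill_diagonal(G, 0.0)
    mu = G.max()                        # mutual coherence: largest off-diagonal entry
    return A, B, mu

# toy usage: random unit-norm dictionary, 64-dimensional signals, 128 atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
print(dictionary_properties(D))
```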
Learning hierarchical and topographic dictionaries with structured sparsity
Julien Mairal, Rodolphe Jenatton, Guillaume Obozinski, et al.
Recent work in signal processing and statistics has focused on defining new regularization functions which not only induce sparsity of the solution, but also take into account the structure of the problem. We present in this paper a class of convex penalties introduced in the machine learning community, which take the form of a sum of ℓ2- and ℓ∞-norms over groups of variables. They extend the classical group-sparsity regularization in the sense that the groups may overlap, allowing more flexibility in the group design. We review efficient optimization methods to deal with the corresponding inverse problems, and their application to the problem of learning dictionaries of natural image patches: on the one hand, dictionary learning has proven effective for various signal processing tasks; on the other hand, structured sparsity provides a natural framework for modeling dependencies between dictionary elements. We thus consider a structured sparse regularization to learn dictionaries embedded in a particular structure, for instance a tree or a two-dimensional grid. In the latter case, the results we obtain are similar to the dictionaries produced by topographic independent component analysis.
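A minimal sketch of the penalty family described above, summing ℓ2 (or ℓ∞) norms over possibly overlapping groups (the tree-structured groups below are an arbitrary illustration):

```python
import numpy as np

def structured_sparsity_penalty(w, groups, norm="l2"):
    """Sum of ell_2 (or ell_inf) norms of w over a set of possibly overlapping
    index groups: Omega(w) = sum_g ||w_g||."""
    sub = np.linalg.norm if norm == "l2" else (lambda v: np.abs(v).max())
    return sum(sub(w[list(g)]) for g in groups)

# toy usage: a small tree of overlapping groups over 7 variables
groups = [range(7), range(1, 4), range(4, 7), [1], [2], [3], [4], [5], [6]]
w = np.array([1.0, 0.0, 0.5, 0.0, 0.0, -0.2, 0.0])
print(structured_sparsity_penalty(w, groups))
```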
Design of a tight frame of 2D shearlets based on a fast non-iterative analysis and synthesis algorithm
The shearlet transform is a recent sibling in the family of geometric image representations that provides a traditional multiresolution analysis combined with a multidirectional analysis. In this paper, we present a fast DFT-based analysis and synthesis scheme for the 2D discrete shearlet transform. Our scheme conforms to the continuous shearlet theory to a high extent, provides perfect numerical reconstruction (up to floating point rounding errors) in a non-iterative scheme, and is highly suitable for parallel implementation (e.g., FPGA, GPU). We show that our discrete shearlet representation is also a tight frame and that the redundancy factor of the transform is around 2.6, independent of the number of analysis directions. Experimental denoising results indicate that the transform performs the same as or even better than several related multiresolution transforms, while having a significantly lower redundancy factor.
A diagonally-oriented DCT-like 2D block transform
Due to the prevalence of edges in image content, various directional transforms have been proposed for the efficient representation of images. Such transforms are useful for coding, denoising, and image restoration using sparse signal representation techniques. This paper describes a new non-separable 2D DCT-like orthonormal block transform that is optimized for a specified orientation angle. The approach taken in this paper is to extend to two dimensions one approach (of several) for constructing the standard 1D DCT. The proposed transform is obtained as the eigenvectors of particular matrices, as is the standard 1D DCT.
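In one dimension, the eigenvector construction alluded to above can be checked numerically: the DCT-II basis vectors are the eigenvectors of the second-difference matrix with reflecting boundary conditions (Strang's observation). The 2D, orientation-optimized matrices of the paper are not reproduced here.

```python
import numpy as np

def dct2_basis_from_eigenvectors(N):
    """DCT-II basis obtained as eigenvectors of the second-difference matrix
    with Neumann (reflecting) boundary rows."""
    A = 2 * np.eye(N) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    A[0, 0] = A[-1, -1] = 1.0                 # boundary modification
    _, V = np.linalg.eigh(A)                  # eigenvalues ascending -> frequencies ascending
    return V

# compare against the closed-form DCT-II vectors cos(k*pi*(j+1/2)/N), up to sign
N = 8
V = dct2_basis_from_eigenvectors(N)
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.cos(np.pi * k * (j + 0.5) / N)
C /= np.linalg.norm(C, axis=0)
print(np.allclose(np.abs(np.sum(V * C, axis=0)), 1.0, atol=1e-8))
```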
Novel Filter and Dictionary Designs
Radio frequency (RF) transient classification using sparse representations over learned dictionaries
Daniela I. Moody, Steven P. Brumby, Kary L. Myers, et al.
Automatic classification of transitory or pulsed radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such transients are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. We compare two dictionary learning methods from the image analysis literature, the K-SVD algorithm and Hebbian learning, and extend them for use with RF data. Both methods allow us to learn discriminative RF dictionaries directly from data without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. In this paper we compare the two dictionary learning methods and discuss how their performance changes as a function of dictionary training parameters. We demonstrate that learned dictionary techniques are suitable for pulsed RF analysis and present results with varying background clutter and noise levels.
Tight frame 6-band symmetric wavelets with limited redundancy
We consider the design of 6-channel tight frame symmetric wavelets with scaling factor M = 4. The lowpass filter is designed using either Gröbner bases or truncated Taylor series methods. The bandpass and highpass filters are then designed using Gröbner bases. The resulting filters have linear phase, smooth limit functions, and reduced redundancy. It was possible to obtain filterbanks with K0 (zeros at z = −1 and z = ±j for the lowpass filter) up to 5 and Kmin (zeros at z = 1 for bandpass/highpass filters) of 1 or greater. The proposed filterbanks generate five wavelets and a scaling function, with the underlying filters related as follows: H5−i(z) = Hi(−z), i = 0, ..., 5.
Sparse signal representations using the tunable Q-factor wavelet transform
The tunable Q-factor wavelet transform (TQWT) is a fully-discrete wavelet transform for which the Q-factor, Q, of the underlying wavelet and the asymptotic redundancy (over-sampling rate), r, of the transform are easily and independently specified. In particular, the specified parameters Q and r can be real-valued. Therefore, by tuning Q, the oscillatory behavior of the wavelet can be chosen to match the oscillatory behavior of the signal of interest, so as to enhance the sparsity of a sparse signal representation. The TQWT is well suited to fast algorithms for sparsity-based inverse problems because it is a Parseval frame, easily invertible, and can be efficiently implemented using radix-2 FFTs. The TQWT can also be used as an easily-invertible discrete approximation of the continuous wavelet transform.
Sparse Sampling and Image Reconstruction in MRI II
Blind linear models for the recovery of dynamic MRI data
Sajan Goud Lingala, Yue Hu, Mathews Jacob
Classical accelerated dynamic MRI schemes rely on the sparsity or banded structure of the data in specified transform domains (e.g., Fourier space). Clearly, the utility of these schemes depends on the specific data and the transform. For example, these methods only provide modest accelerations in free-breathing myocardial perfusion MRI. In this paper, we discuss a novel blind linear model to recover the data when the optimal transform is not known a priori. Specifically, we pose the simultaneous recovery of the optimal linear model/transform and its coefficients from the measurements as a non-convex optimization problem. We also introduce an efficient majorize-minimize algorithm to minimize the cost function. We demonstrate the utility of the algorithm in considerably accelerating free-breathing myocardial perfusion MRI data.
Sparse dictionary learning for resting-state fMRI analysis
Kangjoo Lee, Paul Kyu Han, Jong Chul Ye
Recently, there has been increased interest in the use of neuroimaging techniques to investigate what happens in the brain at rest. Functional imaging studies have revealed that default-mode network activity is disrupted in Alzheimer's disease (AD). However, there is no consensus, as yet, on the choice of analysis method for the application of resting-state analysis to disease classification. This paper proposes a novel compressed sensing based resting-state fMRI analysis tool called Sparse-SPM. As the brain's functional systems have been shown to have features of complex networks according to graph theoretical analysis, we apply a graph model to represent a sparse combination of information flows from a complex network perspective. In particular, a new concept of a spatially adaptive design matrix is proposed, implemented through sparse dictionary learning. The proposed approach shows better performance compared to other conventional methods, such as independent component analysis (ICA) and the seed-based approach, in classifying AD patients from normal subjects using resting-state analysis.
Sparse Representations in Multidimensions II
3D discrete shearlet transform and video denoising
Demetrio Labate, Pooran Negi
This paper introduces a numerical implementation of the 3D shearlet transform, a directional transform which is derived from the theory of shearlets. The shearlet approach belongs to a class of directional multiscale methods that emerged during the last 10 years to overcome the limitations of traditional multiscale systems; this class also includes curvelets and contourlets. Unlike other methods, shearlets are derived from the theory of affine systems, which allows a very flexible mathematical structure and a natural transition from the continuous to the digital setting. Following the recent proof of the optimality of the 3D shearlet representation, in this paper we develop an algorithmic implementation of the 3D shearlet transform that follows closely the spatial-frequency pattern of the corresponding continuous transform. The performance of the algorithm is illustrated on problems of video denoising and successfully compared against other state-of-the-art multiscale techniques, including curvelets and surfacelets.
Efficient multiscale and multidirectional representation of 3D data using the 3D discrete shearlet transform
In recent years, there has been a lot of interest in multiresolution representations that also perform a multidirectional analysis. These representations often yield very sparse representation for multidimensional data. The shearlet representation, which has been derived within the framework of composite wavelets, can be extended quite trivially from 2D to 3D. However, the extension to 3D is not unique and consequently there are different implementations possible for the discrete transform. In this paper, we investigate the properties of two relevant designs having different 3D frequency tilings. We show that the first design has a redundancy factor of around 7, while in the second design the transform can attain a redundancy factor around 3.5, independent of the number of analysis directions. Due to the low redundancy, the 3D shearlet transform becomes a viable alternative to the 3D curvelet transform. Experimental results are provided to support these findings.
Multicomposite wavelet estimation
Glenn R. Easley, Demetrio Labate, Vishal M. Patel
In this work, we present a new approach to image denoising derived from the general framework of wavelets with composite dilations. This framework extends the traditional wavelet approach by allowing for waveforms to be defined not only at various scales and locations but also according to various orthogonal transformations such as shearing transformations. The shearlet representation is, perhaps, the most widely known example of wavelets with composite dilations. However, many other representations are obtained within this framework, where directionality properties are controlled by different types of orthogonal matrices, such as the newly defined hyperbolets. In this paper, we show how to take advantage of different wavelets with composite dilations to sparsely represent important features such as edges and texture independently, and apply these techniques to derive improved algorithms for image denoising.
3D-rigid motion invariant discrimination and classification of 3D-textures
Sanat Upadhyay, Saurabh Jain, Manos Papadakis, et al.
In this paper we implement a method for the 3D-rigid motion invariant texture discrimination and binary classification for discrete 3D-textures that are spatially homogeneous by modelling them as stationary Gaussian random fields. We use a novel 'distance' between 3D-textures that remains invariant under all 3D-rigid motions of the texture to develop rules for 3D-rigid motion invariant texture discrimination and binary classification of textures.