Proceedings Volume 10394

Wavelets and Sparsity XVII



Volume Details

Date Published: 10 October 2017
Contents: 20 Sessions, 55 Papers, 37 Presentations
Conference: SPIE Optical Engineering + Applications 2017
Volume Number: 10394

Table of Contents

  • Front Matter: Volume 10394
  • Keynote Session I
  • Keynote Session II
  • Computational Bioimaging
  • Unconventional Optical Imaging
  • Neuroimaging Data Analytics
  • Imaging and Inverse Problems
  • Applied Harmonic Analysis I
  • Mathematical Data Analysis and Frame Theory I
  • Applied Harmonic Analysis II
  • Directional and Scattering Transforms in Neuroscience Imaging
  • Mathematical Data Analysis and Frame Theory II
  • Applied Harmonic Analysis III
  • Mathematical Data Analysis and Frame Theory III
  • Theory and Applications of Structured Low-Rank Matrix Completion
  • Optimization and Sparse Inverse Problems I
  • Data Representation with Graphs and Multi-Scale Processing, Application to fMRI Brain Connectivity I
  • Optimization and Sparse Inverse Problems II
  • Data Representation with Graphs and Multi-Scale Processing, Application to fMRI Brain Connectivity II
  • Multiscale Analysis
Front Matter: Volume 10394
Front Matter: Volume 10394
This PDF file contains the front matter associated with SPIE Proceedings Volume 10394, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Keynote Session I
Speeding up sparse signal recovery using sparse-graph codes (Conference Presentation)
This recording is for the presentation titled “Speeding up sparse signal recovery using sparse-graph codes,” part of the SPIE symposium “Optical Engineering + Applications.”
Keynote Session II
Sketchy decisions: convex optimization with optimal storage (Conference Presentation)
This recording is for the presentation titled “Sketchy decisions: convex optimization with optimal storage,” part of the SPIE symposium “SPIE Optical Engineering + Applications.”
Computational Bioimaging
Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging
An imaging platform based on time-resolved structured light and hyperspectral single-pixel detection has been developed to perform quantitative fluorescence lifetime imaging (FLI) over a large field of view (FOV) and multiple spectral bands simultaneously. We demonstrate the potential of this new imaging platform by quantitatively imaging a near-infrared FRET (Förster resonance energy transfer) pair, both in vitro and in vivo. The technique is well suited for quantitative hyperspectral lifetime imaging with high sensitivity, paving the way for many important biomedical applications.
A sparsity-based simplification method for segmentation of spectral-domain optical coherence tomography images
Optical coherence tomography (OCT) has emerged as a promising imaging modality to characterize biological tissues. With axial and lateral resolutions at the micron level, OCT images provide detailed morphological information and enable applications such as optical biopsy and virtual histology for clinical needs. Image enhancement is typically required for morphological segmentation, to improve boundary localization rather than enrich detailed tissue information. We propose to formulate image enhancement as an image simplification task, such that tissue layers are smoothed while contours are enhanced. For this purpose, we exploit a Total Variation, sparsity-based image reconstruction, inspired by Compressed Sensing (CS) theory but specialized for images with structures arranged in layers. We demonstrate the potential of our approach on OCT human heart and retinal images for layer segmentation. We also compare our image enhancement capabilities to state-of-the-art denoising techniques.
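The layer-smoothing, edge-preserving enhancement described above can be illustrated with a minimal total-variation sketch. The phantom, all parameter values, and the function name `tv_simplify` below are our own illustrative choices, not the authors' specialized reconstruction:

```python
import numpy as np

def tv_simplify(f, lam=0.1, eps=0.1, step=0.05, n_iter=400):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2),
    a smoothed total-variation objective: flat regions are smoothed while
    sharp layer boundaries are largely preserved."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # divergence of (px, py): adjoint of the forward difference
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u

# two-layer "OCT-like" phantom with one sharp horizontal boundary, plus noise
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[16:, :] = 1.0
noisy = truth + 0.2 * rng.standard_normal(truth.shape)
simplified = tv_simplify(noisy)
```

The TV term penalizes total gradient magnitude rather than squared gradients, which is why it flattens the interiors of layers without blurring the single large jump between them.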
Unconventional Optical Imaging
Photon-efficient super-resolution laser radar
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and in through-the-skin biomedical imaging applications.
Iterative back-projection techniques for non-line-of-sight imaging (Conference Presentation)
Marco La Manna, Eric C. Breitbach, Jonathan Jackson, et al.
This talk was invited by special session organizer Vivek Goyal.
Neuroimaging Data Analytics
Riemannian multi-manifold modeling and clustering in brain networks
Konstantinos Slavakis, Shiva Salsabilian, David S. Wack, et al.
This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points on the Grassmann manifold. The second utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Building on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
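The first feature-generation scheme (mapping a time series to a low-rank subspace treated as a point on the Grassmann manifold) can be sketched as follows. The Hankel-matrix construction, the toy sinusoids, and all sizes below are our illustrative stand-ins for the paper's ARMA/observability-matrix machinery:

```python
import numpy as np
from scipy.linalg import subspace_angles

def hankel_subspace(x, rows=4, rank=2):
    """Dominant left singular subspace of a Hankel (observability-like)
    matrix built from a time series: a point on the Grassmannian Gr(rank, rows)."""
    n = len(x) - rows + 1
    H = np.stack([x[i:i + n] for i in range(rows)])
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :rank]

def grassmann_dist(U, V):
    """Geodesic distance on the Grassmannian: norm of the principal angles."""
    return float(np.linalg.norm(subspace_angles(U, V)))

rng = np.random.default_rng(1)
t = np.arange(200)
x1 = np.sin(0.20 * t) + 0.05 * rng.standard_normal(200)  # slow oscillation
x2 = np.sin(0.21 * t) + 0.05 * rng.standard_normal(200)  # similar dynamics
x3 = np.sin(1.30 * t) + 0.05 * rng.standard_normal(200)  # different dynamics
U1, U2, U3 = (hankel_subspace(x) for x in (x1, x2, x3))
```

Time series with similar dynamics land close together on the Grassmannian, which is the structure a clustering algorithm on the manifold can then exploit.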
Probing the dynamics of spontaneous cortical activities via widefield Ca2+ imaging in GCaMP6 transgenic mice
Li Zhu, Christian R. Lee, David J. Margolis, et al.
Studying the temporal and spectral characteristics of brain function under spontaneous activity has recently been receiving great interest in the field of neuroscience. By combining wavelet coherence and a multivariate permutation test, this paper presents a new method for investigating changes in functional connectivity under spontaneous activity. The proposed method does not impose any prior assumption about the frequency bands involved in the activity, nor about the distribution of the data. The method is applied to data obtained from widefield transcranial calcium imaging of GCaMP6 transgenic mice. Results showing how functional connectivity corresponding to two forms of spontaneous activity differs across frequency and space are presented.
MR correlation spectroscopic imaging of multidimensional exponential decays: probing microstructure with diffusion and relaxation
Diffusion-relaxation correlation spectroscopic imaging (DR-CSI) is a novel multidimensional magnetic resonance (MR) imaging approach that we recently introduced for probing microstructure. DR-CSI uses high-dimensional MR data that is acquired with spatial encoding and non-separable diffusion-relaxation contrast encoding, and uses constrained image reconstruction to estimate a spectroscopic image with a multidimensional spectrum for each voxel. The spectral peaks correspond to distinct compartmental microenvironments that coexist within each voxel. This paper reviews DR-CSI and describes and demonstrates its generalization to other contrast mechanisms (i.e., we demonstrate multidimensional relaxation (T1-T2) correlation spectroscopic imaging).
A regularized clustering approach to brain parcellation from functional MRI data
We consider a data-driven approach for the subdivision of an individual subject's functional Magnetic Resonance Imaging (fMRI) scan into regions of interest, i.e., brain parcellation. The approach is based on a computational technique for calculating resolution from inverse problem theory, which we apply to neighborhood selection for brain connectivity networks. This can be efficiently calculated even for very large images, and explicitly incorporates regularization in the form of spatial smoothing and a noise cutoff. We demonstrate the reproducibility of the method on multiple scans of the same subjects, as well as the variations between subjects.
Joint fMRI analysis and subject clustering using sparse dictionary learning
Seung-Jun Kim, Krishna K. Dontaraju
Multi-subject fMRI data analysis methods based on sparse dictionary learning are proposed. In addition to identifying the component spatial maps by exploiting the sparsity of the maps, clusters of the subjects are learned by postulating that the fMRI volumes admit a subspace clustering structure. Furthermore, in order to tune the associated hyper-parameters systematically, a cross-validation strategy is developed based on entry-wise sampling of the fMRI dataset. Efficient algorithms for solving the proposed constrained dictionary learning formulations are developed. Numerical tests performed on synthetic fMRI data show promising results and provide insights into the proposed technique.
Detecting brain dynamics during resting state: a tensor based evolutionary clustering approach
Esraa Al-sharoa, Mahmood Al-khassaweneh, Selin Aviyente
The human brain is a complex network with connections across different regions. Understanding the functional connectivity (FC) of the brain is important during both resting state and task, as disruptions in connectivity patterns are indicators of different psychopathological and neurological diseases. In this work, we study the resting state functional connectivity networks (FCNs) of the brain from fMRI BOLD signals. Recent studies have shown that FCNs are dynamic even during resting state, and understanding the temporal dynamics of FCNs is important for differentiating between different conditions. Therefore, it is important to develop algorithms to track the dynamic formation and dissociation of FCNs of the brain during resting state. In this paper, we propose a two-step, tensor-based community detection algorithm to identify and track the brain network community structure across time. First, we introduce an information-theoretic function to reduce the dynamic FCN and identify the time points that are topologically similar, combining them into a tensor. These time points are used to identify the different FC states. Second, a tensor-based spectral clustering approach is developed to identify the community structure of the constructed tensors. The proposed algorithm applies Tucker decomposition to the constructed tensors and extracts the orthogonal factor matrices along the connectivity mode to determine the common subspace within each FC state. The detected community structure is summarized and described as FC states. The results illustrate the dynamic structure of resting state networks (RSNs), including the default mode network, somatomotor network, subcortical network, and visual network.
Imaging and Inverse Problems
Optimal transport-based dictionary learning and its application to Euclid-like Point Spread Function representation
Morgan A. Schmitz, Matthieu Heitz, Nicolas Bonneel, et al.
Optimal Transport theory enables the definition of a distance across the set of measures on any given space. This Wasserstein distance naturally accounts for geometric warping between measures (including, but not exclusive to, images). We introduce a new, Optimal Transport-based representation learning method in close analogy with the usual Dictionary Learning problem. The usual approach relies on a matrix product between the learned dictionary and the codes making up the new representation, so the relationship between atoms and data is ultimately linear. By reconstructing our data as Wasserstein barycenters of learned atoms instead, our approach yields a representation making full use of the Wasserstein distance's attractive properties and allowing for non-linear relationships between the dictionary atoms and the datapoints.

We apply our method to a dataset of Euclid-like simulated PSFs (Point Spread Function). ESA's Euclid mission will cover a large area of the sky in order to accurately measure the shape of billions of galaxies. PSF estimation and correction is one of the main sources of systematic errors on those galaxy shape measurements. PSF variations across the field of view and with the incoming light's wavelength can be highly non-linear, while still retaining strong geometrical information, making the use of Optimal Transport distances an attractive prospect. We show that our representation does indeed succeed at capturing the PSF's variations.
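In one dimension, Wasserstein barycenters have a closed form that conveys the idea behind this representation: one averages quantile functions rather than densities. This toy sketch (Gaussian samples, equal atom counts, our own function name) is far simpler than the 2D image barycenters used for PSFs, but it already shows the geometric, non-linear nature of the averaging:

```python
import numpy as np

def w2_barycenter_1d(a, b, w=0.5):
    """2-Wasserstein barycenter of two 1D empirical measures with equally
    many atoms: a pointwise weighted average of the sorted samples, i.e. of
    the quantile functions (the monotone coupling is optimal in 1D)."""
    return (1 - w) * np.sort(a) + w * np.sort(b)

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 500)   # measure centered at 0
b = rng.normal(4.0, 1.0, 500)   # measure centered at 4
bary = w2_barycenter_1d(a, b)
```

Unlike a pointwise average of the two densities (which would be bimodal), the barycenter is a single bump transported halfway between the inputs, which is the kind of geometric interpolation the abstract exploits.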
Localization of sound sources in a room with one microphone
Helena Peić Tukuljac, Hervé Lissek, Pierre Vandergheynst
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the differences between the signals received at different microphones, in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels of the room occupied by a source. What is especially interesting about our solution is that we localize the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
Limited-memory trust-region methods for sparse relaxation
Lasith Adhikari, Omar DeGuchy, Jennifer B. Erway, et al.
In this paper, we solve the ℓ2-ℓ1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while reducing computation time.
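For reference, the ℓ2-ℓ1 objective referred to above is min_x 0.5‖Ax − b‖² + λ‖x‖₁. A minimal first-order baseline for it is the classic iterative soft-thresholding algorithm (ISTA) sketched below; this is not the authors' trust-region method, and the problem sizes and λ are our own illustrative choices:

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=3000):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1.
    A simple first-order baseline (the paper instead applies a limited-memory
    trust-region method to a smoothed, unconstrained reformulation)."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L              # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]             # 3-sparse ground truth
x_hat = ista(A, A @ x_true)
```

ISTA uses only the current gradient, which is exactly the limitation the quasi-Newton trust-region approach in the paper is designed to overcome.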
Underwater object classification using scattering transform of sonar signals
Naoki Saito, David S. Weber
In this paper, we apply the scattering transform (ST), a nonlinear map based on a convolutional neural network (CNN), to the classification of underwater objects using sonar signals. The ST formalizes the observation that the filters learned by a CNN have wavelet-like structure. We achieve effective binary classification both on a real dataset of Unexploded Ordnance (UXOs) and on synthetically generated examples. We also explore the effects on the waveforms of changes in the object domain (e.g., translation, rotation, and acoustic impedance), and examine the consequences of theoretical results for the scattering transform. We show that the scattering transform is capable of excellent classification on both the synthetic and real problems, thanks to quasi-invariance properties that are well suited to translation and rotation of the object.
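A first-order scattering representation can be caricatured in a few lines: band-pass filter, complex modulus, then global averaging. The Gaussian filter bank and scales below are our simplified stand-ins for the wavelet filters of the full scattering transform, but they already exhibit the key invariance to translation:

```python
import numpy as np

def scatter1(x, n_scales=4):
    """First-order scattering sketch: analytic band-pass filters (Gaussian
    bumps in frequency), complex modulus, then global averaging. The global
    average makes each feature exactly invariant to circular shifts."""
    n = len(x)
    freqs = np.fft.fftfreq(n)
    X = np.fft.fft(x)
    feats = []
    for j in range(n_scales):
        center, width = 0.4 / 2**j, 0.15 / 2**j
        psi_hat = np.exp(-0.5 * ((freqs - center) / width) ** 2)
        feats.append(np.abs(np.fft.ifft(X * psi_hat)).mean())  # mean of |x * psi_j|
    return np.array(feats)

rng = np.random.default_rng(4)
x = rng.standard_normal(256)
f_orig = scatter1(x)
f_shift = scatter1(np.roll(x, 57))   # circularly shifted copy of the same signal
```

The modulus discards phase (where the signal is) while the filter bank keeps scale information (what the signal looks like), which is the intuition behind the quasi-invariance properties mentioned above.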
Non-convex Shannon entropy for photon-limited imaging
Lasith Adhikari, Reheman Baikejiang, Omar DeGuchy, et al.
Reconstructing high-dimensional sparse signals from low-dimensional low-count photon observations is a challenging nonlinear optimization problem. In this paper, we build upon previous work on minimizing the Poisson log-likelihood and incorporate recent work on the generalized nonconvex Shannon entropy function for promoting sparsity in solutions. We explore the effectiveness of the proposed approach using numerical experiments.
Applied Harmonic Analysis I
Binary block codes from Euclidean embeddings and random hyperplane tessellations
Bernhard G. Bodmann, Robert P. Mendez
In this paper we use Euclidean embeddings and random hyperplane tessellations to construct binary block codes. The construction proceeds in two stages. First, an auxiliary ternary code is chosen which consists of vectors in the union of coordinate subspaces. The subspaces are selected so that any two vectors of different support have a sufficiently large distance. In addition, any two ternary vectors from the auxiliary codebook that have common support are at a guaranteed minimum distance. In the second stage, the auxiliary ternary code is converted to a binary code by an additional random hyperplane tessellation.
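The second stage, converting real vectors to bits with a random hyperplane tessellation, is easy to sketch: the sign of a random projection gives one bit per hyperplane, and the normalized Hamming distance between two codewords concentrates around the angle between the vectors divided by π. The sizes and the helper name below are ours:

```python
import numpy as np

def hyperplane_code(X, n_bits=64, seed=0):
    """One-bit quantization by a random hyperplane tessellation: each bit
    records which side of a random hyperplane through the origin a row of X
    falls on. Normalized Hamming distance then estimates angle / pi."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n_bits, X.shape[1]))  # one hyperplane normal per bit
    return (X @ G.T > 0).astype(np.uint8)

rng = np.random.default_rng(5)
u = rng.standard_normal(16); u /= np.linalg.norm(u)
v = rng.standard_normal(16); v /= np.linalg.norm(v)
C = hyperplane_code(np.stack([u, v]), n_bits=4096)
hamming = float(np.mean(C[0] != C[1]))
angle = float(np.arccos(np.clip(u @ v, -1.0, 1.0)) / np.pi)
```

This is the classical sign-of-random-projection estimate; the paper's contribution is the careful choice of the auxiliary ternary code fed into such a tessellation.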
Finding the closest probabilistic Parseval frame in the 2-Wasserstein metric
It is known that, in the standard ℓ2 distance, the closest Parseval frame to a given frame {Φ_i}_{i=1}^m in R^d is {S^{-1/2} Φ_i}_{i=1}^m, where S is the frame operator of the original frame. We show this is also the case for probabilistic frames in the 2-Wasserstein distance. In the process, we develop some regularity properties for the map taking a probabilistic frame to its closest Parseval frame.
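The finite-frame statement in the first sentence is easy to check numerically. A minimal sketch (a random frame of 5 vectors in R^3, variable names ours):

```python
import numpy as np

# Frame of m = 5 vectors in R^3, stored as the columns of Phi.
rng = np.random.default_rng(6)
Phi = rng.standard_normal((3, 5))

# Frame operator S = Phi Phi^T; the closest Parseval frame is S^(-1/2) Phi.
S = Phi @ Phi.T
w, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
Psi = S_inv_half @ Phi

# Psi is Parseval: its frame operator is the identity.
S_psi = Psi @ Psi.T
```

The paper's contribution is the probabilistic analogue: the same canonical construction is closest in the 2-Wasserstein metric when frames are replaced by probability measures.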
Low frame coherence via zero-mean tensor embeddings
This paper is concerned with achieving optimal coherence for highly redundant real unit-norm frames. As the redundancy grows, the number of vectors in the frame becomes too large to admit equiangular arrangements. In this case, other geometric optimality criteria need to be identified. To this end, we use an iteration of the embedding technique by Conway, Hardin and Sloane. As a consequence of their work, a quadratic mapping embeds equiangular lines into a simplex in a real Euclidean space. Here, higher degree polynomial maps embed highly redundant unit-norm frames to simplices in high-dimensional Euclidean spaces. We focus on the lowest degree case in which the embedding is quartic.
Optimal projective packings from association schemes
Joseph W. Iverson, John Jasper, Dustin G. Mixon
We present the theory of equiangular tight frames (ETFs) from association schemes as a generalization of harmonic ETFs. This theory has the pleasing consequence of including not only harmonic ETFs from abelian groups but also real ETFs with centroidal symmetry and the ETFs discovered by Renes and Strohmer coming from skew-symmetric conference matrices. Finally we present a theory of nonabelian harmonic ETFs from the perspective of association schemes. An infinite class of ETFs arising as group frames from nonabelian groups is presented.
Memory-optimal neural network approximation
Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, et al.
We summarize the main results of a recent theory—developed by the authors—establishing fundamental lower bounds on the connectivity and memory requirements of deep neural networks as a function of the complexity of the function class to be approximated by the network. These bounds are shown to be achievable. Specifically, all function classes that are optimally approximated by a general class of representation systems—so-called affine systems—can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, shearlets, ridgelets, α-shearlets, and more generally α-molecules. This result elucidates a remarkable universality property of deep neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks which provide close-to-optimal approximation rates at minimal connectivity. Moreover, stochastic gradient descent is found to actually learn approximations that are sparse in the representation system optimally sparsifying the function class the network is trained on.
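One concrete way to see how networks reproduce the atoms of affine systems is the textbook observation that a piecewise-linear "hat" bump, the building block of spline-wavelet-like systems, is exactly a one-hidden-layer ReLU network with three neurons. This is an illustration of ours, not the paper's construction:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def hat(x):
    """Triangular hat bump supported on [0, 1], peaking at 1 for x = 0.5:
    an exact one-hidden-layer ReLU network with three neurons. Shifts and
    dilations of such hats generate wavelet-like (affine) systems."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

x = np.linspace(-0.5, 1.5, 401)
y = hat(x)
```

Because such atoms cost only a constant number of neurons and edges, a network can match the approximation rate of the affine system atom by atom, which is the mechanism behind the optimality results summarized above.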
Mathematical Data Analysis and Frame Theory I
Real phase retrieval by orthogonal complements and hyperplanes
Sara Botelho-Andrade, Peter G. Casazza, Desai Cheng, et al.
The fundamental question concerning phase retrieval by projections in R^m is: what is the least number of projections needed, and which dimensions can be used? We look at recent advances concerning phase retrieval by orthogonal complements and phase retrieval by hyperplanes, which raise a number of problems whose solutions would give a complete answer to this fundamental question.
Prediction models for graph-linked data with localized regression
Alexander Cloninger
Regression problems typically assume the training data are independent samples, but in many modern applications samples come from individuals connected by a network or some other form of baseline information. In the case that the population is divided into discrete and disjoint classes with no notion of inter-class connection, hierarchical linear modelling can address such issues. We propose a series of optimization schemes that create generalized linear models, which incorporate both the local neighborhoods and global structure of the population network Laplacian to define localized regressions that smoothly vary with respect to the network. Depending on the context of the data and function, this method can be used to learn local function gradients on a manifold, model and denoise time varying processes with drift terms determined by some initial condition on a network, and perform co-clustering of a matrix or tensor. We also discuss incorporating the local regression coefficient estimates to redefine the network geometry and build a function adapted diffusion process.
A brief introduction to equi-chordal and equi-isoclinic tight fusion frames
Matthew Fickus, John Jasper, Dustin G. Mixon, et al.
Equi-chordal and equi-isoclinic tight fusion frames (ECTFFs and EITFFs) are both types of optimal packings of subspaces in Euclidean spaces. In the special case where these subspaces are one-dimensional, ECTFFs and EITFFs both correspond to types of optimal packings of lines known as equiangular tight frames. In this brief note, we review some of the fundamental ideas and results concerning ECTFFs and EITFFs.
Soft recovery
Axel Flinth
This article provides a new type of analysis of a compressed-sensing-based technique for recovering column-sparse matrices, namely minimization of the ℓ1,2-norm. Rather than providing conditions on the measurement matrix which guarantee that the solution of the program is exactly equal to the ground truth signal (which has already been thoroughly investigated), it presents a condition which guarantees that the solution is approximately equal to the ground truth. Soft recovery statements of this kind are, to the best knowledge of the author, a novelty in compressed sensing.
Applied Harmonic Analysis II
The Levenstein bound for packings in projective spaces
John I. Haas IV, Nathaniel Hammen, Dustin G. Mixon
We study the problem of packing points in real or complex projective space so that the minimum distance is maximized. Existing bounds for this problem include the Welch and orthoplex bounds. This paper discusses the Levenstein bound, which follows from Delsarte's linear programming bound. We highlight the relationship between the Welch and Levenstein bounds.
K-means clustering on the space of persistence diagrams
Andrew Marchese, Vasileios Maroulas, Josh Mike
A recent body of research aims to apply topological and geometric theory to data analysis. However, more effort is needed to incorporate statistical ideas and structure into these analysis methods. To this end, we present persistent homology clustering techniques through the perspective of data analysis. These techniques provide insight into the structure of the underlying dynamics and are able to recognize important shape properties such as periodicity, chaos, and multi-stability. Moreover, introducing quantitative structure on the topological data space allows for a rigorous understanding of the data's geometry, a powerful tool for scrutinizing the morphology of the underlying dynamics. Additionally, we illustrate the advantages of these techniques and results through examples derived from dynamical systems applications.
Phase retrieval from local measurements in two dimensions
Mark Iwen, Brian Preskitt, Rayan Saab, et al.
The phase retrieval problem has appeared in a multitude of applications for decades. While ad hoc solutions have existed since the early 1970s, recent developments have provided algorithms that offer promising theoretical guarantees under increasingly realistic assumptions. Motivated by ptychographic imaging, we generalize a recent result on phase retrieval of a one-dimensional objective vector x ∈ ℂ^d to recover a two-dimensional sample Q ∈ ℂ^{d×d} from phaseless measurements, using a tensor product formulation to extend the previous work.
Directional and Scattering Transforms in Neuroscience Imaging
Shearlet-based regularization in sparse dynamic tomography
T. A. Bubba, M. März, Z. Purisha, et al.
Classical tomographic imaging is soundly understood and widely employed in medicine, nondestructive testing, and security applications. However, it still offers many challenges when it comes to dynamic tomography. Indeed, in classical tomography the target is usually assumed to be stationary during the data acquisition, but this is not a realistic model. Moreover, to ensure a lower X-ray radiation dose, only a sparse collection of measurements per time step is assumed to be available. With such a setup, we deal with a sparse-data dynamic tomography problem, which clearly calls for regularization due to the loss of information in the data and the ongoing motion. In this paper, we propose a 3D variational formulation based on 3D shearlets, where the third dimension accounts for motion in time, to reconstruct a moving 2D object. Results are presented for real measured data and compared against a 2D static model, in the case of fan-beam geometry. Results are preliminary but show that better reconstructions can be achieved when motion is taken into account.
Detecting neuronal activity from calcium imaging data using FRI methods (Conference Presentation)
Stephanie Reynolds, Jon Oñativia, Simon R. Schultz, et al.
Two-photon calcium imaging can be used to monitor the activity of thousands of neurons across multiple brain areas at single-cell resolution. To harness the power of this imaging technology, neuroscientists require algorithms to detect from the imaging data the time points at which each neuron was active. We present an algorithm based on Finite Rate of Innovation (FRI) theory to detect neuronal spiking activity from this data. By exploiting the parametric structure of the signal, the activity detection problem can be reduced to the classic FRI problem of reconstructing a stream of Diracs.
GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems
Bart Goossens, Hiêp Luong, Wilfried Philips
Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity
Maziar Yaesoubi, Vince D. Calhoun
In this work, we discuss estimation of the dynamic dependence of a multi-variate signal. Commonly used approaches are often based on a locality assumption (e.g., a sliding window), which can miss spontaneous changes due to blurring with local but unrelated changes. We discuss recent approaches to overcome this limitation, including 1) a wavelet-space approach, essentially adapting the window to the underlying frequency content, and 2) a sparse signal representation which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such an assumption, as in brain analysis. Results on several large resting-fMRI datasets highlight the potential of these approaches.
Exploring neuronal synapses with directional and symmetric frame filters with small support
Nikolaos Atreas, Nikolaos Karantzas, Manos Papadakis, et al.
Spines are protrusions of neuronal dendritic surfaces. These subcellular compartments are essential in neuronal information processing since electrical signals from other neurons are transmitted to dendrites via synaptic gateways located at dendritic spines. One of the important tasks for assessing synaptic strength is estimating the volume of spines, which is quite challenging because the image resolution for spines in live animal microscopy images is low and the level of noise is high. In order to carry out this task we develop a method for spine surface segmentation using sparse representations based on directional 3D filters with small spatial support.
Mathematical Data Analysis and Frame Theory II
Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)
Jade Larriva-Latt, Angela Morrison, Alison Radgowski, et al.
Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon, with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
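The Gibbs phenomenon that motivates this work is easy to reproduce: truncating the Fourier series of a step function leaves an overshoot near the jump of roughly 9% of the jump size, no matter how many modes are kept. The sampling grid and mode count below are arbitrary illustrative choices of ours:

```python
import numpy as np

def fourier_partial_sum(f_vals, n_modes):
    """Keep only the lowest n_modes frequencies of the FFT of sampled f and
    invert; for a discontinuous f this exhibits the Gibbs overshoot."""
    F = np.fft.fft(f_vals)
    mask = np.zeros_like(F)
    mask[:n_modes + 1] = 1      # DC and positive frequencies 1..n_modes
    mask[-n_modes:] = 1         # matching negative frequencies
    return np.real(np.fft.ifft(F * mask))

n = 2048
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
step = np.where(x < np.pi, 1.0, -1.0)     # piecewise-constant test function
approx = fourier_partial_sum(step, 40)
overshoot = float(approx.max()) - 1.0     # excess above the true plateau of 1
```

Adding more modes narrows the oscillations but does not shrink the overshoot, which is why the edge-augmented correction in the paper targets the jump information directly.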
Reconstruction of finite-valued sparse signals
Sandra Keiper, Gitta Kutyniok, Dae Gwan Lee, et al.
The need to reconstruct discrete-valued sparse signals from few measurements, that is, to solve an underdetermined system of linear equations, appears frequently in science and engineering. Such signals appear, for example, in error-correcting codes as well as in massive Multiple-Input Multiple-Output (MIMO) channels and wideband spectrum sensing. A particular example is given by wireless communications, where the transmitted signals are sequences of bits, i.e., with entries in {0, 1}. Whereas classical compressed sensing algorithms do not incorporate the additional knowledge of the discrete nature of the signal, classical lattice decoding approaches do not utilize sparsity constraints. In this talk, we present an approach that incorporates a discrete-valued prior into basis pursuit. In particular, we address finite-valued sparse signals, i.e., sparse signals with entries in a finite alphabet. We introduce an equivalent null space characterization and show that the phase transition takes place earlier than with the classical basis pursuit approach. We further discuss robustness of the algorithm and show that the nonnegative case is very different from the bipolar one. One of our findings is that the positioning of zero in the alphabet, i.e., whether it is a boundary element or not, is crucial.
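The idea of adding the alphabet as a constraint to basis pursuit can be sketched for the {0, 1} case. Because the entries are nonnegative, ‖x‖₁ = Σ x_i, so basis pursuit with box constraints is a single linear program; the problem sizes and seed below are our illustrative choices, not the paper's experiments:

```python
import numpy as np
from scipy.optimize import linprog

def bp_box_binary(A, b):
    """Basis pursuit with the {0,1} alphabet prior: since x >= 0 we have
    ||x||_1 = sum(x), so  min sum(x)  s.t.  Ax = b, 0 <= x <= 1  is one LP."""
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=b,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x

rng = np.random.default_rng(7)
n, m, k = 30, 20, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0   # 3-sparse binary signal
x_hat = bp_box_binary(A, A @ x_true)
```

The box constraint encodes the finite alphabet; the paper's null space characterization explains why this extra knowledge moves the recovery phase transition earlier than for unconstrained basis pursuit.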
Tolerant compressed sensing with partially coherent sensing matrices
Tobias Birnbaum, Yonina C. Eldar, Deanna Needell
Most compressed sensing (CS) theory to date focuses on incoherent sensing, that is, on sensing matrices whose columns are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly; instead, small errors in the support and signal are tolerable. Despite the abundance of work utilizing incoherent sensing matrices, for this type of tolerant recovery we suggest that coherence is actually beneficial. We promote the use of coherent sampling when tolerant support recovery is acceptable, and demonstrate its advantages empirically. In addition, we provide a first step towards theoretical analysis by considering a specific reconstruction method for selected signal classes.
De-biasing low-rank projection for matrix completion
Simon Foucart, Deanna Needell, Yaniv Plan, et al.
We study matrix completion with non-uniform, deterministic sampling patterns. We introduce a computable parameter, which is a function of the sampling pattern, and show that if this parameter is small, then we may recover missing entries of the matrix, with appropriate weights. We theoretically analyze a simple and well-known recovery method, which simply projects the (zero-padded) subsampled matrix onto the set of low-rank matrices. We show that under non-uniform deterministic sampling, this method yields a biased solution, and we propose an algorithm to de-bias it. Numerical simulations demonstrate that de-biasing significantly improves the estimate. However, when the observations are noisy, the error of this method can be sub-optimal when the sampling is highly non-uniform. To remedy this, we suggest an alternative based on projection onto the max-norm ball, whose robustness to noise tolerates arbitrarily non-uniform sampling. Finally, we analyze convex optimization in this framework.
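A minimal numpy sketch of the biased low-rank projection and one plausible de-biasing step. Here inverse-propensity re-weighting of observed entries stands in for the paper's computed weights, and the entrywise Bernoulli sampling model is an illustrative assumption:

```python
import numpy as np

def lowrank_project(M, r):
    """Project a matrix onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def debiased_completion(M_obs, r, p):
    """Hypothetical de-biasing step: divide each observed entry by its
    sampling probability before projecting, so that the projection input
    is an unbiased estimate of the full matrix."""
    return lowrank_project(M_obs / p, r)

# toy demo: rank-2 matrix observed under non-uniform Bernoulli sampling
rng = np.random.default_rng(1)
n, r = 60, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
p = rng.uniform(0.3, 0.9, size=(n, n))        # entrywise sampling probabilities
mask = rng.uniform(size=(n, n)) < p
M_obs = np.where(mask, M, 0.0)               # zero-padded subsampled matrix

err_biased = np.linalg.norm(lowrank_project(M_obs, r) - M) / np.linalg.norm(M)
err_debiased = np.linalg.norm(debiased_completion(M_obs, r, p) - M) / np.linalg.norm(M)
```

In this toy setting the re-weighted projection typically gives a noticeably smaller relative error than the plain (biased) projection.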
Topology reduction in deep convolutional feature extraction networks
Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei
Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and tens of thousands of nodes. Such network sizes entail significant computational complexity due to the large number of convolutions that need to be carried out; in addition, a large number of parameters needs to be learned and stored. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints, as is the case, e.g., in low-power embedded and mobile platforms. This paper aims at understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction capabilities. We address this question for the class of scattering networks that employ either Weyl-Heisenberg filters or wavelets, the modulus non-linearity, and no pooling. The exponential feature map energy decay results in Wiatowski et al., 2017, are generalized to O(a^(-N)), where an arbitrary decay factor a > 1 can be realized through suitable choice of the Weyl-Heisenberg prototype function or the mother wavelet. We then show how networks of fixed (possibly small) depth N can be designed to guarantee that ((1 − ε) · 100)% of the input signal's energy is contained in the feature vector. Based on the notion of operationally significant nodes, we characterize, partly rigorously and partly heuristically, the topology-reducing effects of (effectively) band-limited input signals, band-limited filters, and feature map symmetries. Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth N, the average number of operationally significant nodes per layer.
Applied Harmonic Analysis III
On the distribution of a product of N Gaussian random variables
Željka Stojanac, Daniel Suess, Martin Kliesch
The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It is known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
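Absent the Meijer-G machinery, the shape of this CDF is easy to probe numerically; the sketch below (a Monte Carlo illustration, not the paper's series expansion) estimates P(X_1 · · · X_N ≤ t) for independent standard Gaussians:

```python
import numpy as np

def product_gaussian_cdf_mc(t, N, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(X_1 * ... * X_N <= t) for independent
    standard Gaussians; a cheap stand-in for the exact Meijer-G / Fox-H
    expressions discussed in the paper."""
    rng = np.random.default_rng(seed)
    prod = rng.standard_normal((n_samples, N)).prod(axis=1)
    return float(np.mean(prod <= t))

# by symmetry, the product of centred Gaussians has median 0
cdf_at_zero = product_gaussian_cdf_mc(0.0, N=3)
```

Such estimates provide a convenient sanity check against the low-order terms of the power-log expansion near the origin.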
Framed frames for data frames
Nate Strawn
We present a computationally efficient, data-driven procedure for constructing linear isometric embeddings of high-dimensional data (data frames) into spaces of smooth images, and thereby obtain tight frame dictionaries for the data space using tight frame dictionaries for the image space (“framed frames” – wavelets, curvelets, shearlets, etc.). Experiments indicate that data are more compressible in these induced dictionaries when compared to compressibility in terms of principal components.
Projected power iteration for network alignment
Efe Onaran, Soledad Villar
The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges are matched. This problem appears in many scientific problems (like the study of protein-protein interactions) and is very closely related to the quadratic assignment problem, which has graph isomorphism, traveling salesman and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, recent work by Feizi and collaborators introduced EigenAlign, a fast spectral method with convergence guarantees for Erdős–Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We show numerically that it improves the recovery rates of EigenAlign, and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
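The two ingredients, a power-iteration step and a projection onto permutation matrices, can be sketched as follows. This follows the general idea only: the similarity-propagation update, the initialization, and the use of the Hungarian algorithm for the projection are illustrative assumptions, not the paper's exact update rule:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def projected_power_alignment(A, B, n_iter=30):
    """Sketch of a projected power iteration for graph matching:
    power-iterate the alignment matrix, X -> A X B (a linear similarity-
    propagation step), then project onto permutation matrices with the
    Hungarian algorithm."""
    n = A.shape[0]
    X = np.ones((n, n)) / n
    for _ in range(n_iter):
        X = A @ X @ B
        X /= np.linalg.norm(X)                  # keep the iterate bounded
        row, col = linear_sum_assignment(-X)    # project: maximize <X, P>
        P = np.zeros((n, n))
        P[row, col] = 1.0
        X = P
    return X

# toy demo: align a random graph with a relabelled copy of itself
rng = np.random.default_rng(2)
n = 12
A = (rng.uniform(size=(n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T
perm = rng.permutation(n)
P_true = np.eye(n)[perm]
B = P_true @ A @ P_true.T
P_hat = projected_power_alignment(A, B)
```

The output is always a valid permutation matrix; how often it equals the planted correspondence is exactly the kind of recovery-rate question the paper studies.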
Mathematical Data Analysis and Frame Theory III
Blind demixing and deconvolution with noisy data at near optimal rate
Blind demixing and deconvolution refers to the problem of simultaneous deconvolution of several source signals from their noisy superposition. This problem appears, amongst others, in the field of wireless communication: many sensors sporadically communicate only short messages over unknown channels. We show that robust recovery of message and channel vectors can be achieved via convex recovery. This requires that random linear encoding is applied at the devices and that the number of measurements at the receiver scales essentially with the degrees of freedom of the overall estimation problem. Thus, the scaling is linear in the number of source signals. This significantly improves previous results.
Theory and Applications of Structured Low-Rank Matrix Completion
Convex relaxations of spectral sparsity for robust super-resolution and line spectrum estimation
We consider recovering the amplitudes and locations of spikes in a point source signal from its low-pass spectrum that may suffer from missing data and arbitrary outliers. We first review and provide a unified view of several recently proposed convex relaxations that characterize and capitalize on the spectral sparsity of the point source signal without discretization, under the framework of atomic norms. Next we propose a new algorithm for the case when the spikes are known a priori to be positive, motivated by applications such as neural spike sorting and fluorescence microscopy imaging. Numerical experiments are provided to demonstrate the effectiveness of the proposed approach.
2D phaseless super-resolution
Myung Cho, Christos Thramboulidis, Babak Hassibi, et al.
In phaseless super-resolution, we only have the magnitude information of continuously-parameterized signals in a transform domain, and try to recover the original signals from these magnitude measurements. Optical microscopy is one application where phaseless super-resolution for 2D signals arises. In this paper, we propose algorithms for performing phaseless super-resolution for 2D or higher-dimensional signals, and investigate their performance guarantees.
Recovery of continuous domain piecewise smooth images: theory, algorithms, and applications (Conference Presentation)
Mathews Jacob, Gregory Ongie, Sampurna Biswas, et al.
This talk was invited by special session organizers Jong Chul Ye and Mathews Jacob
Learning-based low-rank Hankel structured matrix approach (Conference Presentation)
In our recent theoretical work, we have provided a complete 1-D theory for restoring signals with finite rate of innovation (FRI) from sparse Fourier measurements using low-rank Fourier interpolation. In this work, we extend this theory to the restoration of common sparse signals, where the singularities of the FRI signals are at the same positions while the unknown weights differ for each channel. We consider two situations: one with the same Fourier sampling pattern, which corresponds to the multiple measurement vector (MMV) problem, and the other with different sampling patterns. We derive two performance guarantees: an algebraic guarantee, and a probabilistic guarantee for randomly sampled Fourier measurements. We discuss the relationship between the newly derived bounds and those of the multiple signal classification (MUSIC) algorithm and single-channel low-rank Fourier interpolation.
Fast and provable algorithms for spectrally sparse signal reconstruction via low-rank Hankel matrix completion (Conference Presentation)
A spectrally sparse signal of order r is a mixture of r damped or undamped complex sinusoids. This talk investigates the problem of reconstructing spectrally sparse signals from a random subset of n regular time domain samples, which can be reformulated as a low-rank Hankel matrix completion problem. We introduce an iterative hard thresholding (IHT) algorithm and a fast iterative hard thresholding (FIHT) algorithm for efficient reconstruction of spectrally sparse signals via low-rank Hankel matrix completion. Theoretical recovery guarantees have been established for FIHT, showing that O(r^2log^2(n)) samples are sufficient for exact recovery with high probability. Empirical performance comparisons establish significant computational advantages for IHT and FIHT. In particular, numerical simulations on 3D arrays demonstrate the capability of FIHT on handling large and high-dimensional real data.
Optimization and Sparse Inverse Problems I
A note on the blind deconvolution of multiple sparse signals from unknown subspaces
This note studies the recovery of multiple sparse signals, x_n ∈ ℝ^L, n = 1, . . . , N, from the knowledge of their convolution with an unknown point spread function h ∈ ℝ^L. When the point spread function is known to be nonzero, |h[k]| > 0, this blind deconvolution problem can be relaxed into a linear, ill-posed inverse problem in the vector concatenating the unknown inputs x_n together with the inverse of the filter, d ∈ ℝ^L, where d[k] := 1/h[k]. When prior information is given on the input subspaces, the resulting overdetermined linear system can be solved efficiently via least squares (see Ling et al. 2016¹). When no information is given on those subspaces, and the inputs are only known to be sparse, it still remains possible to recover these inputs along with the filter by considering an additional ℓ1 penalty. This note certifies exact recovery of both the unknown PSF and the unknown sparse inputs, from the knowledge of their convolutions, as soon as the number of inputs N and the dimension of each input L satisfy L ≳ N and N ≳ T_max^2, up to log factors. Here T_max = max_n{T_n}, and T_n, n = 1, . . . , N, denote the sizes of the supports of the inputs x_n. Our proof combines the recent results on blind deconvolution via least squares, to certify invertibility of the linear map encoding the convolutions, with the construction of a dual certificate following the structure first suggested in Candès et al. 2007.² Unlike in these papers, however, it is not possible to rely on the norm ||(A_T^* A_T)^(-1)|| to certify recovery. We instead use a combination of the Schur complement and Neumann series to compute an expression for the inverse (A_T^* A_T)^(-1). Given this expression, it is possible to show that the poorly scaled blocks in (A_T^* A_T)^(-1) are multiplied by the better scaled ones or vanish in the construction of the certificate. Recovery is certified with high probability on the choice of the supports and the distribution of the signs of each input x_n on its support.
The paper follows the line of previous work by Wang et al. 2016³, where the authors guarantee recovery for subgaussian × Bernoulli inputs satisfying 𝔼|x_n[k]| ∈ [1/10, 1] as soon as N ≲ L. Examples of applications include seismic imaging with unknown source or marine seismic data deghosting, magnetic resonance autocalibration, and multiple channel estimation in communication. Numerical experiments are provided along with a discussion on the tightness of the sample complexity.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
Matthias J. Ehrhardt, Pawel Markiewicz, Antonin Chambolle, et al.
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that copes well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the use of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem at rates similar to PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations. This makes significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
Data Representation with Graphs and Multi-Scale Processing, Application to fMRI Brain Connectivity I
Local stationarity of graph signals: insights and experiments
Benjamin Girault, Shrikanth S. Narayanan, Antonio Ortega
In this paper, we look at one of the most crucial ingredients of graph signal processing: the graph itself. By taking a step back from the conventional approach of using Gaussian weights, we are able to obtain a better spectral representation of a stochastic graph signal. Our approach focuses on learning the weights of the graph, enabling a richer structure by incorporating both the distance and the local structure into the weights. Our results show that the graph power spectrum we obtain is closer to what we expect, and stationarity is better preserved when going from a continuous signal to its sampled counterpart on the graph. We further validate the approach on a real weather dataset.
Multiresolution analysis of functions on directed networks
Harry Sevi, Gabriel Rilling, Pierre Borgnat
We introduce a novel design for analyzing and approximating functions defined on the vertices of a directed graph Γ in a multi-scale fashion. The starting point of our construction is the definition of a frequency notion through the study of the Dirichlet energy of the eigenfunctions of the random walk operator. With this frequency interpretation, the set of eigenfunctions of the random walk is considered as a Fourier basis for functions over directed graphs. We are thus able to construct a multi-scale frame based on the bi-orthogonal basis of the random walk on directed graphs. This multi-resolution frame thus paves the way to a generalization of the diffusion wavelet framework to directed graphs.
Extending classical multirate signal processing theory to graphs
Oguzhan Teke, Palghat P. Vaidyanathan
A variety of different areas consider signals that are defined over graphs. Motivated by the advancements in graph signal processing, this study first reviews some of the recent results on the extension of classical multirate signal processing to graphs. In these results, graphs are allowed to have directed edges, and the possibly non-symmetric adjacency matrix A is treated as the graph operator. These results investigate the fundamental concepts for multirate processing of graph signals, such as noble identities, aliasing, and perfect reconstruction (PR). It is shown that unless the graph satisfies certain conditions, these concepts cannot be extended to graph signals in a simple manner. A structure called the M-Block cyclic structure is shown to be sufficient to generalize the two-channel results for bipartite graphs to M-channel filter banks. Many classical multirate ideas can be extended to graphs due to the unique eigenstructure of M-Block cyclic graphs. For example, the PR condition for filter banks on these graphs is identical to PR in classical theory, which allows the use of well-known filter bank design techniques. In order to utilize these results, the adjacency matrix of an M-Block cyclic graph should be given in the correct permutation. In the final part, this study proposes a spectral technique to identify the hidden M-Block cyclic structure of a graph with noisy edges whose adjacency matrix is given under a random permutation. Numerical simulation results show that the technique can recover the underlying M-Block structure in the presence of random addition and deletion of edges.
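The special eigenstructure of M-Block cyclic graphs mentioned above is easy to verify numerically: such an adjacency matrix is similar to ω·A with ω = e^(2πi/M), so its spectrum is invariant under rotation by ω. A small sketch (the random blocks are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
M, b = 4, 3                       # M blocks, each of size b
Z = np.zeros((b, b))
B = [rng.standard_normal((b, b)) for _ in range(M)]

# M-Block cyclic adjacency: block-row k feeds only block-column (k + 1) mod M
A = np.block([[B[k] if j == (k + 1) % M else Z for j in range(M)]
              for k in range(M)])

eigs = np.linalg.eigvals(A)
omega = np.exp(2j * np.pi / M)
# the spectrum should be invariant under multiplication by omega:
# each rotated eigenvalue must coincide with some eigenvalue of A
rot_dist = max(np.min(np.abs(eigs - omega * lam)) for lam in eigs)
```

This rotational symmetry of the spectrum is exactly what makes the classical PR conditions carry over to filter banks on these graphs.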
Semiparametric, parametric, and possibly sparse models for multivariate long-range dependence
Changryong Baek, Stefanos Kechagias, Vladas Pipiras
Several available formulations, parametric models and sparsity settings for multivariate long-range dependence (MLRD) are discussed. Furthermore, a new identifiable parametric model for a general formulation of MLRD is introduced in any dimension, and another sparsity setting of potential interest in MLRD modeling is identified. Estimation approaches for MLRD are also reviewed, including some recent progress and open questions about estimation in higher dimensions and sparse settings.
Optimization and Sparse Inverse Problems II
A framework for learning affine transformations for multimodal sparse reconstruction
João F. C. Mota, Evaggelia Tsiligianni, Nikos Deligiannis
We introduce a novel framework to reconstruct highly undersampled signals from their measurements using a correlated signal as an aid. The correlated signal, called side information, need not be close or similar to the signal to reconstruct; thus, our framework applies to the case in which the signals are multimodal. We use two main ingredients: the theory of ℓ1-ℓ1 minimization, which establishes precise reconstruction guarantees for sparse signals using a similar signal as an aid, and a set of training data consisting of several examples of pairs of the signal to reconstruct and the side information. We adopt a statistical framework where the training and the test data are drawn from the same joint distribution, which is assumed unknown. Our main insight is that a quantity arising in the ℓ1-ℓ1 minimization theory to measure the quality of the side information can be written as the 0-1 loss of a classification problem. Therefore, our problem can be solved with classification methods, such as support vector machines. Furthermore, using statistical learning theory, we provide guarantees for our method. Specifically, the expected value of the side information quality decreases as O(1/√T), where T is the number of training samples. Simulations with synthetic data validate our approach.
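The ℓ1-ℓ1 program at the heart of this framework can be written as a linear program with auxiliary variables. The sketch below (an illustration with SciPy's `linprog`; the toy measurement model and the side information are assumptions, not the paper's setup) solves min ||x||_1 + ||x − w||_1 subject to Ax = b:

```python
import numpy as np
from scipy.optimize import linprog

def l1l1_min(A, b, w):
    """l1-l1 minimization sketch: min ||x||_1 + ||x - w||_1 s.t. A x = b,
    with side information w, cast as an LP over z = [x, u, v] where
    u >= |x| and v >= |x - w|."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n), np.ones(n)])
    I = np.eye(n)
    Zn = np.zeros((n, n))
    A_ub = np.vstack([
        np.hstack([ I, -I, Zn]),   #  x - u <= 0
        np.hstack([-I, -I, Zn]),   # -x - u <= 0
        np.hstack([ I, Zn, -I]),   #  x - v <= w
        np.hstack([-I, Zn, -I]),   # -x - v <= -w
    ])
    b_ub = np.concatenate([np.zeros(n), np.zeros(n), w, -w])
    A_eq = np.hstack([A, np.zeros((m, 2 * n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * (2 * n),
                  method="highs")
    return res.x[:n]

# toy demo: good side information (close to the unknown sparse signal)
rng = np.random.default_rng(5)
n, m, k = 50, 25, 5
x_true = np.zeros(n)
supp = rng.choice(n, k, replace=False)
x_true[supp] = rng.standard_normal(k)
w = x_true.copy()
w[supp[0]] += 0.1                  # side information: similar but not equal
A = rng.standard_normal((m, n))
x_hat = l1l1_min(A, A @ x_true, w)
```

When the side information is of good quality, recovery succeeds with few measurements, which is exactly the regime the quality measure in the paper quantifies.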
Recovery guarantees for low complexity models (Conference Presentation)
Samuel Vaiter, Gabriel Peyré, Jalal Fadili
This paper studies least-squares regression penalized with partly smooth convex regularizers. This class of penalty functions is very large and versatile, and allows promoting solutions conforming to some notion of low complexity. Indeed, such penalties/regularizers force the corresponding solutions to belong to a low-dimensional manifold (the so-called model), which remains stable when the penalty function undergoes small perturbations. Such a good sensitivity property is crucial to make the underlying low-complexity (manifold) model robust to small noise. In a deterministic setting, we show that a generalized "irrepresentable condition" implies stable model selection under small noise perturbations in the observations and the design matrix, when the regularization parameter is tuned proportionally to the noise level. We also prove that this condition is almost necessary for stable model recovery. We then turn to the random setting where the design matrix and the noise are random, and the number of observations grows large. We show that under our generalized "irrepresentable condition", and a proper scaling of the regularization parameter, the regularized estimator is model consistent. In plain words, with probability tending to one as the number of measurements tends to infinity, the regularized estimator belongs to the correct low-dimensional model manifold. This work unifies and generalizes a large body of literature, where model consistency was known to hold, for instance, for the Lasso, group Lasso, total variation (fused Lasso) and nuclear/trace norm regularizers. We show that under the deterministic model selection conditions, the forward-backward proximal splitting algorithm used to solve the penalized least-squares regression problem is guaranteed to identify the model manifold after a finite number of iterations. This manuscript is an excerpt from the work published in [1].
A non-convex perspective on calibration and imaging in radio interferometry
Audrey Repetti, Yves Wiaux
New generations of imaging devices aim to produce high resolution and high dynamic range images. In this context, the associated high dimensional inverse problems can become extremely challenging from an algorithmic viewpoint. Moreover, the imaging procedure can be affected by unknown calibration kernels. This leads to the need of performing joint image reconstruction and calibration, and thus of solving non-convex blind deconvolution problems. In this work, we focus on the case where the observed object is affected by smooth calibration kernels in the context of radio astronomy, and we leverage a block-coordinate forward-backward algorithm, specifically designed to minimize non-smooth, non-convex and high dimensional objective functions.
Data Representation with Graphs and Multi-Scale Processing, Application to fMRI Brain Connectivity II
Analytic wavelets for multivariate time series analysis
Irène Gannaz, Sophie Achard, Marianne Clausel, et al.
Many application fields deal with multivariate long-memory time series. A challenge is to estimate the long-memory properties together with the coupling between the time series. Procedures based on real wavelets present some limitations due to the presence of phase phenomena. A perspective is to use analytic wavelets to recover jointly the long-memory properties, the modulus of the long-run covariance between time series, and the phases. The approximate Hilbert pairs of wavelets of Selesnick (2002) fulfill some of the required properties. As an extension of Selesnick's work, we present some results about the existence and quality of these approximately analytic wavelets.
Guiding network analysis using graph Slepians: an illustration for the C. elegans connectome
Dimitri Van De Ville, Robin Demesmaeker, Maria Giulia Preti
Spectral approaches to network analysis heavily rely upon the eigendecomposition of the graph Laplacian. For instance, in graph signal processing, the Laplacian eigendecomposition is used to define the graph Fourier transform and then transpose signal processing operations to graphs by implementing them in the spectral domain. Here, we build on recent work that generalized Slepian functions to the graph setting. In particular, graph Slepians are band-limited graph signals with maximal energy concentration in a given subgraph. We show how this approach can be used to guide network analysis; i.e., we propose a visualization that reveals the network organization of a subgraph while striking a balance with the global network structure. These developments are illustrated for the structural connectome of C. elegans.
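The concentration criterion behind graph Slepians reduces to a small eigenproblem once signals are restricted to the first K Laplacian eigenvectors. A minimal sketch (the path graph and the chosen node subset are illustrative assumptions):

```python
import numpy as np

def graph_slepians(L, subset, K):
    """Graph Slepian sketch: among signals band-limited to the first K
    Laplacian eigenvectors, find those with maximal energy in `subset`
    by solving the small K x K concentration eigenproblem."""
    _, U = np.linalg.eigh(L)
    Uk = U[:, :K]                          # band-limiting basis
    s = np.zeros(L.shape[0])
    s[subset] = 1.0
    C = Uk.T @ (s[:, None] * Uk)           # energy-concentration operator
    mu, V = np.linalg.eigh(C)
    return Uk @ V[:, ::-1], mu[::-1]       # ordered by decreasing concentration

# toy demo: path graph, concentrate energy on the first half of the nodes
n, K = 20, 6
A = np.zeros((n, n))
A[np.arange(n - 1), np.arange(1, n)] = 1.0
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
slep, mu = graph_slepians(L, np.arange(n // 2), K)
energy_in_subset = np.sum(slep[:n // 2, 0] ** 2)   # equals mu[0] for unit-norm Slepians
```

The concentration values mu lie in [0, 1] and directly quantify how much of each band-limited signal's energy sits in the chosen subgraph.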
Multiscale Analysis
Adaptive synchrosqueezing based on a quilted short-time Fourier transform
In recent years, the synchrosqueezing transform (SST) has gained popularity as a method for the analysis of signals that can be broken down into multiple components determined by instantaneous amplitudes and phases. One such version of SST, based on the short-time Fourier transform (STFT), enables the sharpening of instantaneous frequency (IF) information derived from the STFT, as well as the separation of amplitude-phase components corresponding to distinct IF curves. However, this SST is limited by the time-frequency resolution of the underlying window function, and may not resolve signals exhibiting diverse time-frequency behaviors with sufficient accuracy. In this work, we develop a framework for an SST based on a “quilted” short-time Fourier transform (SST-QSTFT), which allows adaptation to signal behavior in separate time-frequency regions through the use of multiple windows. This motivates us to introduce a discrete reassignment frequency formula based on a finite difference of the phase spectrum, ensuring computational accuracy for a wider variety of windows. We develop a theoretical framework for the SST-QSTFT in both the continuous and the discrete settings, and describe an algorithm for the automatic selection of optimal windows depending on the region of interest. Using synthetic data, we demonstrate the superior numerical performance of SST-QSTFT relative to other SST methods in a noisy context. Finally, we apply SST-QSTFT to audio recordings of animal calls to demonstrate the potential of our method for the analysis of real bioacoustic signals.
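The finite-difference reassignment-frequency idea mentioned above can be illustrated on a plain STFT: the instantaneous frequency at a time-frequency bin is estimated from the discrete time difference of the unwrapped STFT phase. The window, hop, and test tone below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def stft_phase_if(x, fs, win_len=128, hop=16):
    """Sketch of a finite-difference instantaneous-frequency estimate:
    compute an STFT, unwrap its phase along time, and divide the phase
    increment per hop by 2*pi*hop/fs to obtain an IF in Hz per bin."""
    win = np.hanning(win_len)
    starts = range(0, len(x) - win_len, hop)
    frames = np.array([x[i:i + win_len] * win for i in starts])
    S = np.fft.rfft(frames, axis=1)                  # frames x frequency bins
    phase = np.unwrap(np.angle(S), axis=0)           # unwrap along time
    inst_freq = np.diff(phase, axis=0) * fs / (2 * np.pi * hop)
    return S, inst_freq

# toy demo: a 20 Hz tone sampled at 1 kHz; the phase-difference IF at the
# spectral peak reads ~20 Hz even though the bin centre lies elsewhere
fs = 1000.0
t = np.arange(2000) / fs
S, inst_freq = stft_phase_if(np.cos(2 * np.pi * 20.0 * t), fs)
peak_bin = int(np.argmax(np.abs(S).mean(axis=0)))
if_at_peak = float(np.mean(inst_freq[:, peak_bin]))
```

This sharpened IF estimate is precisely the quantity that synchrosqueezing uses to reassign STFT energy onto the underlying IF curves.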
Non-parametric warping via local scale estimation for non-stationary Gaussian process modelling
Sébastien Marmin, Jean Baccou, Jacques Liandrat, et al.
We tackle the problem of reconstructing functions possessing highly heterogeneous behaviour across the input space from scattered evaluations. Our main approach combines non-stationary Gaussian process (GP) modelling with wavelet local analysis. A warped GP model is assumed, and a novel stationarization algorithm is proposed that relies on successive inverse warpings based on local scale estimation. The approach is applied to two mechanical case studies highlighting promising prediction performance compared to state-of-the-art methods.
Affine shear tight frames with two-layer structure
Zhihua Che, Xiaosheng Zhuang
Affine shear tight frames with two-layer structure in arbitrary dimensions are constructed. Affine shear filter banks with two-layer structure, associated with these affine shear tight frames, are designed and shown to satisfy the perfect reconstruction property. The redundancy rate and computational complexity of the affine shear filter banks are discussed. Applications to video denoising demonstrate the effectiveness of the affine shear filter banks with two-layer structure.