Proceedings Volume 9597

Wavelets and Sparsity XVI


Volume Details

Date Published: 2 October 2015
Contents: 11 Sessions, 41 Papers, 0 Presentations
Conference: SPIE Optical Engineering + Applications 2015
Volume Number: 9597

Table of Contents

  • Front Matter: Volume 9597
  • Sparse Representations in MRI I
  • Variational Techniques and Optimization for Compressed Sensing
  • Frame Theory and Applications
  • Sparse and Directional Representations in Neuroimaging
  • Frame Theory and Sparse Approximations
  • Computational Bioimaging
  • Sparse Representations in MRI II
  • Frame Theory and Applications
  • Frame Theory and Sparse Approximations
  • Multiscale and Randomized Algorithms for Large Data I
  • Keynote Session II
  • Sparse Analysis and Graph Models in Neuroimaging
Front Matter: Volume 9597
Front Matter: Volume 9597
This PDF file contains the front matter associated with SPIE Proceedings Volume 9597, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Sparse Representations in MRI I
Learning sparsifying filter banks
Recent years have seen numerous algorithms to learn a sparse synthesis or analysis model from data. Recently, a generalized analysis model called the 'transform model' has been proposed. Data following the transform model is approximately sparsified when acted on by a linear operator called a sparsifying transform. While existing transform learning algorithms can learn a transform for any vectorized data, they are most often used to learn a model for overlapping image patches. However, these approaches do not exploit the redundant nature of this data and scale poorly with the dimensionality of the data and the size of the patches. We propose a new sparsifying transform learning framework in which the transform acts on entire images rather than on patches. We illustrate the connection between existing patch-based transform learning approaches and the theory of block transforms, then develop a new transform learning framework in which the transforms have the structure of an undecimated filter bank with short filters. Unlike previous work on transform learning, the filter length can be chosen independently of the number of filter bank channels. We apply our framework to accelerating magnetic resonance imaging. We simultaneously learn a sparsifying filter bank while reconstructing an image from undersampled Fourier measurements. Numerical experiments show that our new model yields higher quality images than previous patch-based sparsifying transform approaches.
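The filter-bank view above can be made concrete. The following sketch is not the authors' algorithm: the two filters are fixed finite differences standing in for learned ones, but it illustrates how an undecimated filter bank sparsifies an entire image rather than individual patches.

```python
import numpy as np
from scipy.signal import convolve2d

def filter_bank_sparsify(image, filters, threshold):
    """Apply each channel filter to the whole image (undecimated),
    then hard-threshold to obtain the sparse code per channel."""
    codes = []
    for h in filters:
        coef = convolve2d(image, h, mode="same", boundary="wrap")
        codes.append(np.where(np.abs(coef) >= threshold, coef, 0.0))
    return codes

# Fixed placeholder filters (horizontal/vertical differences); a transform
# learning algorithm would adapt these to the data instead.
filters = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]

# Piecewise-constant test image: its differences are sparse by construction.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
codes = filter_bank_sparsify(img, filters, threshold=0.5)
sparsity = sum((c != 0).sum() for c in codes) / (2 * img.size)
```

The filter length (here 2) is independent of the number of channels (here 2), which is the structural freedom the abstract highlights.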
Advanced image reconstruction strategies for 4D prostate DCE-MRI: steps toward clinical practicality
Eric G. Stinson, Eric A. Borisch, Adam T. Froemming M.D., et al.
Dynamic contrast-enhanced (DCE) MRI is an important tool for the detection and characterization of primary and recurring prostate cancer. Advanced reconstruction strategies (e.g., sparse or low-rank regression) provide improved depiction of contrast dynamics and pharmacokinetic parameters; however, the high computation cost of reconstructing 4D (3D+time, 50+ frames) datasets typically inhibits their routine clinical use. Here, a novel alternating direction method-of-multipliers (ADMM) optimization strategy is described that enables these methods to be executed in under 5 minutes, and thus within the standard clinical workflow. After overviewing the mechanics of this approach, high-performance implementation strategies will be discussed and demonstrated through clinical cases.
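The paper's contribution is the adaptation of ADMM to 4D DCE-MRI; the splitting pattern itself is standard. As a generic illustration only (a toy lasso problem, not the authors' reconstruction), one ADMM iteration alternates a quadratic solve, a soft-thresholding step, and a dual update:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM: split x = z,
    then alternate a ridge solve, soft-thresholding, and a dual update."""
    m, n = A.shape
    z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # the expensive factor, formed once
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))                # quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam/rho, 0)  # prox of the l1 term
        u = u + x - z                                                # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
x_hat = admm_lasso(A, A @ x_true, lam=0.05)
```

In the 4D setting the quadratic subproblem becomes the Fourier-domain data-consistency solve, which is where the high-performance implementation effort goes.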
Imaging industry expectations for compressed sensing in MRI
Kevin F. King, Adriana Kanwischer, Rob Peters
Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. 
Algorithm concerns include the decision of which algorithms to implement as well as the problem of optimal setting of adjustable parameters. It will take imaging vendors several years to work through these challenges and provide solutions for a wide range of applications.
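The three ingredients listed above (compressible data, incoherent acquisition, nonlinear reconstruction) fit in a few lines. This toy example is our illustration, not any vendor's product algorithm: it recovers a sparse 1D signal from randomly undersampled DFT measurements by iterative soft-thresholding.

```python
import numpy as np

def ista(F, y, lam=0.02, iters=300):
    """Sparse recovery from y = F x by iterative soft-thresholding:
    a gradient step on ||Fx - y||^2 followed by the l1 prox."""
    x = np.zeros(F.shape[1], dtype=complex)
    for _ in range(iters):
        x = x + F.conj().T @ (y - F @ x)      # step size 1 is safe: ||F|| <= 1 here
        mag = np.abs(x)
        x = np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * x, 0)
    return x

rng = np.random.default_rng(1)
n = 128
x_true = np.zeros(n); x_true[[10, 50, 90]] = [1.0, -0.8, 0.6]
rows = rng.choice(n, size=48, replace=False)   # incoherent random k-space samples
F = np.exp(-2j*np.pi*np.outer(rows, np.arange(n))/n) / np.sqrt(n)
x_hat = ista(F, F @ x_true)
```

The one-minute reconstruction budget mentioned above is what drives vendors toward such first-order methods with cheap per-iteration FFTs rather than general-purpose solvers.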
Variational Techniques and Optimization for Compressed Sensing
A PDE-free variational model for multiphase image segmentation
Julia Dobrosotskaya, Weihong Guo
We introduce a PDE-free variational model for multiphase image segmentation that uses a sparse representation basis (wavelets or other) instead of a Fourier basis in a modified diffuse interface context. The segmentation model we present differs from other state-of-the-art models in several ways. The diffusive nature of the method originates from the sparse representations and thus propagates information in a different manner compared to existing PDE models, even though it still has such classical features as coarsening and phase separation. The model has a non-local nature, yet with much reduced diffuse interface blur, allowing it to connect important features and preserve sharp edges in the output. Numerical experiments show that the method is robust to noise and is highly tunable.
Dictionary learning for compressive parameter mapping in magnetic resonance imaging
Benjamin Paul Berman, Mahesh Bharath Keerthivasan, Zhitao Li, et al.
Parameter mapping is a valuable quantitative tool for soft tissue contrast. Accelerated data acquisition is critical for clinical utility, which has led to various novel reconstruction techniques. In this work, a model-based compressed sensing method is extended to include a sparse regularization that is learned from the principal component coefficients. The principal components for a range of T2 decay curves are computed, and the coefficients of the principal components are reconstructed. These coefficient maps share coherent spatial structures, suggesting that a patch-based dictionary is a well-suited sparse transformation. This transformation is learned from the coefficients themselves. The proposed reconstruction is suited for non-Cartesian, multi-channel data. The dictionary constraint leads to parameter maps with less noise and less aliasing at high acceleration factors.
Semi-supervised high-dimensional clustering by tight wavelet frames
Bin Dong, Ning Hao
High-dimensional clustering arises frequently in the natural sciences, technical disciplines, and social media. In this paper, we consider the problem of binary clustering of high-dimensional data, i.e., classification of a data set into two classes. We assume that the correct (or mostly correct) classification of a small portion of the given data is known. Based on such partial classification, we design optimization models that complete the clustering of the entire data set using the recently introduced tight wavelet frames on graphs.1 Numerical experiments of the proposed models applied to some real data sets are conducted. In particular, the performance of the models on some very high-dimensional data sets is examined, and combinations of the models with some existing dimension reduction techniques are also considered.
Smooth affine shear tight frames: digitization and applications
Xiaosheng Zhuang
In this paper, we mainly discuss one of the recently developed directional multiscale representation systems: smooth affine shear tight frames. A directional wavelet tight frame is generated by isotropic dilations and translations of directional wavelet generators, while an affine shear tight frame is generated by anisotropic dilations, shears, and translations of shearlet generators. These two tight frames are in fact connected, in the sense that the affine shear tight frame can be obtained from a directional wavelet tight frame through subsampling. Consequently, an affine shear tight frame has an underlying filter bank coming from the MRA structure of its associated directional wavelet tight frame. We call such filter banks affine shear filter banks; they can be designed completely in the frequency domain. We discuss the digitization of affine shear filter banks and their implementations: the forward and backward digital affine shear transforms. The redundancy rate and computational complexity of digital affine shear transforms are also investigated in this paper. Numerical experiments and comparisons in image/video processing show the advantages of digital affine shear transforms over many other state-of-the-art directional multiscale representation systems.
A fast algorithm for reconstruction of spectrally sparse signals in super-resolution
Jian-Feng Cai, Suhui Liu, Weiyu Xu
We propose a fast algorithm to reconstruct spectrally sparse signals from a small number of randomly observed time domain samples. Unlike conventional compressed sensing, where frequencies are discretized, we consider the super-resolution case where the frequencies can be any values in the normalized continuous frequency domain [0, 1). We first convert our signal recovery problem into a low-rank Hankel matrix completion problem, for which we then propose an efficient feasible point algorithm, the projected Wirtinger gradient algorithm (PWGA). The algorithm can be further accelerated by a scheme inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Numerical experiments are provided to illustrate the effectiveness of the proposed algorithm. Unlike earlier approaches, our algorithm can solve problems of large scale efficiently.
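The key reformulation is that a superposition of r complex sinusoids, sampled uniformly, yields a Hankel matrix of rank r even for off-grid frequencies, so missing samples can be recovered by low-rank matrix completion. A minimal numerical check of this rank property (not the PWGA algorithm itself):

```python
import numpy as np
from scipy.linalg import hankel

def hankel_of(x, L):
    """L-row Hankel matrix of a length-n signal: H[i, j] = x[i + j]."""
    return hankel(x[:L], x[L - 1:])

n = 64
t = np.arange(n)
# Two complex sinusoids with off-grid (non-integer-bin) frequencies.
x = np.exp(2j*np.pi*0.1234*t) + 0.5*np.exp(2j*np.pi*0.3271*t)
H = hankel_of(x, n // 2)
s = np.linalg.svd(H, compute_uv=False)   # exactly two dominant singular values
```

With some entries of x unobserved, one would alternate a rank-r projection of H with re-imposing the known samples; PWGA is an efficient, FISTA-accelerated way of doing that minimization.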
Frame Theory and Applications
Algebraic and geometric spread in finite frames
Emily J. King
When searching for finite unit norm tight frames (FUNTFs) of M vectors in 𝔽N which yield robust representations, one is concerned with finding frames consisting of frame vectors which are in some sense as spread apart as possible. Algebraic spread and geometric spread are the two most commonly used measures of spread. A frame with optimal algebraic spread is called full spark and is such that any subcollection of N frame vectors is a basis for 𝔽N. A Grassmannian frame is a FUNTF which satisfies the Grassmannian packing problem; that is, the frame vectors are optimally geometrically spread given fixed M and N. A particular example of a Grassmannian frame is an equiangular frame, which is such that the absolute value of all inner products of distinct vectors is equal. The relationship between these two types of optimal spread is complicated. The folk knowledge for many years was that equiangular frames were full spark; however, this is now known not to hold for an infinite class of equiangular frames. The exact relationship between these types of spread will be further explored in this talk, as well as Plücker coordinates and coherence, which are measures of how much a frame misses being optimally algebraically or geometrically spread.
Learning Boolean functions with concentrated spectra
Dustin G. Mixon, Jesse Peterson
This paper discusses the theory and application of learning Boolean functions that are concentrated in the Fourier domain. We first estimate the VC dimension of this function class in order to establish a small sample complexity of learning in this case. Next, we propose a computationally efficient method of empirical risk minimization, and we apply this method to the MNIST database of handwritten digits. These results demonstrate the effectiveness of our model for modern classification tasks. We conclude with a list of open problems for future investigation.
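For intuition on spectral concentration, consider a Boolean function that depends on only two of its inputs: its Walsh-Hadamard (Fourier) spectrum is supported on a single coefficient. This toy computation is our illustration, not the paper's learning method:

```python
import numpy as np
from scipy.linalg import hadamard

n = 4
N = 2 ** n
idx = np.arange(N)
bits = (idx[:, None] >> np.arange(n)) & 1      # bit k of each input, LSB first
# A 2-junta in +/-1 form: f(x) = (-1)^(x0 XOR x1).
f = 1 - 2 * (bits[:, 0] ^ bits[:, 1])
# Fourier coefficients over the hypercube via the Sylvester-ordered
# Hadamard matrix; Parseval gives sum of fhat^2 = 1.
fhat = hadamard(N) @ f / N
support = np.flatnonzero(np.abs(fhat) > 1e-9)
```

All spectral mass sits on the character indexed by the bitmask 3 (bits 0 and 1), so a learner restricted to concentrated spectra needs far fewer samples than the 2^N worst case, which is the point of the VC-dimension estimate above.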
Gabor fusion frames generated by difference sets
Irena Bojarovska, Victoria Paternostro
Collections of time- and frequency-shifts of suitably chosen generators (Alltop or random vectors) have proved successful for many applications in sparse recovery and related fields. It is known1 that taking the characteristic function of a difference set as a generator, and considering only the frequency shifts, gives an equiangular tight frame for the subspace they span. In this paper, we investigate the system of all N² time- and frequency-shifts of a difference set in dimension N via the mutual coherence, and compare numerically its sparse recovery effectiveness with Alltop and random generators. We further view this Gabor system as a fusion frame, show that it is optimally sparse, and moreover an equidistant tight fusion frame, i.e., an optimal Grassmannian packing.
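The construction is easy to reproduce numerically. The sketch below (our illustration consistent with the abstract, using the (7,3,1) difference set {1,2,4} mod 7) forms all N² time-frequency shifts of the difference set's characteristic function and computes the mutual coherence, which for this example works out to √2/3:

```python
import numpy as np

def gabor_system(g):
    """All N^2 time-frequency shifts of generator g, as unit-norm columns."""
    N = len(g)
    w = np.exp(2j * np.pi / N)
    cols = []
    for k in range(N):                 # time shift
        for l in range(N):             # frequency (modulation) shift
            v = np.roll(g, k) * w ** (l * np.arange(N))
            cols.append(v / np.linalg.norm(v))
    return np.array(cols).T

def coherence(Phi):
    """Largest absolute inner product between distinct columns."""
    G = np.abs(Phi.conj().T @ Phi)
    np.fill_diagonal(G, 0.0)
    return G.max()

N = 7
g = np.zeros(N); g[[1, 2, 4]] = 1.0    # characteristic function of a (7,3,1) difference set
Phi = gabor_system(g)
mu = coherence(Phi)
```

Here pure frequency-shift pairs attain |⟨·,·⟩| = √2/3 (the Gauss-sum bound from the difference set property), while pairs with distinct time shifts overlap in exactly λ = 1 support point and give 1/3.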
Tightness of stability bounds by null space property
Xuemei Chen, Rongrong Wang
The null space property (NSP) and the restricted isometry property (RIP) are two properties which have received considerable attention in the compressed sensing literature. It is known that the null space property guarantees a less than ideal stability result. In this paper, we show that this bound is actually tight by specific construction, which implies a fundamental difference between NSP and RIP.
Quasi-symmetric designs and equiangular tight frames
Matthew Fickus, John Jasper, Dustin Mixon, et al.
An equiangular tight frame (ETF) is an M×N matrix which has orthogonal equal-norm rows and equal-norm columns, and the inner products of all pairs of columns have the same modulus. ETFs arise in numerous applications, including compressed sensing. They also seem to be rare: despite over a decade of active research by the community, only a few construction methods have been discovered. In this article we introduce a new construction of ETFs which uses a particular class of combinatorial designs called quasi-symmetric designs. For ETFs whose entries are contained in {+1, -1}, called real constant amplitude ETFs (RCAETFs), we see that this construction is reversible, giving new quasi-symmetric designs from the known constructions of RCAETFs.
Sparse and Directional Representations in Neuroimaging
Image registration using the shearlet transform
In this paper, we present an algorithm for image registration utilizing the shearlet representation. The shearlet framework allows one to collect multi-scale and multi-directional feature information from multidimensional data that can be used to create key feature vectors that are scale, rotation, and shift invariant. These key feature vectors produce a transformation that will align the sensed image to the source image. We demonstrate our registration algorithm on various medical databases.
Shearlet-domain task-driven post-processing and filtering of CT noise
While many existing CT noise filtering post-processing techniques optimize minimum mean squared error (MSE)-based quality metrics, it is well-known that the MSE is generally not related to the diagnostic quality of CT images. In medical image quality assessment, model observers (MOs) have been proposed for predicting diagnostic quality in medical images. MOs optimize a task-based quality criterion such as lesion or tumor detection performance. In this paper, we first discuss some of the non-stationary noise properties of CT noise. These properties will be utilized to construct a multi-directional non-stationary noise model that can be used by MOs. Next, we investigate a new shearlet-based denoising scheme that optimizes a task-based image quality metric for CT background noise. This work makes a connection between multi-resolution sparsity-based denoising techniques on the one hand and model observers on the other hand. The main advantage is that this approach avoids the two-step procedure of MSE-optimized denoising followed by a MO-based quality evaluation (often with contradictory quality goals), while instead optimizing the desired task-based image quality directly. Experimental results are given to illustrate the benefits of the proposed approach.
Geometry and dimensionality reduction of feature spaces in primary visual cortex
Davide Barbieri
Some geometric properties of the wavelet analysis performed by visual neurons are discussed and compared with experimental data. In particular, several relationships between the cortical morphologies and the parametric dependencies of extracted features are formalized and considered from a harmonic analysis point of view.
Shearlet-based regularized ROI reconstruction in fan beam computed tomography
T. A. Bubba, D. Labate, G. Zanghirati, et al.
Region-of-interest (ROI) reconstruction in computed tomography (CT) is a problem receiving increasing attention in the medical imaging community, due to its potential to lower exposure to X-ray radiation and to reduce the scanning time. Since the ROI reconstruction problem requires dealing with truncated projection images, classical CT reconstruction algorithms tend to become very unstable, and solving this problem requires either ad hoc analytic formulas or more sophisticated numerical schemes. In this paper, we introduce a novel approach for ROI CT reconstruction, formulated as a convex optimization problem with a regularized functional based on shearlets or wavelets. Our numerical implementation consists of an iterative algorithm based on the scaled gradient projection method. As illustrated by numerical tests in the context of fan beam CT, our algorithm is insensitive to the location of the ROI and remains stable even when the ROI size is rather small.
Directional ratio based on parabolic molecules and its application to the analysis of tubular structures
Demetrio Labate, Pooran Negi, Burcin Ozcan, et al.
As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called the Directional Ratio which is especially effective at distinguishing isotropic from anisotropic structures. This task is especially useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.
Investigating the impact of blood pressure increase to the brain using high resolution serial histology and image processing
F. Lesage, A. Castonguay, P. L. Tardif, et al.
A combined serial OCT/confocal scanner was designed to image large sections of biological tissues at microscopic resolution. Serial imaging of organs embedded in agarose blocks is performed by cutting through the tissue with a vibratome, which sequentially removes slices to reveal new tissue to image, overcoming the limited light penetration encountered in microscopy. Two linear stages allow moving the tissue with respect to the microscope objective, acquiring a 2D grid of volumes (1x1x0.3 mm) with OCT and a 2D grid of images (1x1 mm) with the confocal arm. This process is repeated automatically until the entire sample is imaged. Raw data is then post-processed to re-stitch each individual acquisition and obtain a reconstructed volume of the imaged tissue. This design is being used to investigate correlations of white matter and microvasculature changes with aging and with increases in pulse pressure following transaortic constriction in mice. The dual imaging capability of the system reveals complementary contrast information: OCT imaging captures changes in refractive indices, giving contrast between white and grey matter in the mouse brain, while transcardial perfusion of FITC or pre-sacrifice injection of Evans Blue shows microvasculature properties in the brain with confocal imaging.
Estimating brain's functional graph from the structural graph's Laplacian
F. Abdelnour, M. Dayan, O. Devinsky, et al.
The interplay between the brain’s function and structure has been of immense interest to the neuroscience and connectomics communities. In this work we develop a simple linear model relating the structural network and the functional network. We propose that the two networks are related by the structural network’s Laplacian up to a shift. The model is simple to implement and gives accurate predictions of the functional network’s eigenvalues at the subject level and of its eigenvectors at the group level.
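One simple instantiation consistent with this description (an assumed spectral mapping for illustration, not necessarily the authors' exact model) keeps the structural Laplacian's eigenvectors and maps its eigenvalues through a decay plus a shift:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized graph Laplacian I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - Dinv @ A @ Dinv

def predict_functional(A, a=1.0, shift=0.1):
    """Predicted functional network: share the structural Laplacian's
    eigenvectors, map eigenvalues by an assumed decay-plus-shift."""
    L = normalized_laplacian(A)
    lam, U = np.linalg.eigh(L)
    f_lam = a * np.exp(-lam) + shift   # assumed spectral mapping
    return U @ np.diag(f_lam) @ U.T

# Toy structural network: a ring of 6 regions.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
F = predict_functional(A)
```

By construction the predicted functional matrix is symmetric and shares the structural eigenvectors; fitting a and the shift to data is the subject-level step the abstract refers to.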
Frame Theory and Sparse Approximations
Phase retrieval
Jameson Cahill, Peter G. Casazza, John Jasper, et al.
We answer a number of open problems concerning phase retrieval and phase retrieval by projections. In particular, one main theorem classifies phase retrieval by projections via collections of sequences of vectors allowing norm retrieval. Another key result computes the minimal number of vectors needed to add to a frame in order for it to possess the complement property and hence allow phase retrieval. In furthering this idea, in a third main theorem we show that when a collection of subspaces is one subspace short from allowing phase retrieval, then any partition of these subspaces spans two hyperplanes. We offer many more results in this area as well as provide a large number of examples showing the limitations of the theory.
Deterministic compressed sensing and quantization
Arman Ahmadieh, Özgur Yilmaz
Compressed Sensing (CS) is a sampling paradigm used for acquiring sparse or compressible signals from a seemingly incomplete set of measurements. In any practical application with our digitally driven technology, these "compressive measurements" are quantized and thus they do not have infinite precision. So far, the theory of quantization in CS has mainly focused on compressive sampling systems designed with random measurement matrices. In this note, we turn our attention to "deterministic compressed sensing". Specifically, we focus on quantization in CS with chirp sensing matrices and present quantization approaches and numerical experiments.
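Chirp sensing matrices can be written down directly. The sketch below builds the standard K × K² discrete chirp matrix (one unit-norm column per chirp-rate/frequency pair); for prime K its mutual coherence is 1/√K by the quadratic Gauss sum, which is what makes the construction attractive for deterministic CS. Quantization of the measurements, the paper's actual topic, is not modeled here.

```python
import numpy as np

def chirp_matrix(K):
    """K x K^2 chirp sensing matrix: columns exp(2*pi*i*(r*t^2 + m*t)/K)/sqrt(K)
    over all chirp rates r and base frequencies m."""
    t = np.arange(K)
    cols = [np.exp(2j * np.pi * (r * t**2 + m * t) / K) / np.sqrt(K)
            for r in range(K) for m in range(K)]
    return np.array(cols).T

K = 7                      # odd prime: cross-chirp inner products are flat
A = chirp_matrix(K)
G = np.abs(A.conj().T @ A)
np.fill_diagonal(G, 0.0)
mu = G.max()               # equals 1/sqrt(K) for prime K
```

Columns sharing a chirp rate are mutually orthogonal (pure frequency shifts), while columns with distinct rates meet at magnitude exactly 1/√K, so the coherence is deterministic rather than probabilistic.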
Scalable filter banks
Youngmi Hur, Kasso A. Okoudjou
A finite frame is said to be scalable if its vectors can be rescaled so that the resulting set of vectors is a tight frame. The theory of scalable frames has been extended to the setting of Laplacian pyramids which are based on (rectangular) paraunitary matrices whose column vectors are Laurent polynomial vectors. This is equivalent to scaling the polyphase matrices of the associated filter banks. Consequently, tight wavelet frames can be constructed by appropriately scaling the columns of these paraunitary matrices by diagonal matrices whose diagonal entries are squared magnitudes of Laurent polynomials. In this paper we present examples of tight wavelet frames constructed in this manner and discuss some of their properties in comparison to the (non-tight) wavelet frames they arise from.
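Scalability can be checked numerically in the simpler, non-polynomial setting of plain vectors in R^N (a stand-in for the Laurent-polynomial setting of the paper): find nonnegative squared scales w_i with sum_i w_i f_i f_i^T = I by nonnegative least squares.

```python
import numpy as np
from scipy.optimize import nnls

def scale_to_tight(F):
    """Find nonnegative squared scales w with sum_i w_i f_i f_i^T = I,
    i.e. the rescaled frame {sqrt(w_i) f_i} is Parseval (tight)."""
    N, M = F.shape
    # Each column of the linear system is a flattened rank-one outer product.
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(M)])
    w, res = nnls(A, np.eye(N).ravel())
    return w, res

# Union of two orthonormal bases of R^2, one stretched: scalable by design.
F = np.column_stack([[1.0, 0.0], [0.0, 1.0],
                     [np.sqrt(2), np.sqrt(2)], [np.sqrt(2), -np.sqrt(2)]])
w, res = scale_to_tight(F)
S = sum(w[i] * np.outer(F[:, i], F[:, i]) for i in range(4))   # rescaled frame operator
```

A zero residual certifies scalability; in the filter-bank setting the scalar weights w_i become the squared magnitudes of Laurent polynomials applied to the polyphase columns.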
Perturbations of fusion frames and the effect on their canonical dual
Gitta Kutyniok, Victoria Paternostro, Friedrich Philipp
Measurements in real world applications are often corrupted by noise due to exterior influences. It is therefore convenient to investigate perturbations of the utilized measurement systems. In the present paper, we consider small perturbations of fusion frames in Hilbert spaces and study the effect on the canonical duals of original and perturbed fusion frame. It turns out that the duals are stable under these perturbations.
Computational Bioimaging
Fast live cell imaging at nanometer scale using annihilating filter-based low-rank Hankel matrix approach
Junhong Min, Lina Carlini, Michael Unser, et al.
Localization microscopy such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescence molecules. It has been shown that imaging of densely activated molecules can improve temporal resolution, which was considered a major limitation of localization microscopy. However, this higher-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address these technical challenges, we previously developed a localization algorithm called FALCON1, 2 using a quasi-continuous localization model with a sparsity prior on image space, and demonstrated it in both 2D and 3D live cell imaging. However, it leaves several aspects to be further improved. Here, we propose a new localization algorithm using an annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, our new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live cell imaging experiment. The results confirmed that it achieves higher localization performance in both experiments in terms of accuracy and detection rate.
Image denoising by adaptive Compressed Sensing reconstructions and fusions
In this work, Compressed Sensing (CS) is investigated as a denoising tool in bioimaging. The denoising algorithm exploits multiple CS reconstructions, taking advantage of the robustness of CS in the presence of noise via regularized reconstructions and the properties of the Fourier transform of bioimages. Multiple reconstructions at low sampling rates are combined to generate high quality denoised images using several sparsity constraints. We present different combination methods for the CS reconstructions and quantitatively compare the performance of our denoising methods to state-of-the-art ones.
Sparse Representations in MRI II
Low-rank modeling of local k-space neighborhoods: from phase and support constraints to structured sparsity
Low-rank modeling of local k-space neighborhoods (LORAKS) is a recent novel framework for constrained MRI reconstruction. LORAKS relies on embedding MRI data into carefully-constructed matrices, which will have low-rank structure when the MRI image has sparse support or slowly-varying phase. Low-rank matrix representation allows MRI images to be reconstructed from undersampled data using modern low-rank matrix techniques, and enables data acquisition strategies that are incompatible with more traditional representations. This paper reviews LORAKS, and describes extensions that allow LORAKS to additionally impose structured transform-domain sparsity constraints (e.g., structured sparsity of the image derivatives or wavelet coefficients).
Sparse methods for Quantitative Susceptibility Mapping
Berkin Bilgic, Itthi Chatnuntawech, Christian Langkammer, et al.
Quantitative Susceptibility Mapping (QSM) aims to estimate the tissue susceptibility distribution that gives rise to subtle changes in the main magnetic field, which are captured by the image phase in a gradient echo (GRE) experiment. The underlying susceptibility distribution is related to the acquired tissue phase through an ill-posed linear system. To facilitate its inversion, spatial regularization that imposes sparsity or smoothness assumptions can be employed. This paper focuses on efficient algorithms for regularized QSM reconstruction. Fast solvers that enforce sparsity under Total Variation (TV) and Total Generalized Variation (TGV) constraints are developed using the Alternating Direction Method of Multipliers (ADMM). Through variable splitting that permits closed-form iterations, the computational efficiency of these solvers is dramatically improved. An alternative approach to improving the conditioning of the ill-posed inversion is to acquire multiple GRE volumes at different head orientations relative to the main magnetic field. The phase information from such multi-orientation acquisitions can be combined to yield exquisite susceptibility maps and obviate the need for regularized reconstruction, albeit at the cost of increased data acquisition time.
Data-driven adaptation of a union of sparsifying transforms for blind compressed sensing MRI reconstruction
Saiprasad Ravishankar, Yoram Bresler
Compressed Sensing has been demonstrated to be a powerful tool for magnetic resonance imaging (MRI), where it enables accurate recovery of images from highly undersampled k-space measurements by exploiting the sparsity of the images or image patches in a transform domain or dictionary. In this work, we focus on blind compressed sensing, where the underlying sparse signal model is a priori unknown, and propose a framework to simultaneously reconstruct the underlying image as well as the unknown model from highly undersampled measurements. Specifically, our model is that the patches of the underlying MR image(s) are approximately sparse in a transform domain. We also extend this model to a union of transforms model that is better suited to capture the diversity of features in MR images. The proposed block coordinate descent type algorithms for blind compressed sensing are highly efficient. Our numerical experiments demonstrate the superior performance of the proposed framework for MRI compared to several recent image reconstruction methods. Importantly, the learning of a union of sparsifying transforms leads to better image reconstructions than a single transform.
Frame Theory and Applications
The unconditional constants of frame expansions and cross-frame expansions
Travis Bemrose, Peter G. Casazza, Richard G. Lynch
It was shown in Ref. 1 that the unconditional constant for frame expansions is √(B/A), where A and B are the frame bounds of the frame. It was also shown that a Bessel sequence is 1-unconditional if and only if it can be partitioned into an orthogonal sum of tight frames. In Ref. 2, cross-frame expansions were considered, and it was shown that as long as the cross-frame expansions stay uniformly bounded away from zero, similar results can be obtained. In this paper, we summarize these results in one concise source and add a few basic results that were not considered before.
A Bayesian approach to estimation of a statistical change-point in the mean parameter for high dimensional non-linear time series
Darrin Speegle, Robert Steward
We propose a semiparametric approach to infer the existence of, and estimate the location of, a statistical change-point in a nonlinear high-dimensional time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations where the mean function varies smoothly except at a single change-point. Our approach first involves a dimension reduction of the original time series through a random matrix multiplication. Next, we conduct a Bayesian analysis on the empirical detail coefficients of this dimensionally reduced time series after a wavelet transform. We also present a means to associate confidence bounds with the conclusions of our results. Aside from being computationally efficient and straightforward to implement, the primary advantage of our methods is that they apply to a much larger class of time series whose mean functions are subject to only general smoothness conditions.
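A toy version of the first two steps (random-matrix dimension reduction, then finest-scale Haar detail coefficients) can be sketched as follows. The paper's Bayesian inference and confidence bounds are not reproduced; all sizes and the detection rule (a simple detail-energy argmax) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
p, T, cp = 200, 128, 81            # ambient dimension, length, true change-point
X = rng.standard_normal((T, p))    # noise around a piecewise-constant mean
X[cp:] += 6.0                      # mean shift in every coordinate after cp

# Step 1: dimension reduction through a random matrix multiplication (p -> k)
k = 32
R = rng.standard_normal((p, k)) / np.sqrt(p)
Y = X @ R                          # (T, k) reduced time series

# Step 2: finest-scale (undecimated) Haar detail coefficients
D = (Y[1:] - Y[:-1]) / np.sqrt(2)  # (T - 1, k)

# The detail energy, summed across reduced coordinates, peaks at the jump
t_hat = int(np.argmax((D ** 2).sum(axis=1))) + 1
```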
Fast angular synchronization for phase retrieval via incomplete information
Aditya Viswanathan, Mark Iwen
We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^* / |x_j x_k^*|, j, k ∈ {1, ..., d}, where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem, where we recover an unknown complex vector from phaseless (or magnitude) measurements.
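In the noiseless case with all d² phase differences available, the eigenvector principle behind such algorithms is easy to demonstrate. This is a dense-eigensolver sketch only; the paper's method is linear-time and handles incomplete, noisy measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
x = np.exp(1j * rng.uniform(0, 2 * np.pi, d))  # unknown unit-modulus phases

# Matrix of normalized phase differences H_jk = x_j x_k^* (here |x_j| = 1,
# so no normalization is needed)
H = np.outer(x, x.conj())

# The leading eigenvector of the Hermitian matrix H recovers x up to a
# global phase: H = x x^H is rank one with top eigenvalue d
w, V = np.linalg.eigh(H)
v = np.sqrt(d) * V[:, -1]          # rescale entries to unit modulus
v *= x[0] / v[0]                   # resolve the global phase ambiguity
```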
Frame Theory and Sparse Approximations
icon_mobile_dropdown
Detailing the equivalence between real equiangular tight frames and certain strongly regular graphs
Matthew Fickus, Cody E. Watson
An equiangular tight frame (ETF) is a set of unit vectors whose coherence achieves the Welch bound, and so is as incoherent as possible. ETFs arise in numerous applications. It is well known that real ETFs are equivalent to a certain subclass of strongly regular graphs. In this note, we give some alternative techniques for understanding this equivalence. In a later document, we will use these techniques to further generalize this theory.
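A minimal concrete example: three unit vectors in ℝ² at mutual angles of 120° (the so-called Mercedes-Benz frame) form a real ETF, as a few lines of numerical checking confirm. This example is ours, not taken from the paper.

```python
import numpy as np

n, d = 3, 2  # three unit vectors in R^2
angles = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
F = np.array([[np.cos(a), np.sin(a)] for a in angles])  # rows are frame vectors

G = F @ F.T                                    # Gram matrix of the frame
coherence = np.max(np.abs(G - np.diag(np.diag(G))))
welch = np.sqrt((n - d) / (d * (n - 1)))       # Welch bound, here 1/2

# Equiangular: every off-diagonal Gram entry has the same magnitude, which
# attains the Welch bound.  Tight: F^T F = (n/d) I.
```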
Connectivity of spaces of finite unit-norm tight frames
Jameson Cahill, Dustin Mixon, Nate Strawn
We show that the spaces of finite unit norm tight frames are connected, which verifies a conjecture first appearing in Dykema and Strawn (2006). Our central technique involves continuous liftings of paths from the polytope of eigensteps (see Cahill et al., 2012), or Gelfand-Tsetlin patterns, to spaces of FUNTFs. After demonstrating this connectivity result, we refine our analysis to show that the set of nonsingular points on these spaces is also connected, and we use this result to show that spaces of FUNTFs are irreducible in the algebro-geometric sense.
Multiscale and Randomized Algorithms for Large Data I
icon_mobile_dropdown
Geometric multi-resolution analysis for dictionary learning
Mauro Maggioni, Stanislav Minsker, Nate Strawn
We present an efficient algorithm and theory for Geometric Multi-Resolution Analysis (GMRA), a procedure for dictionary learning. Sparse dictionary learning provides the necessary complexity reduction for the critical applications of compression, regression, and classification in high-dimensional data analysis. As such, it is a key technique in data science, and it is important to have methods that admit both efficient implementation and strong theory for large classes of theoretical models. By construction, GMRA is computationally efficient, and in this paper we describe how GMRA correctly approximates a large class of plausible models (namely, noisy manifolds).
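The flavor of GMRA can be conveyed by a toy two-level version: partition the data with a root-level principal direction, then approximate each cell by a local affine PCA plane. The function name and the crude two-cell partition are our illustrative assumptions; real GMRA builds a full multiscale tree with uniform approximation guarantees.

```python
import numpy as np

def gmra_two_level(X, d):
    """Toy two-level GMRA: split points into two cells at the root, then
    approximate each cell by an affine d-dimensional PCA plane."""
    c = X.mean(axis=0)
    Vt = np.linalg.svd(X - c, full_matrices=False)[2]
    side = (X - c) @ Vt[0] >= 0        # partition by the top principal direction
    approx = np.empty_like(X)
    for mask in (side, ~side):
        P = X[mask]
        m = P.mean(axis=0)
        W = np.linalg.svd(P - m, full_matrices=False)[2][:d]
        approx[mask] = m + (P - m) @ W.T @ W   # project onto the local plane
    return approx
```

For data lying exactly on a d-dimensional subspace the two local planes reproduce the points exactly; for noisy manifolds, deeper trees trade cell size against local curvature.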
Geometric multi-resolution analysis and data-driven convolutions
Nate Strawn
We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. The key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over image spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.
Higher-order graph wavelets and sparsity on circulant graphs
The notion of a graph wavelet gives rise to more advanced processing of data on graphs due to its ability to operate in a localized manner, across newly arising data-dependency structures, with respect to the graph signal and underlying graph structure, thereby taking into consideration the inherent geometry of the data. In this work, we tackle the problem of creating graph wavelet filter banks on circulant graphs for a sparse representation of certain classes of graph signals. The underlying graph can hereby be data-driven as well as fixed, for applications including image processing and social network theory, where images and clusters, respectively, can be modelled as circulant graphs. We present a set of novel graph wavelet filter bank constructions which annihilate higher-order polynomial graph signals (up to a border effect) defined on the vertices of undirected circulant graphs, and which are localised in the vertex domain. We give preliminary results on their performance for non-linear graph signal approximation and denoising. Furthermore, we provide extensions to our previously developed segmentation-inspired graph wavelet framework for non-linear image approximation, by incorporating notions of smoothness and vanishing moments, which further improve performance compared to traditional methods.
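The annihilation property is easy to see in the simplest circulant case: on the cycle graph, the Laplacian itself acts as a second difference and kills degree-1 polynomial signals up to a border effect at the wrap-around. This is a sanity check of the stated property, not one of the paper's filter bank constructions.

```python
import numpy as np

n = 16
# Circulant Laplacian of the cycle graph C_n: first row [2, -1, 0, ..., 0, -1]
row = np.zeros(n)
row[0], row[1], row[-1] = 2.0, -1.0, -1.0
L = np.stack([np.roll(row, s) for s in range(n)])

x = np.arange(n, dtype=float)  # degree-1 polynomial graph signal
y = L @ x                      # zero except where the cycle wraps around
```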
Keynote Session II
icon_mobile_dropdown
Applied and computational harmonic analysis on graphs and networks
Jeff Irion, Naoki Saito
In recent years, the advent of new sensor technologies and social network infrastructure has provided huge opportunities and challenges for analyzing data recorded on such networks. In the case of data on regular lattices, computational harmonic analysis tools such as the Fourier and wavelet transforms have well-developed theories and proven track records of success. It is therefore quite important to extend such tools from the classical setting of regular lattices to the more general setting of graphs and networks. In this article, we first review basics of graph Laplacian matrices, whose eigenpairs are often interpreted as the frequencies and the Fourier basis vectors on a given graph. We point out, however, that such an interpretation is misleading unless the underlying graph is either an unweighted path or cycle. We then discuss our recent effort of constructing multiscale basis dictionaries on a graph, including the Hierarchical Graph Laplacian Eigenbasis Dictionary and the Generalized Haar-Walsh Wavelet Packet Dictionary, which are viewed as generalizations of the classical hierarchical block DCTs and the Haar-Walsh wavelet packets, respectively, to the graph setting. Finally, we demonstrate the usefulness of our dictionaries by using them to simultaneously segment and denoise 1-D noisy signals sampled on regular lattices, a problem where classical tools have difficulty.
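The path-graph caveat mentioned above can be checked directly: for the unweighted path P_n, the Laplacian eigenpairs are exactly the DCT-II frequencies and basis vectors, which is what makes the "graph Fourier" interpretation literal in that case. The following is a small numerical check of this classical fact.

```python
import numpy as np

n = 8
# Unweighted path graph P_n: L = D - A is tridiagonal
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

w, V = np.linalg.eigh(L)
k = np.arange(n)
w_exact = 4 * np.sin(k * np.pi / (2 * n)) ** 2   # closed-form eigenvalues

# The DCT-II vectors are the eigenvectors, e.g. for frequency k = 1:
i = np.arange(n)
v1 = np.cos(np.pi * 1 * (i + 0.5) / n)
```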
Sparse Analysis and Graph Models in Neuroimaging
icon_mobile_dropdown
Innovation-based sparse estimation of functional connectivity from multivariate autoregressive models
François Deloche, Fabrizio De Vico Fallani, Stéphanie Allassonnière
One of the main limitations of functional connectivity estimators of brain networks is that they can suffer from poor statistical reliability when the number of areas is large and the available time series are short. To estimate directed functional connectivity with a multivariate autoregressive (MVAR) model under a sparse connectivity assumption, we propose a modified Group Lasso procedure with an adapted penalty. Our procedure includes the innovation estimates as explanatory variables. This approach is inspired by two criteria that are used to interpret the coefficients of the MVAR model, the Directed Transfer Function (DTF) and the Partial Directed Coherence (PDC). A causality measure can be deduced from the output coefficients, which can be understood as a synthesis of PDC and DTF. We demonstrate the potential of our method and compare our results with the standard Group Lasso on simulated data.
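A bare-bones version of a group-penalized MVAR fit can be sketched as follows: a plain Group Lasso solved by proximal gradient descent, grouping the lag coefficients of each directed edge. The paper's adapted penalty and innovation-estimate regressors are not reproduced, and the function name and defaults are illustrative.

```python
import numpy as np

def mvar2_group_lasso(Y, lam=0.05, iters=300):
    """Group Lasso fit of an MVAR(2) model by proximal gradient (ISTA).
    One group per directed edge j -> i, holding its two lag coefficients."""
    T, k = Y.shape
    X = np.hstack([Y[1:-1], Y[:-2]])   # (T-2, 2k): lag-1, then lag-2 regressors
    Z = Y[2:]                          # (T-2, k) one-step-ahead targets
    n = T - 2
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the loss
    B = np.zeros((2 * k, k))           # rows j and k+j, column i: edge j -> i
    for _ in range(iters):
        B -= step * (X.T @ (X @ B - Z) / n)   # gradient step on squared loss
        for j in range(k):                    # proximal (group shrinkage) step
            grp = B[[j, k + j], :]
            norms = np.linalg.norm(grp, axis=0)
            scale = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
            B[[j, k + j], :] = grp * scale
    return B
```

Edges whose group norm is driven to zero are read as absent directed functional connections.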
Statistical methods for comparing brain connectomes at different scales
Djalel-E. Meskaldji, Stephan Morgenthaler, Dimitri Van De Ville
Physiological brain connectivity and spontaneous interactions between regions of interest of the brain can be represented by a matrix (full or sparse) or, equivalently, by a complex network called a connectome. This representation of brain connectivity is adopted when comparing different patterns of structural and functional connectivity to null models or between groups of individuals. Two levels of comparison can be considered when analyzing brain connectivity: the global level and the local level. At the global level, the whole-brain information is summarized by one summary statistic, whereas in the local analysis, each region of interest of the brain is summarized by a specific statistic. We show that these levels are mutually informative to some extent. We present different methods of analysis at both levels, along with the most relevant global and local network measures. We also discuss the assumptions to be satisfied for each method, the error rates controlled by each method, and the challenges to overcome, especially in the local case. Finally, we highlight the possible factors that could influence the statistical results and the questions that have to be addressed in such analyses.