Proceedings Volume 11138

Wavelets and Sparsity XVIII


Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 3 November 2019
Contents: 13 Sessions, 44 Papers, 0 Presentations
Conference: SPIE Optical Engineering + Applications 2019
Volume Number: 11138

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 11138
  • Neural Networks and Sparse Representations I
  • Applications of Frames and Related Transforms
  • Wavelets on Graphs
  • Neural Networks and Sparse Representations II
  • Time-Dependent Graph Signals and Dynamic Graphs I
  • Applications of Frames and Transforms in Neural Networks
  • Computational Photonic Imaging
  • Time-Dependent Graph Signals and Dynamic Graphs II
  • Applications in Bio-Imaging
  • Inverse Problems in MRI
  • Optimal Frames and Subspace Packings
  • Spectral Graph Analysis
Front Matter: Volume 11138
Front Matter: Volume 11138
This PDF file contains the front matter associated with SPIE Proceedings Volume 11138, including the Title Page, Copyright Information, Table of Contents, Author and Conference Committee lists.
Neural Networks and Sparse Representations I
Optimal translational-rotational invariant dictionaries for images
Davide Barbieri, Carlos Cabrelli, Eugenio Hernández, et al.
We provide the construction of a set of square matrices whose translates and rotates provide a Parseval frame that is optimal for approximating a given dataset of images. Our approach is based on abstract harmonic analysis techniques. Optimality is considered with respect to the quadratic error of approximation of the images in the dataset with their projection onto a linear subspace that is invariant under translations and rotations. In addition, we provide an elementary and fully self-contained proof of optimality, as well as numerical results from datasets of natural images.
Virtual multi-modal object detection and classification with deep convolutional neural networks
Nikolaos Mitsakos, Manos Papadakis
In this paper we demonstrate how the post-processing of gray-scale images with algorithms which have a singularity enhancement effect can assume the role of auxiliary modalities, as in the case where an intelligent system fuses information from multiple physical modalities. We show that, as in multimodal AI-fusion, “virtual” multimodal inputs can improve the performance of object detection. We design, implement and test a novel Convolutional Neural Network architecture, based on the Faster R-CNN network, for multiclass object detection and classification. Our architecture combines deep feature representations of the input images, generated by networks trained independently on physical and virtual imaging modalities. Using an analog of the ROC curve, the Average Recall over Precision curve, we show that the fusion of certain virtual modality inputs, capable of enhancing singularities and neutralizing illumination, improves recognition accuracy.
Applications of Frames and Related Transforms
Discrete optimizations using graph convolutional networks
In this paper we discuss the use of graph deep learning in solving quadratic assignment problems (QAP). The quadratic assignment problem is an NP-hard optimization problem. We analyze an approach using Graph Convolutional Networks (GCN). We prove that a specially designed GCN produces the optimal solution for a broad class of assignment problems. With appropriate training, the class of problems solved correctly is thus enlarged. Numerical examples compare this method with other, simpler methods.
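As a rough illustration of the building block such GCN-based approaches rely on, the following Python sketch implements a single symmetric-normalized graph-convolution layer and stacks two of them on a toy random graph; the architecture, sizes, and weights are illustrative stand-ins, not the network designed in the paper.

import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
    A: (n, n) adjacency matrix, H: (n, d_in) node features, W: (d_in, d_out) weights."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                      # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalization
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)             # ReLU

# Toy usage: random undirected graph, random features, two stacked layers.
rng = np.random.default_rng(0)
n, d_in, d_hid = 8, 4, 16
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                 # undirected, no self-loops
H = rng.standard_normal((n, d_in))
H1 = gcn_layer(A, H, rng.standard_normal((d_in, d_hid)))
H2 = gcn_layer(A, H1, rng.standard_normal((d_hid, d_hid)))
print(H2.shape)   # (8, 16) node embeddings that a final layer could map to assignments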
Data adaptive multi-scale representations for image analysis
Julia Dobrosotskaya, Weihong Guo
Data adaptive tight frame methods have been proven a powerful sparse approximation tool in a variety of settings. We introduce a model of a data adaptive representation that also provides a multi-scale structure. Our idea is to design a multi-scale frame representation for a given data set, with scaling properties similar to those of a wavelet basis, but without the necessary self-similar structure. The adaptivity provides better sparsity properties, and the use of a Besov-like norm structure both induces sparsity and helps in identifying important features. We focus on investigating the efficiency of a weighted l1 constraint in the context of sparse recovery from noisy data and compare it to the weighted l0 model. Numerical experiments confirm that the recovered frame vectors assigned lower weights correspond to image elements of larger scale and lower local variation, thus indicating that weighted sparsity in natural images leads to a natural scale separation.
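The weighted l1 recovery step discussed above can be illustrated with a minimal sketch: ISTA with a weighted soft-thresholding proximal step, applied to a synthetic sparse-recovery problem. The weights and the sensing matrix are toy placeholders, not the data-adaptive multiscale frame of the paper.

import numpy as np

def weighted_soft_threshold(z, tau, w):
    """Proximal operator of tau * sum_i w_i |z_i|."""
    return np.sign(z) * np.maximum(np.abs(z) - tau * w, 0.0)

def ista_weighted_l1(A, y, w, tau=0.1, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + tau * sum_i w_i |x_i| by ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = weighted_soft_threshold(x - grad / L, tau / L, w)
    return x

# Toy usage: recover a sparse vector; smaller weights on hypothetical "coarse-scale" atoms.
rng = np.random.default_rng(1)
m, n = 60, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(m)
w = np.ones(n); w[:20] = 0.2               # illustrative lower weights for large-scale atoms
x_hat = ista_weighted_l1(A, y, w)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))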
Multiscale analysis for higher-order tensors
Alp Ozdemir, Ali Zare, Mark A. Iwen, et al.
The widespread use of multisensor technology and the emergence of big datasets have created the need to develop tools to reduce, approximate, and classify large and multimodal data such as higher-order tensors. While early approaches focused on matrix- and vector-based methods to represent these higher-order data, more recently it has been shown that tensor decomposition methods are better equipped to capture couplings across their different modes. For these reasons, tensor decomposition methods have found applications in many different signal processing problems including dimensionality reduction, signal separation, linear regression, feature extraction, and classification. However, most of the existing tensor decomposition methods are based on the principle of finding a low-rank approximation in a linear subspace structure, where the definition of rank may change depending on the particular decomposition. Since many datasets are not necessarily low-rank in a linear subspace, this often results in high approximation errors or low compression rates. In this paper, we introduce a new adaptive, multi-scale tensor decomposition method for higher-order data inspired by hybrid linear modeling and subspace clustering techniques. In particular, we develop a multi-scale higher-order singular value decomposition (MS-HoSVD) approach where a given tensor is first permuted and then partitioned into several sub-tensors each of which can be represented as a low-rank tensor with increased representational efficiency. The proposed approach is evaluated for dimensionality reduction and classification for several different real-life tensor signals with promising results.
Iterative and greedy algorithms for the sparsity in levels model in compressed sensing
Ben Adcock, Simone Brugiapaglia, Matthew King-Roskamp
Motivated by the question of optimal functional approximation via compressed sensing, we propose generalizations of the Iterative Hard Thresholding and Compressive Sampling Matching Pursuit algorithms that are able to promote signals that are sparse in levels. We show, by means of numerical experiments, that the proposed algorithms are able to outperform their unstructured variants when the signal exhibits the sparsity structure of interest. Moreover, in the context of piecewise smooth function approximation, we numerically demonstrate that the structure-promoting decoders outperform their unstructured variants and the basis pursuit program when the encoder is structure agnostic.
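A minimal sketch of the structure-promoting idea, assuming a simple two-level model: iterative hard thresholding in which the thresholding operator keeps a prescribed number of largest entries within each level. The level boundaries, sparsities, and matrix below are illustrative, not the setup used in the paper.

import numpy as np

def threshold_in_levels(x, levels, sparsities):
    """Keep the s_l largest-magnitude entries within each level (index range)."""
    out = np.zeros_like(x)
    for (start, stop), s in zip(levels, sparsities):
        block = x[start:stop]
        if s > 0:
            keep = np.argsort(np.abs(block))[-s:]
            out[start:stop][keep] = block[keep]
    return out

def iht_in_levels(A, y, levels, sparsities, n_iter=500):
    """Iterative hard thresholding with a level-wise thresholding operator."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = threshold_in_levels(x + step * A.T @ (y - A @ x), levels, sparsities)
    return x

# Toy usage: two levels with different local sparsities.
rng = np.random.default_rng(2)
m, n = 80, 160
levels = [(0, 80), (80, 160)]
sparsities = [10, 3]
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(80, 10, replace=False)] = rng.standard_normal(10)
x_true[80 + rng.choice(80, 3, replace=False)] = rng.standard_normal(3)
y = A @ x_true
print(np.linalg.norm(iht_in_levels(A, y, levels, sparsities) - x_true))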
Radio astronomical image restoration with shape constraint
Weak gravitational lensing is a very promising probe for cosmology that relies on highly precise shape measurements. Several new instruments are being deployed and will allow for weak lensing studies on unprecedented scales, and at new frequencies. In particular, some of these new instruments should allow for the blooming of radio weak lensing, especially the SKA, with many petabits per second of raw data [1]. Hence, great challenges lie ahead. In addition, processing methods should be able to extract the highest precision possible and, ideally, be applicable to radio astronomy. For the moment, the two methods that already exist do not satisfy both conditions. In this paper, we present a new plug-and-play solution in which we add a shape constraint to deconvolution algorithms; results show an improvement in shape measurements of at least 20%.
Wavelets on Graphs
Tight framelets on graphs for multiscale data analysis
Yu Guang Wang, Xiaosheng Zhuang
In this paper, we discuss the construction and applications of decimated tight framelets on graphs. Based on graph clustering algorithms, a coarse-grained chain of graphs can be constructed, from which a suitable orthonormal eigenpair can be deduced. Decimated tight framelets can then be constructed based on this orthonormal eigenpair. Moreover, such tight framelets are associated with filter banks with which fast framelet transform algorithms can be realized. An explicit toy example of decimated tight framelets on a graph is provided.
The extended generalized Haar-Walsh transform and applications
Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is very important, and much research has been done recently in this direction. Our previous Generalized Haar-Walsh Transform (GHWT) is a multiscale transform for signals on graphs, which is a generalization of the classical Haar and Walsh-Hadamard Transforms. This article proposes the extended Generalized Haar-Walsh Transform (eGHWT). The eGHWT and its associated best-basis selection algorithm for graph signals significantly improve the performance of the previous GHWT at a similar computational cost of O(N log N), where N is the number of nodes of an input graph. While the previous GHWT/best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than (1.5)^N possible bases, the eGHWT/best-basis algorithm can find a better one by searching through more than 0.618 · (1.84)^N possible bases. This article describes the details of the eGHWT/best-basis algorithm and demonstrates its superiority using several examples, including genuine graph signals as well as conventional digital images viewed as graph signals.
Slepian guided filtering of graph signals
Joint localization of graph signals in the vertex and spectral domains is achieved by Slepian vectors, calculated by either maximizing energy concentration or minimizing the modified embedded distance (ξ) in the subgraph of interest. On the other hand, the graph Laplacian is extensively used in graph signal processing, as it defines the graph Fourier transform (GFT) and operators such as filtering, wavelets, etc. In the context of modeling the human brain as a graph, low-pass (smooth over neighboring nodes) filtered graph signals represent a valuable source of information known as aligned signals. Here, we propose to define the GFT and graph filtering using a Slepian orthogonal basis. We explored power spectrum density estimates of random signals on Erdős-Rényi graphs and determined local discrepancies in signal behavior which cannot be accessed by the graph Laplacian, but are detected by the Slepian basis. This motivated the application of Slepian guided graph signal filtering in neuroimaging. We built a graph from diffusion-weighted brain imaging data and used blood-oxygenation-level-dependent (BOLD) time series as graph signals residing on its nodes. The dataset included recordings of 21 subjects performing a working memory task. In certain brain regions known to exhibit activity negatively correlated with performing the task, the only method capable of identifying this type of behavior in the bandlimited framework was ξ-Slepian guided filtering. The localization property of the proposed approach provides a significant contribution to the strength of graph spectral analysis, as it allows the inclusion of a priori knowledge of the explored graph's mesoscale structure.
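A minimal sketch of the energy-concentration variant of graph Slepians mentioned above: bandlimit to the lowest-frequency Laplacian eigenvectors and diagonalize the concentration operator restricted to the subgraph of interest. The graph and parameters are toy choices; the ξ-based criterion and the neuroimaging pipeline are not reproduced.

import numpy as np

def graph_slepians(L, subset, K):
    """Energy-concentration Slepian vectors: bandlimited to the first K Laplacian
    eigenvectors, with energy maximally concentrated on the given node subset."""
    eigval, U = np.linalg.eigh(L)           # graph Fourier basis (ascending eigenvalues)
    U_K = U[:, :K]                          # bandlimiting to the K lowest frequencies
    S = np.zeros(L.shape[0]); S[subset] = 1.0
    C = U_K.T @ (S[:, None] * U_K)          # concentration operator in the bandlimited space
    mu, V = np.linalg.eigh(C)               # mu in [0, 1]: energy fraction inside the subset
    slepians = U_K @ V[:, ::-1]             # columns ordered by decreasing concentration
    return slepians, mu[::-1]

# Toy usage: path graph, concentrate on its first half.
n = 20
A = np.diag(np.ones(n - 1), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
slep, mu = graph_slepians(L, subset=np.arange(10), K=8)
print(mu[:3])    # leading energy-concentration values (between 0 and 1)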
Neural Networks and Sparse Representations II
The structure of spaces of neural network functions
We analyse spaces of deep neural networks with a fixed architecture. We demonstrate that, when interpreted as a set of functions, spaces of neural networks exhibit many unfavourable properties: They are highly non-convex and not closed with respect to Lp-norms, for 0 < p < ∞ and all commonly-used activation functions. They are not closed with respect to the L∞-norm for almost all practically-used activation functions; here, the (parametric) ReLU is the only exception. Finally, we show that the function that maps a family of neural network weights to the associated functional representation of a network is not inverse stable for every practically-used activation function.
Compactly supported frame wavelets and applications in convolutional neural networks
Nikolaos Karantzas, Kazem Safari, Mozahid Haque, et al.
In this paper, we use the ideas presented in [1] to construct application-targeted convolutional neural network architectures (CNN). Specifically, we design frame filter banks consisting of sparse kernels with custom-selected orientations that can act as finite-difference operators. We then use these filter banks as the building blocks of structured receptive field CNNs [2] to compare baseline models with more application-oriented methods. Our tests are done on Google's Quick, Draw! data set.
Wavelet/shearlet hybridized neural networks for biomedical image restoration
Bart Goossens, Hiêp Luong, Wilfried Philips
Recently, new programming paradigms have emerged that combine parallelism and numerical computations with algorithmic differentiation. This approach allows for the hybridization of neural network techniques for inverse imaging problems with more traditional methods such as wavelet-based sparsity modelling techniques. The benefits are twofold: on the one hand, traditional methods with well-known properties can be integrated in neural networks, either as separate layers or tightly integrated in the network; on the other hand, parameters in traditional methods can be trained end-to-end from datasets in a neural network "fashion" (e.g., using Adagrad or Adam optimizers). In this paper, we explore these hybrid neural networks in the context of shearlet-based regularization for the purpose of biomedical image restoration. Due to the reduced number of parameters, this approach seems a promising strategy, especially when dealing with small training data sets.
Ultra-high-order ICA: an exploration of highly resolved data-driven representation of intrinsic connectivity networks (sparse ICNs)
Armin Iraji, Ashkan Faghiri, Noah Lewis, et al.
Spatial independent component analysis (sICA) has become an integral part of functional MRI (fMRI) studies, particularly with resting-state fMRI. Early work used low-order ICA with between 20 and 45 components, which led to the identification of around a dozen reproducible, distributed, large-scale brain networks. While regions within each large-scale network are fairly temporally coherent, later studies have shown that each distributed network can be split into a group of spatially granular, temporally covarying functional parcels. Thus, higher model-order ICAs (75–150 components) have been employed to identify functional units known as intrinsic connectivity networks (ICNs). Our recent work suggests that an ICA framework can identify even more granular and functionally homogeneous brain functional units, and has the potential to provide more precise estimates of ICNs. In this study, we adopted an ICA with 1000 components (1k-ICA) to parcellate the brain into fine-grained, sparse but overlapping ICNs and evaluated their properties and reliability in various ways. Our findings show that ultra-high-order ICA approaches like 1k-ICA can provide reliable, spatially sparse ICNs.
Time-Dependent Graph Signals and Dynamic Graphs I
Diffusion source detection in a network using partial observations
Diffusive phenomena are ubiquitous in nature and society, and have been extensively studied in various fields, such as the natural sciences and engineering. Recently, however, the more challenging inverse problem of diffusion source detection in a network has started to receive a significant amount of attention. A lot of research has concentrated on finding origins in tree-like networks; however, these approaches cannot be easily extended to generic networks. Furthermore, only some methods consider realistic temporal diffusion dynamics. We introduce a novel method to localise the source of multiple rumours in an arbitrary network of known topology, using partial observations of the network nodes. We first present two mathematical models of the discrete-time susceptible-infected propagation dynamics, which accurately capture the diffusion process and have low computational complexity. The first is a simplified likelihood of infection at a node, at a certain time after the rumour is initiated. The second is a formulation of the infection likelihood of a node as a function of its shortest distance to the source. We then design an efficient single-source detection algorithm, which leverages these mathematical models of diffusion and the assumption that the start time of the propagation is known. Finally, we show how these methods can be extended to the case when the start time of the rumour is unknown, by taking advantage of the dissimilarity in infection dynamics of different nodes in the network. Simulation results show that a high source estimation probability is achieved using a small number of observations.
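A minimal sketch of the distance-based scoring idea with a known start time: each candidate source is scored by how well hop distances to the partially observed infected nodes match their observed infection times (one hop per time step in a toy deterministic spread). This stand-in uses networkx and is not the authors' likelihood model.

import networkx as nx
import numpy as np

def score_sources(G, observations, t0):
    """observations: dict node -> observed infection time. Score each candidate source
    by the mismatch between hop distance and elapsed time since the start time t0."""
    scores = {}
    for s in G.nodes:
        dist = nx.single_source_shortest_path_length(G, s)
        err = [abs(dist.get(v, np.inf) - (t - t0)) for v, t in observations.items()]
        scores[s] = float(np.sum(err))
    return scores

# Toy usage: simulate a deterministic spread from node 0 on a random graph,
# observe a few nodes, and rank candidate sources.
G = nx.erdos_renyi_graph(50, 0.08, seed=3)
true_source, t0 = 0, 0
dist_from_true = nx.single_source_shortest_path_length(G, true_source)
observed = {v: t0 + d for v, d in list(dist_from_true.items())[:10]}   # partial observations
scores = score_sources(G, observed, t0)
print(min(scores, key=scores.get))   # best-scoring candidate (ideally the true source)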
Sparse tensor dimensionality reduction with application to clustering of functional connectivity
Gaëtan Frusque, Julien Jung, Pierre Borgnat, et al.
Functional connectivity (FC) is a graph-like data structure commonly used by neuroscientists to study the dynamic behaviour of brain activity. However, these analyses rapidly become complex and time-consuming. In this work, we present complementary empirical results on two previously proposed tensor decompositions, named modified High Order Orthogonal Iteration (mHOOI) and High Order sparse Singular Value Decomposition (HOsSVD). These decompositions, associated with k-means, were shown to be useful for the study of multi-trial functional connectivity datasets.
Applications of Frames and Transforms in Neural Networks
Structured receptive field networks and applications to hyperspectral image classification
Deep neural networks have achieved impressive performance in problems of object detection and object category classification. To perform efficiently, though, such methods typically require a large number of training samples. Unfortunately, this requirement is highly impractical or impossible in applications such as hyperspectral classification, where it is expensive and labor intensive to generate labeled data for training. A few ideas have been proposed in the literature to address this problem, such as transfer learning and domain adaptation. In this work, we propose an alternative strategy to reduce the number of network parameters based on Structured Receptive Field Networks (SRFN), a class of convolutional neural networks (CNNs) where each convolutional filter is a linear combination of elements from a predefined dictionary. To better exploit the characteristics of the hyperspectral data to be learned, we choose a filter dictionary consisting of directional filters inspired by the theory of shearlets, and we train an SRFN by imposing that the convolutional filters form sparse linear combinations in this dictionary. The application of our SRFN to problems of hyperspectral classification shows that this approach achieves very competitive performance compared to conventional CNNs.
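A minimal sketch of the SRFN idea, assuming a tiny hand-made dictionary of directional finite-difference kernels in place of the shearlet-inspired dictionary: each filter is a sparse linear combination of dictionary atoms, and only the combination coefficients would be trainable.

import numpy as np
from scipy.signal import convolve2d

# A small, hypothetical dictionary of 3x3 directional finite-difference kernels.
dictionary = np.stack([
    np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float),    # horizontal difference
    np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], float),    # vertical difference
    np.array([[-1, 0, 0], [0, 0, 0], [0, 0, 1]], float),    # diagonal difference
    np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]], float),    # anti-diagonal difference
])

def structured_filters(coeffs):
    """Build convolutional filters as linear combinations of the dictionary.
    coeffs: (n_filters, n_atoms); only these coefficients are learned."""
    return np.tensordot(coeffs, dictionary, axes=([1], [0]))   # (n_filters, 3, 3)

def conv_layer(image, coeffs):
    filters = structured_filters(coeffs)
    return np.stack([convolve2d(image, f, mode="same") for f in filters])

# Toy usage: sparse coefficients select only two dictionary atoms per filter.
rng = np.random.default_rng(4)
coeffs = np.zeros((6, 4))
for i in range(6):
    coeffs[i, rng.choice(4, 2, replace=False)] = rng.standard_normal(2)
image = rng.random((32, 32))
print(conv_layer(image, coeffs).shape)   # (6, 32, 32) feature maps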
Geometric wavelet scattering on graphs and manifolds
Feng Gao, Matthew Hirn, Michael Perlmutter, et al.
Convolutional neural networks (CNNs) are revolutionizing imaging science for two- and three-dimensional images over Euclidean domains. However, many data sets are intrinsically non-Euclidean and are better modeled through other mathematical structures, such as graphs or manifolds. This state of affairs has led to the development of geometric deep learning, which refers to a body of research that aims to translate the principles of CNNs to these non-Euclidean structures. In the process, various challenges have arisen, including how to define such geometric networks, how to compute and train them efficiently, and what their mathematical properties are.

In this letter we describe the geometric wavelet scattering transform, which is a type of geometric CNN for graphs and manifolds consisting of alternating multiscale geometric wavelet transforms and nonlinear activation functions. As the name suggests, the geometric wavelet scattering transform is an adaptation of the Euclidean wavelet scattering transform, first introduced by S. Mallat, to graph and manifold data. Like its Euclidean counterpart, the geometric wavelet scattering transform has several desirable properties. In the manifold setting these properties include isometric invariance up to a user specified scale and stability to small diffeomorphisms. Numerical results on manifold and graph data sets, including graph and manifold classification tasks as well as others, illustrate the practical utility of the approach.
Compressed sensing and generative models
Eric Price
The goal of compressed sensing is to make use of image structure to estimate an image from a small number of linear measurements. The structure is typically represented by sparsity in a well-chosen basis. We describe how to achieve guarantees similar to standard compressed sensing, but without employing sparsity at all; instead, we suppose that vectors lie near the range of a generative model G: R^k → R^n. Our main theorem here is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice; this is O(kd log n) for typical d-layer neural networks. The above result describes how to use a model to recover a signal from noisy data. But if the data is noisy, how can we learn the generative model in the first place? This paper describes how to incorporate the measurement process in generative adversarial network (GAN) training. Even if the noisy data does not uniquely identify the non-noisy signal, the distribution of noisy data may still uniquely identify the distribution of non-noisy signals. In presenting the above results we summarize and synthesize the work of [BJPD17] and [BPD18]. We then add some observations on the limitations of these approaches.
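A minimal sketch of the recovery step under these assumptions: with a fixed generator G (here a tiny random two-layer ReLU network standing in for a trained model), estimate the latent code by descending ||A G(z) - y||^2. Finite-difference gradients keep the toy self-contained; this is not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(5)
k, h, n, m = 5, 40, 100, 30             # latent, hidden, signal and measurement dimensions
W1 = rng.standard_normal((h, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, h)) / np.sqrt(h)

def G(z):
    """Toy generator standing in for a trained model: two-layer ReLU network."""
    return W2 @ np.maximum(W1 @ z, 0.0)

def loss(A, y, z):
    return np.sum((A @ G(z) - y) ** 2)

def recover(A, y, n_iter=4000, lr=5e-4):
    """Estimate z by gradient descent on ||A G(z) - y||^2 (finite-difference
    gradients here; an autodiff framework would be used in practice)."""
    z = np.zeros(k)
    for _ in range(n_iter):
        base = loss(A, y, z)
        grad = np.zeros(k)
        for i in range(k):
            dz = np.zeros(k); dz[i] = 1e-5
            grad[i] = (loss(A, y, z + dz) - base) / 1e-5
        z -= lr * grad
    return z

z_true = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ G(z_true) + 0.01 * rng.standard_normal(m)
z_hat = recover(A, y)
print(loss(A, y, np.zeros(k)), "->", loss(A, y, z_hat))   # data fit before vs. after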
Experimental performance of graph neural networks on random instances of max-cut
Weichi Yao, Afonso S. Bandeira, Soledad Villar
This note explores the applicability of unsupervised machine learning techniques to hard optimization problems on random inputs. In particular we consider Graph Neural Networks (GNNs), a class of neural networks designed to learn functions on graphs, and we apply them to the max-cut problem on random regular graphs. We focus on the max-cut problem on random regular graphs because it is a fundamental problem that has been widely studied. In particular, even though there is no known explicit solution to compare the output of our algorithm against, we can leverage the known asymptotics of the optimal max-cut value in order to evaluate the performance of the GNNs. To put the performance of the GNNs in context, we compare it with the classical semidefinite relaxation approach of Goemans and Williamson (SDP), and with extremal optimization, a local optimization heuristic from the statistical physics literature. The numerical results we obtain indicate that, surprisingly, Graph Neural Networks attain comparable performance to the Goemans-Williamson SDP. We also observe that extremal optimization consistently outperforms the other two methods. Furthermore, the performance of the three methods exhibits similar patterns: for sparser and for larger graphs, the sizes of the cuts found are closer to the asymptotic optimal max-cut value.
Computational Photonic Imaging
On fast object detection using single-photon lidar data
Julian Tachella, Yoann Altmann, Stephen McLaughlin, et al.
Light detection and ranging (Lidar) systems based on single-photon detection can be used to obtain range and reflectivity information from 3D scenes with high range resolution. However, reconstructing the 3D surfaces from the raw single-photon waveforms is challenging, in particular when a limited number of photons is detected and when the ratio of spurious background detection events is large. This paper reviews a set of fast detection algorithms, which can be used to assess the presence of objects/surfaces in each waveform, allowing only the histograms where the imaged surfaces are present to be further processed. The original method we recently proposed is extended here using a multiscale approach to further reduce the computational complexity of the detection process. The proposed methods are compared to state-of-the-art 3D reconstruction methods using synthetic and real single-photon data and the results illustrate their benefits for fast and robust target detection.
Passive indirect diffuse imaging
A passively illuminated scene presents a variety of photon pathways: direct and indirect, which convey varying levels of information about the scene across different dimensions of the light field. In indirect passive imaging, the object of interest is occluded from the imager which has no control over illumination. Using a second-order (non-linear) image formation model we demonstrate (experimentally) the feasibility of passive indirect diffuse imaging.
Lifting the veil: enhancing images in turbid aqueous environments
Sanat Upadhyay, Manos Papadakis
Water turbidity is a frequent impediment to achieving satisfactory imaging clarity in underwater video and inhibits the extraction of information concerning the condition of submerged structures. Ports, rivers, lakes and inland waterways are notoriously difficult spots for camera inspections due to poor visibility. This problem motivated us to study methods to extract a cleaner image/video from the one acquired in an almost real-time setting (delay of the order of 6-7 secs). This type of problem arises in image post-processing as an illumination neutralization problem, and it can also be viewed as a blind deconvolution problem. We present a mathematical method which enables the derivation of a cleaner image from a poor-visibility original by means of a combination of linear and non-linear deterministic mathematical transformations for illumination neutralization. Moreover, we propose a new stochastic model for the effects of water turbidity in sub-aquatic video and still images. In the light of this model, illumination neutralization does not offer the full solution to this dehazing problem.
Occlusion-based computational periscopy with consumer cameras
John Murray-Bruce, Charles Saunders, Vivek K. Goyal
The ability to form images of scenes hidden from direct view would be advantageous in many applications – from improved motion planning and collision avoidance in autonomous navigation to enhanced danger anticipation for first-responders in search-and-rescue missions. Recent techniques for imaging around corners have mostly relied on time-of-flight measurements of light propagation, necessitating the use of expensive, specialized optical systems. In this work, we demonstrate how to form images of hidden scenes from intensity-only measurements of the light reaching a visible surface from the hidden scene. Our approach exploits the penumbra cast by an opaque occluding object onto a visible surface. Specifically, we present a physical model that relates the measured photograph to the radiosity of the hidden scene and the visibility function due to the opaque occluder. For a given scene–occluder setup, we characterize the parts of the hidden region for which the physical model is well-conditioned for inversion – i.e., the computational field of view (CFOV) of the imaging system. This concept of CFOV is further verified through the Cramér–Rao bound of the hidden-scene estimation problem. Finally, we present a two-step computational method for recovering the occluder and the scene behind it. We demonstrate the effectiveness of the proposed method using both synthetic and experimentally measured data.
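A minimal 1-D sketch of the underlying linear model: the photograph equals an occluder-modulated transport matrix applied to the hidden-scene radiosity, and a Tikhonov-regularized least-squares solve inverts it. The geometry, occluder pattern, and regularization below are toy placeholders, not the paper's physical model or its two-step method.

import numpy as np

rng = np.random.default_rng(6)
n_scene, n_pixels = 40, 60

# Toy transport matrix: smooth light transport modulated by a binary visibility
# pattern cast by a hypothetical opaque occluder (the penumbra structure).
scene_pos = np.linspace(0, 1, n_scene)
pixel_pos = np.linspace(0, 1, n_pixels)
transport = 1.0 / (1.0 + 5.0 * np.abs(pixel_pos[:, None] - scene_pos[None, :]))
visibility = (np.add.outer(pixel_pos, scene_pos) % 0.37 > 0.12).astype(float)
A = transport * visibility

x_true = np.clip(rng.standard_normal(n_scene), 0, None)     # nonnegative radiosity
y = A @ x_true + 0.001 * rng.standard_normal(n_pixels)       # measured photograph

# Tikhonov-regularized least squares; how well-conditioned this is depends on the
# occluder, which is what the computational field of view analysis quantifies.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))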
Time-Dependent Graph Signals and Dynamic Graphs II
Estimation of time-series on graphs using Bayesian graph convolutional neural networks
Fatemeh Teimury, Soumyasundar Pal, Arezou Amini, et al.
There has recently been an explosion of interest in graph neural networks, which extend the application of neural networks to data recorded on graphs. Most of the work has focused on static tasks, where a feature vector is available at each node in a graph, often with an associated label, and the goal is to perform node classification or regression, graph classification, or link prediction. Some work has recently emerged addressing the processing of sequences of data (time-series) on graphs using methods based on neural networks; most strategies involve combining Long Short-Term Memory units (LSTMs) or Gated Recurrent Units (GRUs) with graph convolution. However, most of these methods treat the observed graph as the ground truth, thereby failing to account for the uncertainty associated with the graph structure in the learning task. As a remedy to this issue, in the recently proposed Bayesian graph convolutional neural networks, the provided graph is treated as a noisy observation of a true underlying graph or a realization of a random graph model. We can thus model uncertainty in the identification of relationships between nodes in the graph. We specify a joint posterior probability on the graph and the weights of the neural network and then perform inference through a combination of variational inference and Monte Carlo sampling. In this paper, we extend the Bayesian framework to address a regression task for time-series on graphs.
Comparing linear structure-based and data-driven latent spatial representations for sequence prediction
Myriam Bontonou, Carlos Lassance, Vincent Gripon, et al.
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem as it requires to account jointly for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.
Time-resolved analysis of dynamic graphs: an extended Slepian design
Raphaël Liégeois, Ibrahim Merad, Dimitri Van De Ville
Graphs are extensively used to represent networked data. In many applications, especially when considering large datasets, it is desirable to focus the analysis onto specific subgraphs of interest. Slepian theory and its extension to graphs make this possible and have recently been applied to analyze various types of networks. One limitation of this framework, however, is that the number of subgraphs of interest is typically limited to one. We introduce an extended Slepian design that allows an arbitrary number of subgraphs of interest to be considered. This extension offers the possibility to encode prior information about multiple subgraphs in a two-dimensional plane. As a proof of concept and potential application, we demonstrate that this framework enables time-resolved and spatio-temporal analyses of dynamic graphs.
Wavelet-based graph inference using multiple testing
Sophie Achard, Pierre Borgnat, Irène Gannaz, et al.
Graph-based representations make it possible to efficiently outline interactions between sensors and, as such, have attracted growing interest. For example, in neuroscience, the graph of interactions between brain regions has shed light on the evolution of diseases. In this paper, we describe a complete procedure that estimates the graph from multivariate time series. First, correlations are estimated using a wavelet decomposition of the signals. Bonferroni's (1935) procedure for multiple correlation testing is then used. We prove theoretically that the Family Wise Error Rate (FWER) is asymptotically controlled for any graph structure. We implement our approach on small-world graph structures, with signals possibly having long-memory properties. This structure is inspired by real data examples from resting-state functional magnetic resonance imaging. The control is confirmed graphically. Numerical simulations illustrate the behavior of the bias and the power of our proposed approach.
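A minimal sketch of the final inference step, assuming plain Pearson correlations on toy series in place of the wavelet-domain correlations: Fisher-transformed correlation statistics are thresholded at a Bonferroni-corrected level to produce the graph's adjacency matrix.

import numpy as np
from scipy.stats import norm

def correlation_graph(X, alpha=0.05):
    """X: (n_series, n_samples). Return the adjacency matrix of the graph whose edges
    are correlations significant after Bonferroni correction over all pairs."""
    n, T = X.shape
    R = np.corrcoef(X)
    z = np.arctanh(np.clip(R, -0.999999, 0.999999)) * np.sqrt(T - 3)  # Fisher z statistic
    pvals = 2 * (1 - norm.cdf(np.abs(z)))
    n_tests = n * (n - 1) / 2
    A = (pvals < alpha / n_tests).astype(int)
    np.fill_diagonal(A, 0)
    return A

# Toy usage: two correlated series among otherwise independent noise.
rng = np.random.default_rng(7)
X = rng.standard_normal((10, 500))
X[1] = 0.8 * X[0] + 0.6 * rng.standard_normal(500)
print(correlation_graph(X).sum() // 2)   # number of detected edges (ideally 1)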
Applications in Bio-Imaging
Generalized temporal sampling with active illumination in optical microscopy
Generalized sampling is a flexible framework for signal acquisition, which relaxes the need for ideal pre-filters. Nevertheless, implementation remains challenging for dynamic imaging applications because it requires simultaneously measuring multiple overlapping inner-products and because only positive signals (intensities) can be measured by cameras. We present a method to collect videos of monochromatic objects by projecting the incoming signal at each pixel onto a temporal B-spline space of degree 0, 1, or 2, using a conventional RGB camera and a modulated three-color light source for illumination. Specifically, we solve the basis function overlap problem by multiplexing the acquisition in different color ranges and use B-spline pieces (which are positive) as projection kernels of a biorthogonal projection-expansion basis pair. The steps to recover signal samples include spectral unmixing and inverse filtering. Reconstructions we obtained from simulated and experimentally-acquired microscopy data demonstrate the feasibility of our approach.
Inverse Problems in MRI
Learning-based computational MRI reconstruction without big data: from linear interpolation and structured low-rank matrices to recurrent neural networks
We present a brief overview of computational image reconstruction methods that assume that Magnetic Resonance Imaging (MRI) data possesses shift-invariant autoregressive characteristics, where the unique autoregressive structure of each dataset is learned from a small amount of scan-specific calibration data. Our discussion focuses particular attention on a method we recently introduced named LORAKI. LORAKI is a learning-based image reconstruction method that relies on scan-specific nonlinear autoregressive modeling using a recurrent convolutional neural network, and has demonstrated better performance than previous approaches. As a novel contribution, we also describe and evaluate an extension of LORAKI that makes simultaneous use of support, phase, parallel imaging, and sparsity constraints, where the balance between these different constraints is automatically determined through the training procedure. Results with real data demonstrate that this modification leads to further performance improvements.
Highly efficient MRI through multi-shot echo planar imaging
Congyu Liao, Xiaozhi Cao, Jaejin Cho, et al.
Multi-shot echo planar imaging (msEPI) is a promising approach to achieve high in-plane resolution with high sampling efficiency and low T2* blurring. However, due to the geometric distortion, shot-to-shot phase variations and potential subject motion, msEPI continues to be a challenge in MRI. In this work, we introduce acquisition and reconstruction strategies for robust, high-quality msEPI without phase navigators. We propose Blip Up-Down Acquisition (BUDA) using interleaved blip-up and -down phase encoding, and incorporate B0 forward-modeling into Hankel structured low-rank model to enable distortion- and navigator-free msEPI. We improve the acquisition efficiency and reconstruction quality by incorporating simultaneous multi-slice acquisition and virtual-coil reconstruction into the BUDA technique. We further combine BUDA with the novel RF-encoded gSlider acquisition, dubbed “BUDA-gSlider”, to achieve rapid high isotropic resolution MRI. Deploying BUDA-gSlider with model-based reconstruction allows for distortion-free whole-brain 1mm isotropic T2 mapping in ~1 minute. It also provides whole-brain 1mm isotropic diffusion imaging with high geometric fidelity and SNR efficiency. We finally incorporate sinusoidal “wave” gradients during the EPI readout to better use coil sensitivity encoding with controlled aliasing.
Online MR image reconstruction for compressed sensing acquisition in T2* imaging
Loubna El Gueddari, Emilie Chouzenoux, Alexandre Vignaud, et al.
Reducing acquisition time is a major challenge in high-resolution MRI that has been successfully addressed by Compressed Sensing (CS) theory. While the scan time has been massively accelerated, by a factor of up to 20 in 2D imaging, the complexity of image recovery algorithms has strongly increased, resulting in slower reconstruction processes. In this work we propose an online approach to shorten image reconstruction times in the CS setting. We leverage the segmented acquisition of k-space data in multiple shots to interleave the MR acquisition and image reconstruction steps. This approach is particularly appealing for 2D high-resolution T2*-weighted anatomical imaging, as the largest timing interval (i.e., repetition time, TR) between consecutive shots arises for this kind of imaging. During the scan, acquired shots are stacked together to form mini-batches, and image reconstruction may start from incomplete data. For each newly available mini-batch, the previous partial solution is used as a warm restart for the next sub-problem, to be solved in a timing window compatible with the given TR and the number of shots stacked in a mini-batch. We demonstrate the interest and time savings of using online MR image reconstruction for Cartesian and non-Cartesian sampling strategies combined with a single receiver coil. Next, we extend the online formalism to address the more general multi-receiver phased array acquisition scenario. In this setting, calibrationless image reconstruction is adopted to remain compatible with the timing constraints of online delivery. Our retrospective and prospective results on ex-vivo 2D T2*-weighted brain imaging show that high-quality MR images are recovered by the end of acquisition for single receiver acquisition and that additional iterations are required when parallel imaging is adopted. Overall, our approach, implemented through the Gadgetron framework, may be compatible with the data workflow on the scanner to provide the physician with reliable MR images for diagnostic purposes.
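A minimal sketch of the online idea in a toy Cartesian setting: k-space lines arrive in mini-batches of shots, and after each batch a few iterations of a sparsity-regularized reconstruction are run, warm-started from the previous partial solution. The sampling pattern, the image-domain l1 prior, and the ISTA solver are stand-ins for the actual acquisition and algorithms used in the paper.

import numpy as np

def soft(x, tau):
    """Complex soft-thresholding (proximal operator of tau*||.||_1)."""
    mag = np.abs(x)
    return x * np.maximum(1.0 - tau / np.maximum(mag, 1e-12), 0.0)

def ista_iters(img, mask, kspace, tau, n_iter):
    """A few ISTA iterations for 0.5*||M F img - kspace||^2 + tau*||img||_1
    (an image-domain l1 prior is used here as a toy sparsifying model)."""
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(img, norm="ortho") - kspace
        img = soft(img - np.fft.ifft2(mask * resid, norm="ortho"), tau)
    return img

# Toy object and its fully sampled k-space.
N = 64
obj = np.zeros((N, N)); obj[20:44, 20:44] = 1.0; obj[28:36, 28:36] = 2.0
full_k = np.fft.fft2(obj, norm="ortho")

# Shots (k-space lines) arrive in mini-batches; reconstruct online, warm-starting
# each solve from the previous partial solution as new lines extend the mask.
rng = np.random.default_rng(8)
lines = rng.permutation(N)[: N // 2]           # 2x Cartesian undersampling
mask = np.zeros((N, N)); img = np.zeros((N, N), dtype=complex)
for batch in np.array_split(lines, 8):
    mask[batch, :] = 1.0
    img = ista_iters(img, mask, mask * full_k, tau=0.01, n_iter=20)
print(np.linalg.norm(np.abs(img) - obj) / np.linalg.norm(obj))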
Inverse GANs for accelerated MRI reconstruction
Dominik Narnhofer, Kerstin Hammernik, Florian Knoll, et al.
State-of-the-art algorithms for accelerated magnetic resonance imaging (MRI) reconstruction are nowadays dominated by deep learning-based techniques. However, the majority of these methods require the respective sampling patterns during training, which limits their application to a specific problem class. We propose an iterative reconstruction approach that incorporates the implicit prior provided by a generative adversarial network (GAN), which learns the probability distribution of uncorrupted MRI data in an off-line step. Since the unsupervised training of the GAN is completely independent of the measurement process, our method is in principle able to address multiple sampling modalities using a single pre-trained model. However, it turns out that the desired target images potentially lie outside the range space of the learned GAN, leading to reconstructions that resemble the target images only at a coarse level of detail. To overcome this issue, we propose a refinement scheme termed GAN prior adaptation, which allows for additional adaptation of the generating network with respect to the measured data. The proposed method is evaluated on multi-coil knee MRI data for different acceleration factors and compared to a classical as well as a deep learning-based approach, showing promising quantitative and qualitative results.
sRAKI-RNN: accelerated MRI with scan-specific recurrent neural networks using densely connected blocks
Seyed Amir Hossein Hosseini, Chi Zhang, Kâmil Uǧurbil, et al.
This study aims to improve upon Self-consistent Robust Artificial-neural-networks for k-space Interpolation (sRAKI), which is a deep learning-based parallel imaging technique for accelerated MRI reconstruction. The proposed technique, called sRAKI-RNN, combines the calibration and reconstruction phases of sRAKI into a single step that jointly learns the self-consistency rule and performs iterative reconstruction using recurrent neural networks (RNN). Similar to sRAKI, sRAKI-RNN supports arbitrary undersampling patterns and is a database-free technique that is trained on autocalibrating signal (ACS) data from the same scan. Densely connected blocks are used in each iteration of the RNN to improve the convergence during the learning phase. sRAKI-RNN was evaluated on targeted right coronary artery (RCA) MRI. The results indicate that sRAKI-RNN further improves the noise resilience of sRAKI in a shorter running time and also considerably outperforms its linear counterpart, SPIRiT, in suppressing reconstruction noise.
Parallel magnetic resonance imaging reconstruction algorithm by three-dimension directional Haar tight framelet regularization
Yan-Ran Li, Xiaosheng Zhuang
In this paper, a 3-dimensional directional Haar tight framelet (3DHF) is used to detect related features between coil images in parallel magnetic resonance imaging (pMRI). This Haar tight framelet has an extremely simple geometric structure, in the sense that all the high-pass filters in its underlying filter bank have only two nonzero coefficients with opposite signs. A pMRI optimization model, which we call 3DHF-SPIRiT, is proposed that regularizes the 3DHF features of the 3-D coil image data in order to reduce the aliasing artifacts caused by the downsampling operation in the k-space (Fourier) domain; it can be solved by an alternating direction method of multipliers (ADMM) scheme. Numerical experiments are provided to demonstrate the superiority and efficiency of our 3DHF-SPIRiT model.
Optimal Frames and Subspace Packings
An infinite family of two-distance tight frames
Nicholas P. Brown, John Jasper
The relationship between equiangular tight frames and strongly regular graphs has been known for several years. This relationship has been exploited to construct many of the latest examples of new strongly regular graphs. Recently it was shown that there is a similar relationship between two-distance tight frames and strongly regular graphs. In this paper we present a new tensor-like construction of two-distance tight frames, and hence a family of strongly regular graphs. While graphs with these parameters were known to exist, this new construction is very simple, requiring only the existence of an affine plane, whereas the original constructions often require more complicated objects such as generalized quadrangles.
Game of Sloanes: best known packings in complex projective space
John Jasper, Emily J. King, Dustin G. Mixon
It is often of interest to identify a given number of points in projective space such that the minimum distance between any two points is as large as possible. Such configurations yield representations of data that are optimally robust to noise and erasures. The minimum distance of an optimal configuration not only depends on the number of points and the dimension of the projective space, but also on whether the space is real or complex. For decades, Neil Sloane’s online Table of Grassmannian Packings has been the go-to resource for putatively or provably optimal packings of points in real projective spaces. Using a variety of numerical algorithms, we have created a similar table for complex projective spaces. This paper surveys the relevant literature, explains some of the methods used to generate the table, presents some new putatively optimal packings, and invites the reader to competitively contribute improvements to this table.
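A minimal sketch of how candidate packings are scored: the figure of merit is the coherence (the largest pairwise absolute inner product among unit vectors), which a good packing drives down toward lower bounds such as the Welch bound; the random packing below is only a placeholder to exercise the computation.

import numpy as np

def coherence(V):
    """V: (d, n) matrix of unit-norm columns. Return max_{i<j} |<v_i, v_j>|."""
    G = V.conj().T @ V
    np.fill_diagonal(G, 0)
    return np.max(np.abs(G))

def welch_bound(d, n):
    """Lower bound on the coherence of n unit vectors in C^d (attained by ETFs)."""
    return np.sqrt((n - d) / (d * (n - 1)))

# Toy usage: score a random complex packing of n = 8 points with d = 4.
rng = np.random.default_rng(9)
d, n = 4, 8
V = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
V /= np.linalg.norm(V, axis=0)
print(coherence(V), welch_bound(d, n))   # a good packing drives coherence toward the bound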
Connections between structured tight frames and sum-of-squares optimization
Afonso S. Bandeira, Dmitriy Kunisky
This note describes a new technique for generating tight frames that have a high degree of symmetry and entrywise Gramian structure. The technique is based on a "lifting" construction through which, from non-maximal real equiangular tight frames of N vectors in r-dimensional space, we produce real unit-norm tight frames of
Biangular Gabor frames and Zauner's conjecture
Mark Magsino, Dustin G. Mixon
Two decades ago, Zauner conjectured that for every dimension d, there exists an equiangular tight frame consisting of d^2 vectors in C^d. Most progress to date explicitly constructs the promised frame in various dimensions, and it now appears that a constructive proof of Zauner’s conjecture may require progress on the Stark conjectures. In this paper, we propose an alternative approach involving biangular Gabor frames that may eventually lead to an unconditional non-constructive proof of Zauner’s conjecture.
Linear programming bounds for cliques in Paley graphs
Mark Magsino, Dustin G. Mixon, Hans Parshall
The Lovász theta number is a semidefinite programming bound on the clique number of (the complement of) a given graph. Given a vertex-transitive graph, every vertex belongs to a maximal clique, and so one can instead apply this semidefinite programming bound to the local graph. In the case of the Paley graph, the local graph is circulant, and so this bound reduces to a linear programming bound, allowing for fast computations. Impressively, the value of this program with Schrijver's nonnegativity constraint rivals the state-of-the-art closed-form bound recently proved by Hanson and Petridis. We conjecture that this linear programming bound improves on the Hanson-Petridis bound infinitely often, and we derive the dual program to facilitate proving this conjecture.
Spectral Graph Analysis
Parameter tuning using asynchronous parallel pattern search in sparse signal reconstruction
Omar DeGuchy, Roummel F. Marcia
Parameter tuning is an important but often overlooked step in signal recovery problems. For instance, the regularization parameter in compressed sensing dictates the sparsity of the approximate signal reconstruction. More recently, there has been evidence that non-convex ℓp quasi-norm minimization, where 0 < p < 1, leads to an improvement in reconstruction over existing models that use convex regularization. However, these methods rely on good estimates not only of p (the choice of norm) but also of the penalty regularization parameter. This paper describes a method for choosing suitable parameters. The method involves creating a score to determine the effectiveness of the choice of parameters by partially reconstructing the signal. We then efficiently search through different combinations of parameters using a pattern search approach that exploits parallelism and asynchronicity to find the pair with the optimal score. We demonstrate the efficiency and accuracy of the proposed method through numerical experiments.
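A minimal sketch of the tuning loop in a synchronous form: each candidate (p, λ) pair is scored by reconstructing from part of the measurements and evaluating the residual on held-out ones. An iteratively reweighted l1 scheme stands in for the ℓp solver, and a plain grid replaces the asynchronous parallel pattern search.

import numpy as np
from itertools import product

def reconstruct_lp(A, y, lam, p, n_outer=5, n_inner=100):
    """Cheap lp (0 < p <= 1) proxy: iteratively reweighted l1 solved by ISTA steps."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]); w = np.ones(A.shape[1])
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = x - A.T @ (A @ x - y) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0)
        w = (np.abs(x) + 1e-3) ** (p - 1)      # reweighting toward the lp quasi-norm
    return x

def score(A, y, lam, p, holdout=15):
    """Fit on most measurements, score by the residual on held-out ones."""
    x = reconstruct_lp(A[:-holdout], y[:-holdout], lam, p)
    return np.linalg.norm(A[-holdout:] @ x - y[-holdout:])

rng = np.random.default_rng(10)
m, n = 70, 150
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Grid stand-in for the parallel pattern search: in the paper these evaluations are
# dispatched asynchronously and the search pattern contracts around the best pair.
candidates = list(product([0.3, 0.5, 0.7, 1.0], [0.01, 0.05, 0.1]))
best = min(candidates, key=lambda pl: score(A, y, pl[1], pl[0]))
print("best (p, lambda):", best)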
Metrics of graph Laplacian eigenvectors
The application of graph Laplacian eigenvectors has been quite popular in the graph signal processing field: one can use them as ingredients to design smooth multiscale bases. Our long-term goal is to study and understand the dual geometry of graph Laplacian eigenvectors. In order to do that, it is necessary to define a certain metric to measure the behavioral differences between each pair of eigenvectors. Saito (2018) considered the ramified optimal transportation (ROT) cost between the squares of the eigenvectors as such a metric. Cloninger and Steinerberger (2018) proposed a way to measure the affinity (or ‘similarity’) between the eigenvectors based on their Hadamard (HAD) product. In this article, we propose a simplified ROT metric that is more computationally efficient and introduce two more ways to define the distance between the eigenvectors, i.e., the time-stepping diffusion (TSD) metric and the difference of absolute gradient (DAG) pseudometric. The TSD metric measures the cost of “flattening” the initial graph signal via a diffusion process up to a certain time; hence it can be viewed as a time-dependent version of the ROT metric. The DAG pseudometric is the l2-distance between the feature vectors derived from the eigenvectors, in particular, the absolute gradients of the eigenvectors. We then compare the performance of ROT, HAD and the two new “metrics” on different kinds of graphs. Finally, we investigate their relationship as well as their pros and cons.
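A minimal sketch of the DAG pseudometric described above: each eigenvector is mapped to its absolute gradient over the edges, and eigenvectors are compared by the l2 distance between these feature vectors; the path graph is just a convenient toy example.

import numpy as np

def dag_pseudometric(L, A):
    """Difference-of-absolute-gradient (DAG) pseudometric between all pairs of
    Laplacian eigenvectors. A: adjacency matrix, L: graph Laplacian."""
    _, U = np.linalg.eigh(L)                       # eigenvectors as columns
    edges = np.array(np.triu(A, 1).nonzero()).T    # list of edges (i, j), i < j
    # feature of eigenvector phi: |phi_i - phi_j| over all edges (absolute gradient)
    feats = np.abs(U[edges[:, 0], :] - U[edges[:, 1], :])    # (n_edges, n_eigvecs)
    diff = feats[:, :, None] - feats[:, None, :]
    return np.sqrt(np.sum(diff ** 2, axis=0))      # (n, n) pairwise distances

# Toy usage: path graph on 12 nodes.
n = 12
A = np.diag(np.ones(n - 1), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
D = dag_pseudometric(L, A)
print(D[0, 1], D[0, n - 1])   # the second distance is typically larger (high-frequency eigenvector)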
The random component-wise power method
Oguzhan Teke, Palghat P. Vaidyanathan
This paper considers a random component-wise variant of the unnormalized power method, which is similar to the regular power iteration except that only a random subset of indices is updated in each iteration. For the case of normal matrices, it was previously shown that random component-wise updates converge in the mean-squared sense to an eigenvector of eigenvalue 1 of the underlying matrix even in the case of the matrix having spectral radius larger than unity. In addition to the enlarged convergence regions, this study shows that the eigenvalue gap does not directly affect the convergence rate of the randomized updates unlike the regular power method. In particular, it is shown that the rate of convergence is affected by the phase of the eigenvalues in the case of random component-wise updates, and the randomized updates favor negative eigenvalues over positive ones. As an application, this study considers a reformulation of the component-wise updates revealing a randomized algorithm that is proven to converge to the dominant left and right singular vectors of a normalized data matrix. The algorithm is also extended to handle large-scale distributed data when computing an arbitrary rank approximation of an arbitrary data matrix. Numerical simulations verify the convergence of the proposed algorithms under different parameter settings.
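A minimal sketch of the randomized update on a toy normal matrix with an eigenvalue exactly equal to 1: at each step only a random subset of coordinates of the iterate is replaced by the corresponding coordinates of Ax; the matrix, subset size, and iteration count are illustrative choices.

import numpy as np

def random_componentwise_power(A, n_iter=20000, subset_size=2, seed=0):
    """Unnormalized power-method variant: at each step, update only a random
    subset of coordinates x_i <- (A x)_i, leaving the others unchanged."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    for _ in range(n_iter):
        idx = rng.choice(n, size=subset_size, replace=False)
        x[idx] = (A @ x)[idx]
    return x

# Toy usage: a symmetric (hence normal) matrix with an eigenvalue exactly 1.
rng = np.random.default_rng(11)
n = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = np.array([1.0, 0.9, 0.5, 0.3, 0.2, -0.4, -0.6, -0.2, 0.1, -0.8])
A = Q @ np.diag(eigvals) @ Q.T
x = random_componentwise_power(A)
v1 = Q[:, 0]                                  # eigenvector associated with eigenvalue 1
print(abs(v1 @ x) / (np.linalg.norm(x) + 1e-12))   # alignment with v1 (up to residual noise)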