- Front Matter: Volume 6701
- Keynote Session
- Special Session on Frames and Coarse Quantization
- Wavelets: New Designs
- Special Session on Wavelets in Bio-Imaging
- Special Session on Geometrical X-lets and Nonseparable Bases
- Special Session on Sampling and Operator Theory I
- Special Session on Sampling and Operator Theory II
- Special Session on Wavelets in Neuro-Imaging
- Special Session on Wavelets in Physics
- Wavelets and Filterbank Designs
- Special Session on Wavelets for Denoising and Restoration
- Special Session on Finite-Dimensional Frames, Time-Frequency Analysis, and Applications
- Special Session on Wavelets in Medical Imaging
- Emerging Applications
- Special Session on Sparsity and Compressed Sampling
- Keynote Session
- Poster Session
Front Matter: Volume 6701
Front Matter: Volume 6701
This PDF file contains the front matter associated with SPIE
Proceedings Volume 6701, including the Title Page, Copyright
information, Table of Contents, and the
Conference Committee listing.
Keynote Session
A wide-angle view at iterated shrinkage algorithms
Sparse and redundant representations − an emerging and powerful model for signals − suggest that a data source
can be described as a linear combination of a few atoms from a pre-specified, over-complete dictionary. This
model has drawn considerable attention in the past decade, due to its appealing theoretical foundations and the
promising practical results it leads to. Many of the applications that use this model are formulated as a mixture
of l2-lp (p ≤ 1) optimization expressions. Iterated Shrinkage algorithms are a new family of highly effective
numerical techniques for handling these optimization tasks, surpassing traditional optimization techniques. In
this paper we aim to give a broad view of this group of methods, motivate their need, present their derivation,
show their comparative performance, and, most importantly, discuss their potential in various applications.
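The l2-l1 instance of this family can be illustrated with a minimal iterated-shrinkage (ISTA-style) sketch; the random dictionary, the step size, and the parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the shrinkage step, i.e. the proximal map of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def iterated_shrinkage(A, y, lam, n_iter=200):
    # Minimizes 0.5*||A x - y||^2 + lam*||x||_1 by alternating a gradient
    # (Landweber) step with a shrinkage step.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)   # over-complete dictionary
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]             # a few active atoms
y = A @ x_true
x_hat = iterated_shrinkage(A, y, lam=0.01)
```

Each iteration costs only two matrix-vector products plus an elementwise shrinkage, which is what makes this family attractive for large-scale problems.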
Special Session on Frames and Coarse Quantization
Random rounding in redundant representations
This paper investigates the performance of randomly dithered first- and higher-order sigma-delta quantization
applied to the frame coefficients of a vector in an
infinite-dimensional Hilbert space. We compute
the mean square error resulting from linear reconstruction with the quantized frame coefficients. When
properly dithered, this computation simplifies in the same way as under the assumption of the white-noise
hypothesis. The results presented here are valid for a uniform
mid-tread quantizer operating in
the no-overload regime. We estimate the large-redundancy asymptotics of the error for each family of
tight frames obtained from regular sampling of a bounded, differentiable path in the Hilbert space. In
order to achieve error asymptotics that are comparable to the quantization of oversampled band-limited
functions, we require the use of smoothly terminated frame paths.
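A first-order version of such a scheme, with a uniform mid-tread quantizer and uniform dither, can be sketched as follows; the step size and the input coefficients are illustrative assumptions:

```python
import numpy as np

def first_order_sigma_delta(coeffs, step=0.25, dithered=True, seed=1):
    # First-order sigma-delta: quantize the coefficient plus the accumulated
    # state, q_n = Q(u_{n-1} + c_n + d_n), then update u_n = u_{n-1} + c_n - q_n.
    rng = np.random.default_rng(seed)
    u = 0.0
    q = np.empty_like(coeffs)
    for n, c in enumerate(coeffs):
        d = rng.uniform(-step / 2, step / 2) if dithered else 0.0
        q[n] = step * np.round((u + c + d) / step)   # uniform mid-tread quantizer
        u = u + c - q[n]
    return q, u

t = np.linspace(0.0, 1.0, 256, endpoint=False)
c = 0.3 * np.cos(2 * np.pi * t)     # coefficients along a smooth frame path
q, u_final = first_order_sigma_delta(c)
```

The state u stays bounded by the quantizer step, so running averages of the quantized sequence track running averages of the input coefficients, which is what linear reconstruction from the q_n exploits.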
Sigma delta quantization for compressive sensing
Compressive sensing is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate. Recent results in compressive sensing show that a sparse or compressible signal can be reconstructed from very few measurements with an incoherent, and
even randomly generated, dictionary. To date, the hardware implementation of compressive sensing analog-to-digital systems has not been straightforward. This paper explores the use of a Sigma-Delta quantizer architecture to implement such a system. After examining the challenges of using Sigma-Delta with a randomly generated
compressive sensing dictionary, we present efficient algorithms to compute the coefficients of the feedback loop. The experimental results demonstrate that Sigma-Delta relaxes the required analog filter order and quantizer precision. We further demonstrate that restrictions on the feedback coefficient values and stability constraints impose a small penalty on the performance of the
Sigma-Delta loop, while they make hardware implementations significantly simpler.
An improved family of exponentially accurate sigma-delta quantization schemes
ΣΔ-modulation is an A/D-conversion method which represents a bandlimited signal by sequences of ±1 whose
local averages approximate the function values. The best bounds for the decay rate of the ℓ∞-error arising from
such quantization schemes have been given by Güntürk [1]. He constructs an infinite family of schemes which lead
to an algorithm that establishes exponential error decay with decay rate 0.077. In this paper we improve his
construction by introducing an additional symmetry, which is suggested by numerical experiments. To show that
the modified schemes are still stable, we use the asymptotics of the Γ-function. This leads to a bound of 0.088
for the error decay rate.
The tiling phenomenon of sigma-delta modulators with time-varying inputs
Currently, the most efficient technique to coarsely quantize the coefficients of redundant shift invariant signal
expansions is a recursive method called ΣΔ modulation. However, the error of approximation resulting from this
type of quantization is difficult to analyze rigorously. This is because a ΣΔ modulator is basically a nonlinear
feedback system. With constant inputs, it was previously shown that the state vectors of ΣΔ modulators of a
certain class appear to remain in a tile. In this paper, we show experimentally that this property remains valid
with a class of time-varying inputs. We explain the importance of the tiling property for the error analysis of
ΣΔ modulation.
The beta-alpha-encoders for A/D conversion
The β-encoder, introduced as an alternative to binary encoding in A/D conversion,
creates a quantization scheme robust with respect to quantizer imperfections by
the use of a β-expansion, where 1 < β < 2. In this paper we introduce a more general encoder
called the βα-encoder, which offers more flexibility in design and robustness without
significantly compromising the exponential rate of convergence of the resulting expansion.
Although an extra multiplication is introduced, it need not be very accurate. Mathematically,
the βα-encoder gives rise to a dynamical system that is both very interesting and
challenging.
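A greedy β-expansion of the kind underlying these encoders can be sketched as follows; the values of β, the bit budget, and the threshold range are illustrative assumptions. The robustness claim is that the comparison threshold ν may sit anywhere in an interval without spoiling the exponential accuracy:

```python
def beta_encode(x, beta=1.8, n_bits=32, nu=1.0):
    # Greedy beta-expansion: bits b_i in {0,1} with x = sum_i b_i beta^(-i) + O(beta^(-n)).
    # The threshold nu may lie anywhere in [1, 1/(beta-1)] without destroying
    # the exponential accuracy -- an imperfect comparator is tolerated.
    u, bits = x, []
    for _ in range(n_bits):
        u *= beta
        b = 1 if u >= nu else 0
        u -= b
        bits.append(b)
    return bits

def beta_decode(bits, beta=1.8):
    return sum(b * beta ** -(i + 1) for i, b in enumerate(bits))

x = 0.377
decoded = [beta_decode(beta_encode(x, nu=nu)) for nu in (1.0, 1.1, 1.2)]
```

With β = 1.8 the residual u stays in [0, 1/(β−1)), so the truncation error after n bits is at most 1/(β−1) · β^(−n), exponentially small regardless of which admissible threshold the comparator actually implements.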
On quantization of finite frame expansions: sigma-delta schemes of arbitrary order
In this note we show that the so-called Sobolev dual is the minimizer over all linear reconstructions using dual frames for stable rth-order ΣΔ quantization schemes under the so-called White Noise Hypothesis (WNH) design criterion. We compute Sobolev duals for some common frames and apply them to audio clips to test their performance against canonical duals and another alternate dual corresponding to the well-known Blackman filter.
Wavelets: New Designs
Multivariate complex B-splines
We extend the notion of complex B-splines to a multivariate setting by employing the relationship between
ordinary B-splines and multivariate B-splines by means of ridge functions. In order to obtain properties of
complex B-splines in Rs, 1 < s ∈ N, the Dirichlet average has to be generalized to include infinite dimensional
simplices. Based on this generalization, several identities of multivariate complex B-splines are exhibited.
Multiscale representation for data on the sphere and applications to geopotential data
We develop a wavelet transform on the sphere, based on the spherical HEALPix coordinate system (Hierarchical
Equal Area iso-Latitude Pixelization). HEALPix is heavily used for astronomical data processing applications; it
is intrinsically multiscale and locally Euclidean, hence appealing for building multiscale systems. Furthermore, the
equal-area pixelization enables us to employ average-interpolating refinement, giving wavelets of local support.
HEALPix wavelets have numerous applications in geopotential modeling. A statistical analysis demonstrates
wavelet compressibility of the geopotential field and shows that geopotential wavelet coefficients have many
of the statistical properties that were previously observed with wavelet coefficients of natural images. The
HEALPix wavelet expansion makes it possible to evaluate a gravimetric quantity over a local region far more rapidly than
the classic approach based on spherical harmonics. Our software tools are specifically tailored to demonstrate
these advantages.
A new family of rotation-covariant wavelets on the hexagonal lattice
We present multiresolution spaces of complex rotation-covariant functions, deployed on the 2-D hexagonal lattice.
The designed wavelets, which are complex-valued, provide important phase information for image analysis,
which is missing in the discrete wavelet transform with real wavelets. Moreover, the hexagonal lattice makes it
possible to build wavelets with a more isotropic magnitude than on the Cartesian lattice. The associated filters, defined
in the Fourier domain, yield an efficient FFT-based implementation.
An M-channel directional filter bank compatible with the contourlet and shearlet frequency tiling
In this work, we present new methods for creating M-channel directional filters to construct multiresolution
and multidirectional orthogonal/biorthogonal transforms. A key feature of these methods is the ability to solve
the polynomial Bezout equation in higher dimensions by taking advantage of solutions that have been proposed
for solving a related equation known as the analytic Bezout equation. These new techniques are capable of
creating directional filters that yield spatial-frequency tilings equivalent to those of the contourlet and the
shearlet transforms. Such directional filter banks can create sparse representations for a large class of images
and can be used for various restoration problems, compression schemes, and image enhancements.
Special Session on Wavelets in Bio-Imaging
A fast iterative thresholding algorithm for wavelet-regularized deconvolution
We present an iterative deconvolution algorithm that minimizes a functional with a non-quadratic wavelet-domain
regularization term. Our approach is to introduce subband-dependent parameters into the bound optimization
framework of Daubechies et al.; it is sufficiently general to cover arbitrary choices of wavelet bases
(non-orthonormal or redundant). The resulting procedure alternates between the following two steps:
1. a wavelet-domain Landweber iteration with subband-dependent step-sizes;
2. a denoising operation with subband-dependent thresholding functions.
The subband-dependent parameters allow for a substantial convergence acceleration compared to the existing
optimization method. Numerical experiments demonstrate a potential speed increase of more than one order of
magnitude. This makes our "fast thresholded Landweber algorithm" a viable alternative for the deconvolution
of large data sets. In particular, we present one of the first applications of wavelet-regularized deconvolution to
3D fluorescence microscopy.
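In one dimension, with a single-level Haar transform standing in for a general wavelet basis, the two alternating steps can be sketched as follows; the blur kernel, subband step sizes, and threshold are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def haar(x):
    # One-level orthonormal Haar analysis: (lowpass, highpass) subbands.
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def ihaar(lo, hi):
    # Orthonormal Haar synthesis (inverse of haar).
    x = np.empty(2 * lo.size)
    x[0::2], x[1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def thresholded_landweber(y, h, lam, tau=(1.0, 1.5), n_iter=100):
    # Alternates (1) a wavelet-domain Landweber step with subband-dependent
    # step sizes tau and (2) subband-dependent soft-thresholding.
    H = np.fft.fft(h, y.size)
    conv = lambda z, K: np.fft.ifft(K * np.fft.fft(z)).real   # circular blur
    x = y.copy()
    for _ in range(n_iter):
        g = conv(y - conv(x, H), np.conj(H))                  # gradient direction
        x_lo, x_hi = haar(x)
        g_lo, g_hi = haar(g)
        x = ihaar(soft(x_lo + tau[0] * g_lo, tau[0] * lam),
                  soft(x_hi + tau[1] * g_hi, tau[1] * lam))
    return x

x_true = np.r_[np.zeros(32), np.ones(32)]
h = np.zeros(64); h[[63, 0, 1]] = [0.25, 0.5, 0.25]   # centered blur kernel
y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)).real
x_hat = thresholded_landweber(y, h, lam=1e-3)
```

Allowing a larger step in the highpass subband, where the blur attenuates the signal most, is the source of the acceleration over a single global step size.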
Wavelet-based restoration methods: application to 3D confocal microscopy images
We propose in this paper an iterative algorithm for 3D confocal microscopy image restoration. The image quality
is limited by the diffraction-limited nature of the optical system, which causes blur, and by the reduced amount of
light detected by the photomultiplier, which leads to noise with Poisson statistics. Wavelets have proved to be
very effective in image processing and have gained much popularity. Indeed, they allow images to be denoised
efficiently by thresholding their coefficients. Moreover, they are used in algorithms as a regularization term
and seem to be well adapted to preserve textures and small objects. In this work, we propose a 3D iterative
wavelet-based algorithm and make some comparisons with
state-of-the-art methods for restoration.
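The Poisson-plus-blur model described here is classically attacked with the Richardson-Lucy iteration; the sketch below shows that baseline (not the authors' wavelet-regularized method) in 1D, with an assumed blur kernel and noiseless data:

```python
import numpy as np

def richardson_lucy(y, h, n_iter=50, eps=1e-12):
    # Richardson-Lucy: the multiplicative EM iteration for the Poisson
    # likelihood, x <- x * h^T[ y / (h * x) ]; preserves non-negativity.
    H = np.fft.fft(h, y.size)
    conv = lambda z, K: np.fft.ifft(K * np.fft.fft(z)).real
    x = np.full_like(y, y.mean())
    for _ in range(n_iter):
        x = x * conv(y / np.maximum(conv(x, H), eps), np.conj(H))
    return x

x_true = np.zeros(64); x_true[20] = 4.0; x_true[35] = 2.0   # point sources
h = np.zeros(64); h[[63, 0, 1]] = [0.25, 0.5, 0.25]         # centered blur, sums to 1
y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)).real    # noiseless blurred data
y = np.maximum(y, 0.0)                                      # clip FFT round-off
x_hat = richardson_lucy(y, h)
```

The multiplicative update keeps the estimate non-negative, which matters for photon-count data; wavelet regularization, as in the paper, is what controls noise amplification when the counts are low.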
Some uses of wavelets for imaging dynamic processes in live cochlear structures
A variety of image and signal processing algorithms based on wavelet filtering tools, well adapted to the experimental
variability typically encountered in live biological microscopy, have been developed during the last few decades.
We review a number of processing tools that use wavelets for adaptive image restoration and for
motion or brightness variation analysis by optical flow computation. The usefulness of these tools for biological
imaging is illustrated in the context of the restoration of images of the inner ear and the analysis of cochlear
motion patterns in two and three dimensions. I also report on recent work that aims at capturing fluorescence
intensity changes associated with vesicle dynamics at synaptic zones of sensory hair cells. This latest application requires one to separate the intensity variations associated with the physiological process under study from
the variations caused by motion of the observed structures. A wavelet optical flow algorithm for doing this is
presented, and its effectiveness is demonstrated on artificial and experimental image sequences.
Multiresolution techniques for the classification of bioimage and biometric datasets
We survey our work on adaptive multiresolution (MR) approaches to the classification of biological and fingerprint
images. The system adds MR decomposition in front of a generic classifier consisting of feature computation
and classification in each MR subspace, yielding local decisions, which are then combined into a global decision
using a weighting algorithm. The system is tested on four different datasets: subcellular protein location images,
Drosophila embryo images, histological images, and fingerprint images. Given the very high accuracies obtained for
all four datasets, we demonstrate that the space-frequency localized information in the multiresolution subspaces
adds significantly to the discriminative power of the system. Moreover, we show that a vastly reduced set of
features is sufficient. Finally, we prove that frames are the class of MR techniques that performs best in this
context. This leads us to consider the construction of a new family of frames for classification, which we term
lapped tight frame transforms.
Detection of curvilinear objects in noisy biological images using feature-adapted fast slant stack
This paper presents a new method for computing the Feature-adapted Radon and Beamlet transforms [1] in a
fast and accurate way. These two transforms can be used for detecting features running along lines or piecewise
constant curves. The main contribution of this paper is to unify the Fast Slant Stack method, introduced in
[2], with a linear filtering technique in order to define what we call the Feature-adapted Fast Slant Stack. If
the desired feature detector is chosen to belong to the class of steerable filters, our method can be computed in
O(N log N) operations, where N = n² is the number of pixels. This new method leads to an efficient implementation of
both the Feature-adapted Radon and Beamlet transforms, which outperforms our previous work [1] in both accuracy
and speed. Our method has been developed in the context of biological imaging, to detect DNA
filaments in fluorescence microscopy.
Active contour-based multiresolution transforms for the segmentation of fluorescence microscope images
In recent years, the focus in biological science has shifted to understanding complex systems at the cellular and molecular levels, a task greatly facilitated by fluorescence microscopy. Segmentation, a fundamental yet difficult problem, is often the first processing step following acquisition. We have previously demonstrated that a stochastic active contour based algorithm together with the concept of topology preservation (TPSTACS) successfully segments single cells from multicell images. In this paper we demonstrate that TPSTACS successfully segments images from other imaging modalities such as DIC microscopy, MRI and fMRI. While this method is a viable alternative to hand segmentation, it is not yet ready to be used for high-throughput applications due to its large run time. Thus, we highlight some of the benefits of combining TPSTACS with the multiresolution approach for the segmentation of fluorescence microscope images. Here we propose a multiscale active contour (MSAC)
transformation framework for developing a family of modular algorithms for the segmentation of fluorescence microscope images in particular, and biomedical images in general. While this framework retains the flexibility and the high segmentation quality provided by active contour-based algorithms, it offers a boost in
efficiency as well as a framework for computing new features that further enhance the segmentation.
Special Session on Geometrical X-lets and Nonseparable Bases
Curvelets and wave atoms for mirror-extended images
We present variants of both the digital curvelet transform, and the digital wave atom transform, which handle
the image boundaries by mirror extension. Previous versions of these transforms treated image boundaries by
periodization. The main ideas of the modifications are 1) to tile the discrete cosine domain instead of the
discrete Fourier domain, and 2) to adequately reorganize the in-tile data. In their shift-invariant versions, the
new constructions come with no penalty on the redundancy or computational complexity. For shift-variant wave
atoms, the penalty is a factor of 2 instead of the naive factor of 4.
These various modifications have been included in the CurveLab and WaveAtom toolboxes, and extend
the range of applicability of curvelets (good for edges and bandlimited wavefronts) and wave atoms (good for
oscillatory patterns and textures) to situations where periodization at the boundaries is uncalled for. The new
variants are dubbed ME-curvelets and ME-wave atoms, where ME stands for mirror-extended.
Geometrical image estimation with orthogonal bandlet bases
This article presents the first adaptive quasi-minimax estimator for geometrically regular images in the white noise
model. This estimator is computed using a thresholding in an adapted orthogonal bandlet basis optimized for the noisy
observed image. In order to analyze the quadratic risk of this best-basis denoising, the thresholding in an orthogonal
bandlet basis is recast as a model selection process. The resulting estimator is computed with a fast algorithm whose
theoretical performance can be derived. This efficiency is confirmed through numerical experiments on natural images.
Image representation and compression using directionlets
The standard separable two-dimensional (2-D) wavelet transform (WT) has recently achieved great success
in image processing because it provides a sparse representation of smooth images. However, it fails to capture
one-dimensional (1-D) discontinuities, like edges or contours, efficiently. These features, being elongated and
characterized by geometrical regularity along different directions, intersect and generate many large magnitude
wavelet coefficients. Since contours are very important elements in visual perception of images, to provide a
good visual quality of compressed images, it is fundamental to preserve good reconstruction of these directional
features. We propose a construction of critically sampled perfect reconstruction transforms with directional
vanishing moments (DVMs) imposed in the corresponding basis functions along different directions, called directionlets.
We also demonstrate the superior non-linear approximation (NLA) results achieved by our transforms, and we show how to design and implement a novel, efficient space-frequency quantization (SFQ) compression algorithm using directionlets. Our new compression method beats the standard SFQ both in terms of mean-square error (MSE) and visual quality, especially in the low-rate compression regime. We also show that our compression method does not increase the order of computational complexity as compared to the standard SFQ algorithm.
Special Session on Sampling and Operator Theory I
On slanted matrices in frame theory
In this paper we present a brief account of the use of the spectral theory of slanted
matrices in frame and sampling theory. Some abstract results on slanted matrices are also
presented.
Localized frames and localizable operators
We introduce the notion of operators localized with respect to frames and prove the boundedness of such operators
on families of Banach spaces. This generalizes the method used in proving boundedness of pseudodifferential
operators on various spaces. We also use this notion to provide sufficient conditions for the construction of frames
which have the localization property.
Special Session on Sampling and Operator Theory II
Construction of wavelet bases that mimic the behaviour of some given operator
Probably the most important property of wavelets for signal processing is their multiscale derivative-like behavior
when applied to functions. In order to extend the class of problems that can profit from wavelet-based techniques, we
propose to build new families of wavelets that behave like an arbitrary scale-covariant operator. Our extension is
general and includes many known wavelet bases. At the same time, the method takes advantage of a fast filterbank
decomposition-reconstruction algorithm. We give necessary conditions for the scale-covariant operator to admit
our wavelet construction, and we provide examples of new wavelets that can be obtained with our method.
On the sampling of functions and operators with an application to multiple-input multiple-output channel identification
The classical sampling theorem, attributed to Whittaker, Shannon, Nyquist, and Kotelnikov, states that a
bandlimited function can be recovered from its samples, as long as we use a sufficiently dense sampling grid.
Here, we review the recent development of an operator sampling theory which allows for a "widening" of the
classical sampling theorem. In this realm, bandlimited functions are replaced by "bandlimited operators", that
is, by pseudodifferential operators which have bandlimited Kohn-Nirenberg symbols.
Similar to the Nyquist sampling density condition alluded to above, we discuss sufficient and necessary
conditions on the bandlimitation of pseudodifferential operators to ensure that they can be recovered by their
action on a single distribution. In fact, we show that an operator with Kohn-Nirenberg symbol bandlimited to
a Jordan domain of measure less than one can be recovered through its action on a distribution defined on an
appropriately chosen sampling grid. Further, an operator whose symbol is bandlimited to a Jordan domain of measure
larger than one cannot be recovered through its action on any tempered distribution whatsoever, pointing towards
a fundamental difference to the classical sampling theorem where a large bandwidth could always be compensated
through a sufficiently fine sampling grid. The dichotomy depending on the size of the bandlimitation is related
to Heisenberg's uncertainty principle.
Further, we discuss an application of this theory to the channel measurement problem for Multiple-Input
Multiple-Output (MIMO) channels.
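For reference, the classical function-sampling result that this operator theory widens can be sketched numerically; the tone frequency, sampling rate, and evaluation window below are arbitrary illustrative choices:

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    # Shannon reconstruction: f(t) = sum_n f(nT) sinc((t - nT)/T),
    # exact for f bandlimited to [-1/(2T), 1/(2T)]; np.sinc is the
    # normalized sinc, sin(pi x)/(pi x).
    n = np.arange(samples.size)
    return samples @ np.sinc((t[None, :] - n[:, None] * T) / T)

T = 0.1                                    # sampling period: 10 Hz rate, 5 Hz band edge
f = lambda t: np.sin(2 * np.pi * 3.0 * t)  # a 3 Hz tone, inside the band
samples = f(np.arange(200) * T)            # samples on [0, 20)
t = np.linspace(5.0, 15.0, 101)            # evaluate away from the truncation edges
err = np.max(np.abs(sinc_reconstruct(samples, T, t) - f(t)))
```

The residual error here comes only from truncating the (infinite) sinc series; with an infinite sample sequence the reconstruction would be exact, which is precisely the "large bandwidth compensated by a fine grid" behaviour that fails in the operator setting.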
Estimation algorithms with noisy frame coefficients
The Rangan-Goyal (RG) algorithm is a recursive method for constructing an estimate x_N ∈ R^d of a signal x ∈ R^d,
given N ≥ d frame coefficient measurements of x that have been corrupted by uniform noise. Rangan and Goyal
proved that the RG-algorithm is constrained by the Bayesian lower bound: lim inf_{N→∞} N^2 E||x − x_N||^2 > 0. As
a positive counterpart to this, they also proved that for every p < 1 and x ∈ R^d, the RG-algorithm satisfies
lim_{N→∞} N^p ||x − x_N|| = 0 almost surely. One consequence of the existing results is that one "almost" has mean
square error E||x − x_N||^2 of order 1/N^2 for random choices of frames. It is proven here that the RG-algorithm
achieves mean square error of the optimal order 1/N^2, and the applicability of such error estimates is also
extended to deterministic frames where ordering issues play an important role. Approximation error estimates
for consistent reconstruction are also proven.
Special Session on Wavelets in Neuro-Imaging
Learning and predicting brain dynamics from fMRI: a spectral approach
Traditional neuroimaging experiments, dictated by the dogma of functional specialization, aim at identifying regions of the
brain that are maximally correlated with a simple cognitive or sensory stimulus. Very recently, functional MRI (fMRI) has
been used to infer subjective experience and brain states of subjects immersed in natural environments. These environments
are rich with uncontrolled stimuli and resemble real life experiences. Conventional methods of analysis of neuroimaging
data fail to unravel the complex activity that natural environments elicit. The contribution of this work is a novel method
to predict action and sensory experiences of a subject from fMRI. This method relies on an embedding that provides an
optimal coordinate system to reduce the dimensionality of the fMRI dataset while preserving its intrinsic dynamics. We
learn a set of time series that are implicit functions of the fMRI data, and predict the values of these time series in the future
from the knowledge of the fMRI data only. We conducted several experiments with the datasets of the 2007 Pittsburgh
Experience Based Cognition competition.
Bayesian fMRI data analysis with sparse spatial basis function priors
This article presents a statistical framework to analyse brain functional Magnetic Resonance Imaging (fMRI) data. A particular emphasis is made on spatial correlation, which, contrary to the usual preprocessing step of spatial smoothing, is now part of the probabilistic model. The characterisation of regionally specific effects is done via the General Linear Model (GLM) using Posterior Probability Maps (PPMs). The spatial regularisation is defined over regression coefficients by specifying a spatial prior using Sparse Spatial Basis Functions (SSBFs), such as Wavelets. These are embedded in a hierarchical probabilistic model which, when inverted, automatically selects an appropriate subset of basis functions. The inversion of the model is done using Variational Bayes. We present results on synthetic data and on data from an event-related fMRI experiment. We conclude that SSBFs allow for spatial variations in signal smoothness, provide an increased sensitivity and are more computationally efficient than previously presented work.
Activelets and sparsity: a new way to detect brain activation from fMRI data
fMRI time course processing is traditionally performed using linear regression followed by statistical hypothesis
testing. While this analysis method is robust against noise, it relies strongly on the signal model. In this paper, we
propose a non-parametric framework that is based on two main ideas. First, we introduce a problem-specific type
of wavelet basis, for which we coin the term "activelets". The design of these wavelets is inspired by the form of
the canonical hemodynamic response function. Second, we take advantage of sparsity-pursuing search techniques
to find the most compact representation for the BOLD signal under investigation. The non-linear optimization
makes it possible to overcome the sensitivity-specificity trade-off that limits most standard techniques. Remarkably, the
activelet framework does not require knowledge of the stimulus onset times; this property can be exploited to
answer new questions in neuroscience.
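A generic sparsity-pursuing search of the kind referred to here is matching pursuit; the sketch below uses a random dictionary rather than the actual activelet dictionary, so the signal and atom indices are illustrative assumptions:

```python
import numpy as np

def matching_pursuit(y, D, n_atoms=5):
    # Greedy matching pursuit: repeatedly pick the dictionary atom most
    # correlated with the residual (columns of D assumed unit-norm).
    r, coeffs = y.copy(), {}
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ r)))   # best-matching atom
        c = float(D[:, k] @ r)
        coeffs[k] = coeffs.get(k, 0.0) + c
        r = r - c * D[:, k]                   # deflate the residual
    return coeffs, r

rng = np.random.default_rng(7)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
y = 2.0 * D[:, 10] - 1.0 * D[:, 200]      # a compact two-atom signal
coeffs, r = matching_pursuit(y, D)
```

Because the representation is built atom by atom, the selected atoms and their time positions are read off directly, which is what removes the need to fix a parametric signal model in advance.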
Special Session on Wavelets in Physics
Regularization of inverse problems with adaptive discrepancy terms: application to multispectral data
In this paper, a general framework for the inversion of a linear operator in the case where one seeks several components
from several observations is presented. The estimation is done by minimizing a functional balancing discrepancy terms by
regularization terms. The regularization terms are adapted norms that enforce the desired properties of each component.
The main focus of this paper is the definition of the discrepancy terms. Classically, these are quadratic. We present
novel discrepancy terms adapted to the observations. They rely on adaptive projections that emphasize important information
in the observations. Iterative algorithms to minimize the functionals with adaptive discrepancy terms are derived, and their
convergence and stability are studied.
The methods obtained are compared for the problem of reconstruction of astrophysical maps from multifrequency
observations of the Cosmic Microwave Background. We show the added flexibility provided by the adaptive discrepancy
terms.
Modeling images of the Quiet Sun in the extreme ultraviolet
We address the statistical modeling of solar images provided by the Extreme ultraviolet Imaging Telescope (EIT)
onboard the Solar and Heliospheric Observatory (SoHO, a joint ESA/NASA mission). We focus in particular
on the less structured regions, the "Quiet Sun". We first give a brief historical review of multifractal
processes for physical modeling. Then we present a multifractal analysis of Quiet Sun images. Our aim is to
identify a model that would make it possible to simulate images similar to real ones, and to use the scale-invariance
property to obtain artificial images at any finer resolution. We compare various families of models, including
infinitely divisible cascades and fractional stable fields, which can synthesize images that are statistically
similar to Quiet Sun images. This modeling will assist in promoting forthcoming high resolution observations
by analysing sub-pixel variability in today's solar corona images.
Chains of chirplets for the detection of gravitational wave chirps
A worldwide collaboration attempts to confirm the existence of gravitational waves predicted by Einstein's theory
of General Relativity, through direct observation with a network of large-scale laser interferometric antennas.
This paper is a contribution to the methodologies used to scrutinize the data in order to reveal the tiny signature
of a gravitational wave from rare cataclysmic events of astrophysical origin. More specifically, we are interested
in the detection of short frequency modulated transients or gravitational wave chirps. The amount of information
about the frequency vs. time evolution is limited: we only know that it is smooth. The detection problem is
thus non-parametric. We introduce a finite family of "template waveforms" which accurately samples the set of
admissible chirps. The templates are constructed as a puzzle, by assembling elementary bricks (the chirplets)
taken from a dictionary. The detection amounts to testing the correlation between the data and the template family.
With an adequate time-frequency mapping, we establish a connection between this correlation measurement and
combinatorial optimization problems of graph theory, from which we obtain efficient algorithms to perform the
calculation. We present two variants: the first addresses the case of amplitude-modulated chirps, and the
second allows the joint analysis of data from several antennas. These methods are not limited to the specific
context for which they were developed. We pay particular attention to the aspects that can be a source of
inspiration for other applications.
Beta-lattice multiresolution of quasicrystalline Bragg peaks
We present a method for analyzing and classifying 2d-pure-point (pp) diffraction spectra (i.e., sets of Bragg peaks)
of certain self-similar structures with scaling factor β > 1, like quasicrystals. The 2d-pp diffraction spectrum is
viewed as a point set in the complex plane in which each point is assigned a positive number, its Bragg intensity.
Then, by using a nested sequence of self-similar subsets called
beta-lattices, a multiresolution analysis is carried
out on the spectrum, leading to a partition of it simultaneously in geometry, scale, and intensity (a "fingerprint"
of the spectrum). As an illustration of our approach, the method is applied to the pp diffraction spectra of a
few mathematical structures.
Poisson wavelets on the sphere
In this paper we summarize the basic formulas of wavelet analysis for Poisson wavelets on the sphere. These
wavelets have the convenient property that the basic objects of wavelet analysis, such as reproducing kernels,
may be expressed simply in terms of higher-degree Poisson wavelets. This makes them numerically attractive for
applications in geophysical modeling. We omit all proofs and refer to "M. Holschneider, I. Iglewska-Nowak, JFAA,
2007", where they are published.
Detecting dark energy with wavelets on the sphere
Dark energy dominates the energy density of our Universe, yet we know very little about its nature and origin.
Although strong evidence in support of dark energy is provided by the cosmic microwave background, the relic
radiation of the Big Bang, in conjunction with either observations of supernovae or of the large scale structure of
the Universe, the verification of dark energy by independent physical phenomena is of considerable interest. We
review works that, through a wavelet analysis on the sphere, independently verify the existence of dark energy by
detecting the integrated Sachs-Wolfe effect. The effectiveness of a wavelet analysis on the sphere is demonstrated
by the highly statistically significant detections of dark energy that are made. Moreover, the detection is used
to constrain properties of dark energy. A coherent picture of dark energy is obtained, adding further support to
the now well established cosmological concordance model that describes our Universe.
Wavelets and curvelets on the sphere for polarized data
The statistics of the temperature anisotropies in the primordial Cosmic Microwave Background radiation field
provide a wealth of information for cosmology and the estimation of cosmological parameters. An even more
acute inference should stem from the study of maps of the polarization state of the CMB radiation. Measuring
the latter extremely weak CMB polarization signal requires very sensitive instruments. The full-sky maps of
both temperature and polarization anisotropies of the CMB to be delivered by the upcoming Planck Surveyor
satellite experiment are hence awaited with excitement. Still, analyzing CMB data requires tackling a number
of practical difficulties, notably that several other astrophysical sources emit radiation in the frequency range
of CMB observations. Separating the different astrophysical foreground components and the CMB proper from
available multichannel data is a problem that has drawn much attention in the community. Nevertheless, some
level of residual contributions, most significantly in the galactic region and at the locations of strong radio
point sources will unavoidably contaminate the estimated spherical CMB map. Masking out these regions is
common practice but the gaps in the data need proper handling. In order to restore the stationarity of a partly
incomplete CMB map and thus lower the impact of the gaps on non-local statistical tests, we developed an
inpainting algorithm on the sphere to fill in the gaps, based on an iterative thresholding scheme in a sparse
representation of the data. This algorithm relies on the variety of recently developed transforms on the sphere
among which several multiscale transforms which we will review. We also contribute to enlarging the set of
available transforms for polarized data on the sphere. We describe new multiscale decompositions namely the
isotropic undecimated wavelet and curvelet transforms for polarized data on the sphere. The proposed transforms
are invertible and so allow for applications in image restoration and denoising.
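The iterative thresholding scheme behind the inpainting step can be illustrated in a stripped-down, flat 1-D setting. The following is only a sketch of the general idea, not the authors' spherical implementation: an orthogonal DCT stands in for the spherical multiscale transforms, and a linearly decreasing hard threshold is assumed.

```python
import numpy as np
from scipy.fft import dct, idct

def inpaint_iterative_thresholding(y, mask, n_iter=100, lam_max=1.0):
    """Fill the gaps (mask == 0) of y by iterative hard thresholding
    in a sparsifying transform domain (here a 1-D orthonormal DCT)."""
    x = y * mask
    for i in range(n_iter):
        lam = lam_max * (1.0 - i / n_iter)   # linearly decreasing threshold
        c = dct(x, norm='ortho')
        c[np.abs(c) < lam] = 0.0             # hard thresholding
        x = idct(c, norm='ortho')
        x = mask * y + (1 - mask) * x        # re-impose the known samples
    return x
```

On the sphere, the DCT would be replaced by one of the invertible spherical wavelet or curvelet transforms reviewed in the paper; the structure of the iteration is unchanged.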
A spatiospectral localization approach to estimating potential fields on the surface of a sphere from noisy, incomplete data taken at satellite altitudes
Satellites mapping the spatial variations of the gravitational or magnetic fields of the Earth or other planets
ideally fly on polar orbits, uniformly covering the entire globe. Thus, potential fields on the sphere are usually
expressed in spherical harmonics, basis functions with global support. For various reasons, however, inclined
orbits are favorable. These leave a "polar gap": an antipodal pair of axisymmetric polar caps without any data
coverage, typically smaller than 10° in diameter for terrestrial gravitational problems, but 20° or more in some
planetary magnetic configurations. The estimation of spherical harmonic field coefficients from an incompletely
sampled sphere is prone to error, since the spherical harmonics are not orthogonal over the partial domain of
the cut sphere. Although approaches based on wavelets have gained popularity in the last decade, we present
a method for localized spherical analysis that is firmly rooted in spherical harmonics. We construct a basis of
bandlimited spherical functions that have the majority of their energy concentrated in a subdomain of the unit
sphere by solving Slepian's (1960) concentration problem in spherical geometry, and use them for the geodetic problem at hand. Most of this work has been published by us elsewhere. Here, we highlight the connection of the "spherical Slepian basis" to wavelets by showing their asymptotic self-similarity, and focus on the computational considerations of calculating concentrated basis functions on irregularly shaped domains.
Time-frequency multipliers for sound synthesis
Time-frequency analysis and wavelet analysis are generally used for providing signal expansions that are suitable
for various further tasks such as signal analysis, de-noising, compression, and source separation. However, time-frequency
analysis and wavelet analysis also provide efficient ways of constructing signal transformations. They
are modelled as linear operators that can be designed directly in the transformed domain, i.e. the time-frequency
plane, or the time-scale half plane. Among these linear operators, transformations that are diagonal in the time-frequency
or time scale spaces, i.e. that may be expressed by multiplications in these domains, deserve particular
attention, as they are extremely simple to implement, even though their properties are not necessarily easy to
control.
This work is a first attempt at exploring such approaches in the context of the analysis and design of
sound signals. We study more specifically the transformations that may be interpreted as linear time-varying
(LTV) systems (often called time-varying filters). It is known that under certain assumptions, the latter may
be conveniently represented by pointwise multiplication with a certain time frequency transfer function in the
time-frequency domain. The purpose of this work is to examine such representations in practical situations, and
investigate generalizations. The originality of this approach for sound synthesis lies in the design of practical
operators that can be optimized to morph a given sound into another at very high sound quality.
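A diagonal time-frequency operator of the kind described above can be sketched with an off-the-shelf STFT: multiply the time-frequency coefficients pointwise by a transfer function and resynthesize. This is an illustrative sketch (the function `tf_multiplier` and its `mask_fn` argument are our own naming), not the authors' synthesis framework.

```python
import numpy as np
from scipy.signal import stft, istft

def tf_multiplier(x, fs, mask_fn, nperseg=256):
    """Apply a time-frequency multiplier (a simple time-varying filter):
    pointwise multiplication of the STFT by mask_fn(F, T), where F and T
    are the frequency/time grids of the transform."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    F, T = np.meshgrid(f, t, indexing='ij')
    _, y = istft(Z * mask_fn(F, T), fs=fs, nperseg=nperseg)
    return y[:len(x)]
```

With an all-ones mask this reduces to the identity, thanks to the perfect-reconstruction (COLA) property of the default Hann window at 50% overlap; a time-dependent mask yields a time-varying filter.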
Probing the Gaussianity and the statistical isotropy of the CMB with spherical wavelets
This article focuses on the study of the statistical properties of the cosmic microwave background (CMB) temperature
fluctuations. This study helps to define a coherent framework for the origin of the Universe, its evolution
and structure formation. The current standard model is based on the Big-Bang theory, in the context of
the Cosmic Inflation scenario, and predicts that the CMB temperature fluctuations can be understood as the
realization of a statistically isotropic and Gaussian random field. Probing whether these statistical properties are
satisfied is crucial, since deviations from these hypotheses would indicate that non-standard models might play
an important role in the dynamics of the Universe. Moreover, alternative sources of anisotropy or non-Gaussianity
could contaminate the CMB signal as well. Hence, sophisticated techniques must be used to carry out such a
probe. Among the methodologies found in the literature, those based on wavelets provide some of the most
interesting insights into the problem. Their ability to explore several scales while retaining spatial information
about the CMB is very useful for discriminating among possible sources of anisotropy and/or non-Gaussianity.
Wavelets and Filterbank Designs
Extending Vaidyanathan's procedure to improve the performance of unitary filter banks with a fixed lowpass by using additional elementary building blocks
Wavelet decomposition of signals using the classical Daubechies wavelets can also be considered a decomposition
by a filter bank with two channels, a low-pass and a high-pass channel, represented by the father and
mother wavelet, respectively. By generalizing this two-channel approach, filter banks with N ≥ 2 channels can
be constructed. They possess one scaling function, or father wavelet, representing the low-pass filter, and one,
two or more mother wavelets representing band-pass filters.
In general, the resulting band-pass filters do not show satisfactory selectivity. Hence, a modification
of the generalized design seems appropriate. Based on Vaidyanathan's procedure, we developed a method to
modify the modulation matrix under the condition that the low pass is unchanged and the degree of the band-pass
filters is not increased. This is achieved by introducing one or more additional elementary building
blocks under certain orthogonality constraints on their generating vectors. While the (polynomial) degree of the
modulation matrix remains unchanged, its complexity increases due to its increased McMillan degree.
Hilbert-like tight frame wavelets with symmetric envelope
Wavelets based on Hilbert pairs have appealing properties when applied to image denoising and feature detection
due to their directional sensitivity. In this paper we propose dual-tree tight frame wavelets and scaling functions
{φh, ψh1, ψh2, ψh3} and {φg, ψg1, ψg2, ψg3} based on FIR filterbanks of four filters, and downsampling by 2. Such
wavelets closely approximate shift invariance. Moreover, the resulting complex wavelets are smooth and have
an exactly symmetric envelope. The filters in this paper also satisfy a vanishing-moments property.
Distributed video coding based on sampling of signals with finite rate of innovation
This paper proposes a new approach to distributed video coding. Distributed video coding is a new paradigm in video coding, which is based on the concept of decoding with side information at the decoder. Such a coding scheme employs a low-complexity encoder, making it well suited for low-power devices such as mobile video cameras.
The uniqueness of our work lies in the combined use of the discrete wavelet transform (DWT) and the concept of sampling of signals with finite rate of innovation (FRI). This enables the decoder to retrieve the motion parameters and reconstruct the video sequence from the low-resolution version of each transmitted frame. Unlike currently existing practical coders, we do not employ traditional channel coding techniques. For a simple video sequence with a fixed background, our preliminary results show that the proposed coding scheme can achieve a better PSNR than JPEG2000 intraframe coding at low bit rates.
Special Session on Wavelets for Denoising and Restoration
Sparse and Redundant Representations and Motion-Estimation-Free Algorithm for Video Denoising
The quality of video sequences (e.g. old movies, webcam, TV broadcast) is often reduced by noise, usually
assumed white and Gaussian, being superimposed on the sequence. When denoising image sequences, rather
than a single image, the temporal dimension can be exploited for better denoising performance as well as
greater algorithmic speed. This paper extends a previously reported single-image denoising method to sequences.
The algorithm relies on sparse and redundant representations of small patches in the images. Three different
extensions are offered, all tested and found to lead to substantial benefits in both denoising quality and
algorithm complexity, compared to running the single-image algorithm sequentially. With these modifications,
the proposed algorithm displays state-of-the-art denoising performance while not relying on motion estimation.
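To give a flavor of the single-image ingredient being extended, here is a much-simplified patch-based sketch in which a fixed 2-D DCT with hard thresholding stands in for the learned sparse representation of the actual algorithm; the temporal extensions of the paper are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def patch_dct_denoise(img, sigma, psize=8):
    """Patch-based sparse denoising sketch: hard-threshold the 2-D DCT
    of every overlapping patch and average the overlapping estimates."""
    H, W = img.shape
    acc = np.zeros_like(img)
    cnt = np.zeros_like(img)
    thr = 3.0 * sigma                       # universal-style threshold
    for i in range(H - psize + 1):
        for j in range(W - psize + 1):
            p = img[i:i + psize, j:j + psize]
            c = dctn(p, norm='ortho')
            c[np.abs(c) < thr] = 0.0        # sparsify the patch
            acc[i:i + psize, j:j + psize] += idctn(c, norm='ortho')
            cnt[i:i + psize, j:j + psize] += 1.0
    return acc / cnt                        # average overlapping estimates
```

Averaging the overlapping patch estimates is the same aggregation step used in the sparse-representation denoisers this paper builds on.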
Predictive Compression and Denoising with Overcomplete Decompositions: A Simple Way to Reject Structured Interference
In this paper we propose a prediction method that is geared toward forming successful estimates of a signal
based on a correlated anchor signal contaminated with complex interference. The interference model is drawn
from real-life scenarios and involves, among other effects, intensity modulations, linear distortions, structured
clutter, and white noise. The proposed method first transforms signals to an over-complete domain where we assume
sparse decompositions. In this sparse domain, we show that very simple predictors can be designed to perform
efficient prediction. The parameters of these predictors are derived from causal information, enabling completely
automated and blind operation. The utilized over-complete representation allows multiple predictions for each
sample in signal domain, which are averaged and combined into a single prediction. Experimental results on
images and video frames show that the proposed method can provide successful predictions under a variety of
complex transitions, such as cross-fades, brightness changes, focus variations, and other complex distortions. The
proposed prediction method is also implemented to operate inside a state-of-the-art video compression codec and
results show significant improvements on scenes that are hard to encode using traditional prediction techniques.
Image restoration using adaptive Gaussian scale mixtures in overcomplete pyramids
We describe here two ways to improve on recent results in image restoration using Bayes least squares estimation
with local Gaussian scale mixtures (BLS-GSM) in overcomplete oriented pyramids. The first consists of allowing
spatial adaptation of the covariance matrix defining the GSM model in each pyramid subband; in practice, this
can be implemented by dividing the subbands into spatial blocks. The other, more powerful, method is
to generalize the GSM model to include more than one covariance matrix per subband. The advantage of
the latter method is its flexibility, as it allows for mixing Gaussian densities with different covariance matrices
at every spatial location in every subband. It also allows for non-local selective processing, taking advantage
of the repetition in the scene of image features that are not necessarily spatially grouped. We also describe
an empirical method for adapting denoising algorithms to image restoration, the only constraint being that the
denoising method be applicable to non-white noise sources. Here we present mature results of the
spatially adaptive method applied to denoising and deblurring, plus some estimation techniques and encouraging
preliminary results of the multi-GSM concept.
SURE-LET interscale-intercolor wavelet thresholding for color image denoising
We propose a new orthonormal wavelet thresholding algorithm for denoising color images that are assumed to
be corrupted by additive Gaussian white noise of known intercolor covariance matrix. The proposed wavelet
denoiser consists of a linear expansion of thresholding (LET) functions, integrating both the interscale and
intercolor dependencies. The linear parameters of the combination are then solved for by minimizing Stein's
unbiased risk estimate (SURE), a robust unbiased estimate of the mean squared error (MSE) between the
(unknown) noise-free data and the denoised data. Thanks to the quadratic form of this MSE estimate, optimizing
the parameters simply amounts to solving a linear system of equations.
Experiments over a wide range of noise levels and a representative set of standard color images show that
our algorithm yields slightly better peak signal-to-noise ratios than most state-of-the-art wavelet thresholding
procedures, even when the latter are executed in an undecimated wavelet representation.
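The key computational point, that the SURE objective is quadratic in the LET parameters, can be illustrated in a stripped-down pointwise setting. This sketch uses two thresholding functions of our own choosing, not the paper's interscale-intercolor denoiser.

```python
import numpy as np

def sure_let_denoise(y, sigma):
    """Pointwise SURE-LET sketch: F(y) = a1*t1(y) + a2*t2(y), with
    (a1, a2) minimizing Stein's unbiased risk estimate. SURE is
    quadratic in the parameters, hence a 2x2 linear system."""
    t1 = y
    t2 = y * np.exp(-y**2 / (12 * sigma**2))
    d1 = np.ones_like(y)                                   # d t1 / d y
    d2 = (1 - y**2 / (6 * sigma**2)) * np.exp(-y**2 / (12 * sigma**2))
    T = np.stack([t1, t2])
    M = T @ T.T                                            # Gram matrix
    c = np.array([t1 @ y - sigma**2 * d1.sum(),            # SURE linear term
                  t2 @ y - sigma**2 * d2.sum()])
    a = np.linalg.solve(M, c)
    return a[0] * t1 + a[1] * t2
```

Because SURE is an unbiased estimate of the MSE, minimizing this quadratic form tracks the true MSE closely for large sample sizes.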
Signal-dependent noise characterization in Haar filterbank representation
Owing to the properties of joint time-frequency analysis that compress energy and approximately decorrelate
temporal redundancies in sequential data, filterbanks and wavelets are popular and convenient platforms for
statistical signal modeling. Motivated by the prior knowledge and empirical studies, much of the emphasis in
signal processing has been placed on the choice of the prior distribution for these transform coefficients. In this
paradigm however, the issues pertaining to the loss of information due to measurement noise are difficult to
reconcile because the effects of point-wise signal-dependent noise permeate across scale and through multiple
coefficients. In this work, we show how a general class of
signal-dependent noise can be characterized to an
arbitrary precision in a Haar filterbank representation, and the corresponding maximum a posteriori estimate
for the underlying signal is developed. Moreover, the structure of noise in the transform domain admits a variant
of Stein's unbiased estimate of risk conducive to processing the corrupted signal in the transform domain. We
discuss estimators involving a Poisson process, a situation that arises often in real-world applications such as
communications, signal processing, and imaging.
Double-density complex wavelet cartoon-texture decomposition
Both the Kingsbury dual-tree and the subsequent Selesnick double-density dual-tree complex wavelet transforms
approximate an analytic function. The classification of the phase dependency across scales is largely unexplored,
except by Romberg et al. Here we characterize the sub-band dependency of the orientation of phase gradients
by applying the Helmholtz principle to bivariate histograms to locate meaningful modes. A further characterization
using the Earth Mover's Distance with the fundamental
Rudin-Osher-Meyer Banach space decomposition
into cartoon and texture elements is presented. Possible applications include image compression and invariant
descriptor selection for image matching.
Modeling and estimation of wavelet coefficients using elliptically-contoured multivariate Laplace vectors
In this paper, we are interested in modeling groups of wavelet coefficients using a zero-mean, elliptically-contoured
multivariate Laplace probability distribution function (pdf). Specifically, we are interested in the
problem of estimating a d-point Laplace vector, s, in additive white Gaussian noise (AWGN), n, from an
observation, y = s + n. In the scalar case (d = 1), the MAP and MMSE estimators are already known;
and in the vector case (d > 1), the MAP estimator can be obtained by an iterative successive substitution
algorithm. For the special case where the contour of the Laplace pdf is spherical, the MMSE estimators for
the vector case (d > 1) have been derived in our previous work, where we showed that the MMSE estimator can be expressed in terms of the generalized incomplete Gamma function. For the general elliptically-contoured case, the MMSE estimator cannot be expressed as such. In this paper, we therefore investigate approximations to the MMSE estimator of a Laplace vector in AWGN.
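For reference, the known scalar-case (d = 1) MAP estimator reduces to soft thresholding. A small sketch, assuming the prior is parameterized as p(s) ∝ exp(−√2 |s|/σs) (this normalization is our assumption):

```python
import numpy as np

def laplace_map_scalar(y, sigma_n, sigma_s):
    """Scalar (d = 1) MAP estimate of a Laplacian variable in AWGN.
    With prior p(s) ~ exp(-sqrt(2)|s|/sigma_s) and noise N(0, sigma_n^2),
    the MAP rule is soft thresholding at sqrt(2) * sigma_n^2 / sigma_s."""
    t = np.sqrt(2) * sigma_n**2 / sigma_s
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
```

One can verify the closed form against a brute-force minimization of the negative log-posterior on a fine grid.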
Special Session on Finite-Dimensional Frames, Time-Frequency Analysis, and Applications
Fast algorithms for signal reconstruction without phase
We derive fast algorithms for signal reconstruction without phase. This type of problem is important in
signal processing, especially speech recognition technology, and has relevance for state tomography in quantum
theory. We show that a generic frame gives reconstruction from the absolute value of the frame coefficients in
polynomial time. Improved reconstruction efficiency is obtained with a family of sparse frames or frames
associated with complex projective 2-designs.
Modeling sensor networks with fusion frames
This article presents the new notion of fusion frames. Fusion frames provide an extensive framework
not only for modeling sensor networks, but also for improving robustness and for developing efficient and
feasible reconstruction algorithms. Fusion frames can be regarded as sets of redundant subspaces each of which
contains a spanning set of local frame vectors, where the subspaces have to satisfy special overlapping properties.
Main aspects of the theory of fusion frames will be presented with a particular focus on the design of sensor
networks. New results on the construction of Parseval fusion frames will also be discussed.
Multiscale moment transforms over the integer lattice
Multiscale moment-based transformations, such as the multiscale Harris corner point detector, have been used in
image processing applications for several years. Typically, such transforms are used to identify objects-of-interest
in a given image, which, in turn, facilitate target tracking and registration. Though these transforms are usually applied to digitally sampled images, many of their properties were previously only known to hold for images over continuous domains. We prove that many of these properties indeed generalize to images over discrete domains. In particular, after introducing a mathematically well-behaved method for rotating an image over the two-dimensional integer lattice, we show that this rotation commutes with the moment-based transform in the expected manner.
Burst erasures and the mean-square error for cyclic frames
The general objective in this paper is the loss-insensitive transmission of vectors by linear, redundant
encoding with the help of frames. Our specific goal is to find frames which minimize the mean-square
reconstruction error for cyclic burst erasures with known
burst-length statistics. Finding the best
frames among the cyclic ones is reduced to a discrete optimization problem. We provide upper and
lower bounds for the mean-square error and discuss a family of frames for which both bounds coincide
while the upper bound is minimized.
Special Session on Wavelets in Medical Imaging
Isotropic multiresolution analysis for 3D-textures and applications in cardiovascular imaging
The main goal of this paper is to introduce formally the concept of texture segmentation/identification in three
dimensional images. A major problem in texture segmentation/identification is the lack of robustness to
both translations and rotations. This problem is more difficult to overcome in 3D-images, such as those generated
by modalities such as x-ray CT and MRI. To facilitate 3D-texture segmentation/identification which is robust to
3D rigid motions we formally introduce the concept of steerable feature maps and of appropriate metrics in the
feature space. We also introduce a new multiscale representation giving rise to a steerable feature map used in
our exploratory project in cardiovascular imaging and we propose a
3D-texture segmentation algorithm utilizing
this steerable feature map.
Emerging Applications
Learning adapted dictionaries for geometry and texture separation
This article proposes a new method for image separation into a linear combination of morphological components.
This method is applied to decompose an image into meaningful cartoon and textural layers and is used
to solve more general inverse problems such as image inpainting. For each of these components, a dictionary is
learned from a set of exemplar images. Each layer is characterized by a sparse expansion in the corresponding
dictionary. The separation inverse problem is formalized within a variational framework as the optimization of an
energy functional. The morphological component analysis algorithm solves this optimization problem iteratively
under sparsity-promoting penalties. Using adapted dictionaries learned from data circumvents some of the
difficulties faced by fixed dictionaries. Numerical results demonstrate that this adaptivity is indeed crucial
to capture complex texture patterns.
Morphological diversity and sparsity: new insights into multivariate data analysis
Over the last few years, the development of multi-channel sensors motivated interest in methods for the
coherent processing of multivariate data. From blind source separation (BSS) to multi/hyper-spectral
data restoration, extensive work has already been dedicated to multivariate data processing. Previous
work has emphasized the fundamental role played by sparsity and morphological diversity in
enhancing multichannel signal processing.
Morphological diversity was first introduced in the mono-channel case to deal with contour/texture
extraction. The morphological diversity concept states that the data are a linear combination of several
so-called morphological components which are sparse in different incoherent representations. In
that setting, piecewise smooth features (contours) and oscillating components (textures) are separated
based on their morphological differences assuming that contours (respectively textures) are sparse in the
Curvelet representation (respectively Local Discrete Cosine representation).
In the present paper, we define a multichannel-based framework for sparse multivariate data representation.
We introduce an extension of morphological diversity to the multichannel case which boils down
to assuming that each multichannel morphological component is diversely sparse spectrally and/or spatially.
We propose the Generalized Morphological Component Analysis algorithm (GMCA) which aims
at recovering the so-called multichannel morphological components. Hereafter, we apply the GMCA
framework to two distinct multivariate inverse problems : blind source separation (BSS) and multichannel
data restoration. In the two aforementioned applications, we show that GMCA provides new and
essential insights into the use of morphological diversity and sparsity for multivariate data processing.
Further details and numerical results in multivariate image and signal processing will be given, illustrating
the good performance of GMCA in these distinct applications.
Automated discrimination of shapes in high dimensions
We present a new method for discrimination of data classes or data sets in a high-dimensional space. Our
approach combines two important, relatively new concepts in high-dimensional data analysis, namely Diffusion Maps
and Earth Mover's Distance, in a novel manner so that it is more tolerant to noise and honors the characteristic
geometry of the data. We also illustrate that this method can be used for a variety of applications in high
dimensional data analysis and pattern classification, such as quantifying shape deformations and discrimination
of acoustic waveforms.
Coherent noise removal in seismic data with dual-tree M-band wavelets
Seismic data and their complexity still challenge signal processing algorithms in several applications. The advent
of wavelet transforms has allowed improvements in tackling denoising problems. We propose here coherent noise
filtering in seismic data with the dual-tree M-band wavelet transform, which offers the possibility of decomposing
data locally with improved multiscale directions and frequency bands. Denoising is performed in a deterministic
fashion in the directional subbands, depending on the coherent noise properties. Preliminary results show that
it preserves the seismic signal of interest embedded in highly energetic directional noise consistently better than
discrete critically sampled and redundant separable wavelet transforms.
Special Session on Sparsity and Compressed Sampling
Average case analysis of multichannel sparse approximations using p-thresholding
This paper introduces p-thresholding, an algorithm to compute simultaneous sparse approximations of multichannel
signals over redundant dictionaries. We work out both worst case and average case recovery analyses of this algorithm and show that the latter results in much weaker conditions on the dictionary. Numerical simulations confirm our theoretical findings and show that
p-thresholding is an interesting low complexity alternative to simultaneous greedy or convex relaxation algorithms for processing sparse multichannel signals with balanced coefficients.
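Our reading of the selection rule can be sketched as follows: score each atom by the lp-norm, across channels, of its correlations with the data, keep the strongest atoms, and fit the retained coefficients by least squares. The function below is an illustrative toy version (the naming and the top-k selection are our assumptions), not the analyzed algorithm in full:

```python
import numpy as np

def p_thresholding(Y, D, p, k):
    """Simultaneous sparse approximation sketch for multichannel data.
    Y: (m, n_channels) signals, D: (m, n_atoms) dictionary, p > 0."""
    C = D.T @ Y                                   # per-atom, per-channel correlations
    scores = np.sum(np.abs(C)**p, axis=1)**(1.0 / p)   # l_p norm across channels
    idx = np.argsort(scores)[-k:]                 # keep the k strongest atoms
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[idx] = np.linalg.lstsq(D[:, idx], Y, rcond=None)[0]  # refit coefficients
    return X, np.sort(idx)
```

For an orthonormal dictionary the correlations equal the true coefficients, so a jointly sparse signal is recovered exactly.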
Analytic sensing: direct recovery of point sources from planar Cauchy boundary measurements
Inverse problems play an important role in engineering. A problem that often occurs in electromagnetics (e.g.
EEG) is the estimation of the locations and strengths of point sources from boundary data.
We propose a new technique, for which we coin the term "analytic sensing". First, generalized measures are
obtained by applying Green's theorem to selected functions that are analytic in a given domain and at the same
time localized to "sense" the sources. Second, we use the finite-rate-of-innovation framework to determine the
locations of the sources. Hence, we construct a polynomial whose roots are the sources' locations. Finally, the
strengths of the sources are found by solving a linear system of equations. Preliminary results, using synthetic
data, demonstrate the feasibility of the proposed method.
L0-based sparse approximation: two alternative methods and some applications
We propose two methods for sparse approximation of images under an l2 error metric. The first minimizes the
approximation error for a given lp-norm of the representation through alternating orthogonal projections
onto two sets. We study the cases p = 0 (sub-optimal) and p = 1 (optimal), and find that the l0-AP method is
clearly superior for typical images and overcomplete oriented pyramids. Given that l1-AP is optimal, this shows
that minimizing one norm or the other is not equivalent in practical image processing conditions, contrary
to what is often assumed. The second method is more powerful: it performs gradient descent on decreasingly
smoothed versions of the sparse approximation cost function, yielding a method previously proposed as a
heuristic. We adapt these techniques for being applied to image restoration, with very positive results.
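The p = 0 alternating-projection idea can be sketched in a small generic setting: alternate between the affine set of exact representations {c : Dc = y} and the (non-convex) set of k-sparse coefficient vectors. This toy version ignores the image-specific machinery (overcomplete oriented pyramids) used in the paper:

```python
import numpy as np

def l0_alternating_projections(y, D, k, n_iter=200):
    """l0-AP sketch: alternate projections between {c : D c = y}
    (orthogonal projection via the pseudo-inverse) and the set of
    k-sparse vectors (keep the k largest-magnitude entries)."""
    Dp = np.linalg.pinv(D)
    c = Dp @ y                                # start at the min-norm solution
    for _ in range(n_iter):
        s = np.zeros_like(c)
        idx = np.argsort(np.abs(c))[-k:]
        s[idx] = c[idx]                       # project onto k-sparse set
        c = s + Dp @ (y - D @ s)              # project back onto {c : Dc = y}
    s = np.zeros_like(c)                      # final k-sparse approximation
    idx = np.argsort(np.abs(c))[-k:]
    s[idx] = c[idx]
    return s
```

When the intersection of the two sets is non-empty (an exactly k-sparse representation exists), the iteration typically converges to it for modest sparsity levels.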
Compressive phase retrieval
The theory of compressive sensing enables accurate and robust signal reconstruction from a number of measurements
dictated by the signal's structure rather than its Fourier bandwidth. A key element of the theory is
the role played by randomization. In particular, signals that are compressible in the time or space domain can
be recovered from just a few randomly chosen Fourier coefficients. However, in some scenarios we can only observe
the magnitude of the Fourier coefficients and not their phase. In this paper, we study the magnitude-only
compressive sensing problem and in parallel with the existing theory derive sufficient conditions for accurate
recovery. We also propose a new iterative recovery algorithm and study its performance. In the process, we
develop a new algorithm for the phase retrieval problem that exploits a signal's compressibility rather than its
support to recover it from Fourier transform magnitude measurements.
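A minimal alternating-projection sketch of magnitude-only recovery with a sparsity prior, in the spirit of the abstract: this is a simplified Fienup-style loop, not the algorithm proposed in the paper, and `sparse_fienup` with its parameters is hypothetical.

```python
import numpy as np

def sparse_fienup(mag, k, n_iter=100, seed=0):
    """Alternate between matching the measured Fourier magnitudes and
    keeping the k largest-magnitude samples (a sparsity projection)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(mag))
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))   # enforce measured magnitudes
        x = np.fft.ifft(X).real
        keep = np.argsort(np.abs(x))[-k:]    # project onto k-sparse signals
        xs = np.zeros_like(x)
        xs[keep] = x[keep]
        x = xs
    return x
```
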
Annihilating filter-based decoding in the compressed sensing framework
Show abstract
Recent results in compressed sensing, or compressive sampling, suggest that a relatively small set of measurements, taken as inner products with universal random measurement vectors, can represent well a source that is sparse in some fixed basis. By adopting a deterministic, non-universal and structured sensing device, this paper presents results on using the annihilating filter to decode the information acquired in this new compressed sensing environment. The information is the minimum amount of nonadaptive knowledge that makes it possible to go back to the original object. We show that for a k-sparse signal of dimension n, the proposed decoder needs 2k measurements and its complexity is O(k²), whereas for decoding based on l1 minimization, the number of measurements must be O(k log n) and the complexity is O(n³). In the case of noisy measurements, we first denoise the signal using an iterative algorithm that finds the closest rank-k Toeplitz matrix to the measurement matrix (in Frobenius norm) before applying the annihilating filter method. Furthermore, for a k-sparse vector with known equal coefficients, we propose an algebraic decoder which needs only k measurements for signal reconstruction. Finally, we provide simulation results that demonstrate the performance of our algorithm.
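The denoising step described above, finding the closest rank-k Toeplitz matrix to the measurement matrix, is commonly implemented as a Cadzow-style iteration. The sketch below alternates a truncated SVD with diagonal averaging; it is an illustration under the assumption that the matrix is built directly from the samples, not the authors' code.

```python
import numpy as np

def cadzow_denoise(s, k, n_iter=30):
    """Alternate rank-k truncation and Toeplitz projection on a matrix
    built from the samples s, then read the denoised samples back off."""
    n = len(s)
    rows = n // 2 + 1
    cols = n - rows + 1
    T = np.array([[s[cols - 1 + i - j] for j in range(cols)]
                  for i in range(rows)], dtype=float)
    for _ in range(n_iter):
        # closest rank-k matrix: truncated SVD
        U, sv, Vh = np.linalg.svd(T, full_matrices=False)
        T = (U[:, :k] * sv[:k]) @ Vh[:k]
        # closest Toeplitz matrix: average each diagonal
        for m in range(-(cols - 1), rows):
            i = np.arange(max(0, m), min(rows, cols + m))
            T[i, i - m] = T[i, i - m].mean()
    return np.array([T[max(0, m), max(0, m) - m]
                     for m in range(-(cols - 1), rows)])
```
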
Sampling signals from a union of shift-invariant subspaces
Show abstract
We study a sampling problem where sampled signals come from a known union of shift-invariant subspaces
and the sampling operator is a linear projection of the sampled signals into a fixed shift-invariant subspace.
In practice, the sampling operator can be easily implemented by a multichannel uniform sampling procedure.
We present necessary and sufficient conditions for invertible and stable sampling operators in this framework,
and provide the corresponding minimum sampling rate. As an application of the proposed general sampling
framework, we study the specific problem of spectrum-blind sampling of multiband signals. We extend the previous results of Bresler et al. by showing that a large class of sampling kernels can be used in this sampling problem, all of which lead to stable sampling at the minimum sampling rate.
Image assimilation by geometric wavelet based reaction-diffusion equation
Show abstract
Denoising is a persistent challenge in natural image and geophysical data processing. From the point of view of data assimilation, in this paper we consider the denoising of texture images in a scale space using a geometric-wavelet-based nonlinear reaction-diffusion equation, in which a curvelet shrinkage regularizes the diffusivity to preserve important features during diffusion smoothing, and a wave atom shrinkage serves as a pseudo-observation in the reaction term to enhance interesting oriented textures. We name this general framework image assimilation. Its goal is to link together rich information, such as sparsity constraints in multiscale geometric spaces, in order to retrieve the state of the images. As a byproduct, we propose a 2D wavelet-inspired numerical scheme for solving the nonlinear diffusion equation. Experimental results show the performance of the proposed model for texture-preserving denoising and enhancement.
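In one dimension, the structure of such a reaction-diffusion step can be sketched as below. The curvelet-shrunk diffusivity and the wave-atom pseudo-observation of the paper are replaced here by a simple Perona-Malik diffusivity and a generic observation term `obs`; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def reaction_diffusion_step(u, obs, dt=0.1, lam=0.05, kappa=1.0):
    """One explicit step: edge-stopping diffusion plus a reaction term
    pulling the state toward a pseudo-observation `obs`."""
    g = np.gradient(u)
    c = 1.0 / (1.0 + (g / kappa) ** 2)   # edge-stopping diffusivity
    flux = c * g
    return u + dt * np.gradient(flux) + dt * lam * (obs - u)
```
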
Keynote Session
A helicopter view of the self-consistency framework for wavelets and other signal extraction methods in the presence of missing and irregularly spaced data
Show abstract
A common frustration in signal processing and, more generally, information recovery is the presence of irregularities
in the data. At best, the standard software or methods will no longer be directly applicable when
data are missing, incomplete or irregularly spaced (e.g., as with wavelets). Self-consistency is a very general
and powerful statistical principle for dealing with such problems. Conceptually it is extremely appealing, for it
is essentially a mathematical formalization of iterating
common-sense "trial-and-error" methods until no more
improvement is possible. Mathematically it is elegant, with one
fixed-point equation to solve and a general
projection theorem to establish optimality. Practically it is straightforward to program because it directly uses
the regular/complete-data method for iteration. Its major disadvantage is that it can be computationally intensive.
However, increasingly efficient (approximate) implementations are being discovered, such as for wavelet
de-noising with hard and soft thresholding. This brief overview summarizes the author's keynote presentation on
those points, based on joint work with Thomas Lee on wavelet applications and with Zhan Li on the theoretical
properties of the self-consistent estimators.
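A toy version of the self-consistency iteration for missing data can be sketched as follows: fill the gaps with the current estimate, apply the complete-data method, and repeat until the fixed point is reached. Here the complete-data method is a stand-in hard thresholding in the DFT domain (not the wavelet estimators of the talk), and all names are hypothetical.

```python
import numpy as np

def complete_data_fit(x, m):
    """Stand-in 'complete-data method': keep the m largest-magnitude
    DFT coefficients (hard thresholding in an orthonormal basis)."""
    X = np.fft.fft(x)
    keep = np.argsort(np.abs(X))[-m:]
    Xs = np.zeros_like(X)
    Xs[keep] = X[keep]
    return np.fft.ifft(Xs).real

def self_consistent(y, mask, m, n_iter=100):
    """Self-consistency iteration: impute the gaps with the current
    estimate, re-apply the complete-data method, iterate to the fixed point."""
    x = np.where(mask, y, 0.0)
    for _ in range(n_iter):
        x = complete_data_fit(np.where(mask, y, x), m)
    return x
```
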
Poster Session
Discrete unitary transforms generated by moving waves
Show abstract
This paper describes a new class of discrete heap transforms: unitary, energy-preserving transforms induced by input signals. These transforms have a simple composition and fast algorithms for any size of processed signal. We consider heap transforms defined by two-dimensional elementary rotations satisfying given decision equations. The main feature of each heap transform is its system of basis functions, which themselves represent a family of interacting waves moving in the field generated by the input signal. Properties and examples of heap transforms, which we also call discrete signal-induced heap transforms, are described in detail.
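A minimal sketch of a signal-induced unitary transform built from plane rotations, in the spirit of the heap transforms described above. The decision equations used here, sweeping the generator into its first component, are an illustrative assumption, not necessarily the authors' choice.

```python
import numpy as np

def heap_angles(g):
    """Rotation angles that sweep the generator g into its first
    component; the same rotations, applied to any input, define a
    unitary (energy-preserving) transform induced by g."""
    g = np.asarray(g, dtype=float).copy()
    angles = []
    for i in range(1, len(g)):
        t = np.arctan2(g[i], g[0])
        c, s = np.cos(t), np.sin(t)
        g[0], g[i] = c * g[0] + s * g[i], 0.0
        angles.append(t)
    return angles

def heap_apply(x, angles):
    """Apply the generator-induced sequence of plane rotations to x."""
    x = np.asarray(x, dtype=float).copy()
    for i, t in enumerate(angles, start=1):
        c, s = np.cos(t), np.sin(t)
        x0, xi = x[0], x[i]
        x[0], x[i] = c * x0 + s * xi, -s * x0 + c * xi
    return x
```
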
New discrete unitary Haar-type heap transforms
Show abstract
This paper introduces a new class of discrete unitary transformations, the so-called discrete Haar-type heap transformations (DHHT), which are induced by input signals and follow a path similar to the traditional Haar transformation. These transformations are fast, performed by simple rotations, can be composed for any order, and their complete systems of basis functions themselves represent variable waves generated by the signals. The 2^r-point discrete Haar transform is the particular case of the proposed transformations in which the generator is the constant sequence {1, 1, 1, ..., 1}. These transformations can be used in many applications and improve on the results of the Haar transformation. As an example, we illustrate signal approximation in a simple compression scheme obtained by truncating the coefficients of the discrete Haar-type heap transform.
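The constant-generator special case, the ordinary unitary Haar transform built from 45-degree plane rotations, can be sketched as:

```python
import numpy as np

def haar(x):
    """Unitary Haar transform on 2**r points, built from the rotations
    (a, b) -> ((a + b)/sqrt(2), (a - b)/sqrt(2)): the special case of
    the Haar-type heap transform with constant generator {1, 1, ..., 1}."""
    approx = np.asarray(x, dtype=float)
    details = []
    while len(approx) > 1:
        a, b = approx[0::2], approx[1::2]
        details.append((a - b) / np.sqrt(2))   # detail coefficients, kept
        approx = (a + b) / np.sqrt(2)          # averages, recursed on
    return np.concatenate([approx] + details[::-1])
```

Truncating the smallest coefficients of such a transform gives the simple compression experiment mentioned above.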
High-dimensional data compression via PHLCT
Show abstract
The polyharmonic local cosine transform (PHLCT), presented by Yamatani and Saito in 2006, is a new tool for local
image analysis and synthesis. It can compress and decompress images with better visual fidelity, fewer blocking artifacts, and better PSNR than the JPEG-DCT algorithm. Here, we generalize PHLCT to the high-dimensional case and apply it to compress high-dimensional data. For this purpose, we give the solution of the high-dimensional Poisson equation with the Neumann boundary condition. In order to reduce the number of PHLCT coefficients, we use not only the d-dimensional PHLCT decomposition, but also the d-1, d-2, ..., 1 dimensional PHLCT decompositions. We find that our algorithm compresses high-dimensional data more efficiently than the block DCT algorithm. We demonstrate our claim using both synthetic and real 3D datasets.
Stability of the multifractal spectra by transformations of discrete series
Show abstract
In this work we use α-bi-Lipschitz transformations of signals from both empirical and theoretical sources to obtain new tests for whether the multifractal formalisms associated with various methods (Wavelet Leaders, Wavelet Transform Modulus Maxima, Multifractal Detrended Fluctuation Analysis, Box Counting, and others) hold, and we give improvements of the present algorithms that are numerically more trustworthy. In theory the multifractal spectrum is unchanged by such transformations, but its numerical computation may differ for discrete series, so we can analyze this variation to study the stability of the algorithms proposed to compute it.
In addition, some single coefficients that have been proposed to quantify the overall irregularity of a signal are preserved by α-bi-Lipschitz transformations with sufficiently high α.
We demonstrate the performance of the tests and the improvements of these methods not only on signals generated by deterministic (or sometimes random) numerical processes, but also on series from empirical sources in which the multifractal spectrum and the irregularity coefficient have proven useful for both analysis and segmentation of the signal into significant parts: series of outgoing longwave radiation over tropical regions (with consequent applications to precipitation forecasting) and certain EEG series (for instance, from patients with absence seizures), where they can distinguish, and perhaps predict, the onset of consecutive stages.
Wavelet packets frames in multiresolution structures
Show abstract
Often, in signal processing applications, it is useful and convenient to have at hand flexible tools that provide time-frequency information from the wavelet coefficients; this is the role of the well-known wavelet packet techniques. The main purpose of this work is to explore and discuss several alternatives in this field related to orthogonal spline wavelets.
Wavelet-based stereo images reconstruction using depth images
Show abstract
It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that gives the distance from the camera to a point on the object as a function of the image coordinates. Using this depth information and the original image, it is possible to reconstruct a virtual image from a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their positions in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission and making it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems.
In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images which solves some of the shortcomings of existing methods. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more faithful reconstruction of the virtual view. Motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field.
The evaluation of the proposed reconstruction method is performed on two video sequences that are typically used for comparison of stereo reconstruction algorithms. The results demonstrate the advantages of the proposed approach over state-of-the-art methods, in terms of both objective and subjective performance measures.
Estimation of chirp rates of music-adapted prolate spheroidal atoms using reassignment
Show abstract
We introduce a modified Matching Pursuit algorithm for estimating the frequency and frequency slope of FM-modulated music signals. Matching Pursuit with constant-frequency atoms provides coarse estimates that can be improved with chirped atoms, which are in principle better suited to this kind of signal. The use of the reassignment method is suggested by its good localization properties for chirps.
We start by considering a family of atoms generated by modulation and scaling of a prolate spheroidal wave function. These functions are concentrated in frequency on intervals of a semitone centered at the frequencies of the well-tempered scale.
At each stage of the pursuit, we search for the atom most correlated with the signal. We then consider the spectral peaks at each frame of the spectrogram and calculate a modified frequency and frequency slope using the derivatives of the reassignment operators; these are then used to estimate the parameters of a cubic interpolation polynomial that models local pitch fluctuations.
We apply the method both to synthetic and music signals.
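The greedy selection step described above (pick the atom most correlated with the signal, subtract its contribution, repeat) is the core of Matching Pursuit and can be sketched generically. The chirp and reassignment refinements of the paper are omitted; `D` is any dictionary with unit-norm atoms as columns.

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy pursuit: at each stage pick the atom most correlated with
    the residual, record its coefficient, subtract its contribution."""
    r = np.asarray(x, dtype=float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        c = D.T @ r                      # correlations with the residual
        j = np.argmax(np.abs(c))         # best-matching atom
        coeffs[j] += c[j]
        r -= c[j] * D[:, j]              # subtract its contribution
    return coeffs, r
```
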
Affine scaling transformation algorithms for harmonic retrieval in a compressive sampling framework
Show abstract
In this paper we investigate the use of the Affine Scaling Transformation (AST) family of algorithms in solving the
sparse signal recovery problem of harmonic retrieval for the DFT-grid frequencies case. We present the problem in the
more general Compressive Sampling/Sensing (CS) framework where any set of incomplete, linearly independent
measurements can be used to recover or approximate a sparse signal. The compressive sampling problem has been
approached mostly as a problem of l1 norm minimization, which can be solved via an associated linear programming
problem. More recently, attention has shifted to the random linear projection measurements case. For the harmonic
retrieval problem, we focus on linear measurements in the form of: consecutively located time samples, randomly
located time samples, and (Gaussian) random linear projections. We use the AST family of algorithms which is
applicable to the more general problem of minimizing the p-norm-like diversity measure, which includes the numerosity (p = 0) and the l1 norm (p = 1). Of particular interest in this paper is to find experimentally a relationship between the minimum number M of measurements needed for perfect recovery and the number of components K of the sparse signal, which is N samples long. Of further interest is the number of AST iterations required to converge for various values of the parameter p. In addition, we quantify the reconstruction error to assess the closeness of the AST solution to the original signal. Results show that AST for p = 1 requires 3-5 times more iterations to converge than AST for p = 0. The minimum number of measurements needed for perfect recovery is approximately the same on average for all values of p; however, the spread increases as p is reduced from p = 1 to p = 0. Finally, we briefly contrast the AST results with those obtained using another l1 minimization solver.
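The AST/FOCUSS-type iteration for the p-norm-like diversity measure can be sketched as an iteratively reweighted minimum-norm solver. This is a generic sketch of the idea with hypothetical names, not the authors' exact implementation.

```python
import numpy as np

def focuss(A, y, p=0.0, n_iter=50):
    """Each step reweights the columns of A by |x|^(1 - p/2), so the
    minimum-norm solution is pulled toward sparse minimizers of the
    p-norm-like diversity measure (p = 0: numerosity; p = 1: l1 norm)."""
    x = np.linalg.pinv(A) @ y             # minimum-l2-norm starting point
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)  # affine-scaling weights
        x = w * (np.linalg.pinv(A * w) @ y)
    return x
```
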
Short-term spectral analysis and synthesis improvements with oversampled inverse filter banks
Show abstract
The Short-Term Fourier Transform (STFT) is a classical linear time-frequency (T-F) representation. Despite its relative simplicity, it has become a standard tool for the analysis of non-stationary signals. Since it provides a redundant representation, it raises several issues: (i) the choice of an "optimal" analysis window, (ii) the existence and determination of an inverse transformation, (iii) the performance of analysis-modification-synthesis, or of the reconstruction of selected components of the time-frequency plane, and (iv) the controllability of the redundancy for low-cost applications, e.g. real-time computation. We address some of these issues, as well as the less often mentioned problem of transform symmetry in the inverse, through oversampled filter banks (FBs) and their optimized inverse(s), in a slightly more general setting than the discrete windowed Fourier transform.
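A minimal analysis-synthesis pair illustrating issues (ii) and (iii), inversion and reconstruction by weighted overlap-add, can be sketched as below. This is a generic sketch with least-squares normalization, not the optimized inverses of the paper.

```python
import numpy as np

def stft(x, win, hop):
    """Windowed-DFT analysis: frames of len(win) samples every `hop`."""
    n = len(win)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, hop)]
    return np.fft.rfft(frames, axis=1)

def istft(X, win, hop, length):
    """Weighted overlap-add synthesis with least-squares normalization:
    exact reconstruction wherever at least one window value is nonzero."""
    n = len(win)
    y = np.zeros(length)
    norm = np.zeros(length)
    for k, frame in enumerate(np.fft.irfft(X, n=n, axis=1)):
        y[k * hop:k * hop + n] += frame * win
        norm[k * hop:k * hop + n] += win ** 2
    return y / np.maximum(norm, 1e-12)
```
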
Tight frame wavelets with equal norms highpass and bandpass filters
Show abstract
The paper presents new tight frame dyadic limit functions with dense time-frequency grid. The filterbank consists
of one lowpass filter and three bandpass and/or highpass filters. We add the requirement that the bandpass and
highpass filters be all of equal norms. All the filters in the paper are FIR and enjoy vanishing moments.
Wavelet-based denoising using local Laplace prior
Show abstract
Although wavelet-based image denoising is a powerful tool for image processing applications, relatively few publications have so far addressed wavelet-based video denoising. The main reason is that the standard 3-D data transforms do not provide representations with good energy compaction for most video data. For example, the multi-dimensional standard separable discrete wavelet transform (M-D DWT) mixes orientations and motions in its subbands and produces checkerboard artifacts. So, instead of the M-D DWT, oriented transforms such as the multi-dimensional complex wavelet transform (M-D DCWT) are usually proposed for video processing. In this paper we use a Laplace distribution with local variance to model the statistical properties of noise-free wavelet coefficients. This distribution simultaneously models the heavy-tailed and intrascale-dependency properties of wavelets. Using this model, simple shrinkage functions are obtained employing maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimators. These shrinkage functions are proposed for video denoising in the DCWT domain. The simulation results show that this simple denoising method has impressive performance, both visually and quantitatively.
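For a Laplacian prior with locally varying scale under additive Gaussian noise, the MAP estimate is the well-known soft-thresholding rule with a spatially adaptive threshold. The sketch below uses hypothetical names and stands in for the estimators derived in the paper.

```python
import numpy as np

def laplace_map_shrink(y, sigma_n, sigma_local):
    """MAP shrinkage for a Laplacian prior with local scale sigma_local
    and Gaussian noise std sigma_n: soft thresholding with the adaptive
    threshold T = sqrt(2) * sigma_n**2 / sigma_local."""
    T = np.sqrt(2) * sigma_n ** 2 / np.maximum(sigma_local, 1e-12)
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)
```
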
Modeling statistical properties of wavelets using a mixture of bivariate cauchy models and its application for image denoising in complex wavelet domain
Show abstract
In this paper, we design a bivariate maximum a posteriori (MAP) estimator that models the prior of wavelet coefficients as a mixture of bivariate Cauchy distributions. This model is not only a mixture but also bivariate. Since mixture models are able to capture the heavy-tailed property of wavelets and bivariate distributions can model the intrascale dependencies of wavelet coefficients, this bivariate mixture probability density function (pdf) can better capture the statistical properties of wavelet coefficients. The simulation results show that our proposed technique achieves better performance, both visually and in terms of peak signal-to-noise ratio (PSNR), than methods employing non-mixture pdfs such as the bivariate Cauchy pdf and the circularly symmetric Laplacian pdf. We also compare our algorithm with several recently published denoising methods and find that it is among the best reported in the literature.
A refining estimation for adaptive solution of wave equation based on curvelets
Show abstract
This paper presents a refinement estimate to control the process of adaptive mesh refinement (AMR), based on the curvelet transform. The curvelet is a recently developed geometric multiscale system that provides optimal approximation of functions with curve singularities and sparse representations of wavefront phenomena. Exploiting these advantages, we introduce the curvelet transform into AMR as a refinement criterion for adaptively solving the wave equation. Numerical simulations show that the proposed method captures the areas where refinement is needed, so that a high-accuracy result is obtained.
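The refinement-criterion idea can be sketched in one dimension, using a crude first-difference detail in place of the curvelet coefficients of the paper; the function and threshold are purely illustrative.

```python
import numpy as np

def refine_mask(u, thresh):
    """Flag cells whose local detail magnitude exceeds a threshold,
    marking where the mesh should be refined."""
    d = np.abs(np.diff(u))               # crude 1D detail coefficients
    mask = np.zeros(len(u), dtype=bool)
    mask[:-1] |= d > thresh              # flag both cells sharing a
    mask[1:] |= d > thresh               # large-detail interface
    return mask
```
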