- Front Matter: Volume 7246
- Microscopy
- Medical Imaging
- Inverse methods
- Sparse and Adaptive Signal Processing
- Segmentation
- Interpolation and Inpainting
- Mathematical Imaging
- Statistical Imaging
- Registration
- Image Processing Applications
- Interactive Paper Session
Front Matter: Volume 7246
This PDF file contains the front matter associated with SPIE
Proceedings Volume 7246, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
Microscopy
Quantitative phase and amplitude imaging using Differential-Interference Contrast (DIC) microscopy
We present an extension of the development of an alternating minimization (AM) method for the computation
of a specimen's complex transmittance function (magnitude and phase) from DIC images. The ability to extract
both quantitative phase and amplitude information from two rotationally-diverse DIC images (i.e., acquired by
rotating the sample) extends previous efforts in computational DIC microscopy that have focused on quantitative
phase imaging only. Simulation results show that the inverse problem at hand is sensitive to noise as well as
to the choice of the AM algorithm parameters. The AM framework allows constraints and penalties on the
magnitude and phase estimates to be incorporated in a principled manner. Towards this end, Green and De
Pierro's "log-cosh" regularization penalty is applied to the magnitude of differences of neighboring values of
the complex-valued function of the specimen during the AM iterations. The penalty is shown to be convex in
the complex space. A procedure to approximate the penalty within the iterations is presented. In addition, a
methodology to pre-compute AM parameters that are optimal with respect to the convergence rate of the AM
algorithm is also presented. Both extensions of the AM method are investigated with simulations.
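The log-cosh penalty on neighbor differences can be illustrated numerically (a minimal sketch, not the paper's AM implementation; the 4-neighbor system and the scale parameter delta are assumptions for illustration):

```python
import numpy as np

def log_cosh_penalty(f, delta=1.0):
    """Green and De Pierro's "log-cosh" penalty applied to the magnitudes of
    differences between horizontally and vertically adjacent values of a
    complex-valued specimen function f. delta (assumed) sets the transition
    from quadratic behavior for small differences to linear for large ones."""
    dx = np.abs(f[:, 1:] - f[:, :-1])  # horizontal neighbor differences
    dy = np.abs(f[1:, :] - f[:-1, :])  # vertical neighbor differences
    return np.sum(np.log(np.cosh(dx / delta))) + np.sum(np.log(np.cosh(dy / delta)))

# A constant specimen incurs zero penalty; any spatial variation increases it.
rng = np.random.default_rng(0)
flat = np.ones((8, 8), dtype=complex)
rough = flat + 0.1 * (rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8)))
```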
Medical Imaging
Smoothing fields of frames using conjugate norms on reproducing kernel Hilbert spaces
Diffusion tensor imaging provides structural information in medical images in the form of a symmetric positive-definite
matrix that gives, at each point, the covariance of water diffusion in the tissue. We here describe a new
approach designed for smoothing this tensor by directly acting on the field of frames provided by the eigenvectors
of this matrix. Using a representation of fields of frames as linear forms acting on smooth tensor fields, we use
the theory of reproducing kernel Hilbert spaces to design a measure of smoothness based on kernels, which is
then used in a denoising algorithm. We illustrate this with brain images and show the impact of the procedure
on the output of fiber tracking in white matter.
A new method for FMRI activation detection
The objective of fMRI data analysis is to detect the region of the brain that gets activated in response to a specific
stimulus presented to the subject. We develop a new algorithm for activation detection in event-related fMRI
data. We utilize a forward model for fMRI data acquisition which explicitly incorporates physiological noise,
scanner noise, and the spatial blurring introduced by the scanner. After a slice-by-slice image restoration
procedure that independently restores the data slice at each time index, we estimate the parameters of the
hemodynamic response function (HRF) model for each pixel of the restored data. In order to enforce spatial
regularity in our estimates, we model the prior distribution of the HRF parameters as a generalized Gaussian
Markov random field (GGMRF) model. We develop an algorithm to compute the maximum a posteriori (MAP)
estimates of the parameters. We then threshold the amplitude parameters to obtain the final activation map. We
illustrate our algorithm by comparing it with the widely used general linear model (GLM) method. In synthetic
data experiments, under the same probability of false alarm, the probability of correct detection for our method
is up to 15% higher than GLM. In real data experiments, through anatomical analysis and benchmark testing
using block paradigm results, we demonstrate that our algorithm produces fewer false alarms than GLM.
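The spatial-regularity prior can be sketched as a generalized Gaussian MRF energy over a 2-D HRF parameter map (a simplified 4-neighbor illustration; the exponent p and scale sigma here are illustrative values, not the paper's):

```python
import numpy as np

def ggmrf_energy(theta, p=1.2, sigma=1.0):
    """Negative log-prior (up to constants) of a generalized Gaussian Markov
    random field over a 2-D parameter map theta, with a 4-neighbor system.
    p in (1, 2] trades edge preservation (p near 1) against smoothness (p = 2)."""
    dx = np.abs(theta[:, 1:] - theta[:, :-1]) ** p
    dy = np.abs(theta[1:, :] - theta[:-1, :]) ** p
    return (dx.sum() + dy.sum()) / (p * sigma ** p)

# A spatially regular parameter map is favored (lower energy) over a rough one.
smooth = np.zeros((6, 6))
rough = (np.indices((6, 6)).sum(axis=0) % 2).astype(float)  # checkerboard
```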
Bayesian multiresolution method for local X-ray tomography in dental radiology
Dental tomographic cone-beam X-ray imaging devices record truncated projections and reconstruct a region of
interest (ROI) inside the head. Image reconstruction from the resulting local tomography data is an ill-posed
inverse problem. A Bayesian multiresolution method is proposed for the local tomography reconstruction. The
inverse problem is formulated in a well-posed statistical form where a prior model of the tissues compensates
for the incomplete projection data. Tissues are represented in a reduced wavelet basis, and prior information
is modeled in terms of a Besov norm penalty. The number of unknowns in the inverse problem is reduced by
abandoning fine-scale wavelets outside the ROI. Compared to traditional voxel-based reconstruction methods,
this multiresolution approach allows a significant reduction in the number of unknown parameters without loss of
reconstruction accuracy inside the ROI, as shown by two-dimensional examples using simulated local tomography
data.
Inverse methods
Fast space-varying convolution and its application in stray light reduction
Space-varying convolution often arises in the modeling or restoration of images captured by optical imaging
systems. For example, in applications such as microscopy or photography the distortions introduced by lenses
typically vary across the field of view, so accurate restoration also requires the use of space-varying convolution.
While space-invariant convolution can be efficiently implemented with the Fast Fourier Transform (FFT),
space-varying convolution requires direct implementation of the convolution operation, which can be very computationally
expensive when the convolution kernel is large.
In this paper, we develop a general approach to the efficient implementation of space-varying convolution
through the use of matrix source coding techniques. This method can dramatically reduce computation by
approximately factoring the dense space-varying convolution operator into a product of sparse transforms. This
approach leads to a tradeoff between the accuracy and speed of the operation that is closely related to the
distortion-rate tradeoff that is commonly made in lossy source coding.
We apply our method to the problem of stray light reduction for digital photographs, where convolution
with a spatially varying stray light point spread function is required. The experimental results show that our
algorithm can achieve a dramatic reduction in computation while achieving high accuracy.
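For reference, the direct space-varying convolution that the matrix source coding approach accelerates looks like this (a naive sketch; kernel_at is a hypothetical callback returning the local PSF):

```python
import numpy as np

def space_varying_convolve(image, kernel_at):
    """Direct space-varying convolution: kernel_at(r, c) is a (hypothetical)
    callback returning the local, odd-sized 2-D PSF for output pixel (r, c).
    The quadruple loop makes the cost explicit: O(K^2) work per pixel."""
    rows, cols = image.shape
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            k = kernel_at(r, c)
            kh, kw = k.shape[0] // 2, k.shape[1] // 2
            for i in range(-kh, kh + 1):
                for j in range(-kw, kw + 1):
                    if 0 <= r + i < rows and 0 <= c + j < cols:
                        out[r, c] += k[kh + i, kw + j] * image[r + i, c + j]
    return out
```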
Joint deconvolution and imaging
We investigate a Wiener fusion method to optimally combine multiple estimates
for the problem of image deblurring given a known blur and a corpus of sharper training images.
Nearest-neighbor estimation of high frequency information from training images is fused
with a standard Wiener deconvolution estimate. Results show an improvement in sharpness
and decreased artifacts compared to either the standard Wiener filter or the nearest-neighbor
reconstruction.
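The standard Wiener half of the fusion can be sketched in a few lines (frequency-domain form with an assumed scalar noise-to-signal ratio; the nearest-neighbor high-frequency estimate and the fusion weights are not reproduced here):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a known blur. nsr is an
    assumed scalar noise-to-signal power ratio; in practice it would be
    estimated from the data or the training corpus."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```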
Sparse and Adaptive Signal Processing
Dictionaries for sparse representation and recovery of reflectances
The surface reflectance function of many common materials varies slowly over the visible wavelength range. For
this reason, linear models with a small number of bases (5-8) are frequently used for representation and estimation
of these functions. In other signal representation and recovery applications, it has been recently demonstrated
that dictionary based sparse representations can outperform linear model approaches. In this paper, we describe
methods for building dictionaries for sparse estimation of reflectance functions. We describe a method for building
dictionaries that account for the measurement system; in estimation applications these dictionaries outperform
the ones designed for sparse representation without accounting for the measurement system. Sparse recovery
methods typically outperform traditional linear methods by 20-40% (in terms of RMSE).
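Sparse estimation with a dictionary can be illustrated with orthogonal matching pursuit, one common recovery routine (an illustrative sketch; the paper's dictionary construction and measurement-aware design are not shown):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms (columns of D,
    assumed unit-norm) to sparsely represent the signal y, refitting the
    coefficients by least squares after each selection."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x
```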
Dantzig selector homotopy with dynamic measurements
The Dantzig selector is a near ideal estimator for recovery of sparse signals from linear measurements in the
presence of noise. It is a convex optimization problem which can be recast into a linear program (LP) for real
data, and solved using some LP solver. In this paper we present an alternative approach to solve the Dantzig
selector which we call "Primal Dual pursuit" or "PD pursuit". It is a homotopy continuation based algorithm,
which iteratively computes the solution of Dantzig selector for a series of relaxed problems. At each step the
previous solution is updated using the optimality conditions defined by the Dantzig selector. We will also discuss
an extension of PD pursuit which can quickly update the solution for Dantzig selector when new measurements
are added to the system. We will present the derivation and working details of these algorithms.
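For context, the plain LP recast of the Dantzig selector, the baseline that a homotopy method like PD pursuit is an alternative to, can be written directly (a sketch using scipy's linprog; the homotopy updates themselves are not reproduced):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, eps):
    """Solve  min ||x||_1  s.t.  ||A^T (A x - y)||_inf <= eps  as a linear
    program via the standard split x = u - v with u, v >= 0."""
    n = A.shape[1]
    G = A.T @ A
    b = A.T @ y
    c = np.ones(2 * n)  # minimize sum(u) + sum(v) = ||x||_1
    # |G (u - v) - b| <= eps, written as two one-sided inequality blocks
    A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
    b_ub = np.concatenate([b + eps, eps - b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.x[:n] - res.x[n:]
```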
Sparsity regularization for image reconstruction with Poisson data
This work investigates three penalized-likelihood expectation maximization (EM) algorithms for image reconstruction
with Poisson data where the images are known a priori to be sparse in the space domain. The penalty
functions considered are the l1 norm, the l0 "norm", and a penalty function based on the sum of logarithms of
pixel values (see equation in PDF). Our results show that the l1 penalized algorithm reconstructs scaled
versions of the maximum-likelihood (ML) solution, which does not improve the sparsity over the traditional ML
estimate. Due to the singularity of the Poisson log-likelihood at zero, the l0 penalized EM algorithm is equivalent
to the maximum-likelihood EM algorithm. We demonstrate that the penalty based on the sum of logarithms
produces sparser images than the ML solution. We evaluated these algorithms using experimental data from a
position-sensitive Compton-imaging detector, where the spatial distribution of photon-emitters is known to be
sparse.
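A sum-of-logarithms penalty of the general shape described above can be sketched as follows (the paper's exact form is given only in its elided equation; the small offset delta is an assumption added for numerical stability):

```python
import numpy as np

def log_sum_penalty(x, delta=1e-3):
    """Illustrative sum-of-logarithms sparsity penalty on nonnegative pixel
    values. Its gradient, 1/(x_j + delta), shrinks small pixels much harder
    than large ones, which is what promotes sparse reconstructions."""
    return np.sum(np.log(x + delta))
```

Concentrating the same total intensity on fewer pixels lowers this penalty, unlike the l1 norm, which is invariant to how the mass is spread.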
Compressive coded aperture imaging
Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention
with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection
of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction.
However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical
constraints typically make it infeasible to actually measure many of the random projections described in the literature, and
therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In
video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this
work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence
of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines
coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane
array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based
sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by
reconstructing with high resolution the constituent images of a video sequence.
Multi-object segmentation using coupled nonparametric shape and relative pose priors
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our
method is motivated by the observation that neighboring or coupling objects in images generate configurations
and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs
coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate
kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape
distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on
such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm
based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted
objects in a number of applications. In particular for medical image analysis, we use our method to
extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging
segmentation problem. We also apply our technique to the problem of handwritten character segmentation.
Finally, we use our method to segment cars in urban scenes.
Segmentation
Sobolev gradients and joint variational image segmentation, denoising, and deblurring
We consider several variants of the active contour model without edges, extended here to the case of noisy and blurry images, in a multiphase and a multilayer level set approach. The models thus jointly perform denoising, deblurring, and segmentation of images in a variational formulation. In practice, the proposed functionals are most commonly minimized by gradient descent in a time-dependent approach. Usually, the L2 gradient descent of the functional is computed and discretized, based on the L2 inner product. However, this computation often requires, in theory, additional smoothness of the unknown, or stronger conditions.
One way to overcome this is to use the idea of Sobolev gradients. In several experiments we compare the L2 and H1 gradient descents for image segmentation using curve evolution, with applications to denoising and deblurring. The Sobolev gradient descent is preferable in many situations and incurs a smaller computational cost.
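One standard way to obtain a Sobolev (H1) gradient from the usual L2 gradient is to precondition it by solving (I - alpha * Laplacian) g_H1 = g_L2, here in the Fourier domain under periodic boundary conditions (a generic sketch, not the paper's exact discretization):

```python
import numpy as np

def sobolev_gradient(g_l2, alpha=1.0):
    """Turn an L2 gradient image into an H1 (Sobolev) gradient by solving
    (I - alpha * Laplacian) g_H1 = g_L2 spectrally. The result is a smoothed
    descent direction, which is what makes H1 flows better behaved."""
    rows, cols = g_l2.shape
    fr = np.fft.fftfreq(rows)[:, None]
    fc = np.fft.fftfreq(cols)[None, :]
    # symbol of the discrete 5-point Laplacian: (2 cos(2 pi f) - 2) per axis
    lap = (2 * np.cos(2 * np.pi * fr) - 2) + (2 * np.cos(2 * np.pi * fc) - 2)
    return np.real(np.fft.ifft2(np.fft.fft2(g_l2) / (1 - alpha * lap)))
```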
Resolving occlusion and segmentation errors in multiple video object tracking
In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking.
The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate
measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles
only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling
position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time on
processing particles with very small weights. The adaptive appearance model for an occluded object uses the prediction results of
Kalman filters to determine the region that should be updated, which avoids updating the appearance with inadequate
information during occlusion. The experimental results show that a small number of particles suffices to achieve high
positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling
accuracy on the tracking results.
Interpolation and Inpainting
Iterative demosaicking accelerated: theory and fast noniterative implementations
Color image demosaicking is a key process in the digital imaging pipeline. In this paper, we present a rigorous
treatment of a classical demosaicking algorithm based on alternating projections (AP). Since its publication, the
AP algorithm has been widely cited and has served as a benchmark in a flurry of papers in the demosaicking literature.
Despite its impressive performance, a relative weakness of the AP algorithm is its high computational complexity.
In our work, we provide a rigorous analysis of the convergence of the AP algorithm based on the concept of
contraction mapping. Furthermore, we propose an efficient noniterative implementation of the AP algorithm in
the polyphase domain. Numerical experiments show that the proposed noniterative implementation achieves the
same results obtained by the original AP algorithm at convergence, but is about an order of magnitude faster
than the latter.
Resolution and interpolation of multichannel long wave infrared camera data
Show abstract
We evaluate the performance of a multiple aperture camera with a target projector testbed and compare it to a
single lens LWIR camera with a similar field of view and pixel count. We measure a noise equivalent temperature
difference of 131 mK for the multiple aperture camera and 121 mK for the conventional camera, which uses similar
uncooled 25 μm pixel technology. Spatial frequency response is analyzed using a collection of 4-bar targets with
different periods. After characterization, we remove aliasing from the multiple aperture data to resolve targets
as high as 0.192 cy/mrad.
Video inpainting algorithm using spatio-temporal consistency
A new video inpainting algorithm is proposed for removing unwanted or erroneous objects from video data. The
proposed algorithm fills a mask region with source blocks from unmasked areas, while keeping spatio-temporal
consistency. First, a 3-dimensional graph is constructed over consecutive frames. It defines a structure of nodes
over which the source blocks are pasted. Then, we form temporal block bundles using the motion information.
The best block bundles, which minimize an objective function, are arranged in the 3-dimensional graph. Extensive
simulation results demonstrate that the proposed algorithm can yield visually pleasing video inpainting results
even for dynamic sequences.
Mathematical Imaging
Effective curve registration using a novel solution method for overdetermined systems of polynomial equations
We propose a new method for registering a cloud of points in 2D onto a planar curve. This method does not
require the knowledge of an initial guess for the position of the point cloud and proceeds without having to
order, smooth out or otherwise process the points of the query point cloud in any way. The method consists
in representing the planar curve by an algebraic curve, and in fitting the algebraic curve to the points of the
point cloud by solving a corresponding over-constrained system of polynomial equations. The solution of this
system is obtained using a new solution method for polynomial systems of equations, which we introduce in this
paper. This solution method, which can be seen as an extension of the pseudo-inverse approach to solving linear
systems of equations, naturally handles over-constrained systems of equations in a robust fashion.
Image zooming with contour stencils
We introduce "contour stencils" as a simple method for detecting the local orientation of image contours and
apply this detection to image zooming. Our approach is motivated by the total variation along curves: small total
variation along a candidate curve suggests that this curve is a good approximation to the contours. Furthermore,
a relationship is shown between interpolation error and total variation. The contour stencil detection is used to
develop two image zooming methods. The first one, "contour stencil interpolation," is simple and computationally
efficient, yet competitive in a comparison against existing methods. The second method approaches zooming
as an inverse problem, using a graph regularization where the graph is determined by contour stencil detection.
Both methods extend naturally to vector-valued data and are demonstrated for grayscale and color images.
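The core detection step, choosing the candidate curve with the smallest total variation, can be sketched as follows (a simplified illustration with stencils represented as explicit point lists):

```python
import numpy as np

def stencil_tv(patch, stencil):
    """Total variation of the image along one candidate curve ("stencil"),
    given as a list of (row, col) points through the patch. Small TV suggests
    the curve runs along an image contour rather than across it."""
    vals = np.array([patch[r, c] for r, c in stencil])
    return float(np.sum(np.abs(np.diff(vals))))

def best_orientation(patch, stencils):
    """Pick the stencil with the smallest TV as the local contour direction."""
    return min(stencils, key=lambda s: stencil_tv(patch, s))
```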
Aspects of 3D shape reconstruction
The ability to reconstruct the three dimensional (3D) shape of an object from multiple images of that object is
an important step in certain computer vision and object recognition tasks. The images in question can range
from 2D optical images to 1D radar range profiles. In each case, the goal is to use the information (primarily
invariant geometric information) contained in several images to reconstruct the 3D data. In this paper we apply
a blend of geometric, computational, and statistical techniques to reconstruct the 3D geometry, specifically the
shape, from multiple images of an object. Specifically, we deal with a collection of feature points that have been
tracked from image (or range profile) to image (or range profile) and we reconstruct the 3D point cloud up to
certain transformations: affine transformations in the case of our optical sensor and rigid motions (translations
and rotations) in the radar case. Our paper discusses the theory behind the method, outlines the computational
algorithm, and illustrates the reconstruction for some simple examples.
Statistical Imaging
Wavelet-based Poisson rate estimation using the Skellam distribution
Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements
often exhibit heteroscedastic behavior. In particular, time series components and other measurements
may frequently be assumed to be non-iid Poisson random variables, whose rate parameter is proportional to the
underlying signal of interest; witness the literature in digital communications, signal processing, astronomy, and
magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform
coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences
of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit
analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as
computationally efficient approximations and shrinkage rules that may be interpreted as Poisson rate estimation
performed in certain wavelet/filterbank transform domains. This indicates a promising potential
approach for denoising of Poisson counts in the above-mentioned applications.
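The starting observation, that Haar-type detail coefficients of Poisson counts follow the Skellam distribution, is easy to verify numerically (a small simulation; the rates and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# An unnormalized Haar detail coefficient of Poisson counts is a difference of
# independent Poisson variables, hence Skellam(lam1, lam2) distributed.
lam1, lam2, n = 7.0, 3.0, 200_000
counts = rng.poisson([lam1, lam2], size=(n, 2))
detail = counts[:, 0] - counts[:, 1]  # Haar difference coefficient

# Skellam(lam1, lam2) has mean lam1 - lam2 and variance lam1 + lam2.
emp_mean, emp_var = detail.mean(), detail.var()
```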
Dictionary-based probability density function estimation for high-resolution SAR data
In the context of remotely sensed data analysis, a crucial problem is represented by the need to develop accurate
models for the statistics of pixel intensities. In this work, we develop a parametric finite mixture model for
the statistics of pixel intensities in high resolution synthetic aperture radar (SAR) images. This method is
an extension of a previously existing method for lower-resolution images. The method integrates the stochastic
expectation maximization (SEM) scheme and the method of log-cumulants (MoLC) with an automatic technique
to select, for each mixture component, an optimal parametric model taken from a predefined dictionary of
parametric probability density functions (pdf). The proposed dictionary consists of eight state-of-the-art SAR-specific
pdfs: Nakagami, log-normal, generalized Gaussian Rayleigh, Heavy-tailed Rayleigh, Weibull, K-root,
Fisher and generalized Gamma. The designed scheme is endowed with a novel initialization procedure and an
algorithm to automatically estimate the optimal number of mixture components. The experimental results
with a set of several high resolution COSMO-SkyMed images demonstrate the high accuracy of the designed
algorithm, both from the viewpoint of a visual comparison of the histograms, and from the viewpoint of
quantitative accuracy measures such as the correlation coefficient (above 99.5%). The method proves to be effective
on all the considered images, remaining accurate for multimodal and highly heterogeneous scenes.
Uncorrelated versus independent elliptically-contoured distributions for anomalous change detection in hyperspectral imagery
The detection of actual changes in a pair of images is confounded by the inadvertent but pervasive differences
that inevitably arise whenever two pictures are taken of the same scene, but at different times and under different
conditions. These differences include effects due to illumination, calibration, misregistration, etc. If the actual
changes are assumed to be rare, then one can "learn" what the pervasive differences are, and can identify the
deviations from this pattern as the anomalous changes. A recently proposed framework for anomalous change
detection recasts the problem as one of binary classification between pixel pairs in the data and pixel pairs that
are independently chosen from the two images. When an elliptically-contoured (EC) distribution is assumed for
the data, then analytical expressions can be derived for the measure of anomalousness of change. However, these
expressions are only available for a limited class of EC distributions. By replacing independent pixel pairs with
uncorrelated pixel pairs, an approximate solution can be found for a much broader class of EC distributions.
The performance of this approximation is investigated analytically and empirically, and includes experiments
comparing the detection of real changes in real data.
Photometry in UV astronomical images of extended sources in crowded field using deblended images in optical visible bands as Bayesian priors
Photometry of astrophysical sources, galaxies and stars, in crowded-field images, though an old problem, remains
a challenging goal, as new space survey missions are launched, releasing new data with increased sensitivity,
resolution, and field of view. The GALEX mission observes in two UV bands and produces deep sky images
of millions of galaxies and stars mixed together. These UV observations are of lower resolution than the same field
observed in visible bands, and have a very faint signal, at the level of the photon noise for a substantial fraction
of objects. Our purpose is to use the better-known optical counterparts as prior information in a Bayesian
approach to deduce the UV flux.
Photometry of extended sources has been addressed several times using various techniques: background
determination via sigma clipping, adaptive-aperture photometry, point-spread-function photometry, and
isophotal photometry, to list some. The Bayesian approach of using optical priors to solve the UV photometry has
already been applied by our team in a previous work. Here we describe the improvement obtained by using the
extended shape inferred by deblending the high-resolution optical images, and not only the position of the optical
sources.
The resulting photometric accuracy has been tested with simulations of crowded UV fields added on top
of real UV images. This helps to converge to a smaller and flatter residual and to increase the faint-source
detection threshold. It thus gives the opportunity to work on second-order effects, such as improving the knowledge of
the background or the point-spread function by iterating on them.
Image denoising using locally learned dictionaries
In this paper we discuss a novel patch-based framework for image denoising through local geometric representations
of an image. We learn local data adaptive bases that best capture the underlying geometric information
from noisy image patches. To do so we first identify regions of similar structure in the given image and group
them together. This is done by the use of meaningful features in the form of local kernels that capture similarities
between pixels in a neighborhood. We then learn an informative basis (called a dictionary) for each
cluster that best describes the patches in the cluster. Such a data representation can be achieved by performing
a simple principal component analysis (PCA) on the member patches of each cluster. The number of principal
components to consider in a particular cluster is dictated by the underlying geometry captured by the cluster
and the strength of the corrupting noise. Once a dictionary is defined for a cluster, each patch in the cluster is
denoised by expressing it as a linear combination of the dictionary elements. The coefficients of such a linear
combination for any particular patch are determined in a regression framework using the local dictionary for the
cluster. Each step of our method is well motivated and is shown to minimize some cost function. We then
present an iterative extension of our algorithm that results in further performance gain. We validate our method
through experiments with simulated as well as real noisy images. These indicate that our method is able to
produce results that are quantitatively and qualitatively comparable to those obtained by some of the recently
proposed state of the art denoising techniques.
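The per-cluster dictionary step can be sketched as a PCA projection of the member patches (a simplified illustration; the clustering, the noise-driven choice of the number of components, and the iterative extension are omitted):

```python
import numpy as np

def pca_denoise_cluster(patches, n_components):
    """Denoise one cluster of image patches (rows of `patches`) with its
    locally learned PCA dictionary: project onto the leading principal
    components and reconstruct. Truncating the trailing components discards
    mostly noise when the cluster's geometry is low-dimensional."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # principal directions = top right singular vectors of the patch matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    dictionary = vt[:n_components]      # the cluster's learned dictionary
    coeffs = centered @ dictionary.T    # regression coefficients per patch
    return coeffs @ dictionary + mean
```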
Registration
Image registration for multi-exposed HDRI and motion deblurring
In multi-exposure image fusion, alignment is an essential prerequisite to prevent ghost artifacts after
blending. Compared to the usual matching problem, registration is more difficult when each image is captured under
different photographing conditions. In HDR imaging, we use long- and short-exposure images, which differ in
brightness and contain over- and under-saturated regions. In motion deblurring, we use a blurred and noisy
image pair, and the amount of motion blur varies from one image to another due to the different exposure times.
The main difficulty is that the luminance levels of the two images are not linearly related, so we cannot perfectly
equalize or normalize the brightness of each image, which leads to unstable and inaccurate alignment results. To
solve this problem, we apply a probabilistic measure, mutual information, to represent the similarity between
images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the
registration viewpoint and also analyze the magnitude of camera hand shake. By exploiting mutual information's
independence of luminance, we propose a fast and practically useful image registration technique for multiple
captures. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over
90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The
effectiveness of our registration algorithm is examined by various experiments on real HDR and motion deblurring
cases using a hand-held camera.
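A histogram-based mutual information measure of the kind used for exposure-robust alignment can be sketched as follows (illustrative; the bin count is an assumption):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images from their joint intensity
    histogram. Because MI depends only on the co-occurrence of intensities,
    it stays high under the nonlinear brightness changes between exposures
    that defeat direct intensity matching."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```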
Comparison of subpixel image registration algorithms
Research into the use of multiframe superresolution has led to the development of algorithms for providing
images with enhanced resolution using several lower resolution copies. An integral component of these
algorithms is the determination of the registration of each of the low resolution images to a reference
image. Without this information, no resolution enhancement can be attained. We have endeavored to find
a suitable method for registering severely undersampled images by comparing several approaches. To test
the algorithms, an ideal image is input to a simulated image formation program, creating several
undersampled images with known geometric transformations. The registration algorithms are then applied
to the set of low resolution images and the estimated registration parameters compared to the actual values.
This investigation is limited to monochromatic images (extension to color images is not difficult) and only
considers global geometric transformations. Each registration approach will be reviewed and evaluated
with respect to the accuracy of the estimated registration parameters as well as the computational
complexity required. In addition, the effects of image content, specifically spatial frequency content, as
well as the immunity of the registration algorithms to noise will be discussed.
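One common subpixel approach of the kind such a comparison would cover is phase correlation refined by a parabolic fit around the correlation peak; the following is a generic sketch under that assumption, not any specific algorithm from the paper:

```python
import numpy as np

def subpixel_register(ref, mov):
    """Estimate the (dy, dx) shift of `mov` relative to `ref` by phase
    correlation, refined to subpixel precision with a 3-point parabola fit."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    shift = []
    for axis, (p, n) in enumerate(zip(peak, r.shape)):
        # Correlation values at the peak and its wrapped neighbours.
        a = r[peak[:axis] + ((p - 1) % n,) + peak[axis + 1:]]
        b = r[peak]
        c = r[peak[:axis] + ((p + 1) % n,) + peak[axis + 1:]]
        denom = a - 2.0 * b + c
        frac = 0.5 * (a - c) / denom if denom != 0 else 0.0
        s = p + frac
        shift.append(s - n if s > n / 2 else s)  # wrap into [-n/2, n/2)
    return tuple(shift)
```

The parabola fit interpolates the peak location between pixels, which is what makes the method usable for the severely undersampled images the abstract targets.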
Image Processing Applications
Three-dimensional electronic unpacking of packed bags using 3-D CT images
Show abstract
We present a 3-D electronic unpacking technique for airport security images based on volume rendering techniques
developed for medical applications. Two electronic unpacking techniques are presented: (1) object-based unpacking and
(2) unpacking by bag-slicing. Both techniques provide photo-realistic 3-D views of contents inside a packed bag with
clearly marked threats. For object-based unpacking, the 3-D objects within packed bags are unpacked (or isolated)
through object selection tools that cut away undesired regions to isolate each 3-D object from the background clutter.
With this selection tool, the operator is able to electronically unpack various 3-D objects and manipulate (rotate and
zoom) the 3-D photo-realistic views for the immediate classification of the suspect object. The unpacking-by-bag-slicing technique places arbitrary cut planes, showing the content beyond a cut plane that can be stepped forward or backward electronically. These methods may reduce the need for manual unpacking of suitcases.
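The bag-slicing idea, an arbitrary cut plane stepped through a CT volume, can be sketched on a voxel grid; `slice_volume` and its plane parameterization are illustrative, not the authors' implementation:

```python
import numpy as np

def slice_volume(vol, point, normal):
    """Zero out voxels beyond an arbitrary cut plane (bag-slicing sketch).
    The plane passes through `point` with normal vector `normal`; voxels on
    the positive side of the plane are removed so a volume renderer would
    show what lies behind the plane."""
    coords = np.stack(np.indices(vol.shape), axis=-1).astype(float)
    # Signed distance of every voxel centre from the plane.
    dist = (coords - np.asarray(point, float)) @ np.asarray(normal, float)
    out = vol.copy()
    out[dist > 0] = 0
    return out
```

Stepping the plane forward or backward amounts to moving `point` along `normal` and re-rendering the clipped volume.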
Personal dietary assessment using mobile devices
Show abstract
Dietary intake provides valuable insights for mounting intervention programs for prevention of disease. With
growing concern for adolescent obesity, the need to accurately measure diet becomes imperative. Assessment
among adolescents is problematic, as this group has irregular eating patterns and less enthusiasm for recording
food intake. Preliminary studies among adolescents suggest that innovative use of technology may improve
the accuracy of diet information from young people. In this paper we describe further development of a novel
dietary assessment system using mobile devices. This system will generate an accurate account of daily food and
nutrient intake among adolescents. The mobile computing device provides a unique vehicle for collecting dietary
information that reduces the respondent burden associated with records obtained using more classical approaches. Images
taken before and after foods are eaten can be used to estimate the amount of food consumed.
Interactive Paper Session
Separation of limb and terminator on apparent contours of solar system small bodies
Show abstract
Segmentation of contours and silhouettes is a recurrent topic in image recognition and understanding. In this
paper we describe a new method used to divide the apparent silhouette of an irregular astronomical body,
illuminated by a unique source, the Sun, into two parts: the limb and the terminator. One of the main objectives
of asteroid and comet flybys is the detailed 3D reconstruction of such bodies. However, the number of images
obtained during a flyby is limited, as well as the number of viewing geometries. In the 3D reconstruction we
must consider not only the camera motion but also the free rotation of the body. The local brightness variations
in the image vary with the rotation of the body and with the changing body-camera distance. The topography
at the surface of the body can vary from very smooth to highly chaotic. In the shape from silhouette 3D
reconstruction methods, limb profiles are used to retrieve the visual hull of the body. It is therefore required
to be able to separate the limb profiles from the terminator ones. In this communication, we present a new
method to perform this task based on the local measurement of the contour smoothness, which we define here
as "activity". Developed in the framework of the Rosetta mission, our method has been tested on a large
set of asteroid and comet images taken during interplanetary missions. It proves robust to changes in
magnification and illumination.
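One plausible form of such a contour "activity" measure (the paper's exact definition may differ) is a windowed mean of the absolute turning angle along the silhouette: a smooth limb turns slowly and scores low, while a rugged terminator scores high. A sketch under that assumption:

```python
import numpy as np

def contour_activity(points, window=5):
    """Local 'activity' of a closed contour: circular moving average of the
    absolute turning angle between consecutive edges. Low values indicate a
    smooth (limb-like) stretch; high values a rough (terminator-like) one."""
    pts = np.asarray(points, float)
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)      # edge vectors (closed)
    ang = np.arctan2(d[:, 1], d[:, 0])                  # edge directions
    dang = np.diff(np.concatenate([ang, ang[:1]]))
    turn = np.abs(np.angle(np.exp(1j * dang)))          # wrapped |turning angle|
    # Circular moving average via padding with wrapped samples.
    pad = np.concatenate([turn[-window:], turn, turn[:window]])
    smooth = np.convolve(pad, np.ones(window) / window, mode="same")
    return smooth[window:-window]
```

Thresholding this per-point activity would then assign each contour sample to the limb (low) or terminator (high) class.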
Automated image processing and fusion for remote sensing applications
Show abstract
The ever-increasing volumes and resolutions of remote sensing imagery have not only boosted the value of image-based analysis and visualization in scientific research and commercial sectors, but also introduced new challenges. Specifically, processing large volumes of newly acquired high-resolution imagery, as well as fusing it with existing
imagery (for correction, update, and visualization), remains a highly subjective and labor-intensive
task that has not been fully automated by existing GIS software tools. This calls for the development of novel
computational algorithms to automate the routine image processing tasks involved in various remote sensing based
applications. In this paper, a suite of efficient and automated computational algorithms has been proposed and
developed to address the aforementioned challenge. It includes a segmentation algorithm for the automatic
"cleaning" (i.e., segmenting out the valid pixels) of any newly acquired ortho-photo image, automatic feature point
extraction, image alignment by maximization of mutual information, and finally smoothing/feathering of the edges of the
imagery at the join zone. The proposed algorithms have been implemented and tested using practical large-scale GIS
imagery/data. The experimental results demonstrate the efficiency and effectiveness of the proposed algorithms and the
corresponding capability of fully automated segmentation, registration, and fusion, which allows the end-user to bring
together imagery of heterogeneous resolutions, projections, datums, and sources for analysis and visualization. The potential
benefits of the proposed algorithms include a great reduction in production time, more accurate and reliable results,
and user consistency within and across organizations.
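The final smoothing/feathering step can be illustrated with a linear weight ramp across the join zone; this sketch assumes two horizontally adjacent image strips with a known overlap and is not the authors' implementation:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent strips by linearly ramping the pixel
    weights across an `overlap`-pixel join zone, hiding the visible seam."""
    h, wl = left.shape
    ramp = np.linspace(1.0, 0.0, overlap)        # weight on `left` pixels
    out = np.zeros((h, wl + right.shape[1] - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # Cross-fade inside the join zone.
    out[:, wl - overlap:wl] = (ramp * left[:, wl - overlap:]
                               + (1 - ramp) * right[:, :overlap])
    return out
```

In a full pipeline the same cross-fade would be applied after the mutual-information alignment step, so that radiometric differences between the two sources taper off gradually instead of producing a hard edge.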
Support vector machine for automatic pain recognition
Show abstract
Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to
everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific
expression, pain, from human faces. We employ an automatic face detector that detects faces in stored video
frames using a skin-color modeling technique. For pain recognition, location and shape features of the detected faces are
computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the
results with neural-network-based and eigenimage-based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier improves the performance of the automatic pain recognition system.
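The SVM classification step can be sketched with a self-contained stand-in: a linear SVM trained by subgradient descent on the regularized hinge loss, rather than an off-the-shelf library, with labels in {-1, +1} standing for non-pain/pain feature vectors:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Train a linear SVM by stochastic subgradient descent on the hinge
    loss with L2 regularization. X: (n, d) features; y: labels in {-1,+1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # Hinge-loss subgradient step plus weight decay.
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:
                w = (1 - lr * lam) * w
    return w, b

def predict(w, b, X):
    # Decision rule: sign of the signed distance to the separating hyperplane.
    return np.sign(X @ w + b)
```

In the paper's setting, `X` would hold the location and shape features extracted from detected faces; any real system would more likely use a tuned library SVM (possibly with a nonlinear kernel), which this sketch does not attempt to reproduce.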