Proceedings Volume 7533

Computational Imaging VIII


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 27 January 2010
Contents: 9 Sessions, 29 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2010
Volume Number: 7533

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7533
  • Image Analysis
  • Remote Sensing I
  • Remote Sensing II
  • Biomedical Imaging I
  • Inverse Problems
  • Consumer Imaging
  • Denoising and Filtering
  • Interactive Paper Session
Front Matter: Volume 7533
Front Matter: Volume 7533
This PDF file contains the front matter associated with SPIE Proceedings Volume 7533, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Image Analysis
A regions of confidence based approach to enhance segmentation with shape priors
Vikram V. Appia, Balaji Ganapathy, Amer Abufadel, et al.
We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions expected to have weak, missing, or corrupt edges, or regions the user is not interested in segmenting even though they are part of the object being segmented. In the training datasets, along with the manual segmentations, we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training we generate a map indicating the regions of the image that are likely to contain no useful information for segmentation. We then use a parametric model that represents the segmenting curve as a combination of shape priors, obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that represent the curve, varying the influence each pixel has on the evolution of these parameters according to its confidence/interest label. When the labels indicate regions of low confidence, the regions containing accurate edges play the dominant role in the evolution of the curve, and the segmentation in the low-confidence regions is approximated from the training data. Since our model evolves global parameters, it improves the segmentation even in the regions with accurate edges, because we eliminate the influence of the low-confidence regions that might mislead the final segmentation. Similarly, when the labels indicate regions that are not of importance, we obtain a better segmentation of the object in the regions we are interested in.
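The authors' parametric shape-prior evolution is more elaborate, but the confidence-weighting idea itself is simple to illustrate. Below is a minimal numpy sketch of per-pixel confidence weighting applied to a Chan-Vese-style region data term (our own simplification with hypothetical toy data, not the paper's exact functional):

```python
import numpy as np

def weighted_region_means(image, curve_mask, confidence):
    """Region means for a Chan-Vese-style data term, with per-pixel
    confidence weights so that low-confidence pixels do not influence
    the global fit parameters."""
    w_in = confidence * curve_mask
    w_out = confidence * (1 - curve_mask)
    c_in = (w_in * image).sum() / max(w_in.sum(), 1e-12)
    c_out = (w_out * image).sum() / max(w_out.sum(), 1e-12)
    return c_in, c_out

# toy image: unit-intensity object on a dark background,
# with one corrupt corner that would otherwise skew the fit
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
img[0:2, 0:2] = 5.0                      # corrupt region
conf = np.ones_like(img)
conf[0:2, 0:2] = 0.0                     # label it low-confidence
mask = np.zeros_like(img)
mask[2:6, 2:6] = 1.0                     # current segmenting curve
c_in, c_out = weighted_region_means(img, mask, conf)
```

With the corrupt corner down-weighted to zero, the fitted region means recover the clean object/background intensities exactly.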
Human pose tracking from monocular video by traversing an image motion mapped body pose manifold
Saurav Basu, Joshua Poulin, Scott T. Acton
Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within ±4° of ground truth) to style variance.
Semi-automatic object geometry estimation for image personalization
Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques in image processing and computer vision such as the Hough transform, the bilateral filter, and connected component analysis are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper.1 Experimental results including personalized images for both scenarios are shown, which demonstrate the effectiveness of our algorithms.
A method for recognizing the shape of a Gaussian mixture from a sparse sample set
The motivating application for this research is the problem of recognizing a planar object consisting of points from a noisy observation of that object. Given is a planar Gaussian mixture model ρT (x) representing an object along with a noise model for the observation process (the template). Also given are points representing the observation of the object (the query). We propose a method to determine if these points were drawn from a Gaussian mixture ρQ(x) with the same shape as the template. The method consists in comparing samples from the distribution of distances of ρT (x) and ρQ(x), respectively. The distribution of distances is a faithful representation of the shape of generic Gaussian mixtures. Since it is invariant under rotations and translations of the Gaussian mixture, it provides a workaround to the problem of aligning objects before recognizing their shape without sacrificing accuracy. Experiments using synthetic data show a robust performance against type I errors, and few type II errors when the given template Gaussian mixtures are well distinguished.
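As a rough illustration of why the distribution of distances sidesteps alignment, the following sketch (with hypothetical mixture parameters, not the paper's experimental setup) compares sampled pairwise-distance statistics of a template mixture and a rotated, translated copy:

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_sample(points, n_pairs=2000):
    """Sample the distribution of pairwise distances of a point set."""
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    return np.linalg.norm(points[i] - points[j], axis=1)

def rotate(points, theta):
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]])

# a two-component planar Gaussian mixture as the "template"
template = np.vstack([rng.normal([0.0, 0.0], 0.1, (300, 2)),
                      rng.normal([2.0, 0.0], 0.1, (200, 2))])
# the "query": the same shape, rotated and translated
query = rotate(template, 1.2) + np.array([5.0, -3.0])

# distance distributions agree despite the rigid motion
gap = abs(distance_sample(template).mean() - distance_sample(query).mean())
```

The point clouds occupy entirely different locations in the plane, yet their distance distributions match, so shape can be compared without any prior alignment step.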
Extraction of arbitrarily-shaped objects using stochastic multiple birth-and-death dynamics and active contours
Maria S. Kulikova, Ian H. Jermyn, Xavier Descombes, et al.
We extend the marked point process models that have been used for object extraction from images to arbitrarily shaped objects, without greatly increasing the computational complexity of sampling and estimation. The approach can be viewed as an extension of the active contour methodology to an a priori unknown number of objects. Sampling and estimation are based on a stochastic birth-and-death process defined in a space of multiple, arbitrarily shaped objects, where the objects are defined by the image data and prior information. The performance of the approach is demonstrated via experimental results on synthetic and real data.
Remote Sensing I
Symmetrized local co-registration optimization for anomalous change detection
The goal of anomalous change detection (ACD) is to identify what unusual changes have occurred in a scene, based on two images of the scene taken at different times and under different conditions. The actual anomalous changes need to be distinguished from the incidental differences that occur throughout the imagery, and one of the most common and confounding of these incidental differences is misregistration of the images, arising from limitations of the registration pre-processing applied to the image pair. We propose a general method to compensate for residual misregistration in any ACD algorithm which constructs an estimate of the degree of "anomalousness" for every pixel in the image pair. The method computes a modified misregistration-insensitive anomalousness by making local re-registration adjustments to minimize the local anomalousness. In this paper we describe a symmetrized version of our initial algorithm, and find significant performance improvements in the anomalous change detection ROC curves for a number of real and synthetic data sets.
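A crude stand-in for the local re-registration idea (not the authors' algorithm): take, at each pixel, the minimum of the anomalousness map over a small window of shifts, so residual misregistration cannot inflate the score. The symmetrized variant would apply the adjustment with each image of the pair serving as the reference in turn.

```python
import numpy as np

def misreg_insensitive_anomalousness(a_map, window=1):
    """At each pixel, take the minimum anomalousness over shifts within
    +/-window pixels (a grayscale erosion).  Toroidal boundaries via
    np.roll, for brevity only."""
    out = np.full_like(a_map, np.inf)
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            shifted = np.roll(np.roll(a_map, dy, axis=0), dx, axis=1)
            out = np.minimum(out, shifted)
    return out

# a 5x5 genuine change survives (eroded to its 3x3 core) ...
a = np.zeros((10, 10))
a[3:8, 3:8] = 1.0
out = misreg_insensitive_anomalousness(a)

# ... while an isolated one-pixel spike, the signature of a slightly
# misregistered edge, is suppressed entirely
spike = np.zeros((10, 10))
spike[4, 4] = 1.0
suppressed = misreg_insensitive_anomalousness(spike)
```

The trade-off is visible in the sketch: changes smaller than the shift window are suppressed along with the misregistration artifacts.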
High resolution SAR-image classification by Markov random fields and finite mixtures
In this paper we develop a novel classification approach for high and very high resolution polarimetric synthetic aperture radar (SAR) amplitude images. This approach combines a Markov random field model for Bayesian image classification with a finite mixture technique for probability density function estimation. The finite mixture modeling is done via a recently proposed dictionary-based stochastic expectation maximization approach for SAR amplitude probability density function estimation. For modeling the joint distribution from marginals corresponding to single polarimetric channels we employ copulas. The accuracy of the developed semiautomatic supervised algorithm is validated in the application of wet soil classification on several high resolution SAR images acquired by TerraSAR-X and COSMO-SkyMed.
Randomized group testing for acoustic source localization
William Mantzel, Justin Romberg, Karim Sabra
Undersea localization requires a computationally expensive partial differential equation simulation to test each candidate hypothesis location via matched filter. We propose a method of batch testing that effectively yields a test sequence output of random combinations of location-specific matched filter correlations, such that the computational run time varies with the number of tests instead of the number of locations. We show that by finding the most likely location that could have accounted for these batch test outputs, we are able to perform almost as well as if we had computed each location's matched filter. In particular, we show that we can reliably resolve the target's location up to the resolution of incoherence using only logarithmically many measurements when the number of candidate locations is less than the dimension of the matched filter. In this way, our random mask pattern not only performs substantially the same as cleverly designed deterministic masks in classical batch testing scenarios, but also naturally extends to other scenarios when the design of such deterministic masks may be less obvious.
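The flavor of the batch-testing scheme can be sketched as follows (the sizes and the random ±1 test design are our illustrative assumptions; in the real problem each test output would come from a PDE matched-filter simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_loc, n_tests = 512, 64            # many candidates, few batch tests

# hypothetical per-location matched-filter correlations:
# one true source location stands out above background
c = rng.normal(0.0, 0.02, n_loc)
true_loc = 137
c[true_loc] = 1.0

# each batch test outputs a random +/-1 combination of the
# location-specific correlations, so run time scales with n_tests
A = rng.choice([-1.0, 1.0], size=(n_tests, n_loc))
y = A @ c

# pick the location whose test signature best explains the outputs
scores = A.T @ y
est = int(np.argmax(scores))
```

Only 64 batch outputs are formed, yet the back-projected scores peak at the true location among 512 candidates, matching the abstract's claim that logarithmically many measurements suffice when candidates are fewer than the filter dimension.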
Remote Sensing II
Blind deconvolution of depth-of-field limited full-field lidar data by determination of focal parameters
We present a new two-stage method for parametric spatially variant blind deconvolution of full-field Amplitude Modulated Continuous Wave lidar image pairs taken at different aperture settings subject to limited depth of field. A Maximum Likelihood based focal parameter determination algorithm uses range information to reblur the image taken with a smaller aperture size to match the large aperture image. This allows estimation of focal parameters without prior calibration of the optical setup and produces blur estimates which have better spatial resolution and less noise than previous depth from defocus (DFD) blur measurement algorithms. We compare blur estimates from the focal parameter determination method to those from Pentland's DFD method, Subbarao's S-Transform method and estimates from range data/the sampled point spread function. In a second stage the estimated focal parameters are applied to deconvolution of total integrated intensity lidar images improving depth of field. We give an example of application to complex domain lidar images and discuss the trade-off between recovered amplitude texture and sharp range estimates.
Biomedical Imaging I
Compressive inverse scattering using ultrashort pulses
Kyung Hwan Jin, Kanghee Lee, Jaewook Ahn, et al.
Inverse scattering refers to the retrieval of unknown constitutive parameters from measured scattered wave fields, and has many applications, including ultrasound imaging, optics, T-ray imaging, and radar. Two distinct imaging strategies have been commonly used: narrow-band inverse scattering approaches using a large number of transmitters and receivers, or wideband imaging approaches with a smaller number of transmitters and receivers. In some biomedical imaging applications, the limited accessibility of scattered fields using externally located antenna arrays usually favors the wideband imaging approaches. The main contribution of this paper is, therefore, to analyze the wideband inverse scattering problem from a compressive sensing perspective. Specifically, the mutual coherence of the wideband imaging geometry is analyzed, which reveals a significant advantage in identifying sparse targets from a very limited number of measurements.
Implementation and evaluation of a penalized alternating minimization algorithm for computational DIC microscopy
We present the implementation and evaluation of a penalized alternating minimization (AM) method1 for the computation of a specimen's complex transmittance function (magnitude and phase) from images captured with Differential Interference Contrast (DIC) microscopy. The magnitude of the transmittance function is constrained to be less than 1. The penalty is on the roughness of the complex transmittance function. Without the penalty, we show via simulation that the difference between the true and estimated transmittance function takes values in the null space of the DIC point-spread function, thereby characterizing the ill-posed nature of this inverse problem. The penalty effectively attenuates larger spatial frequencies that are in this null space. The algorithm is implemented on yeast cell images after proper normalization of the measured data. Preliminary results are promising.
Virtual surgical modification for planning tetralogy of Fallot repair
Jonathan Plasencia, Haithem Babiker, Randy Richardson, et al.
Goals for treating congenital heart defects are becoming increasingly focused on the long-term, targeting solutions that last into adulthood. Although this shift has motivated the modification of many current surgical procedures, there remains a great deal of room for improvement. We present a new methodological component for tetralogy of Fallot (TOF) repair that aims to improve long-term outcomes. The current gold standard for TOF repair involves the use of echocardiography (ECHO) for measuring the pulmonary valve (PV) diameter. This is then used, along with other factors, to formulate a Z-score that drives surgical preparation. Unfortunately, this process can be inaccurate and requires a mid-operative confirmation that the pressure gradient across the PV is not excessive. Ideally, surgeons prefer not to manipulate the PV as this can lead to valve insufficiency. However, an excessive pressure gradient across the valve necessitates surgical action. We propose the use of computational fluid dynamics (CFD) to improve preparation for TOF repair. In our study, pre-operative CT data were segmented and reconstructed, and a virtual surgical operation was then performed to simulate post-operative conditions. The modified anatomy was used to drive CFD simulation. The pressure gradient across the pulmonary valve was calculated to be 9.24 mmHg, which is within the normal range. This finding indicates that CFD may be a viable tool for predicting post-operative pressure gradients for TOF repair. Our proposed methodology would remove the need for mid-operative measurements that can be both unreliable and detrimental to the patient.
Numerical observer for cardiac motion assessment
Jovan G. Brankov, Thibault Marin, P. Hendrik Pretorius, et al.
In this paper, we present a numerical observer for assessment of cardiac motion in nuclear medicine. Numerical observers are used in medical imaging as a surrogate for human observers to automatically measure the diagnostic quality of medical images. The most commonly used quality measurement is the detection performance in a detection task. In this work, we present a new numerical observer aiming to measure image quality for the task of cardiac motion-defect detection in cardiac SPECT imaging. The proposed observer utilizes a linear discriminant on features extracted from cardiac motion, characterized by a deformable mesh model of the left ventricle and myocardial brightening. Simulations using synthetic data indicate that the proposed method can effectively capture the cardiac motion and provide an accurate prediction of the human observer performance.
Inverse Problems
Passive imaging with cross correlations in a discrete random medium
M. Moscoso, G. Papanicolaou, R.-H. Sun
The purpose of this paper is to study the potential and limitations of cross-correlation techniques using numerical simulations; in particular, we intend to show (i) an estimate of the Green's function in different configurations and (ii) results for passive imaging. This problem seems especially interesting in seismology, nondestructive testing, structural health monitoring, and wireless sensor networks. To compute cross correlations of the impulse signals collected by the receivers, we consider scattering by discrete scatterers to generate impulse responses with targets and without targets. We compute the difference of the cross correlations with targets and the cross correlations without targets to estimate the backpropagator (Green's function) in the Kirchhoff migration functional. The migration functional is essential to compute images of targets. We run numerical simulations for different configurations to explore the limitations of this cross-correlation methodology from the results of passive imaging.
Construction and exploitation of a 3D model from 2D image features
Karl Ni, Zachary Sun, Nadya Bliss, et al.
This paper proposes a trainable computer vision approach for visual object registration relative to a collection of training images obtained a priori. The algorithm first identifies whether or not the image belongs to the scene location, and should it belong, it will identify objects of interest within the image and geo-register them. To accomplish this task, the processing chain relies on 3-D structure derived from motion to represent feature locations in a proposed model. Using current state-of-the-art algorithms, detected objects are extracted and their two-dimensional sizes in pixel quantities are converted into relative 3-D real-world coordinates using scene information, homography, and camera geometry. Locations can then be given with distance alignment information. The tasks can be accomplished in an efficient manner. Finally, algorithmic evaluation is presented with receiver operating characteristics, computational analysis, and registration errors in physical distances.
Consumer Imaging
An optimal algorithm for reconstructing images from binary measurements
Feng Yang, Yue M. Lu, Luciano Sbaiz, et al.
We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low-light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high-quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex; therefore, the optimal solution can be found using convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its products with a vector via the Hessian matrix. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
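For a single output pixel with no spatial filtering, the T = 1 case even admits a closed-form MLE, which hints at why the likelihood is well behaved. This is a toy sketch under the assumption that the incident intensity is split evenly over the binary sub-pixels; the paper's algorithm handles the general case where each binary pixel sees a filtered combination of image pixels, hence the gradient and Hessian machinery:

```python
import numpy as np

rng = np.random.default_rng(2)

def mle_intensity(ones, n):
    """Closed-form MLE of intensity lambda from k one-bits among n
    binary pixels with threshold T = 1: a detector fires iff it
    receives at least one photon, so P(fire) = 1 - exp(-lambda/n)
    and the MLE is lambda = -n * log(1 - k/n)."""
    k = min(ones, n - 1)            # guard against all-ones saturation
    return -n * np.log(1.0 - k / n)

lam_true = 50.0
n = 4096                            # binary sub-pixels per output pixel
photons = rng.poisson(lam_true / n, size=n)
k = int((photons >= 1).sum())       # observed one-bits
lam_hat = mle_intensity(k, n)
```

With heavy oversampling the estimate concentrates around the true intensity, and increasing the oversampling factor tightens it further, consistent with the abstract's observation.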
Digital neutral density filter for moving picture cameras
Michael Schöberl, Alexander Oberdörster, Siegfried Fößel, et al.
Typical image sensors in digital cameras have a fixed sensitivity, and the amount of captured light energy is often controlled by adjusting exposure time and lens aperture. For high-end motion imaging these settings are not available, as they are used to set motion blur and depth of field, respectively. In many cases a proper exposure is achieved with additional optical filtering, using so-called "neutral density" (ND) filters. We propose a digital equivalent of a neutral density filter, which can replace the handling of optical filters for camera systems. It consists of an adjusted sensor readout and in-camera processing of images. Instead of a single long exposure we capture N short exposures. These images are then combined by averaging. The short exposures reduce the sensitivity by a factor of N, while averaging reconstructs motion blur. In addition we also achieve a reduction of both dynamic and fixed pattern noise which leads to an overall increase in dynamic range. The digital ND filter can be used with regular image sensors and does not require hardware modifications.
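The core of the proposed processing is easy to sketch (all numbers illustrative): capture N short exposures spanning the full exposure interval, then average.

```python
import numpy as np

rng = np.random.default_rng(3)

def digital_nd(scene, n_frames, read_noise=2.0):
    """Average N short exposures taken back-to-back over the full
    exposure interval.  Each frame collects 1/N of the light, so
    sensitivity drops by a factor of N, while the frames together
    still cover the whole interval, preserving motion blur."""
    frames = [rng.poisson(scene / n_frames).astype(float) +
              rng.normal(0.0, read_noise, scene.shape)
              for _ in range(n_frames)]
    return np.mean(frames, axis=0)

# flat test scene: 4000 photons per pixel over the full exposure
scene = np.full((64, 64), 4000.0)
out = digital_nd(scene, n_frames=16)   # behaves like a 4-stop ND filter
```

Averaging rather than summing keeps the output at 1/N of the scene level (N = 16 corresponds to log2(16) = 4 stops of attenuation), and the frame averaging also suppresses temporal noise relative to a single short exposure.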
Adaptive removal of show-through artifacts by histogram analysis
When scanning a document that is printed on both sides, the image on the reverse can show through with high luminance. We propose an adaptive method of removing show-through artifacts based on histogram analysis. Instead of attempting to measure the physical parameters of the paper and the scanning system, or making multiple scans, we analyze the color distribution to remove unwanted artifacts, using an image of the front of the document alone. First, we accumulate histogram information to find the lightness distribution of pixels in the scanned image. Using this data, we set thresholds on both luminance and chrominance to determine candidate regions of show-through. Finally, we classify these regions into foreground and background of the image on the front of the paper, and show-through from the back. The background and show-through regions become candidates for erasure, and they are adaptively updated as the process proceeds. This approach preserves the chrominance of the image on the front of the paper without introducing artifacts. It does not make the whole image brighter, which is what happens when a fixed threshold is used to remove show-through.
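On a grayscale toy page, the histogram-analysis idea might look like the following sketch (the actual method works on luminance and chrominance with adaptive updates; the threshold and page values here are illustrative):

```python
import numpy as np

def remove_show_through(gray, fg_thresh=128):
    """Histogram-based cleanup sketch: estimate the paper-background
    level from the strongest bright peak of the luminance histogram,
    then push pixels lighter than foreground ink but darker than the
    background (the show-through band) up to the background level."""
    hist, edges = np.histogram(gray, bins=64, range=(0, 255))
    bg_bin = hist[32:].argmax() + 32      # strongest peak in bright half
    bg_level = edges[bg_bin]
    band = (gray > fg_thresh) & (gray < bg_level)
    out = gray.copy()
    out[band] = bg_level
    return out

page = np.full((100, 100), 250.0)        # paper white
page[10:20, :] = 40.0                    # front-side text (kept)
page[50:60, :] = 225.0                   # show-through (erased)
clean = remove_show_through(page)
```

The front-side text and paper background are untouched, while the intermediate-luminance show-through band is lifted to the background level rather than brightening the whole image.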
Automatic portion estimation and visual refinement in mobile dietary assessment
Insoo Woo, Karl Otsmo, SungYe Kim, et al.
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.
Denoising and Filtering
Motion blur removal in nonlinear sensors
Tomer Faktor, Tomer Michaeli, Yonina C. Eldar
We address the problem of motion blur removal from an image sequence that was acquired by a sensor with nonlinear response. Motion blur removal in purely linear settings has been studied extensively in the past. In practice, however, sensors exhibit nonlinearities, which also need to be compensated for. In this paper we study the problem of joint motion blur removal and nonlinearity compensation. Two naive approaches for treating this problem are to apply the inverse of the nonlinearity prior to a deblurring stage or following it. These strategies require a preliminary motion estimation stage, which may be inaccurate for complex motion fields. Moreover, even if the motion parameters are known, we provide theoretical arguments and also show through simulations that these methods yield unsatisfactory results. In this work, we propose an efficient iterative algorithm for joint nonlinearity compensation and motion blur removal. Our approach relies on a recently developed theory for nonlinear and nonideal sampling setups. Our method does not require knowledge of the motion responsible for the blur. We show through experiments the effectiveness of our method compared with alternative approaches.
SPIRAL out of convexity: sparsity-regularized algorithms for photon-limited imaging
The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be accomplished by minimizing a conventional l2-l1 objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f* admits a sparse representation. The optimization formulation considered in this paper uses a negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). This paper describes computational methods for solving the constrained sparse Poisson inverse problem. In particular, the proposed approach incorporates key ideas of using quadratic separable approximations to the objective function at each iteration and computationally efficient partition-based multiscale estimation methods.
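The data term and its gradient are simple to write down: the constrained problem minimizes F(f) = 1^T A f − y^T log(A f) subject to f ≥ 0 (the sparsity penalty is omitted in this sketch). A projected-gradient toy run, a stand-in for SPIRAL's separable quadratic approximation steps with made-up problem sizes, shows the constrained Poisson objective decreasing:

```python
import numpy as np

def poisson_nll(f, A, y, eps=1e-10):
    """Negative Poisson log-likelihood (constants in y dropped)."""
    Af = A @ f
    return Af.sum() - (y * np.log(Af + eps)).sum()

def poisson_nll_grad(f, A, y, eps=1e-10):
    Af = A @ f
    return A.T @ (1.0 - y / (Af + eps))

rng = np.random.default_rng(4)
A = rng.uniform(0.5, 1.5, (40, 10))      # sensing matrix
f_true = np.zeros(10)
f_true[[2, 7]] = [30.0, 20.0]            # sparse nonnegative intensity
y = rng.poisson(A @ f_true)              # Poisson count observations

f0 = np.full(10, 5.0)                    # feasible starting point
f = f0.copy()
for _ in range(300):
    # gradient step followed by projection onto the nonnegative orthant
    f = np.maximum(f - 0.01 * poisson_nll_grad(f, A, y), 0.0)

nll_drop = poisson_nll(f0, A, y) - poisson_nll(f, A, y)
```

The iterates stay nonnegative by construction and the Poisson objective decreases; SPIRAL's contribution is doing this efficiently with sparsity-promoting penalties and multiscale partition-based estimates.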
Novel integro-differential equations in image processing and its applications
Prashant Athavale, Eitan Tadmor
Motivated by the hierarchical multiscale image representation of Tadmor et al.,1 we propose a novel integro-differential equation (IDE) for a multiscale image representation. To this end, one integrates in inverse scale space a succession of refined, recursive 'slices' of the image, which are balanced by a typical curvature term at the finer scale. Although the original motivation came from a variational approach, the resulting IDE can be extended using standard techniques from PDE-based image processing. We use filtering and edge-preserving smoothing to yield a family of modified IDE models with applications to image denoising and image deblurring problems. The IDE models depend on a user-supplied scaling function which is shown to dictate the BV* properties of the residual error. Numerical experiments demonstrate application of the IDE approach to denoising and deblurring. Finally, we also propose another novel IDE based on the (BV,L1) decomposition. We present numerical results for this IDE and its variant and examine its properties.
Interactive Paper Session
Band reduction for hyperspectral imagery processing
Feature reduction denotes the group of techniques that reduce high dimensional data to a smaller set of components. In remote sensing, feature reduction is a preprocessing step to many algorithms, intended as a way to reduce the computational complexity and get a better data representation. Reduction can be done either by identifying bands from the original subset (selection), or by employing various transforms that produce new features (extraction). Research has noted challenges in both directions. In feature selection, identifying an "ideal" spectral band subset is a hard problem as the number of bands is increasingly large, rendering any exhaustive search infeasible. To counter this, various approaches have been proposed that combine a search algorithm with a criterion function. However, the main drawback of feature selection remains the rather narrow bandwidths covered by the selected bands, resulting in possible information loss. In feature extraction, some of the most popular techniques include Principal Component Analysis, Independent Component Analysis, and Orthogonal Subspace Projection. While they have been used with success in some instances, the resulting bands lack a physical relationship to the data and are mostly produced using statistical strategies. We propose a new technique for feature reduction that exploits search strategies for feature selection to extract a set of spectral bands from given imagery. The search strategy uses dynamic programming techniques to identify "the best set" of features.
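A minimal example of the search-plus-criterion pattern described above: greedy forward search with a Fisher-ratio criterion on synthetic band data. This is entirely illustrative, it is neither the paper's dynamic-programming strategy nor a criterion that accounts for interactions between selected bands:

```python
import numpy as np

def greedy_band_selection(X, y, k):
    """Greedy forward search over spectral bands.  X is (pixels, bands),
    y holds binary class labels; the criterion scores each candidate
    band by Fisher-like class separation."""
    def criterion(b):
        a, c = X[y == 0, b], X[y == 1, b]
        return (a.mean() - c.mean()) ** 2 / (a.var() + c.var() + 1e-12)
    chosen = []
    for _ in range(k):
        rest = [b for b in range(X.shape[1]) if b not in chosen]
        chosen.append(max(rest, key=criterion))
    return chosen

rng = np.random.default_rng(5)
n, bands = 200, 30
X = rng.normal(0.0, 1.0, (n, bands))
y = rng.integers(0, 2, n)
X[y == 1, 4] += 3.0          # band 4 separates the classes strongly
X[y == 1, 11] += 2.0         # band 11 separates them less strongly
sel = greedy_band_selection(X, y, 2)
```

The search returns the two informative bands in order of discriminative power; a real system would replace the per-band criterion with a subset-level one, which is exactly where exhaustive search becomes infeasible and smarter search strategies pay off.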
Identifying a walking human by a tensor decomposition based approach and tracking the human across discontinuous fields of views of multiple cameras
Takayuki Hori, Jun Ohya, Jun Kurumisawa
This paper proposes a method that identifies and tracks a walking human across the discontinuous fields of view of multiple cameras for the purpose of video surveillance. A typical video surveillance system has multiple cameras, but there are several spaces within the surveillance area that are not within any camera's field of view, and there are discontinuities between the fields of view of adjacent cameras. In such a system, humans need to be tracked across these discontinuous fields of view. Our proposed model addresses this issue using the concepts of gait pattern, gait model, and motion signature. Each human's gait pattern is constructed and stored in a database. The gait patterns span a tensor space with three dimensions: person, image feature, and spatio-temporal data. A human's gait model can be constructed from the gait pattern using the "tensor decomposition based approach" described in this paper. When a human appears in one camera's field of view (which is often discontinuous from the other cameras' fields of view), the human's motion signature is calculated and compared to the gait model of each person in the database. The person whose gait model is most similar to the motion signature is identified as the same person. After the person is identified, the person is tracked within the camera's field of view using the mean-shift algorithm based on color parameters. We conducted two experiments: the first identified and tracked humans in a single video sequence; the second was the same but used multiple cameras with discontinuous views. In both experiments, the percentage of subjects that were correctly identified and tracked was better than that of two widely used methods, PCA and nearest-neighbor.
The second experiment (human tracking across discontinuous views) shows the potential validity of the proposed method in a typical surveillance system.
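The identification step described in the abstract, matching a motion signature against stored gait models, can be sketched as a nearest-match search. This is a minimal illustration assuming gait models are reduced to feature vectors and compared by cosine similarity; the function name and the similarity measure are our assumptions, not the authors' implementation.

```python
import numpy as np

def identify_person(signature, gait_models):
    """Return the index of the gait model most similar to the motion
    signature (cosine similarity). A simplified sketch of the matching
    step; the paper's actual gait models come from tensor decomposition."""
    sims = []
    for model in gait_models:
        num = float(np.dot(signature, model))
        den = np.linalg.norm(signature) * np.linalg.norm(model) + 1e-12
        sims.append(num / den)
    return int(np.argmax(sims))

# Toy database: three people's gait models as feature vectors.
db = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([0.7, 0.7, 0.1])]
sig = np.array([0.9, 0.1, 0.0])
print(identify_person(sig, db))  # index of the closest gait model
```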
Restitution of multiple overlaid components on extremely long series of solar corona images
A. Llebaria, J. Loirat, P. Lamy
This contribution describes the methods used to accurately disentangle the components observed on a very large series of images of the solar corona. This series consists of 12 years of continuous observations provided by the LASCO/C2 coronagraph aboard SOHO (the SOlar and Heliospheric Observatory). The images, continuously centred on the occulted Sun, display a blend of many components. The most conspicuous are the K-corona from the coronal plasma, the F-corona from the coronal dust, and the instrumental straylight. All of them are optically thin, but in the LASCO/C2 field of view only the K-corona is polarized. The set of observations is composed of two huge series of images: the "polarization series" (at least one observation every day) and the "white light series" (more than 50 images every day). The goal is to determine quantitatively the evolution of each image component over the 12 years. Assuming 1) a small and slow temporal evolution for the F-corona and straylight, 2) the 2D regularity of the F-corona, and 3) the ability to deduce the influence of the SOHO-Sun distance, the F-corona function is determined from the polarized series and afterwards subtracted from the white light series to obtain the K-corona white light series.
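The separation strategy above, subtracting a slowly varying additive component from a long image series, can be illustrated with a toy running-median background model. This is only a stand-in for the paper's model-based F-corona determination; the function name and window size are assumptions.

```python
import numpy as np

def subtract_slow_background(series, window=5):
    """Remove a slowly varying additive component (e.g. F-corona plus
    straylight) from an image series by subtracting a running temporal
    median. A toy stand-in for the paper's model-based separation."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    out = np.empty_like(series)
    for t in range(n):
        lo = max(0, t - window // 2)
        hi = min(n, t + window // 2 + 1)
        background = np.median(series[lo:hi], axis=0)
        out[t] = series[t] - background
    return out

# Toy series: a constant "background" plus one transient signal.
frames = np.full((7, 4, 4), 10.0)
frames[3, 1, 1] += 5.0  # transient signal in frame 3
residual = subtract_slow_background(frames)
print(residual[3, 1, 1])  # the transient survives the subtraction
```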
Several approaches to solve the rotation illusion with wheel effect
Cheng Zhang, Rick Parent
The wheel effect (also called the Wagon-wheel effect) is a well-known rotation illusion in which a rotating wheel, when displayed as individual frames, appears to rotate differently from its true rotation due to temporal aliasing. In this paper, we propose several approaches to solve this problem for synthetic imagery in computer animation. First, we develop an algorithm to compute the frame number at which our visual perception starts to incorrectly interpret the wheel rotation. By making this critical frame number available, we can correct the wheel rotation by manipulating its geometry while viewers are unaware of the change. Our second approach is based on the Nyquist sampling theorem: to address the under-sampling issue, we increase the sample rate enough to capture the variation that correctly depicts the wheel rotation. Our third approach is based on the traditional view that texture is often used to aid our motion perception. We further identify certain rules that can be applied to the textures to distinguish the real motion from the illusion. For each approach, we analyze the advantages and disadvantages and suggest potential applications.
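The temporal aliasing behind the illusion can be made concrete with a small model: a wheel with n identical spokes repeats every 2π/n radians, so the perceived per-frame step is the true step folded into (−π/n, π/n]. This sketch is our own illustration of the aliasing, not the paper's algorithm.

```python
import math

def apparent_step(omega, fps, n_spokes):
    """Perceived per-frame rotation (radians) of a wheel with n_spokes
    identical spokes sampled at fps. Aliases the true per-frame step
    into (-pi/n, pi/n]; a minimal model of the wagon-wheel illusion."""
    spacing = 2 * math.pi / n_spokes      # angular period of the spoke pattern
    step = (omega / fps) % spacing        # true per-frame advance, folded
    if step > spacing / 2:                # nearest-spoke interpretation
        step -= spacing                   # wheel appears to rotate backward
    return step

# A 12-spoke wheel has a 30-degree pattern period. At 24 fps, a true
# step of 25 deg/frame is perceived as -5 deg/frame (apparent reversal).
deg = math.degrees(apparent_step(math.radians(25 * 24), 24, 12))
print(round(deg, 6))
```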
Restoring the spatial resolution of refocus images on 4D light field
This paper presents a method for generating a refocus image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, can control the depth of field after capturing a single image. Such a camera captures the 4D light field (angular and spatial information of light) within a limited 2D sensor, which reduces the 2D spatial resolution because of the inevitable 2D angular data. That is why a refocus image has lower spatial resolution than the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocus image. We have experimentally verified that the spatial information differs according to the depth of objects from the camera. So, for the selected refocused regions (at the corresponding depth), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while the other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.
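The baseline refocusing that the paper improves on can be sketched as shift-and-add over the angular views of the light field: each view is shifted in proportion to its angular coordinates and the shifted views are averaged. This sketch uses integer-pixel shifts and is not the paper's resolution-restoring method.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, y, x]:
    each angular view is shifted in proportion to (u, v), scaled by the
    refocus parameter alpha, then all views are averaged. Integer-pixel
    shifts only; a sketch of plain refocusing at the base resolution."""
    U, V, H, W = lightfield.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            acc += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return acc / (U * V)

# Toy 3x3 grid of angular views of an 8x8 scene.
lf = np.random.default_rng(0).random((3, 3, 8, 8))
img = refocus(lf, alpha=1.0)
print(img.shape)  # same spatial size as one angular view
```

With `alpha = 0` this reduces to a plain average over the angular views, i.e. the all-in-focus limit of the model.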
OASIS: a simulator to prepare and interpret remote imaging of solar system bodies
L. Jorda, S. Spjuth, H. U. Keller, et al.
We present a new tool, called "OASIS" (Optimized Astrophysical Simulator for Imaging Systems), whose aim is to generate synthetic calibrated images of solar system bodies. OASIS has been developed to support the operations and the scientific interpretation of visible images acquired by the OSIRIS visible camera aboard the Rosetta spacecraft, but it can be used to create synthetic images taken by the visible imaging system of any spacecraft. OASIS takes as input the shape model of the object, in the form of triangular facets defining its surface, geometric parameters describing the position and orientation of the objects included in the scene and of the observer, and instrumental parameters describing the geometric and radiometric properties of the camera. The rendering of the object is performed in several steps: (i) sorting the triangular facets in planes perpendicular to the direction of the light source and to the direction of the line-of-sight, (ii) tracing rays from a given facet to the light source and to the observer to check whether it is illuminated and visible to the observer, (iii) calculating the intersection between the projected coordinates of the facets and the pixels of the image, and finally (iv) radiometrically calibrating the images. The pixels of the final image contain the expected signal from the object in digital numbers (DN). We show in the article examples of synthetic images of the asteroid (2867) Steins created with OASIS, both for the preparation of the flyby and for the scientific interpretation of the acquired images later on.
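One radiometric ingredient of a facet-based renderer like the one described above is the brightness of each illuminated facet. As a minimal sketch, assuming a simple Lambertian photometric model (OASIS's actual photometric model is not specified here), facet brightness is the albedo times the cosine of the incidence angle, clipped at zero for facets facing away from the Sun.

```python
import numpy as np

def facet_signal(normals, sun_dir, albedo=1.0):
    """Lambertian brightness of triangular facets: albedo times the
    cosine of the solar incidence angle, clipped at zero for facets
    facing away from the Sun. A toy photometric step, assuming unit
    facet normals; occlusion and calibration to DN are omitted."""
    sun = np.asarray(sun_dir, dtype=float)
    sun /= np.linalg.norm(sun)
    cos_i = np.asarray(normals, dtype=float) @ sun
    return albedo * np.clip(cos_i, 0.0, None)

normals = np.array([[0.0, 0.0, 1.0],    # faces the Sun head-on
                    [1.0, 0.0, 0.0],    # grazing incidence
                    [0.0, 0.0, -1.0]])  # faces away -> dark
print(facet_signal(normals, [0.0, 0.0, 1.0]))
```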