Proceedings Volume 6060

Visualization and Data Analysis 2006


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 15 January 2006
Contents: 13 Sessions, 37 Papers, 0 Presentations
Conference: Electronic Imaging 2006
Volume Number: 6060

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Flow Visualization
  • Volume Visualization
  • Visualization Theory
  • Lighting
  • Image Processing
  • Terrain/GIS Visualization
  • Applications
  • Interaction Techniques
  • InfoVis
  • Visualization Techniques I
  • Visualization Techniques II
  • Bioinformatics
  • Poster Session
Flow Visualization
Multiscale image based flow visualization
Alexandru C. Telea, Robert Strzodka
We present MIBFV, a method to produce real-time, multiscale animations of flow datasets. MIBFV extends the attractive features of the Image-Based Flow Visualization (IBFV) method, i.e., dense flow-domain coverage with flow-aligned noise, real-time animation, implementation simplicity, and few (or no) user input requirements, to a multiscale dimension. We generate a multiscale hierarchy of flow-aligned patterns using an algebraic multigrid method and use them to synthesize the noise textures required by IBFV. We demonstrate our approach with animations that combine noise layers at multiple scales, in a global or level-of-detail manner.
Visualizing oceanic and atmospheric flows with streamline splatting
Yinlong Sun, Erich Ess, David Sapirstein, et al.
The investigation of the climate system is one of the most exciting areas of scientific research today. In the climate system, oceanic and atmospheric flows play a critical role. Because these flows are highly complex and span a wide range of temporal and spatial scales, effective computer visualization techniques are crucial to the analysis and understanding of the flows. However, existing techniques and software do not meet the demands of visualizing oceanic and atmospheric flows. In this paper, we use a new technique called streamline splatting to visualize 3D flows. This technique integrates streamline generation with the splatting method of volume rendering. It first generates segments of streamlines and then projects and splats the streamline segments onto the image plane. The projected streamline segments can be represented using a Hermite parametric model. Splatted curves are obtained by applying a Gaussian footprint function to the projected streamline segments, and the results are blended together. Thus the user can see through a volumetric flow field and obtain a 3D representation in a single image. The proposed technique has been applied to visualizing oceanic and storm flows. This work has the potential to be further developed into visualization software for regular PC workstations to help researchers explore and analyze climate flows.
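As a rough illustration of the splatting step described above (not the authors' Hermite-based implementation), the following Python sketch accumulates a Gaussian footprint along a projected streamline segment into a 2D splat buffer; the function name, kernel width, and sampling density are illustrative assumptions.

    import numpy as np

    def splat_segment(image, p0, p1, sigma=2.0, intensity=1.0):
        """Accumulate a Gaussian footprint along a projected streamline segment.

        image  : 2D float array acting as the splat buffer
        p0, p1 : segment endpoints in pixel coordinates (x, y)
        """
        h, w = image.shape
        # Sample the segment densely and splat a small Gaussian kernel at each sample.
        n = max(2, int(np.hypot(*(np.subtract(p1, p0)))) * 2)
        for t in np.linspace(0.0, 1.0, n):
            cx, cy = (1 - t) * np.asarray(p0) + t * np.asarray(p1)
            r = int(3 * sigma)
            xs = np.arange(max(0, int(cx) - r), min(w, int(cx) + r + 1))
            ys = np.arange(max(0, int(cy) - r), min(h, int(cy) + r + 1))
            if xs.size == 0 or ys.size == 0:
                continue
            X, Y = np.meshgrid(xs, ys)
            footprint = np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))
            image[Y, X] += intensity * footprint / n   # blend this sample's contribution

    buf = np.zeros((256, 256))
    splat_segment(buf, (30.0, 40.0), (200.0, 180.0))

Blending many such segment splats, attenuated with depth, would approximate the see-through volumetric effect the abstract describes.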
View-dependent multi-resolutional flow texture advection
Existing texture advection techniques produce unsatisfactory rendering results when there is a discrepancy between the resolution of the flow field and that of the output image. This is because many existing texture advection techniques, such as Line Integral Convolution (LIC), are inherently not view-dependent; that is, the resolution of the output textures depends only on the resolution of the input field, not on the resolution of the output image. When the resolution of the flow field after projection is much higher than the screen resolution, aliasing will occur unless the flow textures are appropriately filtered through expensive post-processing. On the other hand, when the resolution of the flow field is much lower than the screen resolution, a blocky or blurred appearance will be present in the rendering because the flow texture does not have enough samples. In this paper we present a view-dependent multiresolution flow texture advection method for structured rectilinear and curvilinear meshes. Our algorithm is based on a novel intermediate representation of the flow field, called the trace slice, which allows us to compute the flow texture at a desired resolution interactively based on the run-time viewing parameters. As the user zooms in and out of the field, the resolution of the resulting flow texture adapts automatically so that enough flow detail is presented while aliasing is avoided. Our implementation utilizes mipmapping and the programmable GPUs available on modern graphics hardware.
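The resolution-matching idea can be illustrated with a small, hypothetical helper that picks a mipmap level so the flow texture roughly matches its on-screen footprint; the actual method derives the texture from trace slices on the GPU, so this is only a conceptual sketch.

    import math

    def select_flow_texture_level(field_res, projected_pixels):
        """Choose a mipmap level so the advected flow texture roughly matches the
        on-screen footprint (avoiding aliasing when zoomed out and blockiness
        when zoomed in)."""
        # texels of the flow field that map onto one screen pixel
        texels_per_pixel = field_res / max(projected_pixels, 1)
        level = max(0.0, math.log2(texels_per_pixel))
        return min(int(round(level)), int(math.log2(field_res)))

    print(select_flow_texture_level(field_res=1024, projected_pixels=256))  # -> 2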
Volume Visualization
Volumetric depth peeling for medical image display
David Borland, John P. Clarke, Julia R. Fielding, et al.
Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.
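A minimal, CPU-side sketch of the core idea, using an occlusion function that is decoupled from the rendering transfer function to skip the material between the eye and the object of interest, is given below; the callables, threshold, and single-occluder assumption are simplifications of the method described above, which also handles multiple occluding regions and self-occlusion.

    import numpy as np

    def vdp_ray(samples, occlusion_opacity, render_opacity, color, skip_threshold=0.05):
        """Front-to-back compositing along one ray, after 'peeling' the first
        occluding region found by a separate occlusion function."""
        inside = occlusion_opacity(samples) > skip_threshold
        i, n = 0, len(samples)
        while i < n and not inside[i]:
            i += 1                      # empty space in front of the first occluder
        while i < n and inside[i]:
            i += 1                      # skip through the occluding region itself
        C, A = 0.0, 0.0                 # accumulated color and opacity
        for s in samples[i:]:
            a = render_opacity(s)
            C += (1.0 - A) * a * color(s)
            A += (1.0 - A) * a
            if A > 0.99:                # early ray termination
                break
        return C, A

    samples = np.linspace(0.0, 1.0, 200)                       # depths along one ray
    occ  = lambda s: (np.abs(s - 0.2) < 0.05).astype(float)    # occluding shell near the eye
    rend = lambda s: 0.8 * float(abs(s - 0.6) < 0.02)          # object of interest
    col  = lambda s: 1.0
    print(vdp_ray(samples, occ, rend, col))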
Adaptive border sampling for hardware texture-based volume visualization
This paper introduces a technique to properly sample volume boundaries in hardware texture-based volume visualization. Prior techniques render a volume with a set of uniformly spaced proxy geometries that sample (and represent) a set of uniform-depth slices. While this is sufficient for the core of a volume, it does not consider a sample's partial overlap at the boundaries of the volume, and this failure can lead to significant artifacts at the boundaries. Increasing the sampling rate does not solve the problem, but the proper calculation does. While these artifacts might not be easily visible with large datasets, this paper expands on the fundamentals of visualization by presenting a correct handling of sampling at boundaries, which is missing from the previous literature. Our technique computes the non-unit depth contributions of the volume at the boundaries. We use fragment programs to perform this adaptive border sampling, computing the partial sample contributions and matching sampling planes at the volume boundaries with the sampling geometry in the core of the volume.
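One standard way to account for a sample whose slab is thinner than the nominal slice spacing (for example, a partial sample at the volume boundary) is opacity correction; the snippet below is a generic illustration of that weighting, not necessarily the exact computation performed in the paper's fragment programs.

    def corrected_alpha(alpha_ref, d, d_ref):
        """Opacity of a sample whose slab thickness d differs from the reference
        thickness d_ref assumed when the transfer-function opacity was defined
        (e.g. a partial sample at the volume boundary)."""
        return 1.0 - (1.0 - alpha_ref) ** (d / d_ref)

    # A boundary sample covering only 30% of a full slab contributes less opacity:
    print(corrected_alpha(0.5, d=0.3, d_ref=1.0))   # ~0.188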
Ray-casting time-varying volume data sets with frame-to-frame coherence
Dani Tost, Sergi Grau, Maria Ferre, et al.
The goal of this paper is the proposal and evaluation of a ray-casting strategy that takes advantage of spatial and temporal coherence in image space as well as in object space in order to speed up rendering. It is based on a double structure: in image space, a temporal buffer that stores for each pixel the next instant of time at which the pixel must be recomputed, and in object space, a Temporal Run-Length Encoding of the voxel values through time. The algorithm skips empty and unchanged pixels through three different space-leaping strategies. It can compute the images sequentially in time or generate them simultaneously in batch. In addition, it can handle several data modalities simultaneously. Finally, a purpose-built out-of-core strategy is used to handle large datasets. The tests performed on two medical datasets and various phantom datasets show that the proposed strategy significantly speeds up rendering.
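A minimal sketch of the object-space structure, a per-voxel temporal run-length encoding that answers "what is the value at time t" and "when does it next change" (the latter being what an image-space temporal buffer would store); the class layout is an assumption, not the authors' data structure.

    from bisect import bisect_right

    class TemporalRLE:
        """Run-length encoding of one voxel's value through time:
        a sorted list of (start_time, value) runs."""
        def __init__(self, runs):
            self.starts = [t for t, _ in runs]
            self.values = [v for _, v in runs]

        def value_at(self, t):
            return self.values[bisect_right(self.starts, t) - 1]

        def next_change(self, t):
            """First instant after t at which the voxel value changes."""
            i = bisect_right(self.starts, t)
            return self.starts[i] if i < len(self.starts) else float('inf')

    v = TemporalRLE([(0, 10), (5, 12), (9, 10)])
    print(v.value_at(6), v.next_change(6))   # 12 9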
Visualization Theory
Theoretical analysis of uncertainty visualizations
Although a number of theories and principles have been developed to guide the creation of visualizations, it is not always apparent how to apply the knowledge in these principles. We describe the application of perceptual and cognitive theories for the analysis of uncertainty visualizations. General principles from Bertin, Tufte, and Ware are outlined and then applied to the analysis of eight different uncertainty visualizations. The theories provided a useful framework for analysis of the methods, and provided insights into the strengths and weaknesses of various aspects of the visualizations.
A visualization framework for design and evaluation
Benjamin J. Blundell, Gary Ng, Steve Pettifer
The creation of compelling visualisation paradigms is a craft often dominated by intuition and issues of aesthetics, with relatively few models to support good design. The majority of problem cases are approached by simply applying a previously evaluated visualisation technique. A large body of work exists covering the individual aspects of visualisation design, such as the human cognition aspects, visualisation methods for specific problem areas, psychology studies, and so forth, yet most frameworks regarding visualisation are applied after the fact as an evaluation measure. We present an extensible framework for visualisation aimed at structuring the design process, increasing decision traceability, and delineating the notions of function, aesthetics, and usability. The framework can be used to derive a set of requirements for good visualisation design and to evaluate existing visualisations, suggesting possible improvements. Our framework achieves this by being both broad and general, built on top of existing works, with hooks for extensions and customisations. This paper shows how existing theories of information visualisation fit into the scheme, presents our experience in applying this framework to several designs, and offers our evaluation of the framework and the designs studied.
Lighting
Maximum entropy lighting for physical objects
Thomas Malzbender, Erik Ordentlich
This paper presents a principled method for choosing informative lighting directions for physical objects. An ensemble of images of an object or scene is captured, each with a known, predetermined lighting direction. Diffuse reflection functions are then estimated for each pixel across such an ensemble. Once these are estimated, the object or scene can be interactively relit as it would appear illuminated from an arbitrary lighting direction. We present two approaches for evaluating images as a function of lighting direction. The first uses image compressibility evaluated across a grid of samples in lighting space. The second uses image variance and prediction error variance, which are monotonically related to compressibility for Gaussian distributions. The advantage of the variance approach is that both image variance and prediction error variance can be analytically derived from the scene reflection functions, and evaluated at the rate of a few nanoseconds per lighting direction.
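A brute-force illustration of the variance-based criterion, assuming a simple Lambertian per-pixel reflection model rather than the paper's estimated reflection functions (which allow the variance to be evaluated analytically and far faster):

    import numpy as np

    def best_lighting_direction(reflectance, directions):
        """Pick the lighting direction whose relit image has maximum variance.

        reflectance : (H, W, 3) per-pixel albedo-scaled normals, so relighting is a
                      dot product with a light direction (a Lambertian stand-in).
        directions  : (K, 3) unit light vectors sampled over the hemisphere.
        """
        best, best_var = None, -np.inf
        for d in directions:
            img = np.clip(reflectance @ d, 0.0, None)   # relit image for this direction
            v = img.var()
            if v > best_var:
                best, best_var = d, v
        return best, best_var

    H = W = 64
    rng = np.random.default_rng(0)
    refl = rng.random((H, W, 3))
    dirs = rng.normal(size=(100, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    print(best_lighting_direction(refl, dirs)[1])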
Pre-computed illumination for isosurfaces
Kevin M. Beason, Josh Grant, David C. Banks, et al.
Commercial software systems are available for displaying isosurfaces (also known as level sets, implicit surfaces, varieties, membranes, or contours) of 3D scalar-valued data at interactive rates, allowing a user to browse the data by adjusting the isovalue. We present a technique for applying global illumination to the resulting scene by precomputing the illumination for level sets and storing it in a 3D illumination grid. The technique permits globally illuminated surfaces to be rendered at interactive rates on an ordinary desktop computer with a 3D graphics card. We demonstrate the technique on datasets from magnetic resonance imaging (MRI) of the human brain, confocal laser microscopy of neural tissue in the mouse hippocampus, computer simulation of a Lennard-Jones fluid, and computer simulation of a neutron star.
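At render time the precomputed illumination can be fetched with an ordinary trilinear lookup into the 3D illumination grid; a minimal sketch follows (the grid layout and coordinate convention are assumptions).

    import numpy as np

    def sample_illumination(grid, p):
        """Trilinear lookup of precomputed illumination at point p (grid coordinates)."""
        x, y, z = p
        i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        fx, fy, fz = x - i, y - j, z - k
        c = grid[i:i+2, j:j+2, k:k+2]            # the 8 surrounding grid values
        c = c[0] * (1 - fx) + c[1] * fx          # interpolate along x
        c = c[0] * (1 - fy) + c[1] * fy          # then along y
        return c[0] * (1 - fz) + c[1] * fz       # then along z

    grid = np.random.default_rng(1).random((32, 32, 32))
    print(sample_illumination(grid, (10.3, 5.7, 20.1)))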
Retro-rendering with vector-valued light: producing local illumination from the transport equation
Many rendering algorithms can be understood as numerical solvers for the light-transport equation. Local illumination is probably the most widely implemented rendering algorithm: it is simple, fast, and encoded in 3D graphics hardware. It is not, however, derived as a solution to the light-transport equation. We show that the light-transport equation can be re-interpreted to produce local illumination by using vector-valued light and matrix-valued reflectance. This result fills an important gap in the theory of rendering. Using this framework, local and global illumination result from merely changing the values of parameters in the governing equation, permitting the equation and its algorithmic implementation to remain fixed.
Image Processing
Bit-plane based analysis of integer wavelet coefficients for image compression
This paper presents a bit-plane based statistical study of the integer wavelet transforms commonly used in image compression. In each bit-plane, the coefficients were modeled as binary random variables. Experimental results indicate that the probability of significant coefficients (P1) in each bit-plane monotonically increases from P1 ≈ 0 at the most significant bits (MSB) to P1 ≈ 0.5 at the least significant bits (LSB). A parameterized model to predict P1 from the MSB to the LSB is then proposed. The correlation among the different bit-planes within the same coefficient was also investigated. In addition, this study showed correlation of the significant coefficients in the same spatial orientation among different subbands. Finally, clustering within each subband and across the different subbands with the same spatial orientation was investigated. Our results show strong correlation between previously coded significant coefficients at higher levels and the significant coefficients in future passes at lower levels. The overall study in this paper is useful for understanding and enhancing existing wavelet-based image compression algorithms such as SPIHT and EBC.
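The quantity P1 studied here can be estimated directly from coefficient magnitudes; the following sketch computes a per-bit-plane significance probability on synthetic, heavy-tailed values standing in for wavelet detail coefficients (the data and the number of planes are assumptions).

    import numpy as np

    def significance_probabilities(coeffs, n_planes=8):
        """P1 (probability of a '1' bit) per bit-plane of integer wavelet coefficient
        magnitudes, from the MSB (plane n_planes-1) down to the LSB (plane 0)."""
        mags = np.abs(coeffs).astype(np.int64)
        return {b: float(((mags >> b) & 1).mean()) for b in reversed(range(n_planes))}

    rng = np.random.default_rng(0)
    # heavy-tailed magnitudes loosely mimicking wavelet detail coefficients
    coeffs = rng.laplace(scale=6.0, size=100_000).astype(int)
    for plane, p1 in significance_probabilities(coeffs).items():
        print(f"bit-plane {plane}: P1 = {p1:.3f}")   # rises toward ~0.5 at the LSB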
Two-dimensional reduction PCA: a novel approach for feature extraction, representation, and recognition
R. M. Mutelo, L. C. Khor, W. L. Woo, et al.
We develop a novel image feature extraction and recognition method, two-dimensional reduction principal component analysis (2D-RPCA). A two-dimensional image matrix contains redundant information between columns and between rows. Conventional PCA removes redundancy by transforming the 2D image matrices into a vector, where dimension reduction is done in one direction (column-wise). Unlike 2DPCA, 2D-RPCA eliminates redundancies between image rows and compresses the data in rows, and then eliminates redundancies between image columns and compresses the data in columns. Therefore, 2D-RPCA has two image compression stages: first, it eliminates the redundancies between image rows and compresses the information optimally within a few rows; then, it eliminates the redundancies between image columns and compresses the information within a few columns. This sequence is selected in such a way that the recognition accuracy is optimized. As a result, the representation is better because the information is more compact in a smaller area. The classification time is reduced significantly (smaller feature matrix), and the computational complexity of the proposed algorithm is reduced. The result is that 2D-RPCA classifies images faster, requires less memory storage, and yields higher recognition accuracy. The ORL database is used as a benchmark. The new algorithm achieves a recognition rate of 95.0% using a 9×5 feature matrix, compared to a recognition rate of 93.0% with a 112×7 feature matrix for the 2DPCA method and 90.5% for PCA (Eigenfaces) using 175 principal components.
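A sketch of the general two-stage idea, compressing along one image direction and then the other, in the spirit of 2D-RPCA; the exact ordering, optimization criterion, and feature sizes used by the authors may differ, and the 9×5 target below is taken from the abstract only as an example.

    import numpy as np

    def two_stage_2dpca(images, k_rows=9, k_cols=5):
        """Two-stage 2D PCA: project image rows, then columns (an illustrative
        sketch, not the paper's exact algorithm)."""
        X = np.asarray(images, dtype=float)                   # (N, H, W)
        A = X - X.mean(axis=0)
        # column covariance: redundancy between columns -> compress each row
        Gc = np.einsum('nhw,nhv->wv', A, A) / len(A)          # (W, W)
        Wc = np.linalg.eigh(Gc)[1][:, -k_cols:]               # top eigenvectors
        # row covariance: redundancy between rows -> compress each column
        Gr = np.einsum('nhw,nvw->hv', A, A) / len(A)          # (H, H)
        Wr = np.linalg.eigh(Gr)[1][:, -k_rows:]
        features = np.einsum('hr,nhw,wc->nrc', Wr, A, Wc)     # Wr^T A Wc per image
        return features, (Wr, Wc)

    faces = np.random.default_rng(0).random((40, 112, 92))    # ORL-sized stand-ins
    feats, _ = two_stage_2dpca(faces)
    print(feats.shape)   # (40, 9, 5)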
Terrain/GIS Visualization
Energetically optimal travel across terrain: visualizations and a new metric of geographic distance with anthropological applications
We present a visualization and computation tool for modeling the caloric cost of pedestrian travel across three-dimensional terrains. This tool is being used in ongoing archaeological research that analyzes how costs of locomotion affect the spatial distribution of trails and artifacts across archaeological landscapes. Throughout human history, traveling by foot has been the most common form of transportation, and therefore analyses of pedestrian travel costs are important for understanding prehistoric patterns of resource acquisition, migration, trade, and political interaction. Traditionally, archaeologists have measured geographic proximity based on "as the crow flies" distance. We propose new methods for terrain visualization and analysis based on measuring paths of least caloric expense, calculated using well-established metabolic equations. Our approach provides a human-centered metric of geographic closeness and overcomes significant limitations of available Geographic Information System (GIS) software. We demonstrate such path computations and visualizations applied to archaeological research questions. Our system includes tools to visualize energetic cost surfaces, to compare the elevation profiles of shortest paths versus least-cost paths, and to display paths of least caloric effort on Digital Elevation Models (DEMs). These analysis tools can be applied to calculate and visualize 1) likely locations of prehistoric trails and 2) expected ratios of raw material types to be recovered at archaeological sites.
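Least-cost paths of this kind are typically computed with Dijkstra's algorithm over the DEM grid, using an edge cost that grows with slope; the cost function below is a deliberately simple placeholder, not the published metabolic equations the authors use.

    import heapq
    import numpy as np

    def least_cost_path(dem, start, goal, cell_size=30.0):
        """Dijkstra over a DEM grid with a slope-dependent step cost (placeholder model)."""
        def step_cost(dz, dist):
            slope = dz / dist
            # flat walking cost plus penalties for any slope and extra for climbing
            return dist * (1.0 + 8.0 * abs(slope) + (4.0 * slope if slope > 0 else 0.0))

        h, w = dem.shape
        dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
        while pq:
            d, (i, j) = heapq.heappop(pq)
            if (i, j) == goal:
                break
            if d > dist.get((i, j), np.inf):
                continue
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        run = cell_size * np.hypot(di, dj)
                        nd = d + step_cost(dem[ni, nj] - dem[i, j], run)
                        if nd < dist.get((ni, nj), np.inf):
                            dist[(ni, nj)], prev[(ni, nj)] = nd, (i, j)
                            heapq.heappush(pq, (nd, (ni, nj)))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1], dist[goal]

    dem = np.outer(np.linspace(0, 50, 40), np.ones(40))   # a simple sloping terrain
    print(least_cost_path(dem, (0, 0), (39, 39))[1])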
Real-time 3D visualization of DEM combined with a robust DCT-based data-hiding method
A. Martin, G. Gesquiere, W. Puech, et al.
Using aerial photography, satellite imagery, scanned maps, and Digital Elevation Models requires making storage and visualization strategy choices. To obtain a three-dimensional visualization, we have to link these images, called the texture, with the terrain geometry, the Digital Elevation Model (DEM). This information is usually stored in three different files (one for the DEM, one for the texture, and one for the geo-referenced coordinates). In this paper we propose to store this information in a single file. To do so, we present a technique for data hiding in color images, based on the DC components of the DCT coefficients. In our application the images are the texture, and the elevation data are hidden in each block. This method is mainly robust against JPEG compression and cropping.
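A toy version of DC-component data hiding, embedding one payload bit per 8×8 block by forcing the parity of the quantized DC coefficient, illustrates the mechanism; the quantization step, parity rule, and robustness properties here are assumptions and fall well short of the JPEG- and cropping-resistant scheme described above.

    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bits_in_dc(texture, bits, q=16):
        """Hide one bit per 8x8 block in the quantized DC coefficient of its DCT
        (a simplified stand-in for a DC-component embedding scheme)."""
        img = texture.astype(float)
        h, w = img.shape
        k = 0
        for by in range(0, h - 7, 8):
            for bx in range(0, w - 7, 8):
                if k >= len(bits):
                    return img
                block = dctn(img[by:by+8, bx:bx+8], norm='ortho')
                dc = int(round(block[0, 0] / q))
                if (dc & 1) != bits[k]:          # force DC parity to match the payload bit
                    dc += 1
                block[0, 0] = dc * q
                img[by:by+8, bx:bx+8] = idctn(block, norm='ortho')
                k += 1
        return img

    tex = np.random.default_rng(0).integers(0, 256, (64, 64))
    elevation_bits = [1, 0, 1, 1, 0, 0, 1, 0]     # e.g. bits taken from DEM heights
    stego = embed_bits_in_dc(tex, elevation_bits)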
Applications
Hierarchical causality explorer: making complemental use of 3D/2D visualizations
Shizuka Azuma, Issei Fujishiro, Hideyuki Horii
Hierarchical causality relationships are ubiquitous in the real world. Since these relationships take intricate forms with two kinds of links, hierarchical abstraction and causal association, no single visualization style allows the user to comprehend them effectively. This paper introduces a novel information visualization framework that can change between existing 3D and 2D display styles interactively according to the user's visual analysis demands. The two visualization styles play complementary roles, and the change in style relies on morphing so as to maintain the user's cognitive map. Based on this framework, we have developed a general-purpose prototype system, which provides the user with an enriched set of functions not only for supporting fundamental information seeking, but also for bridging analytic gaps to accomplish high-level analytic tasks such as knowledge discovery and decision making. The effectiveness of the system is illustrated with an application to the analysis of a nuclear-hazard cover-up problem.
InvIncrements: incremental software to support visual simulation
David C. Banks, Wilfredo Blanco
This paper describes incremental software to support interactive visual simulation. The software was used in the classroom so that students could modify a common prototype code to create diverse applications. In the prototype application, parameters of the simulation are controlled through the use of 3D widgets. The software, based on Open Inventor, has been tested in the classroom (Fall 2002) for Linux and Irix systems, and is available on the World Wide Web.
Interaction Techniques
Plot of plots and selection glass
Modern dynamic data visualization environments often feature complex displays comprised of many interactive components, such as plots, axes, and others. These components typically contain attributes or properties that can be manipulated programmatically or interactively. Component property manipulation is usually a two-stage process. The user first selects or in some way identifies the component to be revised and then invokes some other technique or procedure to modify the property of interest. Until recently, components typically have been manipulated one at a time, even if the same property is being modified in each component. How to effectively select multiple components interactively in multiple-view displays remains an open issue. This paper proposes modeling the display components with conventional data sets and reusing simple dynamic graphics, such as a scatter plot or a bar chart, as the graphical user interface to select these elements. This simple approach, called plot of plots, provides a uniform, flexible, and powerful scheme to select multiple display components. In addition, another approach called selection glass is also presented. The selection glass is a tool glass with click-on and click-through selection tool widgets for the selection of components. The availability of the plot of plots and selection glass provides a starting point to investigate new techniques to simultaneously modify the same properties on multiple components.
Navigation techniques for large-scale astronomical exploration
Navigating effectively in virtual environments at human scales is a difficult problem. However, it is even more difficult to navigate in large-scale virtual environments such as those simulating the physical Universe; the huge spatial range of astronomical simulations and the dominance of empty space make it hard for users to acquire reliable spatial knowledge of astronomical contexts. This paper introduces a careful combination of navigation and visualization techniques to resolve the unique problems of large-scale real-time exploration in terms of travel and wayfinding. For large-scale travel, spatial scaling techniques and constrained navigation manifold methods are adapted to the large spatial scales of the virtual Universe. We facilitate large-scale wayfinding and context awareness using visual cues such as power-of-10 reference cubes, continuous exponential zooming into points of interest, and a scalable world-in-miniature (WIM) map. These methods enable more effective exploration and assist with accurate context-model building, thus leading to improved understanding of virtual worlds in the context of large-scale astronomy.
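The continuous exponential zoom mentioned above can be sketched as shrinking the remaining distance to the point of interest by a constant factor per frame, which keeps the perceived speed roughly uniform across vastly different scales; the ratio and frame count below are arbitrary.

    import numpy as np

    def exponential_zoom(cam_pos, target, steps=60, ratio=0.9):
        """Approach a point of interest so the remaining distance shrinks by a
        constant factor each frame."""
        positions = []
        p = np.asarray(cam_pos, dtype=float)
        t = np.asarray(target, dtype=float)
        for _ in range(steps):
            p = t + (p - t) * ratio     # multiplicative step toward the target
            positions.append(p.copy())
        return positions

    # From 1 parsec away (in metres) down toward a star, in 60 frames:
    frames = exponential_zoom([3.086e16, 0, 0], [0, 0, 0])
    print(np.linalg.norm(frames[0]), np.linalg.norm(frames[-1]))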
InfoVis
Reducing InfoVis cluttering through non uniform sampling, displacement, and user perception
Enrico Bertini, Luigi Dell'Aquila, Giuseppe Santucci
Clutter affects almost any kind of visual technique and can obscure the structure present in the data even in small datasets, making it hard for users to find patterns and reveal relationships. In this paper we present a general strategy to analyze and reduce clutter using a special kind of sampling, together with an ad-hoc displacement technique and perceptual issues collected through a user study. The method, defined for 2D scatter plots, is flexible enough to be used in quite different contexts. In particular, in this paper we demonstrate its usefulness on scatter plot, radviz, and parallel coordinates visualizations.
Diverse information integration and visualization
Susan L. Havre, Anuj Shah, Christian Posse, et al.
This paper presents and explores a technique for visually integrating and exploring diverse information. Researchers and analysts seeking knowledge and understanding of complex systems have increasing access to related, but diverse, data. These data provide an opportunity to consider entities of interest from multiple informational perspectives not available from any single data or information type. These multiple perspectives are derived from diverse but related data and integrated for simultaneous analysis. Our approach visualizes multiple entities across multiple perspectives, where each perspective, or dimension, is an alternate partitioning of the entities. The partitioning may be based on inherent or assigned attributes such as meta-data or prior knowledge captured in annotations. The partitioning may also be directly derived from entity data; for example, clustering, or unsupervised classification, can be applied to multi-dimensional vector entity data to partition the entities into groups, or clusters. The same entities may be clustered on data from different experiment types or processing approaches. This reduction of diverse data/information on an entity to a series of partitions, or discrete (and unit-less) categories, allows the user to view the entities across diverse data without concern for data types and units. Parallel coordinate plots typically visualize continuous data across multiple dimensions. We adapt parallel coordinate plots for discrete values, such as partition names, to allow the comparison of entity patterns across multiple dimensions for identifying trends and outlier entities. We illustrate this approach through a prototype, Juxter (short for Juxtaposer).
WordSpace: visual summary of text corpora
Ulrik Brandes, Martin Hoefer, Jürgen Lerner
In recent years several well-known approaches to visualizing the topical structure of a document collection have been proposed. Most of them feature spectral analysis of a term-document matrix with influence values and dimensionality reduction. We generalize this approach by arguing that there are many reasonable ways to project the term-document matrix into a low-dimensional space in which different features of the corpus are emphasized. Our main tool is a continuous generalization of adjacency-respecting partitions called structural similarity. In this way we obtain a generic framework in which the influence weights in the term-document matrix, the dimensionality-reducing projection, and the display of a target subspace may be varied according to the nature of the text corpus.
Visualization Techniques I
Trees in a treemap: visualizing multiple hierarchies
This paper deals with the visual representation of a particular kind of structured data: trees where each node is associated with an object (leaf node) of a taxonomy. We introduce a new visualization technique that we call Trees In A Treemap. In this visualization, edges can be drawn either as straight or as orthogonal edges. We compare our technique with several known techniques. To demonstrate the usability of our visualization techniques, we apply them to find interesting patterns in decision trees and network routing data.
Focus-based filtering + clustering technique for power-law networks with small world phenomenon
François Boutin, Jérôme Thièvre, Mountaz Hascoët
Realistic interaction networks usually present two main properties: a power-law degree distribution and small-world behavior. Few nodes are linked to many nodes, and adjacent nodes are likely to share common neighbors. Moreover, the graph structure usually presents a dense core that is difficult to explore with classical filtering and clustering techniques. In this paper, we propose a new filtering technique that accounts for a user focus. This technique extracts a tree-like graph that also has a power-law degree distribution and small-world behavior. The resulting structure is easily drawn with classical force-directed drawing algorithms. It is also quickly clustered and displayed as a multi-level silhouette tree (MuSi-Tree) from any user focus. We built a new graph filtering + clustering + drawing API and report a case study.
Enhancing scatterplot matrices for data with ordering or spatial attributes
Qingguang Cui, Matthew O. Ward, Elke A. Rundensteiner
The scatterplot matrix is one of the most common methods used to project multivariate data onto two dimensions for display. While each off-diagonal plot maps a pair of non-identical dimensions, there is no prescribed mapping for the diagonal plots. In this paper, histograms, 1D plots, and 2D plots are drawn in the diagonal plots of the scatterplot matrix. In 1D plots, the data are assumed to have an ordering, and they are projected in this order. In 2D plots, the data are assumed to have spatial information, and they are projected onto locations based on these spatial attributes, using color to represent the dimension value. The plots and the scatterplots are linked together by brushing. Brushing on these alternate visualizations affects the selected data in the regular scatterplots, and vice versa. Users can also navigate to other visualizations, such as parallel coordinates and glyphs, which are also linked with the scatterplot matrix by brushing. Ordering and spatial attributes can also be used as methods of indexing and organizing data. Users can select an ordering span or a spatial region by interacting with 1D plots or 2D plots, and then observe the characteristics of the selected data subset. 1D plots and 2D plots provide the ability to explore the ordering and spatial attributes, while the other views are for viewing the abstract data. In a sense, we are linking what are traditionally seen as scientific visualization methods with methods from the information visualization and statistical graphics fields. We validate the usefulness of this integration with two case studies: time series data analysis and spatial data analysis.
Visualization Techniques II
Content-based text mapping using multi-dimensional projections for exploration of document collections
Rosane Minghim, Fernando Vieira Paulovich, Alneu de Andrade Lopes
This paper presents a technique for generating maps of documents aimed at placing similar documents in the same neighborhood. Besides being able to group (and separate) documents by their contents, it runs at very manageable computational cost. Based on multi-dimensional projection techniques and an algorithm for projection improvement, it results in a surface map that allows the user to identify a number of important relationships between documents and sub-groups of documents via visualization and interaction. Visual attributes such as height, color, isolines, and glyphs, as well as aural attributes (such as pitch), help add dimensions for integrated visual analysis. Exploration and narrowing of focus can be performed using a set of tools provided. This novel text mapping technique, named IDMAP (Interactive Document Map), is fully described in this paper. Results are compared with dimensionality reduction and clustering techniques used for the same purposes. The maps are expected to support a large number of applications that rely on the retrieval and examination of document collections and to complement the type of information offered by current knowledge-domain visualizations.
Mapping texts through dimensionality reduction and visualization techniques for interactive exploration of document collections
Alneu de Andrade Lopes, Rosane Minghim, Vinícius Melo, et al.
The sheer volume of information currently available often impairs the tasks of searching, browsing, and analyzing information pertinent to a topic of interest. This paper presents a methodology to create a meaningful graphical representation of document corpora targeted at supporting the exploration of correlated documents. The purpose of such an approach is to produce a map of a document body on a research topic or field, based on the analysis of document contents and the similarities amongst articles. The document map is generated, after text pre-processing, by projecting the data into two dimensions using Latent Semantic Indexing. The projection is followed by hierarchical clustering to support sub-area identification. The map can be interactively explored, helping to narrow down the search for relevant articles. Tests were performed using a collection of documents pre-classified into three research subject classes: Case-Based Reasoning, Information Retrieval, and Inductive Logic Programming. The map produced was capable of separating the main areas, placing documents close to similar documents, revealing possible topics, and identifying the boundaries between them. The tool supports the exploration of inter-topic and intra-topic relationships and is useful in many contexts that require deciding which articles are relevant to read, such as scientific research, education, and training.
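A compact sketch of the projection-plus-clustering pipeline (a truncated SVD as in Latent Semantic Indexing, followed by hierarchical clustering); the weighting, dimensionality, and cluster count are placeholder choices, not the authors' settings.

    import numpy as np
    from scipy.linalg import svd
    from scipy.cluster.hierarchy import linkage, fcluster

    def document_map(term_doc, n_dims=2, n_clusters=3):
        """Project a (terms x documents) matrix with a truncated SVD (LSI) and
        group the projected documents with hierarchical clustering."""
        U, s, Vt = svd(term_doc, full_matrices=False)
        coords = (np.diag(s[:n_dims]) @ Vt[:n_dims]).T        # one 2D point per document
        labels = fcluster(linkage(coords, method='ward'), n_clusters, criterion='maxclust')
        return coords, labels

    rng = np.random.default_rng(0)
    A = rng.random((200, 30))          # toy term-document matrix (e.g. tf-idf weights)
    coords, labels = document_map(A)
    print(coords.shape, np.unique(labels))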
Bioinformatics
Visualizing brain rhythms and synchrony
Kay A. Robbins, Dragana Veljkovic, Egle Pilipaviciute
Patterns of synchronized brain activity have been widely observed in EEGs and multi-electrode recordings, and much study has been devoted to understanding their role in brain function. We introduce the problem of visualizing synchronized behavior and propose visualization techniques for assessing temporal and spatial patterns of synchronization from data. We discuss spike rate plots, activity succession diagrams, space-time activity band visualization, and low-dimensional projections as methods for identifying synchronized behavior in populations of neurons and for detecting the possibly short-lived neuronal assemblies that produced them. We use wavelets in conjunction with these visualization techniques to extract the frequency and temporal localization of synchronized behavior. Most of these techniques can be streamed, making them suitable for analyzing long-running experimental recordings as well as the output of simulation models.
Automatic feature-based surface mapping for brain cortices
Fabien Vivodtzev, David F. Wiley, Lars Linsen, et al.
We present a method that maps a complex surface geometry to an equally complicated, similar surface. One main objective of our effort is to develop technology for automatically transferring surface annotations from an atlas brain to a subject brain. While macroscopic regions of brain surfaces often correspond, the detailed surface geometry of corresponding areas can vary greatly. We have developed a method that simplifies a subject brain's surface forming an abstract yet spatially descriptive point cloud representation, which we can match to the abstract point cloud representation of the atlas brain using an approach that iteratively improves the correspondence of points. The generation of the point cloud from the original surface is based on surface smoothing, surface simplification, surface classification with respect to curvature estimates, and clustering of uniformly classified regions. Segment mapping is based on spatial partitioning, principal component analysis, rigid affine transformation, and warping based on the thin-plate spline (TPS) method. The result is a mapping between topological components of the input surfaces allowing for transfer of annotations.
Poster Session
Blogviz: mapping the dynamics of information diffusion in blogspace
Blogviz is a visualization model for mapping the transmission and internal structure of top links across the blogosphere. It explores the idea of meme propagation by assuming a parallel with the spreading of most cited URLs in daily weblog entries. The main goal of Blogviz is to unravel hidden patterns in the topics diffusion process. What's the life cycle of a topic? How does it start and how does it evolve through time? Are topics constrained to a specific community of users? Who are the most influential and innovative blogs in any topic? Are there any relationships amongst topic proliferators?
Organizing and visualizing database data using parallel coordinates
In this paper, we describe a data organization and axis grouping technique for managing parallel coordinate plots. A database visualization model is created as an intermediary between the data and the visualization. On the visualization side, axes within a parallel coordinate plot are put into groups which can be represented by a new axis in the plot, while the members of the group are hidden. Methods are presented for building these groups and displaying their axes, each with their own advantages and disadvantages. Lastly, a working system which uses these techniques to visualize data from a database is presented.
Visualizing 3D vector fields with splatted streamlines
Erich Ess, Yinlong Sun
We present a novel technique called streamline splatting to visualize 3D vector fields interactively. This technique integrates streamline generation with the splatting method of volume rendering. The key idea is to create volumetric streamlines using geometric streamlines and a kernel footprint function. To optimize the rendering speed, we represent the volumetric streamlines in terms of a series of slices perpendicular to the principal viewing direction. Thus 3D volume rendering is achieved by blending all slice textures with support of graphics hardware. This approach allows the user to visualize 3D vector fields interactively such as by rotation and zooming on regular PCs. This new technique may lead to better understanding of complex structures in 3D vector fields.
SRS browser: a visual interface to the sequence retrieval system
This paper presents a novel approach to the visual exploration and navigation of complex association networks of biological data sets, e.g., published papers, gene or protein information. The generic approach was implemented in the SRS Browser as an alternative visual interface to the widely used Sequence Retrieval System (SRS) [1]. SRS supports keyword-based search of about 400 biomedical databases. While SRS presents search results as rank-ordered lists of matching entities, the SRS Browser displays entities and their relations for interactive exploration. A formal usability study was conducted to examine the SRS Browser interface's capabilities to support knowledge discovery and management.
Tracing parallel vectors
Jeffrey Sukharev, Xiaoqiang Zheng, Alex Pang
Feature tracking algorithms usually rely on operators for identifying regions of interest. One commonly used operator is the parallel vectors operator introduced by Peikert and Roth [4]. In this paper, we propose a new and improved method for finding parallel vectors in 3D vector fields. Our method uses a two-stage approach: in the first stage we extract solution points from 2D faces using the Newton-Raphson method, and in the second stage we use analytical tangents to trace solution lines. The distinct advantage of our method over the previous method lies in the fact that our algorithm does not require a very fine grid to find all the important topological features. As a consequence, the extraction phase does not have to be at the same resolution as the original dataset. More importantly, the feature lines extracted are topologically consistent. We demonstrate the tracing algorithm with results from several datasets.
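For context, the parallel vectors operator extracts the locus where two vector fields v and w are parallel, which can be written as

    P(v, w) = { x : v(x) × w(x) = 0 },

i.e., the set of points where v(x) = λ w(x) for some scalar λ. The first stage described above solves this condition on the 2D faces of each cell, and the second stage traces the resulting solution lines through the volume.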
Output-sensitive volume tracking
Lian Jiang, XiaoLin Li
Feature tracking is a useful technique for studying the evolution of phenomena (or features) in time-varying scientific datasets. Time-varying datasets can be massive and are constantly becoming larger as more powerful machines are being used for scientific computations. To interactively explore such datasets, feature tracking must be done efficiently. For massive datasets, which do not fit into memory, tracking should be done out-of-core. In this paper, we propose an "output-sensitive" feature tracking approach, which uses pre-computed metadata to (1) enable out-of-core processing of structured datasets, (2) expedite the feature tracking process, and (3) make the feature tracking less threshold-sensitive. With the assistance of the pre-computed metadata, the complexity of feature extraction is improved from O(m lg m) to O(n), where m is the number of cells in a timestep and n is the number of cells in just the extracted features. Furthermore, the feature tracking's complexity is improved from O(n lg n) to O(n lg k), where k is the number of cells in a feature group. The metadata computation and feature tracking can easily be adapted to the out-of-core paradigm. The effectiveness and efficiency of this algorithm are demonstrated through experiments.
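One way precomputed metadata can make extraction output-sensitive is to store a (min, max) value range per block, so whole blocks that cannot contain feature cells are skipped without being read; the block layout below is a hypothetical simplification of the paper's metadata.

    def extract_feature_cells(blocks, threshold):
        """Visit only blocks whose metadata says they may contain feature cells,
        so work scales with the extracted feature size rather than the timestep."""
        feature_cells = []
        for meta, cells in blocks:                  # meta = (min_val, max_val)
            if meta[1] < threshold:                 # whole block below threshold
                continue                            # -> skipped without touching its cells
            feature_cells.extend(c for c in cells if c[1] >= threshold)
        return feature_cells

    blocks = [((0.0, 0.4), [((0, 0), 0.2), ((0, 1), 0.4)]),
              ((0.6, 0.9), [((5, 5), 0.7), ((5, 6), 0.9)])]
    print(extract_feature_cells(blocks, threshold=0.5))   # only cells from the hot block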
Visualization of force fields in protein structure prediction
The force fields used in molecular computational biology are not mathematically defined in such a way that their representation would facilitate a straightforward application of volume visualization techniques. To visualize energy, it is necessary to define a spatial mapping for these fields. Equipped with such a mapping, we can generate volume renderings of the internal energy states of a molecule. We describe our force field, the spatial mapping that we use for energy, and the visualizations that we produce from this mapping. We provide images and animations that offer insight into the computational behavior of the energy optimization algorithms that we employ.
Correspondence-based visualization techniques
A visual representation model is an abstract pattern used to create images which characterize quantitative information. By using a texture image to define a visual representation model, correspondence of color to denote similarity, and correspondence of image location over multiple images to associate information into collections, highly effective visualization techniques are made possible. One such technique for two-dimensional texture-based vector field visualization is vector field marquetry. Vector field marquetry uses a synthesized image representing direction as a conditioner for pixel replacement over a collection of vector field direction-magnitude portraits. The resulting synthesized image displays easily recognizable local and global features, vector direction, and magnitude. A related technique enabled by correspondence-based methods is the sparse representation of a vector field by a topological skeleton constructed from isodirection lines. Each vector in a vector field along an isodirection line points in the same direction. Isodirection lines subdivide the domain into regions of similar vectors, converge at critical points, and represent global characteristics of the vector field.