Proceedings Volume 8294

Visualization and Data Analysis 2012

Pak Chung Wong, David L. Kao, Ming C. Hao, et al.
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 14 December 2011
Contents: 11 Sessions, 37 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2012
Volume Number: 8294

Table of Contents

  • Front Matter: Volume 8294
  • Interactive Visualization
  • Visual Analytics
  • Visualization Techniques and Applications
  • Large Data Visualization
  • Evaluations
  • Geo-Temporal Visualizations
  • Visualization Algorithms
  • Bioinformatics Visualizations
  • Flow Visualization
  • Poster Session
Front Matter: Volume 8294
Front Matter: Volume 8294
This PDF file contains the front matter associated with SPIE Proceedings Volume 8294, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Interactive Visualization
StreamSqueeze: a dynamic stream visualization for monitoring of event data
Florian Mansmann, Milos Krstajic, Fabian Fischer, et al.
While automated analytical solutions for data streams are already in place for clear-cut situations, only a few visual approaches have been proposed in the literature for exploratory analysis tasks on dynamic information. However, due to the competitive or security-related advantages that real-time information gives in domains such as finance, business, or networking, we are convinced that there is a need for exploratory visualization tools for data streams. Under the conditions that new events have higher relevance and that smooth transitions enable traceability of items, we propose a novel dynamic stream visualization called StreamSqueeze. In this technique, the degree of interest of recent items is expressed through an increase in size, so recent events can be shown in more detail. The technique has two main benefits: First, the layout algorithm arranges items in several lists of various sizes and optimizes the positions within each list so that the transition of an item from one list to the next triggers the fewest visual changes. Second, the animation scheme ensures that for 50 percent of the time an item has a static screen position, where reading is most effective, and then continuously shrinks and moves to its next static position in the subsequent list. To demonstrate the capability of our technique, we apply it to large and high-frequency news and syslog streams and show how it maintains optimal stability of the layout under the conditions given above.
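The animation scheme the abstract describes (a static hold for half the cycle, then a continuous move to the next list) can be sketched as follows; the function name and the smoothstep easing are illustrative assumptions, not the authors' implementation:

```python
def item_position(t, pos_a, pos_b, hold=0.5):
    """Screen position of an item at cycle time t in [0, 1).

    The item holds a static position for the first `hold` fraction of the
    cycle (where reading is most effective), then moves continuously
    toward its next static position in the subsequent list.
    """
    if t < hold:
        return tuple(pos_a)                     # static phase
    s = (t - hold) / (1.0 - hold)               # normalized transition time
    s = s * s * (3.0 - 2.0 * s)                 # smoothstep easing (assumption)
    return tuple(a + s * (b - a) for a, b in zip(pos_a, pos_b))
```

Halfway through the transition phase (t = 0.75 with the default hold), the item sits midway between its two static positions.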
Interactive data-centric viewpoint selection
Han Suk Kim, Didem Unat, Scott B. Baden, et al.
We propose a new algorithm for automatic viewpoint selection for volume data sets. While most previous algorithms depend on information theoretic frameworks, our algorithm solely focuses on the data itself without off-line rendering steps, and finds a view direction which shows the data set's features well. The algorithm consists of two main steps: feature selection and viewpoint selection. The feature selection step is an extension of the 2D Harris interest point detection algorithm. This step selects corner and/or high-intensity points as features, which captures the overall structures and local details. The second step, viewpoint selection, takes this set and finds a direction that lays out those points in a way that the variance of projected points is maximized, which can be formulated as a Principal Component Analysis (PCA) problem. The PCA solution guarantees that surfaces with detected corner points are less likely to be degenerative, and it minimizes occlusion between them. Our entire algorithm takes less than a second, which allows it to be integrated into real-time volume rendering applications where users can modify the volume with transfer functions, because the optimized viewpoint depends on the transfer function.
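The viewpoint-selection step above reduces to a PCA of the feature points. The sketch below substitutes simple intensity thresholding for the paper's extended Harris detector; everything else follows the stated formulation (the view direction is the axis of least variance, so the two leading principal axes span the maximum-variance projection plane):

```python
import numpy as np

def viewpoint_from_features(volume, threshold):
    # Feature selection: a simplified stand-in for the paper's extended
    # Harris detector -- keep high-intensity voxels as feature points.
    pts = np.argwhere(volume > threshold).astype(float)
    pts -= pts.mean(axis=0)                 # center the feature cloud
    # Viewpoint selection: PCA via SVD. The two leading principal axes
    # span the projection plane that maximizes the variance of the
    # projected points; the remaining axis is the view direction.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt[2]                            # unit view direction

# Toy volume: a bright diagonal plane of voxels (x == y).
vol = np.zeros((16, 16, 16))
for i in range(16):
    vol[i, i, :] = 1.0
view = viewpoint_from_features(vol, 0.5)    # perpendicular to the plane
```

For this toy volume the features lie on the plane x = y, so the recovered view direction is (1, -1, 0)/sqrt(2) up to sign, i.e. looking straight at the bright plane.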
Interactive analysis of situational awareness metrics
Derek Overby, Jim Wall, John Keyser
Digital systems are employed to maintain situational awareness (SA) of people in various contexts including emergency response, disaster relief, and military operations. Because these systems are often operated in wireless environments and are used to support real-time decision making, the accuracy of the SA data provided is important to measure and evaluate in the development of new systems. Our work has been conducted in conjunction with analysts in the evaluation and performance comparison of different systems designed to provide a high degree of situational awareness in military operations. To this end, we defined temporal and spatial metrics for measuring the accuracy of the SA data provided by each system. In this paper we discuss the proposed temporal and spatial metrics for SA data and show how we provided these metrics in a linked coordinated multiple view environment that enabled the analysts we worked with to effectively perform several critical analysis tasks. The temporal metric is designed to help determine when network performance has a significant effect on SA data, and therefore identify specific time periods in which individuals were provided inaccurate position data for their peers. Temporal context can be used to determine the local or global nature of any SA data inaccuracy, and the spatial metric can then be used to identify geographic effects on network performance of the wireless system. We discuss the interactive software implementation of our metrics and show how this analysis capability enabled the analysts to evaluate the observed effects of network latency and system performance on SA data during an exercise.
Visual Analytics
Incremental visual text analytics of news story development
Milos Krstajic, Mohammad Najm-Araghi, Florian Mansmann, et al.
Online news sources produce thousands of news articles every day, reporting on local and global real-world events. New information quickly replaces the old, making it difficult for readers to put current events in the context of the past. Additionally, the stories have very complex relationships and characteristics that are difficult to model: they can be weakly or strongly connected, or they can merge or split over time. In this paper, we present a visual analytics system for exploration of news topics in dynamic information streams, which combines interactive visualization and text mining techniques to facilitate the analysis of similar topics that split and merge over time. We employ text clustering techniques to automatically extract stories from online news streams and present a visualization that: 1) shows temporal characteristics of stories in different time frames with different levels of detail; 2) allows incremental updates of the display without recalculating the visual features of the past data; 3) sorts the stories by minimizing clutter and overlap from edge crossings. By using interaction, stories can be filtered based on their duration and characteristics in order to be explored in full detail with details on demand. To demonstrate the usefulness of our system, case studies with real news data are presented and show the capabilities for detailed dynamic text stream exploration.
Guided text analysis using adaptive visual analytics
Chad A. Steed, Christopher T. Symons, Frank A. DeNap, et al.
This paper demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insight in the search and analysis of textual information. More specifically, we have developed a system, called Gryffin, that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source publications related to national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the Department of Homeland Security's Fusion Centers, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in search and investigative analysis of textual information.
Visualization Techniques and Applications
Designing a better weather display
Colin Ware, Matthew Plumlee
The variables most commonly displayed on weather maps are atmospheric pressure, wind speed and direction, and surface temperature. But they are usually shown separately, not together on a single map. As a design exercise, we set the goal of finding out if it is possible to show all three variables (two 2D scalar fields and a 2D vector field) simultaneously such that values can be accurately read using keys for all variables, a reasonable level of detail is shown, and important meteorological features stand out clearly. Our solution involves employing three perceptual "channels", a color channel, a texture channel, and a motion channel in order to perceptually separate the variables and make them independently readable. We conducted an experiment to evaluate our new design both against a conventional solution, and against a glyph-based solution. The evaluation tested the abilities of novice subjects both to read values using a key, and to see meteorological patterns in the data. Our new scheme was superior especially in the representation of wind patterns using the motion channel, and it also performed well enough in the representation of pressure using the texture channel to suggest it as a viable design alternative.
Visualization feedback for musical ensemble practice: a case study on phrase articulation and dynamics
Trevor Knight, Nicolas Boulliot, Jeremy R. Cooperstock
We consider the possible advantages of visualization in supporting musical interpretation. Specifically, we investigate the use of visualizations in making a subjective judgement of a student's performance compared to a reference "expert" performance for particular aspects of musical performance: articulation and dynamics. Our assessment criteria for the effectiveness of the feedback are the consistency of judgements made by the participants using each modality (that is, in determining how well the student musician matches the reference musician), the time taken to evaluate each pair of samples, and subjective opinion of the perceived utility of the feedback. For articulation, differences in the mean scores assigned by the participants to the reference versus the student performance were not statistically significant for each modality. This suggests that while the visualization strategy did not offer any advantage over presentation of the samples by audio playback alone, visualization nevertheless provided sufficient information to make similar ratings. For dynamics, four of our six participants categorized the visualizations as helpful. The means of their ratings for the visualization-only and both-together conditions were not statistically different but were statistically different from the audio-only treatment, indicating a dominance of the visualizations when presented together with audio. Moreover, the ratings of dynamics under the visualization-only condition were significantly more consistent than the other conditions.
Exploring ensemble visualization
Madhura N. Phadke, Lifford Pinto, Oluwafemi Alabi, et al.
An ensemble is a collection of related datasets. Each dataset, or member, of an ensemble is normally large, multidimensional, and spatio-temporal. Ensembles are used extensively by scientists and mathematicians, for example, by executing a simulation repeatedly with slightly different input parameters and saving the results in an ensemble to see how parameter choices affect the simulation. To draw inferences from an ensemble, scientists need to compare data both within and between ensemble members. We propose two techniques to support ensemble exploration and comparison: a pairwise sequential animation method that visualizes locally neighboring members simultaneously, and a screen door tinting method that visualizes subsets of members using screen space subdivision. We demonstrate the capabilities of both techniques, first using synthetic data, then with simulation data of heavy ion collisions in high-energy physics. Results show that both techniques are capable of supporting meaningful comparisons of ensemble data.
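The screen-space subdivision behind screen door tinting can be illustrated with a minimal sketch. The pixel-assignment rule and block size below are assumptions of this sketch; the paper's method additionally applies a color tint per member, which is omitted here:

```python
import numpy as np

def screen_door_composite(member_images, block=1):
    # Screen-space subdivision: pixel (x, y) displays the ensemble member
    # selected by a repeating block pattern, so several members are
    # visible side by side at fine granularity.
    n = len(member_images)
    h, w = member_images[0].shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    owner = ((xs // block) + (ys // block)) % n
    out = np.zeros_like(member_images[0])
    for i, img in enumerate(member_images):
        out[owner == i] = img[owner == i]
    return out

# Two constant "members" interleave into a checkerboard.
checker = screen_door_composite([np.zeros((4, 4)), np.ones((4, 4))])
```

With block sizes larger than one pixel, the same rule produces coarser tiles, trading spatial resolution of each member for easier per-member readability.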
Large Data Visualization
Parallel large data visualization with display walls
Luiz Scheidegger, Huy T. Vo, Jens Krüger, et al.
While there exist popular software tools that leverage the power of arrays of tiled high resolution displays, they usually require either the use of a particular API or significant programming effort to be properly configured. We present PVW (Parallel Visualization using display Walls), a framework that uses display walls for scientific visualization, requiring minimum labor in setup, programming and configuration. PVW works as a plug-in to pipeline-based visualization software, and allows users to migrate existing visualizations designed for a single-workstation, single-display setup to a large tiled display running on a distributed machine. Our framework is also extensible, allowing different APIs and algorithms to be made display wall-aware with minimum effort.
SDSS Log Viewer: visual exploratory analysis of large-volume SQL log data
Jian Zhang, Chaomei Chen, Michael S. Vogeley, et al.
User-generated Structured Query Language (SQL) queries are a rich source of information for database analysts, information scientists, and the end users of databases. In this study a group of scientists in astronomy and computer and information scientists work together to analyze a large volume of SQL log data generated by users of the Sloan Digital Sky Survey (SDSS) data archive in order to better understand users' data seeking behavior. While statistical analysis of such logs is useful at aggregated levels, efficiently exploring specific patterns of queries is often a challenging task due to the typically large volume of the data, multivariate features, and data requirements specified in SQL queries. To enable and facilitate effective and efficient exploration of the SDSS log data, we designed an interactive visualization tool, called the SDSS Log Viewer, which integrates time series visualization, text visualization, and dynamic query techniques. We describe two analysis scenarios of visual exploration of SDSS log data, including understanding unusually high daily query traffic and modeling the types of data seeking behaviors of massive query generators. The two scenarios demonstrate that the SDSS Log Viewer provides a novel and potentially valuable approach to support these targeted tasks.
Evaluations
Comparison of open-source visual analytics toolkits
John R. Harger, Patricia J. Crossno
We present the results of the first stage of a two-stage evaluation of open source visual analytics packages. This stage is a broad feature comparison over a range of open source toolkits. Although we had originally intended to restrict ourselves to comparing visual analytics toolkits, we quickly found that very few were available. So we expanded our study to include information visualization, graph analysis, and statistical packages. We examine three aspects of each toolkit: visualization functions, analysis capabilities, and development environments. With respect to development environments, we look at platforms, language bindings, multi-threading/parallelism, user interface frameworks, ease of installation, documentation, and whether the package is still being actively developed.
Evaluation of progressive treemaps to convey tree and node properties
René Rosenbaum, Bernd Hamann
In this paper, we evaluate progressive treemaps (PTMs). Progressive refinement has a long tradition in image communication, but is a novel approach for information presentation. Besides technical benefits it also promises to provide advantages important for the conveyance of data properties. In this first user study in this domain, we focus on the additional value of progressive refinement for traditional treemaps to convey the topology of a given hierarchical data set and properties of its nodes. To achieve this, we compare the results gained for common squarified treemap displays with and without progression for various related tasks and set-ups. The results we obtained indicate that PTMs allow for a better conveyance of topological features and node properties in most set-ups. We also assessed the opinions of our study participants and found that PTMs also lead to a better confidence about the given answers and provide more assistance and user friendliness.
Evaluation of multivariate visualizations: a case study of refinements and user experience
Mark A. Livingston, Jonathan W. Decker
Multivariate visualization (MVV) aims to provide insight into complex data sets with many variables. The analyst's goal may be to understand how one variable interacts with another, to identify potential correlations between variables, or to understand patterns of a variable's behavior over the domain. Summary statistics and spatially abstracted plots of statistical measures or analyses are unlikely to yield insights into spatial patterns. Thus we focus our efforts on MVVs, which we hope will express key properties of the data within the original data domain. Further narrowing the problem space, we consider how these techniques may be applied to continuous data variables. One difficulty of MVVs is that the number of perceptual channels may be exceeded. We embarked on a series of evaluations of MVVs in an effort to understand the limitations of attributes that are used in MVVs. In a follow-up study to previously published results, we attempted to use our past results to inform refinements to the design of the MVVs and the study itself. Some changes improved performance, whereas others degraded performance. We report results from the follow-up study and a comparison of data collected from subjects who participated in both studies. On the positive end, we saw improved performance with Attribute Blocks, an MVV newly introduced to our on-going evaluation, relative to Dimensional Stacking, a technique we were examining previously. On the other hand, our refinement to Data-driven Spots resulted in greater errors on the task. Users' previous exposure to the MVVs enabled them to complete the task significantly faster (but not more accurately). Previous exposure also yielded lower ratings of subjective workload. We discuss these intuitive and counter-intuitive results and the implications for MVV design.
Geo-Temporal Visualizations
Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams
Ming Hao, Christian Rohrdantz, Halldór Janetzko, et al.
Twitter currently receives over 190 million tweets (short, text-based Web posts) per day, and manufacturing companies receive over 10 thousand Web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations, and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
A self-adaptive technique for visualizing geospatial data in 3D with minimum occlusion
Geospatial data are often visualized as 2D cartographic maps with interactive display of detail on-demand. Integration of the 2D map, which represents high level information, with the location-specific detailed information is a key design issue in geovisualization. Solutions include multiple linked displays around the map which can impose cognitive load on the user as the number of links goes up; and separate overlaid windowed displays which causes occlusion of the map. In this paper, we present a self-adaptive technique which reveals the hidden layers of information in a single display, but minimizes occlusion of the 2D map. The proposed technique creates extra screen space by invoking controlled deformation of the 2D map. We extend our method to allow simultaneous display of multiple windows at different map locations. Since our technique is not dependent on the type of information to display, we expect it to be useful to both common users and the scientists. Case studies are provided in the paper to demonstrate the utility of the method in occlusion management and visual exploration.
Visualization Algorithms
Space/error tradeoffs for lossy wavelet reconstruction
Jonathan Frain, R. Daniel Bergeron
Discrete Wavelet Transforms have proven to be a very effective tool for compressing large data sets. Previous research has sought to select a subset of wavelet coefficients based on a given space constraint. These approaches require non-negligible overhead to maintain location information associated with the retained coefficients. Our approach identifies entire wavelet coefficient subbands that can be eliminated based on minimizing the total error introduced into the reconstruction. We can get further space reduction (with more error) by encoding some or all of the saved coefficients as a byte index into a floating point lookup table. We demonstrate how our approach can yield the same global sum error using less space than traditional MR implementations.
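The byte-index encoding described above trades precision for a 4:1 space reduction per coefficient. A minimal sketch, assuming a uniformly spaced lookup table (the paper does not prescribe this particular table construction):

```python
import numpy as np

def encode(coeffs, table_size=256):
    # Build a lookup table of representative values, uniformly spaced
    # over the coefficient range (an assumption of this sketch).
    lo, hi = float(coeffs.min()), float(coeffs.max())
    table = np.linspace(lo, hi, table_size, dtype=np.float32)
    # Store each retained coefficient as a one-byte index of its nearest
    # table entry instead of a four-byte float.
    idx = np.rint((coeffs - lo) / (hi - lo) * (table_size - 1)).astype(np.uint8)
    return idx, table

def decode(idx, table):
    return table[idx]

rng = np.random.default_rng(0)
c = rng.normal(size=1000).astype(np.float32)
idx, table = encode(c)
max_err = float(np.abs(decode(idx, table) - c).max())  # <= half a bin width
```

The reconstruction error per coefficient is bounded by half the table spacing, which is the extra error traded for storing one byte instead of four per coefficient.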
Configurable data prefetching scheme for interactive visualization of large-scale volume data
Byungil Jeong, Paul A. Navrátil, Kelly P. Gaither, et al.
This paper presents a novel data prefetching and memory management scheme to support interactive visualization of large-scale volume datasets using GPU-based isosurface extraction. Our dynamic in-core approach uses a span-space lattice data structure to predict and prefetch the portions of a dataset that are required by isosurface queries, to manage an application-level volume data cache, and to ensure load-balancing for parallel execution. We also present a GPU memory management scheme that enhances isosurface extraction and rendering performance. With these techniques, we achieve rendering performance superior to other in-core algorithms while using dramatically fewer resources.
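The prefetching predicate behind a span-space query can be illustrated with a deliberately naive linear scan over per-block value ranges; a span-space lattice, as used in the paper, accelerates exactly this test:

```python
def blocks_for_isovalue(block_ranges, iso):
    # A block can intersect the isosurface only if its value range
    # [min, max] spans the isovalue; only those blocks need to be
    # prefetched into the application-level volume data cache.
    return [i for i, (lo, hi) in enumerate(block_ranges) if lo <= iso <= hi]

ranges = [(0.0, 0.3), (0.2, 0.7), (0.6, 1.0)]
needed = blocks_for_isovalue(ranges, 0.65)  # blocks 1 and 2 span 0.65
```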
A general approach for similarity-based linear projections using a genetic algorithm
James A. Mouradian, Bernd Hamann, René Rosenbaum
A widely applicable approach to visualizing properties of high-dimensional data is to view the data as a linear projection into two- or three-dimensional space. However, developing an appropriate linear projection is often difficult. Information can be lost during the projection process, and many linear projection methods only apply to a narrow range of qualities the data may exhibit. We propose a general-purpose genetic algorithm to develop linear projections of high-dimensional data sets which preserve a specified quality of the data set as much as possible. The obtained results show that the algorithm converges quickly and reliably for a variety of different data sets.
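A minimal elitist genetic search over projection matrices might look like the sketch below. The quality measure is a free choice in the paper's framework; here we assume preservation of pairwise distances as a stand-in, and the population size, mutation scale, and selection rule are likewise assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(P, X, D_high):
    # Stand-in quality: how well the 2D projection preserves the
    # high-dimensional pairwise distances (higher is better).
    Y = X @ P.T
    D_low = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    return -np.sum((D_high - D_low) ** 2)

def evolve(X, pop=30, gens=40, dims=2):
    # Elitist GA: keep the better half of the population each
    # generation and mutate it to produce offspring.
    n, d = X.shape
    D_high = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    population = [rng.normal(size=(dims, d)) for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda P: fitness(P, X, D_high),
                        reverse=True)
        parents = ranked[: pop // 2]
        children = [p + 0.1 * rng.normal(size=p.shape) for p in parents]
        population = parents + children
    return max(population, key=lambda P: fitness(P, X, D_high))

X = rng.normal(size=(20, 5))
P_best = evolve(X)   # 2x5 linear projection of the 5D data
```

Swapping `fitness` for another quality measure (e.g. cluster separation) changes which properties of the data the evolved projection preserves, without touching the search loop.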
Image space adaptive volume rendering
Andrew Corcoran, John Dingliana
We present a technique for interactive direct volume rendering which provides adaptive sampling at a reduced memory requirement compared to traditional methods. Our technique exploits frame to frame coherence to quickly generate a two-dimensional importance map of the volume which guides sampling rate optimisation and allows us to provide interactive frame rates for user navigation and transfer function changes. In addition our ray casting shader detects any inconsistencies in our two-dimensional map and corrects them on the fly to ensure correct classification of important areas of the volume.
Bioinformatics Visualizations
Visualization of mappings between the gene ontology and cluster trees
Ilir Jusufi, Andreas Kerren, Vladyslav Aleksakhin, et al.
Ontologies and hierarchical clustering are both important tools in biology and medicine to study high-throughput data such as transcriptomics and metabolomics data. Enrichment of ontology terms in the data is used to identify statistically overrepresented ontology terms, giving insight into relevant biological processes or functional modules. Hierarchical clustering is a standard method to analyze and visualize data to find relatively homogeneous clusters of experimental data points. Both methods support the analysis of the same data set, but are usually considered independently. However, often a combined view is desired: visualizing a large data set in the context of an ontology under consideration of a clustering of the data. This paper proposes a new visualization method for this task.
Visualizing uncertainty in biological expression data
Clemens Holzhüter, Alexander Lex, Dieter Schmalstieg, et al.
Expression analysis of ~omics data using microarrays has become a standard procedure in the life sciences. However, microarrays are subject to technical limitations and errors, which render the gathered data uncertain. While a number of approaches exist to target this uncertainty statistically, it is hardly ever shown when the data is visualized, for example using clustered heatmaps. Yet, this is highly useful when trying not to omit data that is "good enough" for an analysis, which otherwise would be discarded as too unreliable by established conservative thresholds. Our approach addresses this shortcoming by first identifying the margin above the error threshold of uncertain, yet possibly still useful data. It then displays this uncertain data in the context of the valid data by enhancing a clustered heatmap. We employ different visual representations for the different kinds of uncertainty involved. Finally, it lets the user interactively adjust the thresholds, giving visual feedback in the heatmap representation, so that an informed choice on which thresholds to use can be made instead of applying the usual rule-of-thumb cut-offs. We exemplify the usefulness of our concept by giving details for a concrete use case from our partners at the Medical University of Graz, thereby demonstrating our implementation of the general approach.
Flow Visualization
Instant visitation maps for interactive visualization of uncertain particle trajectories
Kai Bürger, Roland Fraedrich, Dorit Merhof, et al.
Visitation maps are an effective means to analyze the frequency of similar occurrences in large sets of uncertain particle trajectories. A visitation map counts for every cell the number of trajectories passing through this cell, and it can then be used to visualize pathways of a certain visitation percentage. In this paper, we introduce an interactive method for the construction and visualization of high-resolution 3D visitation maps for large numbers of trajectories. To achieve this we employ functionality on recent GPUs to efficiently voxelize particle trajectories into a 3D texture map. In this map we visualize envelopes enclosing particle pathways that are followed by a certain percentage of particles using direct volume rendering techniques. By combining visitation map construction with GPU-based Monte-Carlo particle tracing we can even demonstrate the instant construction of a visitation map from a given vector field. To facilitate the visualization of safety regions around possible trajectories, we further generate Euclidean distance transform volumes to these trajectories on the fly. We demonstrate the application of our approach for visualizing the variation of stream lines in 3D flows due to different numerical integration schemes or errors introduced through data transformation operations, as well as for visualizing envelopes of probabilistic fiber bundles in DTI tractography.
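The counting step at the heart of a visitation map can be sketched on the CPU as below. This sketch bins only the sampled positions of each trajectory (the paper voxelizes full trajectories on the GPU), and the grid and bounds parameters are illustrative:

```python
import numpy as np

def visitation_map(trajectories, grid_shape, lo, hi):
    # Count, for every cell, the number of trajectories passing through
    # it. Thresholding the result at a fraction of the trajectory count
    # yields the percentage envelopes described above.
    vmap = np.zeros(grid_shape, dtype=np.int32)
    scale = np.asarray(grid_shape) / (np.asarray(hi) - np.asarray(lo))
    for traj in trajectories:
        cells = np.floor((np.asarray(traj) - np.asarray(lo)) * scale).astype(int)
        cells = np.clip(cells, 0, np.asarray(grid_shape) - 1)
        for cell in {tuple(c) for c in cells}:  # one count per trajectory
            vmap[cell] += 1
    return vmap

# Two trajectories sharing a start cell in a 4x4x4 grid over [0, 1]^3.
t1 = [(0.1, 0.1, 0.1), (0.9, 0.1, 0.1)]
t2 = [(0.1, 0.1, 0.1), (0.1, 0.9, 0.1)]
vmap = visitation_map([t1, t2], (4, 4, 4), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

Cells visited by both trajectories count 2; an envelope at 100 percent visitation would therefore enclose only the shared start cell.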
Motion visualization in large particle simulations
Roland Fraedrich, Rüdiger Westermann
Interactive visualization of large particle sets is required to analyze the complicated structures and formation processes in astrophysical particle simulations. While some research has been done on the development of visualization techniques for steady particle fields, only very few approaches have been proposed to interactively visualize large time-varying fields and their dynamics. Particle trajectories are known to visualize dynamic processes over time, but due to occlusion and visual cluttering such techniques have only been reported for very small particle sets so far. In this paper we present a novel technique to solve these problems, and we demonstrate the potential of our approach for the visual exploration of large astrophysical particle sequences. We present a new hierarchical space-time data structure for particle sets which allows for a scale-space analysis of trajectories in the simulated fields. In combination with visualization techniques that adapt to the respective scales, clusters of particles with homogeneous motion as well as separation and merging regions can be identified effectively. The additional use of mapping functions to modulate the color and size of trajectories allows emphasizing various particle properties like direction, speed, or particle-specific attributes like temperature. Furthermore, tracking of interactively selected particle subsets permits the user to focus on structures of interest.
Animating streamlines with repeated asymmetric patterns for steady flow visualization
Chih-Kuo Yeh, Zhanping Liu, Tong-Yee Lee
Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.
Poster Session
X3DBio1: a visual analysis tool for biomolecular structure exploration
Hong Yi, Abhishek Singh, Yaroslava G. Yingling
Analysis of protein tertiary structure provides valuable information about biochemical function. The structure-to-function relationship can be directly addressed through three-dimensional (3D) biomolecular structure exploration and comparison. We present X3DBio1, a visual analysis tool for 3D biomolecular structure exploration, which allows for easy visual analysis of 2D intra-molecular contact maps and 3D density exploration for protein, DNA, and RNA structures. A case study is also presented to illustrate the utility of the tool. X3DBio1 is open source and freely downloadable, and we expect it can be applied to a variety of biological problems.
Increasing the perceptual salience of relationships in parallel coordinate plots
Jonathan M. Harter, Xunlei Wu, Oluwafemi S. Alabi, et al.
We present three extensions to parallel coordinates that increase the perceptual salience of relationships between axes in multivariate data sets: (1) luminance modulation maintains the ability to preattentively detect patterns in the presence of overplotting, (2) a one-vs.-all variable display highlights relationships between one variable and all others, and (3) a scatter plot embedded within the parallel-coordinates display preattentively highlights clusters and spatial layouts without strongly interfering with the parallel-coordinates view. These techniques can be combined with one another and with existing extensions to parallel coordinates, and two of them generalize beyond cases with known-important axes. We applied these techniques to two real-world data sets (relativistic heavy-ion collision hydrodynamics and weather observations with statistical principal component analysis) as well as the popular car data set. We present relationships discovered in the data sets using these methods.
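The general idea behind luminance modulation, keeping overplotted regions readable by compressing overdraw counts into the displayable luminance range, can be sketched as follows. The log-compression transfer function here is an illustrative assumption, not the authors' exact mapping.

```python
import math

def modulate(counts, l_min=0.2, l_max=1.0):
    """Map per-pixel overdraw counts to luminance with log compression,
    so sparse and dense regions both stay within displayable range."""
    peak = max(counts) if counts else 1
    out = []
    for k in counts:
        if k == 0:
            out.append(0.0)  # background stays black
        else:
            t = math.log1p(k) / math.log1p(peak)
            out.append(l_min + (l_max - l_min) * t)
    return out
```

With an opaque rendering, a pixel crossed by 1 line and one crossed by 100 are indistinguishable; the compressive mapping preserves that ordering without letting dense bundles saturate to a uniform block.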
Comparative visualization of ensembles using ensemble surface slicing
Oluwafemi S. Alabi, Xunlei Wu, Jonathan M. Harter, et al.
By definition, an ensemble is a set of surfaces or volumes derived from a series of simulations or experiments. Sometimes the series is run with different initial conditions for one parameter to determine parameter sensitivity. Understanding and identifying visual similarities and differences among the shapes of ensemble members is an acute and growing challenge for researchers across the physical sciences. More specifically, gaining spatial understanding of, and identifying similarities and differences between, multiple complex geometric data sets simultaneously has proved difficult. This paper proposes a comparison and visualization technique to support the visual study of parameter sensitivity. We present a novel single-image view and sampling technique which we call Ensemble Surface Slicing (ESS). ESS produces a single image that is useful for determining differences and similarities among surfaces from several data sets simultaneously. We demonstrate the usefulness of ESS on two real-world data sets from our collaborators.
A performance assessment on the effectiveness of digital image registration methods
Steve Kacenjar, Bing Li, Alan Ostrow
Digital Image Correlation (DIC) of time-sequenced imagery (TSI) is a popular method in the study of medical imaging, material deformation, and electronic packaging. Processing before-and-after images provides critical information about scene deformation and structural differences between the images. Several correlation methods for implementing DIC have been developed and are compared in this study. Each method offers distinct trade-offs with respect to processing complexity and lock-in accuracy. Several factors influence the ability of these methods to provide robust operation and strongly localized correlation peaks: camera positional stability during image acquisition, deformation of the object under study, and measurement noise. In addition, the signatures captured during DIC can often be amplified through preprocessing, potentially enhancing DIC performance. This paper examines the impact of two of these factors (measurement noise and digital image sharpening) on four popular correlation methods that are often employed in DIC analyses.
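One widely used correlation measure in this class of methods is zero-normalized cross-correlation (ZNCC), which is insensitive to uniform brightness and contrast changes. The plain-Python sketch below estimates an integer displacement between before-and-after images by locating the ZNCC peak; it is a generic illustration of the kind of method compared in the paper, not the authors' implementation.

```python
def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length patches."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0

def best_shift(ref, img, patch, max_shift):
    """Find the integer offset of ref's top-left patch inside img by
    scanning candidate windows for the peak ZNCC score."""
    flat_ref = [ref[y][x] for y in range(patch) for x in range(patch)]
    best = (0, 0, -2.0)
    for dy in range(max_shift + 1):
        for dx in range(max_shift + 1):
            window = [img[dy + y][dx + x]
                      for y in range(patch) for x in range(patch)]
            score = zncc(flat_ref, window)
            if score > best[2]:
                best = (dy, dx, score)
    return best  # (row shift, column shift, peak score)
```

A sharply localized peak indicates a reliable lock-in; noise and deformation broaden or split the peak, which is exactly the robustness question the paper studies.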
An evaluation of rendering and interactive methods for volumetric data exploration in virtual reality environments
Nan Wang, Alexis Paljic, Philippe Fuchs
In this paper we evaluate one interaction method and four display techniques for exploring volumetric datasets in immersive virtual reality environments. We propose an approach based on displaying a subset of the volumetric data as isosurfaces, with interactive manipulation of the isosurfaces allowing the user to look for local features in the datasets. We also studied the influence of four different techniques for isosurface rendering in a virtual reality system. The study is based on a search-and-point task in a 3D temperature field. User precision, task completion time, and user movement were evaluated during the test. The study allowed us to choose the most suitable rendering mode for isosurface representation and provided guidelines for data exploration tasks in immersive environments.
Efficient, dynamic data visualization with persistent data structures
Joseph A. Cottam, Andrew Lumsdaine
Working with data that changes while it is being analyzed, so-called "dynamic data", presents unique challenges to a visualization and analysis framework. In particular, making rendering and analysis mutually exclusive can quickly lead to livelock in the analysis, unresponsive visuals, or incorrect results. A framework's data store is a common point of contention that often drives the mutual exclusion. Providing safe, synchronous access to the data store eliminates the livelock scenarios and keeps the visuals responsive while maintaining result correctness. Persistent data structures are one technique for providing such access: they directly support multiple versions of the data structure with limited data duplication. With a persistent data structure, rendering acts on one version of the data structure while analysis updates another, effectively double-buffering the central data store. Pre-rendering work based on global state (such as scaling all values relative to the global maximum) is also handled efficiently if independently modified versions can be merged. The Stencil visualization system uses persistent data structures to achieve task-based parallelism between analysis, pre-rendering, and rendering work with little synchronization overhead. With efficient persistent data structures, performance gains of several orders of magnitude are achieved.
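The double-buffering idea can be sketched with a minimal versioned store: every update returns a new version, so a renderer can keep reading one version while analysis builds the next. This is a toy illustration, not the Stencil implementation; a real persistent structure would share subtrees between versions rather than shallow-copy a dictionary.

```python
class PersistentStore:
    """Immutable key-value store: each update returns a new version.
    A shallow dict copy stands in for the structural sharing a real
    persistent tree would use."""

    def __init__(self, entries=None, version=0):
        self._entries = dict(entries or {})
        self.version = version

    def set(self, key, value):
        child = dict(self._entries)  # copy-on-write: parent untouched
        child[key] = value
        return PersistentStore(child, self.version + 1)

    def get(self, key, default=None):
        return self._entries.get(key, default)

# "Rendering" can keep reading v0 while "analysis" advances to v1:
# both versions remain valid, double-buffering the central data store.
v0 = PersistentStore({"max": 10.0})
v1 = v0.set("max", 12.5)
```

Because versions are immutable, no lock is needed for readers; the only coordination point is the atomic swap that tells the renderer which version is current.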
Radial visualizations for comparative data analysis
Geoffrey M. Draper, Matthew G. Styles, Richard F. Riesenfeld
SQiRL is a novel visualization system for querying and visualizing large multivariate data sets. Although SQiRL was initially designed for novice users, recent extensions facilitate more advanced analysis without sacrificing the simplicity that makes the visualization appealing to beginners. The default view provides a simple-to-learn interface for query evaluation. Intermediate users are given a straightforward method for comparing the results of two queries. More advanced users can make use of a "radial crosstab," a new interactive visualization technique that melds the expressive power of traditional crosstabulation with a drag-and-drop canvas.
Exploiting major trends in subject hierarchies for large-scale collection visualization
Charles-Antoine Julien, Pierre Tirilly, John E. Leide, et al.
Many large digital collections are currently organized by subject; however, these useful information-organization structures are large and complex, making them difficult to browse. Current online tools and visualization prototypes show small localized subsets and do not provide the ability to explore the predominant patterns of the overall subject structure. This research addresses the issue by simplifying the subject structure with two techniques that exploit the highly uneven distribution of real-world collections: level compression and child pruning. The approach is demonstrated using a sample of 130K records organized by the Library of Congress Subject Headings (LCSH). Promising results show that the subject hierarchy can be reduced to 42% of its initial size while maintaining access to 81% of the collection. The visual impact is demonstrated using a traditional outline view that allows searchers to dynamically change the amount of complexity they deem necessary for the task at hand.
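The two simplification steps named above can be illustrated on a toy subject tree. The node names, the record-count threshold, and the exact splicing rules below are assumptions for illustration, not the parameters or algorithm used in the paper.

```python
class Node:
    def __init__(self, name, count=0, children=None):
        self.name = name            # subject heading
        self.count = count          # records filed directly under it
        self.children = children or []

def subtree_count(node):
    """Total records reachable under a node."""
    return node.count + sum(subtree_count(c) for c in node.children)

def prune_children(node, min_records):
    """Child pruning: drop subtrees that index too few records."""
    node.children = [c for c in node.children
                     if subtree_count(c) >= min_records]
    for c in node.children:
        prune_children(c, min_records)
    return node

def compress_levels(node):
    """Level compression: splice out single-child intermediaries that
    hold no records of their own, merging their labels."""
    while len(node.children) == 1 and node.children[0].count == 0:
        only = node.children[0]
        node.name = node.name + " / " + only.name
        node.children = only.children
    for c in node.children:
        compress_levels(c)
    return node
```

Both passes preserve every record that survives pruning, which is how a hierarchy can shrink substantially while access to most of the collection is retained.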
Visualization of multidimensional time
Luther A. Tychonievich, Robert P. Burton
Time is generally assumed to be a scalar: it can be sorted, is unidirectional, and has only a single dimension. In this work we demonstrate that vector-valued, multidimensional time can be defined meaningfully, simulated efficiently, and visualized interactively. We present two particular simulations, providing a first look at what hypertime may be "like" from both a physical and a navigational perspective. Although similar in many ways to our experience, multidimensional-time (mT) phenomena also differ from one-dimensional-time (1T) phenomena on a fundamental level. Our visualization framework motivates observations of some of these differences and helps us identify a variety of open tasks that will further our understanding of the characteristics of time, whatever its dimensionality. Together, these results form a basis from which arbitrary space-time dimensionalities can be understood.
Degeneracy-aware interpolation of 3D diffusion tensor fields
Visual analysis of 3D diffusion tensor fields has become an important topic, especially in medical imaging, for understanding the microscopic structures and physical properties of biological tissues. However, it is still difficult to continuously track the underlying features from discrete tensor samples, owing to the absence of interpolation schemes that can handle possible degeneracy while fully respecting the smooth transition of tensor anisotropic features; such degeneracy may cause rotational inconsistency of tensor anisotropy. This paper presents an approach to interpolating 3D diffusion tensor fields that addresses these problems. The primary idea is to resolve possible degeneracy by optimizing the rotational transformation between each pair of neighboring tensors through analysis of their associated eigenstructure, while the degeneracy itself is identified by applying a minimum-spanning-tree-based clustering algorithm to the original tensor samples. Comparisons with existing interpolation schemes demonstrate the advantages of our scheme, together with several results of tracking white-matter fiber bundles in a human brain.
Visualization and analysis of 3D gene expression patterns in zebrafish using web services
The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to providing online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and the spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a given developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated into a web interface, allowing the user to visualize the integrated information over the Internet.
Vortex core detection: back to basics
Allen Van Gelder
Analyzing vortices in fluid flows is an important and extensively studied problem. Visualization methods are an important tool, and vortex cores, including vortex-core axes, are frequent targets of visualization. A robust definition of the vortex-core axis has eluded researchers for a decade. This paper reviews the criteria described in some early papers, as well as recent papers that concentrate on issues of unsteady flows, and attempts to build on their ideas. In particular, researchers have proposed desirable criteria for a vortex-core axis that correspond to nonlocal properties, yet current extraction methods are all based on local properties. Analysis is presented to support the thesis that inaccuracies observed in some popular early methods are due to a mixture of frequencies in the flow field in vortical regions. Such mixtures occur in steady as well as unsteady (time-varying) flows. Thus, the fact that a flow is unsteady is not necessarily the primary reason for inaccuracies recently observed in vortex analysis of such flows. It is hypothesized that time-varying flows tend to be more complex, and hence tend to have mixed frequencies more often than steady flows. We further conjecture that an "effective" lack of Galilean invariance may occur in steady or unsteady flows, due to the interaction of low frequencies with high frequencies.
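For context, a canonical example of the local criteria the paper refers to is the Q-criterion, Q = 0.5 (‖Ω‖² − ‖S‖²), where S and Ω are the symmetric and antisymmetric parts of the velocity gradient and Q > 0 flags vortical regions. The sketch below is a generic illustration of such a pointwise criterion, not a method taken from the paper.

```python
def q_criterion(J):
    """Q value from a 3x3 velocity-gradient matrix J (list of rows):
    Q = 0.5 * (||Omega||^2 - ||S||^2), using Frobenius norms of the
    antisymmetric (Omega) and symmetric (S) parts of J."""
    S = [[0.5 * (J[i][j] + J[j][i]) for j in range(3)] for i in range(3)]
    Om = [[0.5 * (J[i][j] - J[j][i]) for j in range(3)] for i in range(3)]

    def frob2(M):
        return sum(M[i][j] ** 2 for i in range(3) for j in range(3))

    return 0.5 * (frob2(Om) - frob2(S))

# Rigid rotation about z gives Q > 0 (vortical); pure strain gives
# Q < 0; simple shear balances exactly to Q = 0.
```

Because Q depends only on the gradient at a single point, it inherits exactly the limitation the paper highlights: it cannot see the nonlocal, multi-frequency structure of the surrounding flow.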