
Proceedings Paper

Depth-from-trajectories for uncalibrated multiview video
Author(s): Paul A. Ardis; Amit Singhal; Christopher M. Brown

Paper Abstract

We propose a method for efficiently determining qualitative depth maps for multiple monoscopic videos of the same scene without explicitly solving for stereo or calibrating any of the cameras involved. By tracking a small number of feature points and determining trajectory correspondence, it is possible to determine correct temporal alignment and to establish a similarity metric for the fundamental matrices relating each trajectory. Modeling these matrix relations with a weighted digraph and performing Markov clustering yields emergent depth layers for the feature points. Finally, pixels are segmented into depth layers based upon motion similarity to the feature point trajectories. Initial experimental results are demonstrated on stereo benchmark and consumer data.
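The abstract does not include code; as illustration only, the Markov clustering (MCL) step it mentions can be sketched as below, assuming the pairwise fundamental-matrix similarities have already been collected into a symmetric weighted adjacency matrix (all names and data here are hypothetical, not from the paper):

```python
import numpy as np

def markov_cluster(similarity, expansion=2, inflation=2.0, iters=50, tol=1e-6):
    """Basic Markov Clustering (MCL) on a symmetric weighted adjacency matrix.

    In the paper's setting the nodes would be trajectory relations weighted by
    fundamental-matrix similarity; here the input is any similarity matrix.
    Returns a list of clusters (frozensets of node indices).
    """
    # Add self-loops and column-normalize to obtain a stochastic matrix.
    M = similarity + np.eye(similarity.shape[0])
    M = M / M.sum(axis=0)
    for _ in range(iters):
        # Expansion: spread flow along longer paths.
        expanded = np.linalg.matrix_power(M, expansion)
        # Inflation: boost strong flows, suppress weak ones, renormalize.
        inflated = expanded ** inflation
        inflated = inflated / inflated.sum(axis=0)
        converged = np.allclose(M, inflated, atol=tol)
        M = inflated
        if converged:
            break
    # Read clusters off the converged matrix: the support of each row
    # with non-negligible mass defines one cluster; deduplicate.
    clusters = []
    for row in M:
        members = frozenset(np.nonzero(row > 1e-4)[0])
        if members and members not in clusters:
            clusters.append(members)
    return clusters

# Toy similarity matrix with two obvious communities {0,1,2} and {3,4,5}.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
print(markov_cluster(A))
```

In the paper, each resulting cluster would correspond to an emergent depth layer of feature points; this sketch only shows the generic clustering mechanics, not the trajectory-similarity construction.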

Paper Details

Date Published: 19 January 2009
PDF: 10 pages
Proc. SPIE 7252, Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques, 725209 (19 January 2009); doi: 10.1117/12.806816
Author Affiliations:
Paul A. Ardis, Univ. of Rochester (United States)
Amit Singhal, Eastman Kodak Co. (United States)
Christopher M. Brown, Univ. of Rochester (United States)

Published in SPIE Proceedings Vol. 7252:
Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques
David P. Casasent; Ernest L. Hall; Juha Röning, Editor(s)

© SPIE.