
Proceedings Paper

Integration of multiple view plus depth data for free viewpoint 3D display
Author(s): Kazuyoshi Suzuki; Yuko Yoshida; Tetsuya Kawamoto; Toshiaki Fujii; Kenji Mase

Paper Abstract

This paper proposes a method for constructing a reasonably scaled end-to-end free-viewpoint video system that captures multiple view plus depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, multiple view plus depth data are captured simultaneously by the Kinect sensors at four viewpoints. The captured data are then integrated into point cloud data using the camera parameters. The resulting point cloud data are sampled into volume data consisting of voxels. Since volume data generated from point cloud data are sparse, they are densified using a global optimization algorithm. The final step is to reconstruct surfaces on the dense volume data with the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving depth maps is also presented.
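The integration step described above — fusing the four Kinect depth maps into a single point cloud via camera parameters — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a pinhole intrinsic matrix `K` and world-to-camera extrinsics `(R, t)` per sensor, and the function and variable names are hypothetical.

```python
import numpy as np

def depth_to_points(depth, K, R, t):
    """Back-project a depth map (metres) into world-space 3D points,
    assuming pinhole intrinsics K and world-to-camera extrinsics (R, t)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0  # Kinect reports 0 where depth is missing
    # Homogeneous pixel coordinates for valid depth samples
    uv1 = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])[:, valid]
    # Camera-space points: scale back-projected rays by depth
    cam = np.linalg.inv(K) @ uv1 * z[valid]
    # World-space points: invert the world-to-camera transform
    return (R.T @ (cam - t[:, None])).T

def merge_views(depths, Ks, Rs, ts):
    """Integrate depth maps from several calibrated sensors
    (four Kinects in the paper) into one point cloud."""
    return np.vstack([depth_to_points(d, K, R, t)
                      for d, K, R, t in zip(depths, Ks, Rs, ts)])
```

The merged cloud would then be voxelized and densified as the abstract describes; those later stages (global optimization, discrete marching cubes) are not shown here.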

Paper Details

Date Published: 6 March 2014
PDF: 10 pages
Proc. SPIE 9011, Stereoscopic Displays and Applications XXV, 901114 (6 March 2014); doi: 10.1117/12.2039166
Author Affiliations:
Kazuyoshi Suzuki, Nagoya Univ. (Japan)
Yuko Yoshida, Chukyo TV Broadcasting Co., Ltd. (Japan)
Tetsuya Kawamoto, Chukyo TV Broadcasting Co., Ltd. (Japan)
Toshiaki Fujii, Nagoya Univ. (Japan)
Kenji Mase, Nagoya Univ. (Japan)

Published in SPIE Proceedings Vol. 9011:
Stereoscopic Displays and Applications XXV
Andrew J. Woods; Nicolas S. Holliman; Gregg E. Favalora, Editor(s)

© SPIE.