Panoramic Video from Unstructured Camera Array

F. Perazzi (1,2)   A. Sorkine-Hornung (2)   H. Zimmer (2)   P. Kaufmann (2)   O. Wang (2)   S. Watson (3)   M. Gross (1,2)

(1) ETH Zurich    (2) Disney Research Zurich    (3) Walt Disney Imagineering



Figure 1: Two panoramas created with our system. Top: a cropped frame from a 160 megapixel panoramic video generated from five input videos. The overlay on the right shows the full panorama, with the individual fields of view of the input cameras highlighted by colored frames. Bottom: a crop from a 20 megapixel panorama created from a highly unstructured array consisting of 14 cameras.

Abstract

We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact-free panorama stitching is impeded by parallax between input views. Common strategies such as multi-level blending or minimum energy seams produce seamless results on quasi-static input. However, on video input these approaches introduce noticeable visual artifacts due to lack of global temporal and spatial coherence. In this paper we extend the basic concept of local warping for parallax removal. Firstly, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pair-wise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps in non-overlap regions ensures temporal stability, while at the same time avoiding visual discontinuities around transitions between views. Remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, while staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera array input.
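As a toy illustration of the abstract's "optimal ordering of pair-wise warps", one simple strategy is a greedy merge: given a cost estimate for warping each not-yet-stitched view onto an already-stitched one, repeatedly apply the cheapest available warp instead of searching all orderings. This is a hedged sketch under assumed names (`greedy_warp_order`, `pair_cost` are illustrative), not the paper's actual optimization:

```python
import itertools

def greedy_warp_order(views, pair_cost):
    """Greedily order pair-wise warps by estimated stitching cost.

    views: list of view ids; pair_cost(a, b): estimated parallax error
    when view b is warped onto already-stitched view a (hypothetical
    stand-in for a real error measure between overlapping views).
    Returns a list of (target, source) warps in the order applied.
    """
    merged = {views[0]}            # start from an arbitrary reference view
    remaining = set(views[1:])
    order = []
    while remaining:
        # among all (stitched, unstitched) pairs, apply the cheapest warp
        a, b = min(itertools.product(merged, remaining),
                   key=lambda p: pair_cost(*p))
        order.append((a, b))
        merged.add(b)
        remaining.remove(b)
    return order
```

A greedy scan like this touches each candidate pair at most once per step, so it stays tractable for many cameras, whereas enumerating all warp orderings grows factorially.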



Introduction

We present an algorithm based on three key observations. Firstly, for the analysis of parallax errors we found existing image comparison techniques insufficiently robust and not targeted at the specific stitching artifacts we observed. Hence, we introduce a patch-based error metric defined on image gradients, designed to be especially sensitive to parallax errors in highly structured image regions and to ensure visual similarity of content between the input videos and the output panorama. Secondly, the ability to compensate for parallax errors between views depends on the spatial configuration of the individual fields of view, the scene content, and the order in which parallax is compensated between images. We therefore propose a method that first analyzes these properties and then computes an optimized ordering of pair-wise image warps, which improves the quality of the parallax removal. Our procedure remains efficient even for large numbers of input cameras, where brute-force search for an optimal warp order would be infeasible. Finally, local image warping accumulates globally, leading to significant spatial deformations of the panorama. Since these deformations depend on the per-frame scene content, they change for every output frame and hence cause noticeable temporal jitter. We resolve these global deformations and the temporal instability by a weighted warp extrapolation from overlap to non-overlap image regions, followed by a final constrained relaxation of each full panoramic output frame toward a reference projection. We demonstrate panoramic video results captured with different types of cameras on arrays of up to 14 cameras, allowing the generation of panoramic video on the order of tens to over a hundred megapixels.
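To make the first observation concrete, a minimal NumPy sketch of a patch-based error computed on image gradients and weighted toward structured regions might look as follows. The function name, patch pooling, and the max-gradient structure weight are illustrative assumptions, not the paper's exact metric:

```python
import numpy as np

def gradient_patch_error(a, b, patch=8, eps=1e-6):
    """Illustrative patch-based error between two grayscale images.

    Compares gradient magnitudes of a and b per patch and weights each
    patch by its local structure, so discrepancies in highly structured
    regions dominate the score (a stand-in for the paper's metric).
    """
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(np.float64))
        return np.hypot(gx, gy)

    ga, gb = grad_mag(a), grad_mag(b)
    h, w = a.shape
    h, w = h - h % patch, w - w % patch   # crop to a multiple of the patch size

    def pooled(x):
        return x[:h, :w].reshape(h // patch, patch,
                                 w // patch, patch).mean(axis=(1, 3))

    diff = pooled(np.abs(ga - gb))          # per-patch gradient discrepancy
    structure = pooled(np.maximum(ga, gb))  # per-patch structure weight
    # structured patches contribute more to the final score
    return float((structure * diff).sum() / (structure.sum() + eps))
```

On identical inputs the score is zero; misaligned edges in textured regions raise it sharply, while differences in flat regions carry little weight, which mirrors the sensitivity to structured regions described above.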

Citation

F. Perazzi, A. Sorkine-Hornung, H. Zimmer, P. Kaufmann, O. Wang, S. Watson, M. Gross, Panoramic Video from Unstructured Camera Arrays, Computer Graphics Forum (Proc. Eurographics 2015), Vol. 34, No. 2, May 2015, Zurich, Switzerland.