Techniques for synthesizing panoramic scenes are widespread. Such a scene can be created automatically from multiple displaced images by aligning and overlapping them with an image registration technique. The ability to generate panoramic scenes has many applications, including the generation of virtual reality backgrounds, model-based video compression, and object recognition. These techniques, and consequently their associated applications, share a restriction: every scene is limited to a 360-degree view of the horizontal plane at the particular moment the images were taken.

Until recently, there has been little motivation to develop techniques for presenting complete spherical views in real time, that is, scenes that cover the entire potential visible field of view through time. With the advent of space exploration and its associated micro-gravity environments, "up" and "down" are relative terms and locally fixed points of reference are difficult to come by. It may be useful to rethink how video is captured and presented to a user working in such an environment, employing extended notions of what a panorama is.

The Panoramic Spherical Video (PSV) system described in this paper allows a user to view and pan through arbitrary angles of view, including elevation and declination, in real time. The view is assembled from an array of 16 synchronized CCD video cameras whose video outputs are selectively "stitched" together to provide a smooth transition between camera fields of view. In this way, the user can pan smoothly through all fields of view generated by the system. All video processing is done in software; there are no moving parts.
The 2003 International Conference on Computational Science and its Applications, ICCSA 03, Montreal, Canada May 18-21, 2003