Techniques for synthesizing panoramic scenes are widespread. Such a scene can be created automatically from multiple displaced images by aligning and overlapping them with an image registration technique. The ability to generate panoramic scenes has many applications, including the generation of virtual reality backgrounds, model-based video compression, and object recognition. These techniques, and consequently their associated applications, share the restriction that all scenes are limited to a 360° view of the horizontal plane at the particular moment in time the images were taken. Until recently, there has been little motivation to develop techniques for presenting complete spherical views in real time: scenes that present the entire potential visible field of view, through time. With the advent of space exploration and associated micro-gravity environments, "up" and "down" are relative terms and locally fixed points of reference are difficult to come by. It may be useful to rethink how video is captured and presented to a user working in such an environment, employing extended notions of what a panorama is. We have built a prototype camera that allows a user to view and pan/tilt through arbitrary angles of view, including elevation and declination. This camera provides the view in real time from a network of seven synchronized CCD video cameras whose video outputs are selectively "stitched" together to provide a smooth transition between different camera fields of view. In this way, the user can smoothly pan/tilt through all the fields of view generated by the system. All video processing is done in software; there are no moving parts.
55th International Astronautical Congress, Vancouver, Canada Oct 4-8, 2004
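The "stitching" of adjacent camera outputs described above can be illustrated with a minimal sketch. The paper does not specify its blending method, so the example below assumes a simple linear cross-fade (feathering) over the columns where two already-registered views overlap; function names and the overlap width are illustrative, not taken from the system.

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Stitch two horizontally adjacent, pre-registered views by linearly
    cross-fading the shared 'overlap' columns (simple feathering; the
    prototype's actual stitching scheme is not specified in the abstract)."""
    h = left.shape[0]
    assert right.shape[0] == h and 0 < overlap <= min(left.shape[1], right.shape[1])
    # Weight ramps from 1 to 0 across the overlap for the left view.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :]
    if left.ndim == 3:                      # broadcast over color channels
        alpha = alpha[..., None]
    blended = alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
    return np.concatenate([left[:, :-overlap], blended, right[:, overlap:]], axis=1)

# Example: two 4x6 grayscale strips sharing 2 columns of overlap.
a = np.full((4, 6), 10.0)
b = np.full((4, 6), 30.0)
pano = blend_overlap(a, b, overlap=2)
print(pano.shape)  # (4, 10)
```

In a multi-camera arrangement like the seven-camera prototype, a blend of this kind would be applied along each seam between neighboring fields of view, so that panning across a seam shows a smooth transition rather than a hard edge.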