Screen Space Spherical Harmonics Occlusion Sampling
Sebastian Herholz, Timo Schairer and Wolfgang Straßer
In this work we present the idea of combining Screen Space Directional Occlusion with the concept of Spherical Harmonics lighting, evaluated per pixel in real time for dynamic scenes. Projecting the occlusion function into the Spherical Harmonics domain in real time requires sampling the volume of a pixel's upper hemisphere with only a relatively small number of samples. To minimize noise and aliasing effects resulting from the sampling process, we use a 4×4 interleaved/jittered sampling pattern for the sampling directions. To eliminate remaining directional noise, fast low-pass filtering of the occlusion function is performed in the SH domain by applying the spherical equivalent of a Gaussian kernel, represented as Zonal Harmonics. This method of computing the Spherical Harmonics representation of the local per-pixel occlusion has only marginally higher cost than a Screen Space Ambient Occlusion implementation with similar sampling. The amount of directionality depends on the bandwidth of the Spherical Harmonics projection, degrading gracefully to the non-directional effect of standard Ambient Occlusion. Our approach to directional lighting yields colored shadow effects similar to Screen Space Directional Occlusion while separating the occlusion and lighting calculations. This separation allows straightforward integration of our method into modern 3D graphics engines that use Spherical Harmonics lighting and Screen Space Ambient Occlusion for diffuse global illumination effects.
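As a rough illustration of the projection and filtering steps described in the abstract, the following Python sketch projects a toy visibility function onto a 3-band SH basis using 4×4 jittered hemisphere directions, then low-passes the coefficients with a zonal kernel. The heat-kernel band factors exp(-l(l+1)t), the toy occluder, and the sample count are our own illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def sh_basis(d):
    # Real spherical harmonics basis, bands 0-2 (9 coefficients),
    # evaluated for a unit direction d = (x, y, z).
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def project_occlusion(visibility, dirs):
    # Monte Carlo projection of the visibility function over the upper
    # hemisphere (solid angle 2*pi) onto the 9 SH coefficients.
    coeffs = np.zeros(9)
    for d in dirs:
        coeffs += visibility(d) * sh_basis(d)
    return coeffs * (2.0 * np.pi / len(dirs))

def gaussian_smooth(coeffs, t=0.1):
    # Low-pass filter in the SH domain: convolving with a zonal
    # (rotationally symmetric) kernel scales every coefficient of band l
    # by the same factor; a heat-kernel "spherical Gaussian" uses
    # exp(-l(l+1)t), so higher bands are attenuated more strongly.
    g = np.array([np.exp(-l * (l + 1) * t) for l in (0, 1, 1, 1, 2, 2, 2, 2, 2)])
    return coeffs * g

# Toy example: 16 directions from a 4x4 jittered pattern (uniform in z
# gives uniform area sampling of the hemisphere), with a hypothetical
# occluder blocking all directions below a grazing-angle threshold.
rng = np.random.default_rng(0)
dirs = []
for i in range(16):
    u = (i % 4 + rng.random()) / 4.0
    v = (i // 4 + rng.random()) / 4.0
    phi, z = 2.0 * np.pi * u, v
    r = np.sqrt(1.0 - z * z)
    dirs.append((r * np.cos(phi), r * np.sin(phi), z))

occ = project_occlusion(lambda d: 1.0 if d[2] > 0.3 else 0.0, dirs)
smoothed = gaussian_smooth(occ)
```

Note that the band-0 (ambient) term is untouched by the filter, which is what lets the method degrade gracefully toward plain Ambient Occlusion as the directional bands are attenuated.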
Conference: SIGGRAPH 2011
SIFT vs. SOFT – A Comparison Of Feature And Correlation Based Rotation Estimation For Panoramic Images
Timo Schairer, Sebastian Herholz, Benjamin Huhle and Wolfgang Straßer
Orientation estimation based on image data is a key technique in many applications. Robust estimates are possible in the case of omnidirectional images due to the large field of view of the camera. Traditionally, techniques based on local image features have been applied to this kind of problem. Another very efficient technique is to formulate the problem in terms of correlation on the sphere and to solve it in Fourier space. While both methods claim to provide accurate and robust estimates, a quantitative comparison has not been reported yet. In this paper we evaluate the two approaches in terms of accuracy, image resolution and robustness to noise by comparing the estimated rotations for virtual as well as real images to ground-truth data.
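Comparing an estimated rotation against ground truth, as in the evaluation described above, comes down to measuring the geodesic distance on SO(3) between the two rotations. A minimal NumPy sketch (the function names are ours, not the paper's):

```python
import numpy as np

def rot_z(angle):
    # Rotation about the z-axis by `angle` radians, as a 3x3 matrix.
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geodesic_error_deg(R_est, R_gt):
    # Angle of the relative rotation R_est * R_gt^T, i.e. the geodesic
    # distance on SO(3) between estimate and ground truth, in degrees.
    R = R_est @ R_gt.T
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

For example, an estimate that is off by 0.1 rad about the z-axis yields an error of about 5.73 degrees regardless of which axis the residual rotation is about.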
Conference: 3DTV-CON 2010
LibGaze: Real-time gaze-tracking of freely moving observers for wall-sized displays
Herholz, S., L. L. Chuang, T. G. Tanner, H. H. Bülthoff and R. W. Fleming
We present a mobile system for tracking the gaze of an observer in real time as they move around freely and interact with a wall-sized display. The system combines a head-mounted eye tracker with a motion capture system that tracks markers attached to the eye tracker. Our open-source software library libGaze provides routines for calibrating the system and computing the viewer's position and gaze direction in real time. The modular architecture of our system supports simple replacement of each of the main components with alternative technology. We use the system to perform a psychophysical user study designed to measure how users visually explore large displays. We find that observers use head movements during gaze shifts, even when these are well within the range that can be comfortably reached by eye movements alone. This suggests that free movement is important in normal gaze behaviour, motivating further applications in which the tracked user is free to move.
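The core geometric step, turning a tracked head pose plus an eye-in-head gaze direction into a point on the display, is a ray-plane intersection. The following is a hypothetical minimal sketch of that computation, not libGaze's actual API:

```python
import numpy as np

def gaze_on_screen(head_pos, head_rot, eye_dir_local,
                   screen_origin, screen_normal):
    # Transform the eye-in-head gaze direction into world space using the
    # tracked head pose, then intersect the resulting gaze ray with the
    # (infinite) plane of the display.
    d = head_rot @ np.asarray(eye_dir_local, dtype=float)
    denom = d @ screen_normal
    if abs(denom) < 1e-9:
        return None                     # gaze ray parallel to the screen
    t = ((screen_origin - head_pos) @ screen_normal) / denom
    if t < 0:
        return None                     # screen plane is behind the viewer
    return head_pos + t * d

# Toy usage: viewer standing 2 m in front of a wall in the z = 0 plane,
# looking straight ahead; the gaze lands at eye height on the wall.
hit = gaze_on_screen(
    head_pos=np.array([0.0, 1.6, 2.0]),
    head_rot=np.eye(3),
    eye_dir_local=[0.0, 0.0, -1.0],
    screen_origin=np.array([0.0, 0.0, 0.0]),
    screen_normal=np.array([0.0, 0.0, 1.0]),
)
```

A real system would additionally clip the intersection point against the physical screen extent and map it to pixel coordinates via the display calibration.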
Real-time gaze-tracking for freely-moving observers
Herholz, S., T. G. Tanner, L. H. Canto-Pereira, R. W. Fleming and H. H. Bülthoff
We have developed a real-time mobile gaze-tracker by combining a high-speed eye tracker (EyeLink II, 500 Hz) with head and body tracking (VICON, 200 Hz). The position of the observer's gaze on the screen can be measured continuously, with an accuracy of better than 1.0 deg, as they walk around and make head movements in a natural way. The system is modular, i.e. individual components can easily be replaced (e.g., different eye- and head-tracking systems can be used). The system is primarily developed for interaction in front of wall-sized displays. For validation, the system has been tested with displays of different sizes (from 2.2×1.8 m to 5.2×2.5 m) and in several applications, including psychophysical experiments and a multi-resolution gaze-contingent display.
Programming models for heterogeneous multi-core processor architectures
This diploma thesis deals with heterogeneous multi-core processor architectures. These architectures differ from conventional processor architectures in having more than one core, and the cores do not all share the same architecture. These heterogeneous cores can work in parallel, allowing more than one task to be processed at the same time.
The thesis outlines concepts and strategies for developing software on top of this kind of processor architecture. These concepts are meant to make it easier for software developers to write code that runs efficiently in such environments. The concepts are illustrated on the IBM Cell processor, which was designed in a cooperation of Sony, Toshiba and IBM. The first target systems for the processor were the Sony PlayStation 3 and the Mercury Cell Blades, both released in 2006.
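The host/accelerator work-partitioning model underlying such architectures can be sketched generically. In the sketch below, a Python thread pool stands in for the accelerator cores (on the real Cell chip, the kernel would run on the SPEs over data transferred into their local stores by DMA), so this illustrates only the programming model, not the Cell SDK:

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    # The per-core compute kernel: in a Cell-style design this part would
    # execute on an accelerator core, operating on its local copy of the data.
    return [x * 2.0 for x in chunk]

def run_on_workers(data, workers=4):
    # The general-purpose host core partitions the job into contiguous
    # chunks, dispatches one chunk per worker core, and gathers results.
    n = len(data)
    bounds = [(w * n // workers, (w + 1) * n // workers)
              for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(kernel, (data[b:e] for b, e in bounds))
    return [x for part in parts for x in part]
```

The explicit chunking mirrors the key burden such architectures place on the developer: data must be partitioned and moved to each core, rather than being transparently shared as on a homogeneous cache-coherent design.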