Research Projects:

Scene Content Selected by Active Vision
Stimulus-Driven Visual Selective Attention
Variable Resolution Displays
Gaze-Contingent LOD Rendering in Virtual Reality



Scene Content Selected by Active Vision


The primate visual system actively selects visual information from the environment for detailed processing through mechanisms of visual attention and saccadic eye movements. This study examines the statistical properties of the scene content selected by active vision. Eye movements were recorded while participants free-viewed digitized images of natural and artificial scenes. Fixation locations were determined for each image, and image patches were extracted around the observed fixation locations. Measures of local contrast, local spatial correlation, and spatial frequency content were calculated on the extracted image patches. Replicating previous results, local contrast was found to be greater at the points of fixation than for image patches extracted either at random locations or at the observed fixation locations in an image-shuffled database. In agreement with some results in the literature, but contrary to others, a significant decorrelation of image intensity is observed between the locations of fixation and neighboring locations. A discussion and analysis of methodological techniques is given that explains this discrepancy. The results of our analyses indicate that both the local contrast and the correlation at the points of fixation are a function of image type and, furthermore, that the magnitude of these effects depends on the levels of contrast and correlation present overall in the images. Finally, the largest effect sizes in local contrast and correlation are found at distances of approximately 1 degree of visual angle, which agrees well with measures of optimal spatial scale selectivity in the visual periphery, where visual information for potential saccade targets is processed.
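To make the contrast measure concrete, here is a minimal sketch (not our analysis code) of computing RMS contrast for image patches around fixation locations versus control locations; the image, the locations, and the patch radius are all illustrative:

```python
import numpy as np

def rms_contrast(patch):
    """Root-mean-square contrast of a patch: luminance std / luminance mean."""
    return patch.std() / patch.mean()

def patch_contrasts(image, locations, radius):
    """Extract square patches around (row, col) locations, return their contrasts."""
    contrasts = []
    for r, c in locations:
        patch = image[r - radius:r + radius + 1, c - radius:c + radius + 1]
        if patch.size:
            contrasts.append(rms_contrast(patch))
    return np.array(contrasts)

# Synthetic example: a noise image, "fixated" locations vs. random controls.
rng = np.random.default_rng(0)
image = rng.uniform(50, 200, size=(256, 256))
fixations = [(100, 100), (150, 80)]
controls = [tuple(rng.integers(16, 240, size=2)) for _ in range(2)]
print(patch_contrasts(image, fixations, radius=16).mean(),
      patch_contrasts(image, controls, radius=16).mean())
```

In the actual study, the fixation locations come from the eye-movement recordings and the control locations from random sampling or an image-shuffled database.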

The following paper describes this research:

Parkhurst, D. J., & Niebur, E. (2003). Scene content selected by active vision. Spatial Vision, 16(2), 125-154.





Stimulus-Driven Visual Selective Attention


We are interested in how attention is allocated when people view complex natural scenes. We used a biologically motivated computational model of stimulus-driven visual selective attention to measure the degree to which attentional allocation in such conditions depends on stimulus properties. We recorded eye movements of human participants free-viewing images of natural scenes as a measure of attentional allocation. A good correspondence between stimulus salience and the observed eye movements was found. The following paper describes the results of this research in detail:

Parkhurst, D. J., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1), 107-123.

People involved:
Derrick Parkhurst
Klinton Law




Variable Resolution Displays


Gaze-contingent variable resolution displays present a high level of detail at the viewer's point of gaze and lower resolution in the periphery. Variable resolution techniques take advantage of the well-known fact that the human visual system's sensitivity is greatest at the point of gaze and falls off rapidly in the periphery. By rendering detail only where it can be processed effectively, computational resources that would otherwise be wasted displaying "unseen" detail in the periphery are saved and can be used for other purposes.
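As an illustration of this fall-off, the sketch below computes a relative-resolution map over a display from a simple eccentricity-based model; the half-resolution eccentricity and pixels-per-degree values are hypothetical, not parameters from our studies:

```python
import numpy as np

def resolution_map(width, height, gaze, e2=2.0, ppd=30.0):
    """Relative resolution (0..1) at each pixel for a gaze-contingent display.

    Uses a simple fall-off: resolution = e2 / (e2 + eccentricity), where
    eccentricity is the angular distance (degrees) from the gaze point,
    e2 is the eccentricity at which resolution halves, and ppd converts
    pixels to degrees. Both parameter values are illustrative.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    ecc_deg = np.hypot(xs - gaze[0], ys - gaze[1]) / ppd
    return e2 / (e2 + ecc_deg)

res = resolution_map(640, 480, gaze=(320, 240))
print(res[240, 320])  # 1.0: full resolution at the gaze point
print(res[0, 0])      # much lower resolution in the far periphery
```

A renderer would then allocate detail (texture resolution, mesh density, or JPEG quality for image transmission) in proportion to this map.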

Applications of variable resolution displays range from virtual reality to internet image transmission. For example, in virtual reality, many resources are required to render a detailed virtual scene. Using a variable resolution display allows otherwise wasted computational resources to be shifted to increase frame update rates. In internet image transmission applications, communication bandwidths are often limited, and variable resolution displays can be used to save on communication resources by transmitting only the most important information (that at the point of gaze, or that which the user specifies).

We've investigated the behavioral consequences of using a gaze-contingent variable resolution display with a visual search paradigm. Although there can be dramatic behavioral consequences (e.g., altered search performance or abnormal eye movements) under the appropriate conditions, these effects can be minimized or eliminated. Our results are described in the following paper:

Parkhurst, D., Culurciello, E., & Niebur, E. (2000). Evaluating variable resolution displays with visual search: Task performance and eye movements. Proceedings of the ACM Eye Tracking Research and Applications Symposium, 1, 105-109.

We've also conducted a review of the behavioral literature, evaluated practical constraints, and conducted a theoretical analysis of the potential computational savings that variable resolution displays can achieve. We've integrated these issues into a recent review paper:

Parkhurst, D., & Niebur, E. (2002). Variable-resolution displays: A theoretical, practical and behavioral evaluation. Human Factors, 44(4), 611-629.

People involved in this project:
Derrick Parkhurst
Eugenio Culurciello
Ernst Niebur




Gaze-Contingent LOD Rendering in Virtual Reality


Velocity-based Level of Detail Rendering
We are currently investigating the behavioral consequences of using velocity-based variable resolution display techniques in virtual reality. We've adapted the Unreal rendering engine to display meshes at various levels of detail, depending on the velocity of objects across the visual field. Given that visual sensitivity to the details of moving objects is significantly reduced, computational resources can be shifted to increase frame rates by rendering moving objects in less detail. Many thanks go out to those at Epic Games for their generous support in making this research possible.
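A minimal sketch of the selection rule, with illustrative (not measured) speed thresholds, might map an object's retinal speed to a discrete LOD index:

```python
def velocity_lod(retinal_speed_dps, thresholds=(5.0, 20.0, 60.0)):
    """Pick a mesh level of detail from retinal speed (degrees/second).

    LOD 0 is the most detailed mesh; faster-moving objects get coarser
    meshes. The threshold values here are illustrative, not the ones
    used in our Unreal adaptation.
    """
    for lod, limit in enumerate(thresholds):
        if retinal_speed_dps < limit:
            return lod
    return len(thresholds)

print(velocity_lod(2.0))   # 0: nearly static -> full detail
print(velocity_lod(80.0))  # 3: fast motion -> coarsest mesh
```

The engine would re-evaluate this per object each frame, swapping in the cheaper mesh whenever the object's motion across the visual field exceeds a threshold.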

A recent unpublished lab technical report describes our efforts in this project:
Parkhurst, D., & Niebur, E. (2001). Evaluating velocity-based level of detail rendering of virtual environments using visual search. Lab Technical Report 2001-01, pp. 1-6. [PDF]


Gaze-Contingent Level of Detail Rendering
We are currently investigating the behavioral consequences of using gaze-contingent variable resolution display techniques in virtual reality. We've adapted the Unreal rendering engine to display meshes at various levels of detail, depending on the user's point of gaze. This technique yields significant savings, in the form of increased frame rates, because "unseen" detail in the visual periphery is not rendered. Again, many thanks go out to those at Epic Games for their generous support in making this research possible.
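The same kind of selection rule can be sketched for gaze: an object's angular distance from the gaze point picks its mesh LOD. The eccentricity rings and pixels-per-degree value below are illustrative, not the parameters of our Unreal adaptation:

```python
import math

def gaze_lod(obj_xy, gaze_xy, ppd=30.0, rings_deg=(2.0, 5.0, 10.0)):
    """Pick a level of detail from an object's angular distance to gaze.

    Objects near the gaze point get LOD 0 (full detail); successive
    eccentricity rings (degrees, illustrative values) get coarser meshes,
    so detail in the far periphery is never rendered at all.
    """
    ecc_deg = math.hypot(obj_xy[0] - gaze_xy[0], obj_xy[1] - gaze_xy[1]) / ppd
    for lod, limit in enumerate(rings_deg):
        if ecc_deg < limit:
            return lod
    return len(rings_deg)

print(gaze_lod((330, 240), (320, 240)))  # 0: near the gaze point
print(gaze_lod((620, 20), (320, 240)))   # 3: far periphery -> coarsest
```

In a real display, the gaze position would come from the eye tracker each frame, so the LOD assignment moves with the eyes.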

A recent unpublished lab technical report describes our efforts in this project:
Parkhurst, D., Law, I., & Niebur, E. (2001). Evaluating gaze-contingent level of detail rendering of virtual environments using visual search. Lab Technical Report 2001-02, pp. 1-6. [PDF]

People involved:
Derrick Parkhurst
Irwin Law
Ernst Niebur




