Modeling the role of salience in the allocation of overt visual attention
A biologically motivated computational model of bottom-up visual selective
attention was used to examine the degree to which stimulus salience guides the
allocation of attention. Human eye movements were recorded while participants
viewed a series of digitized images of complex natural and artificial scenes.
Stimulus dependence of attention, as measured by the correlation between computed
stimulus salience and fixation locations, was found to be significantly greater
than that expected by chance alone and furthermore was greatest for eye movements
that immediately follow stimulus onset. The ability of three modeled stimulus
features (color, intensity, and orientation) to guide attention was examined and
found to vary with image type. Additionally, the effect of the drop in visual
sensitivity as a function of eccentricity on stimulus salience was examined,
modeled, and shown to be an important determiner of attentional allocation.
Overall, the results indicate that stimulus-driven, bottom-up mechanisms
contribute significantly to attentional guidance under natural viewing
conditions. The following paper describes this research:
Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the
allocation of overt visual attention. Vision Research, 42(1), 107-123.
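
The central measure above is whether salience at fixated locations exceeds what
uniform random sampling would yield. As a rough illustration only (not the
authors' analysis code), the Python sketch below compares salience sampled at
fixation points against a resampled chance baseline; the salience map and
fixation coordinates here are synthetic placeholders, and the eccentricity
effect discussed above could be approximated by attenuating the map with
distance from the current fixation before sampling.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder salience map (e.g., 48 x 64, values in [0, 1]).
    salience = rng.random((48, 64))

    # Placeholder fixations as (row, col) indices into the map.
    fixations = [(10, 20), (30, 45), (12, 50), (25, 33)]

    def mean_salience(points, sal):
        return np.mean([sal[r, c] for r, c in points])

    observed = mean_salience(fixations, salience)

    # Chance baseline: resample the same number of locations uniformly,
    # many times, and see where the observed value falls.
    n_perm = 10000
    chance = np.empty(n_perm)
    for i in range(n_perm):
        rows = rng.integers(0, salience.shape[0], size=len(fixations))
        cols = rng.integers(0, salience.shape[1], size=len(fixations))
        chance[i] = salience[rows, cols].mean()

    p = np.mean(chance >= observed)
    print(f"salience at fixations: {observed:.3f}, "
          f"chance mean: {chance.mean():.3f}, p = {p:.3f}")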
Developing a model of visual selective attention for dynamic natural scenes
We are developing a computational model of stimulus-driven visual selective
attention based on what is known about visual information processing in the
primate visual system. The model has the ability to make predictions for
attentional allocation in static and dynamic natural scenes.
The model calculates stimulus salience based on color,
intensity, orientation, and motion.
[Schematic diagram of the model architecture]
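
As a rough sketch of how one such feature channel can be computed, the Python
code below builds an intensity conspicuity map from center-surround
(difference-of-Gaussians) contrast. The scales, surround ratio, and
normalization are illustrative assumptions rather than the model's actual
parameters; the color, orientation, and motion channels are constructed
analogously and combined into a single salience map.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def intensity_conspicuity(image, center_sigmas=(1, 2), surround_ratio=4):
        """Center-surround intensity map from a grayscale image in [0, 1]."""
        conspicuity = np.zeros_like(image)
        for sigma in center_sigmas:
            center = gaussian_filter(image, sigma)
            surround = gaussian_filter(image, sigma * surround_ratio)
            conspicuity += np.abs(center - surround)  # on/off contrast
        # Normalize so channels can be combined on a common scale.
        if conspicuity.max() > 0:
            conspicuity /= conspicuity.max()
        return conspicuity

    # Toy example: a bright blob on a dark background pops out.
    img = np.zeros((64, 64))
    img[28:36, 28:36] = 1.0
    sal = intensity_conspicuity(img)
    print("most salient location:", np.unravel_index(sal.argmax(), sal.shape))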
Click on the images below to display an MPEG video sequence of a
swinging pendulum and the resulting dynamic salience map generated by the model.
A description of our progress and results can be found in
the following dissertation:
Parkhurst, D. (2002). Selective attention in natural vision: Using computational models to
quantify stimulus-driven attentional allocation. Ph.D. thesis, The Johns Hopkins University, Baltimore, MD.
Testing real-time models of overt shifts of visual attention
We are currently investigating the mechanisms responsible for generating overt
attentional shifts (i.e., eye movements) and covert attentional shifts
within a salience map representation. To address this question, we have implemented
a real-time model of visual selective attention that takes input from a webcam
(Logitech Quickcam), runs in real time (15-30 fps), and generates eye movements.
These eye movements are used to pan and tilt the camera (the camera is mounted on
a TrackerPod device) so that the camera actively tracks visually salient stimuli.
Simplified, real-time models are especially useful for testing different
attentional mechanisms with real-world stimuli.
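
For illustration, a minimal Python/OpenCV loop in the same spirit might look
like the sketch below. This is not the lab's implementation: the salience
computation is reduced to a single intensity-contrast channel, and
send_pan_tilt is a hypothetical stand-in for the actual TrackerPod control
interface.

    import cv2

    def salience_map(gray):
        # Crude center-surround contrast: difference of two Gaussian blurs.
        center = cv2.GaussianBlur(gray, (0, 0), 2)
        surround = cv2.GaussianBlur(gray, (0, 0), 8)
        return cv2.absdiff(center, surround)

    def send_pan_tilt(dx, dy):
        # Hypothetical stand-in for the TrackerPod control call.
        print(f"pan/tilt offset: dx={dx}, dy={dy}")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sal = salience_map(gray)
        _, _, _, max_loc = cv2.minMaxLoc(sal)  # most salient pixel
        h, w = gray.shape
        # The offset of the salient point from image center drives the camera.
        send_pan_tilt(max_loc[0] - w // 2, max_loc[1] - h // 2)
        cv2.circle(frame, max_loc, 8, (0, 0, 255), 2)
        cv2.imshow("salience tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()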
Developing attention-based video compression
We are currently investigating a number of variable-resolution display
techniques in an MPEG-4 video compression application that take
advantage of the basic fact that normal viewers only attend to
relatively small portions of natural images at any one time. We use a
computational model of visual selective attention to predict where in each
video frame viewers are likely to fixate; we maintain high resolution at
those locations and reduce the resolution everywhere else. We are currently
examining the degree of compression that is
obtainable using this technique by comparing visual quality estimates and
eye movement measures obtained when human participants view compressed
and non-compressed video sequences.
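
As a sketch of the variable-resolution idea (assumed for illustration, not the
project's actual pipeline), the Python code below keeps a frame sharp near a
predicted fixation point and low-pass filters it elsewhere, so that a standard
encoder downstream spends fewer bits on the periphery. The foveate function,
its radius parameter, and the blur strength are assumptions for this example.

    import cv2
    import numpy as np

    def foveate(frame, fixation, radius=80):
        """Blend sharp and blurred copies with a Gaussian falloff."""
        h, w = frame.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        d2 = (xs - fixation[0]) ** 2 + (ys - fixation[1]) ** 2
        weight = np.exp(-d2 / (2.0 * radius ** 2))[..., None]  # 1 at fixation
        blurred = cv2.GaussianBlur(frame, (0, 0), 6)
        out = weight * frame + (1.0 - weight) * blurred
        return out.astype(frame.dtype)

    # Toy frame; in practice each video frame would be foveated at the
    # location predicted by the attention model before encoding.
    frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
    foveated = foveate(frame, fixation=(160, 120))
    cv2.imwrite("foveated.png", foveated)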
Conducting psychophysical experiments on the Internet
We are interested in how people allocate attention when
viewing natural scenes. To examine this question, we have
designed a number of on-line psychophysical experiments in which
people can participate over the Internet. These experiments
use each participant's own web browser to display visual stimuli
and record responses. You can participate in one of
our experiments by going to this page.