We sought to determine how local and global features within an image interact by examining whether orientation discrimination thresholds could be modified by contextual information. In particular, we investigated how local orientation signals within an image are pooled together, and whether this pooling process depends on the global orientation content of the image. We find that observers' orientation judgments depend on surround contextual information, with performance being optimal when the center and surround stimuli are clearly distinct. In cases where the center and surround were not clearly segregated, we report two sets of results. If there was ambiguity regarding the perception of a global structure (i.e., a small mismatch between local cues), observers' performance was impaired. If there was no mismatch and local and global cues were consistent with the perception of a single surface, observers performed as well as in the distinct-surfaces case. Although some of our results can be largely accounted for by interactions between differently oriented filters, other aspects are more difficult to reconcile with this explanation. We suggest that low-level filtering constrains observers' performance, and that influences arising from image segmentation modify how local orientation signals are pooled together. (C) 2001 Elsevier Science Ltd. All rights reserved.