Modulation of reach-to-grasp parameters: semantic category, volumetric properties and distracter interference?

Citation
A. Kritikos et al., Modulation of reach-to-grasp parameters: semantic category, volumetric properties and distracter interference?, EXP BRAIN R, 138(1), 2001, pp. 54-61
Number of citations
35
Subject categories
Neurosciences & Behavior
Journal title
EXPERIMENTAL BRAIN RESEARCH
ISSN journal
0014-4819
Volume
138
Issue
1
Year of publication
2001
Pages
54 - 61
Database
ISI
SICI code
0014-4819(200105)138:1<54:MORPSC>2.0.ZU;2-H
Abstract
In the two experiments of this study, we assessed the influence of target size and semantic category on the expression of reach-to-grasp kinematic parameters. Moreover, we investigated the influence of the size and semantic category of distracters on reaches to the target. The experimental objects represented living and nonliving categories and wide and narrow grasp sizes. Participants reached for and picked up mid-sagittally placed targets, which were either alone or flanked by distracters congruent or incongruent with the semantic category and size of the target. In experiment 1, movement duration was shorter for living objects; this effect, however, did not replicate in experiment 2. Conversely, significant and reliable Category x Size interactions for grasp were obtained in experiment 1 and replicated in experiment 2. The pattern of the means in these interactions coincided with the absolute volumetric properties of the stimuli, indicating that the size of the stimuli was the main determinant of the expression of kinematic parameters. We conclude that volumetric properties such as size, rather than semantic category, are the crucial features in the programming and execution of movement to targets. As regards the category and size of the distracter, interference effects were evident: both category and size exerted a comparable influence on reaches to the target. The direction of interference, however, was not systematic. The interference effects are discussed in the context of visual search models of attention.