In the last few years, anatomical and physiological studies have provided new insights into the organization of the parieto-frontal network underlying visually guided arm-reaching movements in at least three domains. (1) Network architecture. It has been shown that the different classes of neurons encoding information relevant to reaching are not confined within individual cortical areas, but are common to different areas, which are generally linked by reciprocal association connections. (2) Representation of information. There is evidence suggesting that reach-related populations of neurons do not encode the relevant parameters within purely sensory or motor "reference frames", but rather combine them within hybrid dimensions. (3) Visuomotor transformation. It has been proposed that the computation of motor commands for reaching occurs through the simultaneous recruitment of discrete populations of neurons sharing similar properties in different cortical areas, rather than through a serial process from vision to movement that engages different areas at different times.

The goal of this paper was to link experimental (neurophysiological and neuroanatomical) and computational aspects within an integrated framework, illustrating how different neuronal populations in the parieto-frontal network perform a collective and distributed computation for reaching. In this framework, all dynamic (tuning, combinatorial, computational) properties of units are determined by their location relative to three main functional axes of the network: the visual-to-somatic, the position-direction, and the sensory-motor axis. The visual-to-somatic axis is defined by gradients of activity symmetrical about the central sulcus and distributed over both frontal and parietal cortices. At least four sets of reach-related signals (retinal, gaze, arm position/movement direction, muscle output) are represented along this axis. This architecture defines informational domains in which neurons combine different inputs.
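As a concrete reading of this gradient architecture, the sketch below shows one way the relative weight of the four signal classes could change with a unit's position along the visual-to-somatic axis. It is a minimal illustration only: the Gaussian weighting profile, its centers and width, and the example signal values are assumptions, not quantities taken from the paper.

```python
# Illustrative sketch (not the authors' model): the mix of reach-related inputs
# varies smoothly with a unit's position along the visual-to-somatic axis.
import numpy as np

SIGNALS = ["retinal", "gaze", "arm", "muscle"]
CENTERS = np.array([0.0, 0.33, 0.66, 1.0])   # assumed gradient peaks along the axis
WIDTH = 0.25                                 # assumed gradient width

def mixing_weights(axis_position):
    """Relative weight of each signal class for a unit at this position (0 = visual pole, 1 = somatic pole)."""
    w = np.exp(-0.5 * ((axis_position - CENTERS) / WIDTH) ** 2)
    return w / w.sum()

def unit_activity(axis_position, signals):
    """Activity of a unit combining the four signal classes according to its location."""
    w = mixing_weights(axis_position)
    return float(sum(w[i] * signals[s] for i, s in enumerate(SIGNALS)))

# Example: a unit midway along the axis weights gaze- and arm-related inputs most strongly.
example = {"retinal": 0.2, "gaze": 0.8, "arm": 0.6, "muscle": 0.1}
print(mixing_weights(0.5), unit_activity(0.5, example))
```

Under these assumptions, units near the visual pole are dominated by retinal and gaze inputs, units near the somatic pole by arm and muscle inputs, and units in between form the hybrid combinations characteristic of the informational domains described above.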
The position-direction axis is identified by the regular distribution of information over large populations of neurons processing both positional and directional signals (concerning the arm, gaze, visual stimuli, etc.). The activity of gaze- and arm-related neurons can therefore represent virtual three-dimensional (3D) pathways for gaze shifts or hand movement; virtual 3D pathways are thus defined by a combination of directional and positional information.
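One way to picture such a combination of positional and directional signals in a single unit is a gain-field motif: cosine tuning to movement direction, multiplied by a Gaussian gain that depends on hand (or gaze) position, with a population-vector readout standing in for the virtual pathway. The sketch below is an assumption-laden stand-in, not the model developed in the paper; the tuning functions, parameter values, and readout are invented for illustration.

```python
# Illustrative gain-field unit: directional cosine tuning modulated by positional gain.
import numpy as np

def unit_response(position, direction, preferred_dir, gain_center, baseline=1.0, sigma=0.3):
    """Cosine tuning to movement direction, multiplicatively gain-modulated by 3D position."""
    direction = direction / np.linalg.norm(direction)
    directional = 1.0 + np.dot(direction, preferred_dir)   # cosine tuning, in [0, 2]
    gain = np.exp(-np.sum((position - gain_center) ** 2) / (2 * sigma ** 2))
    return baseline * gain * directional

# Readout: the "virtual 3D pathway" is taken here as the current hand position plus the
# population vector (preferred directions weighted by each unit's response).
rng = np.random.default_rng(0)
preferred_dirs = rng.normal(size=(200, 3))
preferred_dirs /= np.linalg.norm(preferred_dirs, axis=1, keepdims=True)
gain_centers = rng.uniform(-0.3, 0.3, size=(200, 3))

hand_position = np.array([0.1, 0.0, 0.2])
movement_dir = np.array([0.0, 1.0, 0.0])

responses = np.array([unit_response(hand_position, movement_dir, d, c)
                      for d, c in zip(preferred_dirs, gain_centers)])
pathway_direction = (responses[:, None] * preferred_dirs).sum(axis=0)
pathway_direction /= np.linalg.norm(pathway_direction)
print("pathway origin:", hand_position, "decoded direction ~", np.round(pathway_direction, 2))
```

In a scheme of this kind, the same kind of population can carry a pathway for a gaze shift or for a hand movement, depending on which positional and directional signals drive it.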
The sensory-motor axis is defined by neurons displaying different temporal relationships with the different reach-related signals, such as target presentation, preparation for the intended arm movement, movement onset, etc. These properties reflect the computation performed by local networks, which are formed by two types of processing units: matching units and condition units. Matching units relate different neural representations of virtual 3D pathways for the gaze or the hand, and can predict motor commands and their sensory consequences. Depending on the units involved, different matching operations can be learned in the network, resulting in the acquisition of different visuomotor transformations, such as those underlying reaching to foveated targets, reaching to extrafoveal targets, and visual tracking of the hand movement trajectory.
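Purely as an illustrative sketch, and not the authors' implementation, a matching operation can be pictured as a linear mapping learned by a delta rule between two population codes of the same virtual pathway, one gaze-centered and one arm-centered; the Gaussian codes, the learning rule, and the population sizes below are all assumptions.

```python
# Illustrative "matching" operation: learn to map a gaze-centered code onto an arm-centered code.
import numpy as np

rng = np.random.default_rng(1)

def population_code(vector, preferred, width=0.5):
    """Gaussian population code of a 3D vector over a set of preferred vectors."""
    return np.exp(-np.sum((vector - preferred) ** 2, axis=1) / (2 * width ** 2))

gaze_prefs = rng.uniform(-1, 1, size=(100, 3))   # assumed gaze-centered units
arm_prefs = rng.uniform(-1, 1, size=(100, 3))    # assumed arm-centered units

W = np.zeros((100, 100))                          # matching weights: gaze code -> arm code
lr = 0.05

# Learning: paired experience of looking at a target and moving the hand to it.
for _ in range(2000):
    target = rng.uniform(-1, 1, size=3)
    g = population_code(target, gaze_prefs)       # visual (gaze-centered) pathway
    a = population_code(target, arm_prefs)        # somatic (arm-centered) pathway
    W += lr * np.outer(a - W @ g, g)              # delta rule: reduce the mismatch

# After learning, the visual representation alone predicts the arm-centered code, a proxy
# for the motor command; an analogous mapping trained in the opposite direction would
# predict the visual (sensory) consequences of an arm movement.
target = np.array([0.3, -0.2, 0.5])
predicted = W @ population_code(target, gaze_prefs)
actual = population_code(target, arm_prefs)
print("prediction/actual correlation:", round(float(np.corrcoef(predicted, actual)[0, 1]), 3))
```

Learning different pairings in the same way, for instance between retinal and arm codes for extrafoveal targets, or between arm and retinal codes when tracking the hand, would correspond to the different visuomotor transformations listed above.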
Condition units link these matching operations to reinforcement contingencies and can therefore shape the collective neural recruitment along the three axes of the network. This results in a progressive match of retinal, gaze, arm, and muscle signals suitable for moving the hand toward the target.
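The role of condition units can be caricatured as a reinforcement gate on a matching update of the kind sketched above: the same delta-rule step, scaled by a scalar reward signal, so that only matches followed by a successful reach are consolidated. Reward-modulated learning is used here only as an illustration; the rule, names, and sizes are assumptions, not the paper's learning scheme.

```python
# Illustrative reinforcement gating of a matching update by a "condition" signal.
import numpy as np

def gated_update(W, presynaptic, postsynaptic_target, reward, lr=0.05):
    """Delta-rule update on matching weights, scaled by the condition unit's reward signal."""
    error = postsynaptic_target - W @ presynaptic
    return W + reward * lr * np.outer(error, presynaptic)

# Example: a successful reach (reward = 1) changes the mapping; a failed one (reward = 0) does not.
rng = np.random.default_rng(2)
W = np.zeros((4, 4))
g = rng.random(4)         # stand-in for a gaze-centered code
a = rng.random(4)         # stand-in for the arm-centered code actually produced
W_success = gated_update(W, g, a, reward=1.0)
W_failure = gated_update(W, g, a, reward=0.0)
print(np.abs(W_success).sum() > 0, np.abs(W_failure).sum() == 0)
```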