Many feedforward neural network architectures have the property that their overall input-output function is unchanged by certain weight permutations and sign flips. In this paper, the geometric structure of these equioutput weight space transformations is explored for the case of multilayer perceptron networks with tanh activation functions (similar results hold for many other types of neural networks). It is shown that these transformations form an algebraic group isomorphic to a direct product of Weyl groups. Results concerning the root spaces of the Lie algebras associated with these Weyl groups are then used to derive sets of simple equations for minimal sufficient search sets in weight space. These sets, which take the geometric forms of a wedge and a cone, occupy only a minute fraction of the volume of weight space. A separate analysis shows that the action of the equioutput transformation group creates large numbers of copies of any optimum weight vector of a network performance function, and that these copies all lie on the same sphere. Some implications of these results for learning are discussed.
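The invariances described above can be verified directly. What follows is a minimal numerical sketch, not taken from the paper, assuming a single-hidden-layer tanh MLP of the form y = W2 tanh(W1 x + b1) + b2; all variable names and shapes are illustrative. Because tanh is odd, permuting the hidden units and flipping the signs of all weights into and out of any hidden unit leaves the network's input-output function unchanged.

# Sketch: equioutput permutation and sign-flip transformations
# for a one-hidden-layer tanh MLP (illustrative shapes and names).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 4, 2

# Original weights: W1, b1 map inputs to hidden units; W2, b2 map hidden to outputs.
W1 = rng.normal(size=(n_hid, n_in))
b1 = rng.normal(size=n_hid)
W2 = rng.normal(size=(n_out, n_hid))
b2 = rng.normal(size=n_out)

def mlp(x, W1, b1, W2, b2):
    # Forward pass: y = W2 tanh(W1 x + b1) + b2
    return W2 @ np.tanh(W1 @ x + b1) + b2

# One equioutput transformation: permute the hidden units, then flip the
# sign of hidden unit 0 (its incoming weights, bias, and outgoing weights).
perm = rng.permutation(n_hid)
s = np.ones(n_hid)
s[0] = -1.0  # sign-flip pattern (an arbitrary illustrative choice)

W1t = s[:, None] * W1[perm]   # permute rows, then flip signs of incoming weights
b1t = s * b1[perm]            # same permutation and flips for the hidden biases
W2t = W2[:, perm] * s         # permute columns, flip signs of outgoing weights

# Since tanh(-z) = -tanh(z), the two sign flips cancel and the output is unchanged.
x = rng.normal(size=n_in)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1t, b1t, W2t, b2))

For a single hidden layer of n units, composing all such permutations and sign flips yields 2^n n! distinct equioutput weight vectors, which is the order of the Weyl group of type B_n. Note also that permutations and sign flips both preserve the Euclidean norm of the weight vector, which is consistent with the statement above that all copies of an optimum weight vector lie on the same sphere.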