In this paper we investigate the feed-forward learning problem. The well-known ill-conditioning present in most feed-forward learning problems is shown to result from the structure of the network. We also address the well-known problem that weights between 'higher' layers in the network have to settle before 'lower' weights can converge. We present a solution to these problems by modifying the structure of the network through the addition of linear connections which carry shared weights. We call the new network structure the linearly augmented feed-forward network, and show that the universal approximation theorems remain valid. Simulation experiments confirm the validity of the new method and demonstrate that the augmented network is less sensitive to local minima and learns faster than the original network.
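
For concreteness, a minimal numerical sketch of one possible augmented forward pass follows. It is an illustration only: the sigmoid nonlinearity, the placement of the linear path alongside the hidden units, and the single shared weight 'lam' are assumptions, since the abstract states only that linear connections carrying shared weights are added to the network.

    import numpy as np

    # Sketch of a linearly augmented hidden layer (assumptions: sigmoid
    # units, and one shared weight 'lam' scaling the parallel linear
    # path; the abstract specifies neither detail).

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def augmented_layer(x, W, b, lam):
        s = x @ W + b                # weighted input to the hidden layer
        return sigmoid(s) + lam * s  # nonlinear response plus shared linear path

    # Usage: a 2-3-1 network on a batch of four input patterns.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 2))
    W1, b1 = rng.standard_normal((2, 3)), np.zeros(3)
    W2, b2 = rng.standard_normal((3, 1)), np.zeros(1)
    h = augmented_layer(x, W1, b1, lam=0.5)  # augmented hidden layer
    y = h @ W2 + b2                          # conventional linear output layer

A shared linear path of this kind keeps a non-vanishing derivative through saturated sigmoids, which is one plausible reading of why the augmented network is reported to be better conditioned and to learn faster.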