We provide a radically elementary proof of the universal approximation property of the one-hidden-layer perceptron based on the Taylor expansion and the Vandermonde determinant. It works for both L^q and uniform approximation on compact sets. This approach naturally yields some bounds for the design of the hidden layer and convergence results (including some rates) for the derivatives. A partial answer to Hornik's conjecture on the universality of the bias is proposed. An extension to vector-valued functions is also carried out. (C) 1997 Elsevier Science Ltd.