This paper examines the function approximation properties of the "random neural-network model," or GNN. The output of the GNN can be computed from the firing probabilities of selected neurons. We consider a feedforward Bipolar GNN (BGNN) model, which has both "positive and negative neurons" in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any f in C([0, 1]^s) and any epsilon > 0, we show that there exists a feedforward BGNN which approximates f uniformly with error less than epsilon. We also show that, after an appropriate clamping operation on its output, the feedforward GNN is also a universal function approximator.