Consider a multilayer perceptron (MLP) with d inputs, a single hidden sigmoidal layer, and a linear output. By adding a further d inputs whose values are set to the squares of the first d inputs, the architecture gains properties reminiscent of higher-order neural networks and radial basis function networks (RBFNs) at little added expense in weight requirements. Of particular interest, this architecture can form localized features in a d-dimensional space with a single hidden node, yet can also span large volumes of the input space; thus, it has the localized properties of an RBFN but does not suffer as badly from the curse of dimensionality. I refer to a network of this type as a SQuare Unit Augmented, Radially Extended, MultiLayer Perceptron (SQUARE-MLP or SMLP).
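To make the localization claim concrete, here is a minimal NumPy sketch of a single SMLP hidden unit. It is an illustration under my own choice of symbols (w for the weights on x, v for the weights on x², c for a center, a and theta for scale and bias), not code from the source: with squared inputs available, the unit computes sigma(w·x + v·x² + b), and one particular weight setting turns that into a radial bump centered at c.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smlp_hidden(x, w, v, b):
    # An SMLP hidden unit sees both the original inputs and their squares:
    # sigma(w.x + v.(x^2) + b)
    return sigmoid(w @ x + v @ (x * x) + b)

# Choosing v_i = -a, w_i = 2*a*c_i, b = theta - a*||c||^2 makes the
# pre-activation equal to theta - a*||x - c||^2, i.e. a localized
# radial bump centered at c -- RBF-like behavior from one sigmoid unit.
d = 3
c = np.array([1.0, -0.5, 2.0])   # hypothetical center
a, theta = 4.0, 3.0              # hypothetical sharpness and peak bias
w = 2.0 * a * c
v = -a * np.ones(d)
b = theta - a * (c @ c)

print(smlp_hidden(c, w, v, b))        # at the center: sigmoid(theta) ~ 0.953
print(smlp_hidden(c + 2.0, w, v, b))  # far from the center: ~ 0
```

Setting v to zero recovers an ordinary MLP unit with a global (ridge-like) response, which is why the same architecture can also span large volumes of the input space.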