This paper proposes and demonstrates a framework for Sigma-Pi networks that avoids the combinatorial increase in the number of product terms. This is achieved by implementing only a subset of the possible product terms (sub-net Sigma-Pi). Application of a dynamic weight-pruning algorithm enables redundant weights to be removed and replaced during the learning process, permitting access to a larger weight space than that employed at network initialization. More than one learning rate is applied to ensure that the inclusion of higher-order descriptors does not result in over-description of the training set (memorization). The application of this framework is tested on a problem requiring significant generalization ability. The performance of the resulting sub-net Sigma-Pi network is compared to that of optimal multi-layer perceptrons and general Sigma-Pi solutions.