A new method of inter-neuron communication called incremental communication is presented. In the incremental communication method, instead of communicating the whole value of a variable, only the increment or decrement relative to its previous value is sent on a communication link. The incremental value may be either a fixed-point or a floating-point value.
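To make the idea concrete, the following minimal sketch (in Python, with illustrative names, bit widths, and step sizes that are not taken from the paper) shows one way a sender might transmit only a quantized increment of a neuron's output while the receiver accumulates the increments to track the value; the paper's actual fixed- and floating-point encodings may differ.

    # Minimal sketch of incremental communication (illustrative only).
    # Instead of sending a neuron's full 32-bit output, the sender
    # transmits the change since the last transmission, quantized to a
    # small number of bits; the receiver accumulates the deltas to
    # reconstruct an approximation of the current value.

    def quantize(delta, bits, step):
        """Round a delta to a signed fixed-point code of `bits` bits."""
        max_code = 2 ** (bits - 1) - 1
        code = round(delta / step)
        return max(-max_code - 1, min(max_code, code))

    class IncrementalSender:
        def __init__(self, bits=4, step=0.05):
            self.bits, self.step = bits, step
            self.last_sent = 0.0  # value the receiver currently holds

        def send(self, value):
            code = quantize(value - self.last_sent, self.bits, self.step)
            self.last_sent += code * self.step  # mirror the receiver's state
            return code  # only `bits` bits travel on the link

    class IncrementalReceiver:
        def __init__(self, step=0.05):
            self.step = step
            self.value = 0.0

        def receive(self, code):
            self.value += code * self.step
            return self.value

    # Example: a 4-bit link tracking a slowly varying neuron output.
    tx = IncrementalSender(bits=4, step=0.05)
    rx = IncrementalReceiver(step=0.05)
    for v in [0.10, 0.17, 0.21, 0.20]:
        print(rx.receive(tx.send(v)))  # receiver's approximation of v

Because the sender tracks the receiver's reconstructed value rather than the true value, quantization error does not accumulate across transmissions in this sketch.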
A multilayer feedforward network architecture is used to illustrate the effectiveness of the proposed communication scheme. The method is applied to three different learning problems, and the effect of the precision of the incremental input-output values of the neurons on the convergence behavior is examined. It is shown through simulation that for some problems even four-bit precision in fixed- and/or floating-point representations is sufficient for the network to converge. With 8-12 bit precision, almost the same results are obtained as with conventional communication using 32-bit precision. The proposed method of communication can lead to significant savings in the intercommunication cost of implementing artificial neural networks on parallel computers, as well as in the interconnection cost of direct hardware realizations. The method can be incorporated into most current learning algorithms in which inter-neuron communication is required. Moreover, it can be used along with the other limited-precision strategies for the representation of variables suggested in the literature.