Convergence of iterative learning is a common problem in neural networks. For fast learning, one should be able to control the rate of convergence. In the present paper, the single-layer perceptron model for two classes is considered, and the rate of convergence is studied for several choices of the gain term in the update rule. Experimental results on a number of two-class problems are reported.
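The update rule in question can be illustrated with a minimal sketch. The code below is an assumption about the standard form of the two-class perceptron update, w ← w + η(t)·y·x on each misclassification, where the gain η(t) is a user-supplied function of the correction count; the specific gain schedules compared (constant vs. decaying 1/t) are illustrative, not necessarily those studied in the paper.

```python
import numpy as np

def train_perceptron(X, y, gain, epochs=100):
    """Two-class perceptron with labels y in {-1, +1}.

    `gain` is a function of the correction count t, giving the
    gain term eta(t) used to scale each weight correction.
    """
    w = np.zeros(X.shape[1] + 1)  # weights plus bias term
    t = 0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            x_aug = np.append(xi, 1.0)       # augment input with bias component
            if yi * np.dot(w, x_aug) <= 0:   # sample misclassified (or on boundary)
                t += 1
                w += gain(t) * yi * x_aug    # gain-scaled correction step
                errors += 1
        if errors == 0:                      # converged: a full pass with no errors
            break
    return w

# Illustrative two-class, linearly separable data (hypothetical)
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

# Compare a constant gain with a decaying 1/t gain
w_const = train_perceptron(X, y, gain=lambda t: 1.0)
w_decay = train_perceptron(X, y, gain=lambda t: 1.0 / t)
```

For linearly separable data, both schedules converge; the choice of gain affects how many corrections are needed, which is the quantity of interest when studying the rate of convergence.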