Neurofuzzy networks are often used to model linear or nonlinear processes,
as they can provide some insights into the underlying processes and can be
trained using experimental data. Since training the networks involves intensive
computation, it is often performed off line. However, it is well known that
neurofuzzy networks trained off line may not cope successfully with time-varying
processes. To overcome this problem, the weights of the networks are trained on
line. In this paper, an on-line training algorithm with a computation time that
is linear in the number of weights is derived by making full use of the
local-change property of neurofuzzy networks. It is shown that the estimated
weights converge to those obtained from the least-squares method, and that the
range of the input domain can be extended without retraining the network.
Furthermore, the proposed algorithm tracks time-varying systems better than the
recursive least-squares method, since it adds a positive definite submatrix to
the relevant part of the covariance matrix. The performance of the proposed
algorithm is illustrated by simulation examples and compared with that obtained
using the recursive least-squares method.
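For readers unfamiliar with the baseline being compared against, the following is a minimal sketch of a standard recursive least-squares (RLS) weight update for a linearly parameterized model y ≈ φᵀw, where φ would be the vector of basis-function activations of the neurofuzzy network. This is the generic textbook recursion, not the paper's linear-time algorithm; the function name `rls_step` and the demo setup are illustrative assumptions. The paper's modification, adding a positive definite submatrix to the relevant (active) part of the covariance matrix P, would act on the rows and columns of P corresponding to the currently active basis functions.

```python
import numpy as np

def rls_step(w, P, phi, y, lam=1.0):
    """One recursive least-squares update for the model y ≈ phi @ w.

    w   : current weight estimate, shape (n,)
    P   : current covariance matrix, shape (n, n)
    phi : regressor / basis-function activation vector, shape (n,)
    y   : observed scalar output
    lam : forgetting factor (lam = 1.0 gives ordinary RLS)
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    e = y - phi @ w                        # a priori prediction error
    w = w + k * e                          # weight update
    P = (P - np.outer(k, Pphi)) / lam      # covariance update
    return w, P

# Illustrative usage: identify a fixed two-weight linear model from
# noiseless data (the true weights below are made up for the demo).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
P = 1e3 * np.eye(2)                        # large initial covariance
for _ in range(200):
    phi = rng.standard_normal(2)
    w, P = rls_step(w, P, phi, phi @ w_true)
```

With noiseless, persistently exciting data the recursion converges to the least-squares solution; with a full covariance update every step costs O(n²) in the number of weights, which is the cost the paper's linear-time algorithm is designed to avoid.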