Regularized (stabilized) versions of exponential and linear forgetting in parameter tracking are shown to be dual to each other. Both are derived by solving essentially the same Bayesian decision problem, in which the Kullback-Leibler divergence is used to measure the (quasi)distance between probability distributions of the estimated parameters. The type of forgetting depends solely on the order of the arguments in the Kullback-Leibler divergence. This general view indicates under which conditions one technique is superior to the other. Applied to the case of ARX models, the approach yields a class of regularized or stabilized forgetting strategies that are naturally robust with respect to poor system excitation.
Copyright (C) 1996 Elsevier Science Ltd.