Pruning is an effective technique for reducing the generalization error of neural networks. Existing pruning techniques are derived mainly from the viewpoint of energy minimization, which is commonly used in gradient-based learning methods. In recurrent networks, extended Kalman filter (EKF)-based training has been shown to be superior to gradient-based learning methods in terms of speed. This article explains a pruning procedure for recurrent neural networks trained with the EKF. The sensitivity of the posterior probability is used as a measure of the importance of a weight, instead of error sensitivity, since the posterior probability density is readily obtained from this training method. The pruning procedure is tested on three problems: (1) the prediction of a simple linear time series, (2) the identification of a nonlinear system, and (3) the prediction of an exchange-rate time series. Simulation results demonstrate that the proposed pruning method is able to reduce the number of parameters and improve the generalization ability of a recurrent network.