Artificial neural networks may be used as function approximators not only for deterministic models but also for probabilistic ones. Conditional variance estimation using a neural network is a good example of probabilistic model approximation, because the conditional variance, a function of the input variable, is an important parameter describing a Gaussian probabilistic model. The majority of learning algorithms for this problem are based on likelihood maximization or the expectation-maximization method. This article presents an alternative learning algorithm, based on a different concept, for the multilayer perceptron. The proposed variance learning algorithm can be regarded as a modified delta rule in which delta is determined by an iterative estimation algorithm, which is also proposed in this article. The proposed learning algorithm is stochastic in nature because delta is stochastically determined by the estimation algorithm, and the relationships of delta to the transient and steady states of the learning process are likewise stochastic. First, the iterative variance estimation algorithm is explained. Second, the transient-state behavior is investigated to gain insight into the convergence and stability properties with respect to delta. Third, the steady-state analysis is described to show the relationship of delta to the steady-state error bound. Theoretical analysis of the steady-state behavior yields an analytic formula for the steady-state error bound of the variance learning algorithm in terms of delta. Finally, a multilayer perceptron using the proposed learning algorithm is simulated to demonstrate variance estimation.
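To fix the setting in symbols (a sketch whose notation is assumed here rather than taken from the article), the heteroscedastic Gaussian model and the general shape of a delta-rule variance update can be written as

$$
p(y \mid x) = \mathcal{N}\!\big(y;\ \mu(x),\ \sigma^2(x)\big),
\qquad
\Delta w = \eta\,\delta\,\frac{\partial \hat{\sigma}^2(x; w)}{\partial w},
$$

where $\hat{\sigma}^2(x; w)$ denotes the multilayer perceptron's variance output, $\eta$ a learning rate, and $\delta$ the correction term. In a likelihood-maximization rule, $\delta$ is proportional to $(y - \mu(x))^2 - \hat{\sigma}^2(x; w)$; in the proposed algorithm, by contrast, $\delta$ is supplied stochastically by the iterative estimation algorithm developed in the article.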