The objective of this paper is to propose neural networks for the study of dynamic identification and prediction of a fermentation system which produces mainly 2,3-butanediol (2,3-BDL). The metabolic products of the fermentation, acetic acid, acetoin, ethanol, and 2,3-BDL, were measured on-line via a mass spectrometer modified by the insertion of a dimethylvinylsilicone membrane probe. The measured data at different sampling times were included as the input and output nodes of the network in different learning batches. A fermentation system is usually nonlinear and dynamic in nature. Measured fermentation data obtained from the complex metabolic pathways are often difficult to include entirely in a static process model; therefore, a dynamic model was suggested instead.
In this work, the neural networks were provided with a dynamic learning and prediction process that moved along the time sequence batchwise. In other words, a scheme of a two-dimensional moving window (number of input nodes by number of training data) was proposed for reading in new data while forgetting part of the old data.
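As a minimal sketch of such a window (the function name, window dimensions, forgetting step, and the synthetic signal below are illustrative assumptions, not the authors' implementation), the batchwise read-in/forget scheme might be expressed as:

```python
import numpy as np

def moving_window_batches(series, n_inputs, n_train, step):
    """Yield (X, y) training batches from a two-dimensional moving window.

    series   : 1-D array of on-line measurements (hypothetical data)
    n_inputs : number of network input nodes (window width)
    n_train  : number of training patterns per learning batch (window height)
    step     : how far the window advances, i.e. how much old data is forgotten
    """
    start = 0
    # Each pattern maps n_inputs past samples to the next sample.
    while start + n_train + n_inputs < len(series):
        X = np.array([series[start + i : start + i + n_inputs]
                      for i in range(n_train)])
        y = np.array([series[start + i + n_inputs]
                      for i in range(n_train)])
        yield X, y
        start += step  # slide forward: read new data, forget the oldest

# Synthetic data standing in for the on-line fermentation measurements
t = np.linspace(0, 10, 200)
concentration = np.exp(-0.2 * t) * np.sin(3 * t)   # placeholder signal
for X, y in moving_window_batches(concentration, n_inputs=5, n_train=20, step=10):
    pass  # each (X, y) would be one learning batch for the network
```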
The proper size of the network, including the proper number of input/output nodes, was determined by training with the real-time fermentation data. Different numbers of hidden nodes were tested, considering both learning performance and computational efficiency. The data size for each learning batch was determined. The performance of the learning factors, such as the learning coefficient eta and the momentum term coefficient alpha, was also discussed.
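For reference, these two factors appear in the standard back-propagation weight update, which combines a gradient step scaled by eta with a momentum term scaled by alpha. The sketch below is a generic version of that rule, with placeholder default values rather than the paper's tuned settings:

```python
import numpy as np

def update_weights(w, grad, prev_delta, eta=0.3, alpha=0.7):
    """Gradient-descent step with momentum:

        delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1)

    eta   : learning coefficient (step size along the error gradient)
    alpha : momentum coefficient (carries over part of the previous step)
    The default values here are placeholders, not the paper's settings.
    """
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

# One illustrative step on a random weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(5, 3))
grad = rng.normal(size=(5, 3))   # stand-in for dE/dw from back-propagation
w, prev_delta = update_weights(w, grad, prev_delta=np.zeros_like(w))
```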
The effect of different dynamic learning intervals, with different starting points and the same ending point, on both the learning and prediction performance was studied. On the other hand, the effect of different dynamic learning intervals, with the same starting point and different ending points, was also investigated. The size of the data sampling interval was also discussed.
The performance of four different types of transfer functions, x/(1 + |x|), sgn(x)·x^2/(1 + x^2), 2/(1 + e^(-x)) - 1, and 1/(1 + e^(-x)), was compared. A scaling factor b was added to the transfer function, and the effect of this factor on the learning was also evaluated.
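A minimal sketch of the four candidate transfer functions follows, with the scaling factor b applied to the argument (placing b inside the argument as f(b·x) is an assumption about where the factor enters):

```python
import numpy as np

# The four candidate transfer functions compared in the paper
transfer_functions = {
    "x/(1+|x|)":          lambda x: x / (1 + np.abs(x)),
    "sgn(x)*x^2/(1+x^2)": lambda x: np.sign(x) * x**2 / (1 + x**2),
    "2/(1+e^-x) - 1":     lambda x: 2 / (1 + np.exp(-x)) - 1,
    "1/(1+e^-x)":         lambda x: 1 / (1 + np.exp(-x)),
}

def scaled(f, b):
    """Return f(b*x): b controls the steepness of the function near x = 0.

    Applying b to the argument is an illustrative assumption, not
    necessarily how the paper introduces the factor.
    """
    return lambda x: f(b * x)

x = np.linspace(-5, 5, 11)
for name, f in transfer_functions.items():
    y = scaled(f, b=1.0)(x)   # b = 1.0 recovers the unscaled function
```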
The prediction results from the time-delayed neural networks were also studied.
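As an illustration of the time-delayed input structure (the delay depth and array layout are hypothetical choices, not the paper's configuration):

```python
import numpy as np

def time_delayed_inputs(series, n_delays):
    """Stack each sample with its n_delays predecessors.

    Row t of the result is [x(t), x(t-1), ..., x(t-n_delays)],
    the input pattern a time-delayed network would see at time t.
    """
    rows = [series[t - n_delays : t + 1][::-1]
            for t in range(n_delays, len(series))]
    return np.array(rows)

signal = np.arange(10.0)               # placeholder measurement sequence
X = time_delayed_inputs(signal, n_delays=3)
# X[0] == [3., 2., 1., 0.] -> could be used to predict signal[4]
```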