In the present paper, data-scaling problems in feedforward neural-network training are discussed. These problems appear when the experimental data to be learned vary across a wide interval and when, after the data have been scaled, part of the information in the data is lost. To solve these problems, a parametric output function for the neurons is proposed here. It allows the data-scaling region to be increased through the introduction of two new parameters. During backpropagation learning, the relative square error is minimized. In this way, the loss of information is avoided, since the modified neural network can be trained to account equally for the biggest and the smallest values in the training data set. Two examples of neural-network models of biotechnological processes are presented. A comparison with classical feedforward neural-network models is made. Different approaches to training with the new parameters are discussed. (C) 1999 Elsevier Science Ltd. All rights reserved.
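The abstract does not give the exact form of the parametric output function, only that two new parameters widen the scaling region and that the relative square error is the training criterion. A minimal sketch, under the assumption that the two parameters scale and shift a standard sigmoid (the names `a`, `b`, `param_sigmoid`, and `relative_square_error` are illustrative, not from the paper):

```python
import numpy as np

# Assumed parametric output function (not the paper's definition):
#     f(s) = a / (1 + exp(-s)) + b
# so the output range becomes (b, a + b) instead of the usual (0, 1).

def param_sigmoid(s, a, b):
    """Sigmoid with scale a and offset b, trainable alongside the weights."""
    return a / (1.0 + np.exp(-s)) + b

def relative_square_error(y, t):
    """Mean relative squared error: a fixed percentage deviation contributes
    equally whether the target value is small or large."""
    y, t = np.asarray(y, dtype=float), np.asarray(t, dtype=float)
    return float(np.mean(((y - t) / t) ** 2))

# With a = 4, b = -2 the output range is widened to (-2, 2):
print(param_sigmoid(0.0, 4.0, -2.0))  # 0.0, the midpoint of the new range

# A 10% miss on a small target and a 10% miss on a large target contribute
# the same amount, unlike the absolute squared error:
print(round(relative_square_error([1.1, 110.0], [1.0, 100.0]), 4))  # 0.01
```

This illustrates why the relative criterion avoids the loss of information the abstract describes: under an absolute squared error, the large target would dominate the loss and the small values would be fitted poorly.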