Data-scaling problems in neural-network training

Citation
P. Koprinkova and M. Petrova, Data-scaling problems in neural-network training, ENG APP ART, 12(3), 1999, pp. 281-296
Citation count
12
Subject categories
AI Robotics and Automatic Control
Journal title
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
ISSN journal
0952-1976
Volume
12
Issue
3
Year of publication
1999
Pages
281-296
Database
ISI
SICI code
0952-1976(199906)12:3<281:DPINT>2.0.ZU;2-0
Abstract
In the present paper, data-scaling problems in feedforward neural-network training are discussed. These problems appear when the experimental data to be learned vary across a wide interval, and when, after the data has been scaled, a part of the information in the data is lost. To solve these problems, a parametric output function of the neurons is proposed here. It allows the data-scaling region to be increased by the introduction of two new parameters. During the process of backpropagation learning, the relative square error is minimized. In this way, the loss of information is avoided, since the modified neural network can be trained to account equally for the biggest and the smallest values in the training data set. Two examples of neural-network models of biotechnological processes are presented. A comparison with the classical feedforward neural-network models is made. Different approaches used in training with the new parameters are discussed. (C) 1999 Elsevier Science Ltd. All rights reserved.
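The abstract does not give the exact form of the parametric output function, but the two ideas it names can be sketched: widening a neuron's output range with two extra parameters, and training on a relative (rather than absolute) squared error so that small and large target values contribute equally. The sigmoid-based form below and the parameter names `a` and `b` are illustrative assumptions, not the authors' definitions.

```python
import math

def parametric_sigmoid(x, a=1.0, b=0.0):
    """Sigmoid with two extra parameters (assumed form): 'a' scales the
    output range beyond (0, 1) and 'b' shifts it, so targets spanning a
    wide interval need less aggressive pre-scaling."""
    return a / (1.0 + math.exp(-x)) + b

def relative_square_error(outputs, targets, eps=1e-12):
    """Relative squared error: each residual is divided by its target,
    so the smallest and the largest targets weigh equally in training."""
    return sum(((o - t) / (t + eps)) ** 2 for o, t in zip(outputs, targets))

# A 10% deviation on a small target and a 10% deviation on a large one
# produce the same relative-error contribution:
small = relative_square_error([0.11], [0.10])
large = relative_square_error([110.0], [100.0])
```

Under an absolute squared-error loss the large-target residual would dominate by several orders of magnitude; the relative form is what lets the network "account equally for the biggest and the smallest values" in the training set, as the abstract describes.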