NEURAL-NETWORK STUDIES - 3 - VARIABLE SELECTION IN THE CASCADE-CORRELATION LEARNING ARCHITECTURE

Citation
V.V. Kovalishyn et al., NEURAL-NETWORK STUDIES - 3 - VARIABLE SELECTION IN THE CASCADE-CORRELATION LEARNING ARCHITECTURE, Journal of Chemical Information and Computer Sciences, 38(4), 1998, pp. 651-659
Citations number
36
Subject Categories
Computer Science, Interdisciplinary Applications; Computer Science, Information Systems; Chemistry
ISSN journal
0095-2338
Volume
38
Issue
4
Year of publication
1998
Pages
651 - 659
Database
ISI
SICI code
0095-2338(1998)38:4<651:NS-3-V>2.0.ZU;2-0
Abstract
Pruning methods for feed-forward artificial neural networks trained by the cascade-correlation learning algorithm are proposed. The cascade-correlation algorithm starts with a small network and dynamically adds new nodes until the analyzed problem has been solved. This feature of the algorithm removes the requirement to predefine the architecture of the neural network prior to network training. The developed pruning methods are used to estimate the importance of large sets of initial variables for quantitative structure-activity relationship studies and simulated data sets. The calculated results are compared with the performance of fixed-size back-propagation neural networks and multiple regression analysis and are carefully validated using different training/test set protocols, such as leave-one-out and full cross-validation procedures. The results suggest that the pruning methods can be successfully used to optimize the set of variables for the cascade-correlation learning algorithm neural networks. The use of variables selected by the elaborated methods provides an improvement of neural network prediction ability compared to that calculated using the unpruned sets of variables.
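Illustrative sketch
The following Python/NumPy sketch illustrates the two ideas named in the abstract: a cascade-correlation-style growing loop (train the outputs, add a candidate hidden unit whose activation correlates most with the residual error, freeze it, repeat) and a simple sensitivity-based ranking of input variables for pruning. It is a minimal reconstruction under stated assumptions (single continuous output, least-squares output training, tanh candidates), not the authors' implementation; the function names (cascade_correlation, rank_inputs_by_sensitivity, etc.) are hypothetical.

import numpy as np

def train_outputs(H, y):
    # Least-squares fit of the output weights on the current feature matrix H.
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def add_candidate_unit(H, residual, n_candidates=8, rng=None):
    # Pick the candidate hidden unit whose tanh activation correlates most
    # (in absolute value) with the current residual error; its input weights
    # are then frozen, which is the cascade step.
    rng = np.random.default_rng() if rng is None else rng
    best_v, best_score = None, -np.inf
    for _ in range(n_candidates):
        v = rng.normal(size=H.shape[1])              # random candidate input weights
        a = np.tanh(H @ v)                           # candidate activation
        score = abs(np.corrcoef(a, residual)[0, 1])
        if score > best_score:
            best_v, best_score = v, score
    return best_v

def cascade_correlation(X, y, max_hidden=5, tol=1e-4):
    # Start with a minimal network (inputs + bias) and grow hidden units
    # one at a time until the error is small or max_hidden is reached.
    H = np.hstack([X, np.ones((X.shape[0], 1))])     # inputs plus bias column
    frozen = []                                      # frozen hidden-unit weights
    for _ in range(max_hidden):
        w = train_outputs(H, y)
        residual = y - H @ w
        if np.mean(residual ** 2) < tol:             # problem solved: stop growing
            break
        v = add_candidate_unit(H, residual)
        frozen.append(v)
        H = np.hstack([H, np.tanh(H @ v)[:, None]])  # cascade the new unit
    return train_outputs(H, y), frozen

def _predict(X, w, frozen):
    # Rebuild the cascade of frozen units and apply the trained output weights.
    H = np.hstack([X, np.ones((X.shape[0], 1))])
    for v in frozen:
        H = np.hstack([H, np.tanh(H @ v)[:, None]])
    return H @ w

def rank_inputs_by_sensitivity(X, y, w, frozen):
    # Hypothetical pruning heuristic: score each input variable by how much
    # the training error grows when that input is clamped to its mean.
    base = np.mean((y - _predict(X, w, frozen)) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = X[:, j].mean()
        scores.append(np.mean((y - _predict(Xp, w, frozen)) ** 2) - base)
    return np.argsort(scores)                        # least important inputs first

# Toy usage: 10 inputs of which only the first 3 are informative; the
# informative inputs should receive the largest sensitivity scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] - X[:, 2] + 0.05 * rng.normal(size=100)
w, frozen = cascade_correlation(X, y)
print(rank_inputs_by_sensitivity(X, y, w, frozen))

In the paper's setting, a ranking of this kind would be used to discard the least relevant variables before retraining, which is the variable-selection step whose effect on prediction ability the abstract reports.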