HOW OWE ARCHITECTURES ENCODE CONTEXTUAL EFFECTS IN ARTIFICIAL NEURAL NETWORKS

Citation
N. Pican and F. Alexandre, HOW OWE ARCHITECTURES ENCODE CONTEXTUAL EFFECTS IN ARTIFICIAL NEURAL NETWORKS, Mathematics and Computers in Simulation, 41(1-2), 1996, pp. 63-74
Citations number
18
Categorie Soggetti
"Computer Sciences", "Mathematics", "Computer Science Interdisciplinary Applications", "Computer Science Software Graphics Programming"
ISSN journal
03784754
Volume
41
Issue
1-2
Year of publication
1996
Pages
63 - 74
Database
ISI
SICI code
0378-4754(1996)41:1-2<63:HOAECE>2.0.ZU;2-9
Abstract
Artificial neural networks (ANNs) are widely used for classification tasks in which both discriminant cues and contextual parameters are proposed as ANN inputs. When the input space is too large to enable robust, time-limited learning, a classical solution consists in designing a set of ANNs for different context domains. We have proposed a new learning algorithm, the lateral contribution learning algorithm (LCLA), based on the backpropagation learning algorithm, which allows for such a solution with reduced learning time and better performance thanks to lateral influences between networks. This attractive but heavy solution has been improved by the orthogonal weight estimator (OWE) technique, an original architectural technique which, under light constraints, merges the set of ANNs into one ANN whose weights are dynamically estimated, for each example, by other ANNs fed with the context. This architecture yields a very rich and interesting interpretation of the weight landscape. We illustrate this interpretation with two examples: a mathematical function estimation and a process model used in neurocontrol.
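The core OWE idea described above (a single main ANN whose weights are produced, per example, by auxiliary context-fed networks) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact architecture: the layer sizes, the use of NumPy, and the names `owe_layer` and `main_net` are assumptions made for illustration, and the estimator parameters are randomly initialised rather than trained.

```python
# Minimal sketch of the OWE principle (hypothetical shapes and names):
# the weights of a "main" network are not stored constants but are
# computed, for each example, by estimator networks fed with the context.
import numpy as np

rng = np.random.default_rng(0)

def owe_layer(context, W1, b1, W2, b2, out_shape):
    """One weight estimator: a small MLP mapping the context vector
    to a full weight matrix of the main network."""
    h = np.tanh(context @ W1 + b1)   # hidden layer of the estimator
    flat = h @ W2 + b2               # one output per main-network weight
    return flat.reshape(out_shape)

# Illustrative dimensions: 3 discriminant cues, 2 context parameters,
# a main network with one hidden layer of 4 units and a scalar output.
n_in, n_ctx, n_hid = 3, 2, 4

# Parameters of the two estimator MLPs (random here; learned in practice).
est1 = (rng.standard_normal((n_ctx, 8)), np.zeros(8),
        rng.standard_normal((8, n_in * n_hid)), np.zeros(n_in * n_hid))
est2 = (rng.standard_normal((n_ctx, 8)), np.zeros(8),
        rng.standard_normal((8, n_hid * 1)), np.zeros(n_hid * 1))

def main_net(x, context):
    """Main network whose weights are dynamically estimated from the context."""
    W_hidden = owe_layer(context, *est1, (n_in, n_hid))
    W_out = owe_layer(context, *est2, (n_hid, 1))
    h = np.tanh(x @ W_hidden)
    return h @ W_out

x = rng.standard_normal(n_in)   # discriminant cues
c1 = np.array([0.0, 1.0])       # one context domain
c2 = np.array([1.0, 0.0])       # another context domain
y1, y2 = main_net(x, c1), main_net(x, c2)
# The same cue vector is mapped differently in the two contexts, because
# the context changes the main network's effective weights.
```

In effect, the family of per-domain ANNs collapses into one network whose weight landscape varies smoothly with the context input, which is what makes the weight interpretation mentioned in the abstract possible.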