Fault-tolerant training of neural networks in the presence of MOS transistor mismatches

Citation
A.S. Ogrenci et al., Fault-tolerant training of neural networks in the presence of MOS transistor mismatches, IEEE Transactions on Circuits and Systems II, 48(3), 2001, pp. 272-281
Citations number
36
Subject Categories
Electrical & Electronics Engineering
Journal title
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-ANALOG AND DIGITAL SIGNAL PROCESSING
ISSN journal
1057-7130
Volume
48
Issue
3
Year of publication
2001
Pages
272 - 281
Database
ISI
SICI code
1057-7130(200103)48:3<272:FTONNI>2.0.ZU;2-6
Abstract
Analog techniques are desirable for hardware implementation of neural networks due to their numerous advantages such as small size, low power, and high speed. However, these advantages are often offset by the difficulty in the training of analog neural network circuitry. In particular, training of the circuitry by software based on hardware models is impaired by statistical variations in the integrated circuit production process, resulting in performance degradation. In this paper, a new paradigm of noise injection during training for the reduction of this degradation is presented. The variations at the outputs of analog neural network circuitry are modeled based on the transistor-level mismatches occurring between identically designed transistors. Those variations are used as additive noise during training to increase the fault tolerance of the trained neural network. The results of this paradigm are confirmed via numerical experiments and physical measurements and are shown to be superior to the case of adding random noise during training.
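
The following is a minimal illustrative sketch of the general idea of noise injection during training, not the paper's actual method: a small feedforward network is trained with Gaussian noise added to its activations to stand in for mismatch-induced output variations. The network size, learning rate, and noise standard deviation are assumptions chosen for illustration; the paper derives the noise statistics from transistor-level mismatch models rather than from a fixed Gaussian.

    import numpy as np

    # Sketch: train a 1-hidden-layer network with additive noise on the
    # "analog" activations, so the learned weights tolerate output variations.
    rng = np.random.default_rng(0)

    def train(X, y, hidden=8, epochs=2000, lr=0.05, noise_std=0.05):
        n_in = X.shape[1]
        W1 = rng.normal(0.0, 0.5, (n_in, hidden))
        W2 = rng.normal(0.0, 0.5, (hidden, 1))
        for _ in range(epochs):
            # Forward pass with additive noise (stand-in for mismatch effects).
            h = np.tanh(X @ W1 + rng.normal(0.0, noise_std, (len(X), hidden)))
            out = h @ W2 + rng.normal(0.0, noise_std, (len(X), 1))
            err = out - y
            # Backpropagate through the noisy forward pass (MSE loss).
            grad_W2 = h.T @ err / len(X)
            grad_h = (err @ W2.T) * (1.0 - h ** 2)
            grad_W1 = X.T @ grad_h / len(X)
            W2 -= lr * grad_W2
            W1 -= lr * grad_W1
        return W1, W2

    # Toy usage on XOR-like data; evaluation is done noise-free.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1, W2 = train(X, y)
    print(np.tanh(X @ W1) @ W2)

In the paper, the key difference from adding arbitrary random noise is that the injected variations are modeled from mismatches between identically designed transistors, which is what the reported experiments show to be superior.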