RESIDUE SYSTOLIC IMPLEMENTATIONS FOR NEURAL NETWORKS

Citation
C.N. Zhang et al., RESIDUE SYSTOLIC IMPLEMENTATIONS FOR NEURAL NETWORKS, NEURAL COMPUTING & APPLICATIONS, 3(3), 1995, pp. 149-156
Citations number
10
Subject Categories
Computer Sciences, Special Topics; Computer Science, Artificial Intelligence
ISSN journal
0941-0643
Volume
3
Issue
3
Year of publication
1995
Pages
149 - 156
Database
ISI
SICI code
0941-0643(1995)3:3<149:RSIFNN>2.0.ZU;2-9
Abstract
In this work we propose two techniques for improving VLSI implementations of artificial neural networks (ANNs). By using two kinds of processing elements (PEs), one dedicated to the basic operations (addition and multiplication) and the other to evaluating the activation function, the total time and cost of the VLSI array implementation of ANNs can be decreased by a factor of two compared with previous work. By taking advantage of the residue number system (RNS), the efficiency of each PE can be further increased. Two RNS-based array processor designs are proposed. The first is built from look-up tables, and the second is constructed from binary adders combined with mixed-radix conversion (MRC), so that the hardware is simple and high-speed operation is obtained. The proposed techniques are general enough to be extended to cover other forms of loading and learning algorithms.
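To make the abstract's key terms concrete, the following is a minimal software sketch of RNS arithmetic and mixed-radix conversion, the two ingredients named above. The moduli set, function names, and the multiply-accumulate example are illustrative assumptions for exposition only; they are not taken from the paper, which describes hardware (look-up-table and binary-adder) realizations rather than software.

```python
# Illustrative sketch of residue-number-system (RNS) arithmetic and
# mixed-radix conversion (MRC). Moduli set and operand values are
# hypothetical, chosen only to demonstrate the technique.

MODULI = (7, 11, 13, 15)  # pairwise coprime; dynamic range = 7*11*13*15 = 15015


def to_rns(x, moduli=MODULI):
    """Encode an integer as its tuple of residues, one per channel."""
    return tuple(x % m for m in moduli)


def rns_mac(acc, w, a, moduli=MODULI):
    """Multiply-accumulate performed independently in each residue channel.
    This carry-free property is what lets each PE operate on small words."""
    return tuple((r + wi * ai) % m for r, wi, ai, m in zip(acc, w, a, moduli))


def from_rns_mrc(residues, moduli=MODULI):
    """Recover the integer via mixed-radix conversion:
    x = a1 + a2*m1 + a3*m1*m2 + ... with each digit a_i < m_i."""
    digits = []
    for i, (xi, mi) in enumerate(zip(residues, moduli)):
        ai = xi
        for j in range(i):
            # Subtract the earlier digit, then divide by m_j modulo m_i.
            ai = (ai - digits[j]) * pow(moduli[j], -1, mi) % mi
        digits.append(ai)
    x, weight = 0, 1
    for ai, mi in zip(digits, moduli):
        x += ai * weight
        weight *= mi
    return x


if __name__ == "__main__":
    w, a = to_rns(23), to_rns(17)
    acc = rns_mac(to_rns(0), w, a)   # 23 * 17 = 391, within the range 15015
    assert from_rns_mrc(acc) == 391
    print(from_rns_mrc(acc))
```

In an RNS, each channel works modulo a small modulus with no carries between channels, so a PE's multiplier and adder stay narrow; MRC then recovers the weighted result (e.g. before applying the activation function) using only small modular inverses, which maps naturally onto the adder-based design mentioned in the abstract.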