MIXING FLOATING-POINT AND FIXED-POINT FORMATS FOR NEURAL-NETWORK LEARNING ON NEUROPROCESSORS

Citation
D. Anguita and B.A. Gomes, MIXING FLOATING-POINT AND FIXED-POINT FORMATS FOR NEURAL-NETWORK LEARNING ON NEUROPROCESSORS, Microprocessing and Microprogramming, 41(10), 1996, pp. 757-769
Citations number
28
Subject Categories
Computer Sciences","Computer Science Hardware & Architecture
ISSN journal
0165-6074
Volume
41
Issue
10
Year of publication
1996
Pages
757 - 769
Database
ISI
SICI code
0165-6074(1996)41:10<757:MFAFFF>2.0.ZU;2-9
Abstract
We examine the efficient implementation of back-propagation (BP) type algorithms on T0 [3], a vector processor with a fixed-point engine designed for neural network simulation. Using Matrix Back Propagation (MBP) [2] we achieve an asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line BP algorithm. We use a mixture of fixed- and floating-point operations in order to guarantee both high efficiency and fast convergence. Although the most expensive computations are implemented in fixed point, we achieve a rate of convergence that is comparable to the floating-point version. The time taken for conversion between fixed- and floating-point formats is also shown to be reasonably low.
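To illustrate the kind of format mixing the abstract describes, the following is a minimal sketch (not the paper's implementation): the costly matrix-vector product of the forward phase runs in 16-bit fixed point with a wider accumulator, while the conversion back to floating point and the learning-rate update stay in floating point. The Q4.12 format, matrix sizes, and all names (matvec_fixed, TO_FIX, TO_FLT, FRAC_BITS) are illustrative assumptions, not taken from the paper.

/*
 * Sketch of mixing fixed- and floating-point arithmetic in one BP step.
 * Assumed Q4.12 fixed-point format; names are hypothetical.
 */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAC_BITS 12                          /* assumed Q4.12 format */
#define TO_FIX(x)  ((int16_t)lrintf((x) * (1 << FRAC_BITS)))
#define TO_FLT(x)  ((float)(x) / (1 << FRAC_BITS))

/* Fixed-point matrix-vector product y = W * x (the expensive computation). */
static void matvec_fixed(const int16_t *W, const int16_t *x,
                         int16_t *y, int rows, int cols)
{
    for (int i = 0; i < rows; ++i) {
        int32_t acc = 0;                      /* wide accumulator avoids overflow */
        for (int j = 0; j < cols; ++j)
            acc += (int32_t)W[i * cols + j] * x[j];
        y[i] = (int16_t)(acc >> FRAC_BITS);   /* rescale result back to Q4.12 */
    }
}

int main(void)
{
    enum { ROWS = 2, COLS = 3 };
    float Wf[ROWS * COLS] = { 0.5f, -0.25f, 0.125f,
                              1.0f,  0.75f, -0.5f };
    float xf[COLS] = { 0.2f, -0.4f, 0.8f };

    /* Convert operands to fixed point once per phase (the conversion cost
       the abstract reports as reasonably low). */
    int16_t W[ROWS * COLS], x[COLS], y[ROWS];
    for (int k = 0; k < ROWS * COLS; ++k) W[k] = TO_FIX(Wf[k]);
    for (int k = 0; k < COLS; ++k)        x[k] = TO_FIX(xf[k]);

    matvec_fixed(W, x, y, ROWS, COLS);

    /* Convert back to float; the update term is kept in floating point
       to preserve convergence. */
    float lr = 0.01f;
    for (int i = 0; i < ROWS; ++i) {
        float yi = TO_FLT(y[i]);
        printf("y[%d] = %f (float update term: %f)\n", i, yi, lr * yi);
    }
    return 0;
}

Compiled with a C99 compiler (link with -lm for lrintf), this prints the fixed-point products converted back to floating point; widening the fractional-bit count trades range for precision, which is the design choice the paper balances between efficiency and convergence.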