The composition and performance of music is a plural activity that combines the outcomes of a number of procedures, many of which involve functions that operate in parallel. In terms of sound synthesis, a significant number of generative and signal-processing operations involve a combination of concurrent elements, ranging from the production of simultaneous notes by a single instrument to the superimposition of totally independent outputs, where a number of different components contribute to the audio spectrum. The traditional computer processor is a serial device, restricted for the most part to the execution of instructions as a single stream of events. Thus processes that require the aggregation of functions executed in parallel must be simulated by some means of cyclical tasking and data accumulation. In the case of digital audio synthesis and signal-processing applications, the resultant effects on overall processor performance quickly become significant, limiting the number of individual components that can be handled in real time.
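By way of illustration, the following minimal sketch in C shows one way such cyclical tasking and data accumulation might be simulated on a serial processor; the Voice structure, NUM_VOICES, and render_block are illustrative assumptions rather than details of any system described here.

    /* Minimal sketch (illustrative only): a serial processor simulates
     * parallel synthesis by cycling through all active components for
     * every output sample and accumulating their contributions. */
    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define NUM_VOICES 32            /* hypothetical number of concurrent components */

    typedef struct {
        double phase;                /* current oscillator phase (radians) */
        double phase_inc;            /* phase increment per sample */
        double amp;                  /* voice amplitude */
    } Voice;

    /* Render one block of audio by cyclical tasking: each sample requires
     * servicing every voice in turn, so the cost per sample grows linearly
     * with NUM_VOICES, bounding real-time polyphony on a serial device. */
    void render_block(Voice voices[NUM_VOICES], double *out, size_t nframes)
    {
        for (size_t n = 0; n < nframes; n++) {
            double mix = 0.0;
            for (int v = 0; v < NUM_VOICES; v++) {   /* simulated parallelism */
                mix += voices[v].amp * sin(voices[v].phase);
                voices[v].phase += voices[v].phase_inc;
                if (voices[v].phase > 2.0 * M_PI)
                    voices[v].phase -= 2.0 * M_PI;
            }
            out[n] = mix / NUM_VOICES;               /* data accumulation */
        }
    }

The inner loop makes the trade-off explicit: every additional component adds to the work that must be completed within each sample period, which is precisely the constraint that motivates a truly parallel architecture.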
About ten years ago, the Music Technology Group at the University of Durham started a series of investigations into the construction of computing architectures for audio applications that embraced a significant degree of true parallelism, based in the first instance on the INMOS Transputer. This article describes some of the most important outcomes of this particular line of investigation, and highlights aspects that hold a particular relevance for future designs of parallel audio processors.