The paper presents a taxonomy of the existing forms of parallel computer architectures, based on the characteristics of the hardware architecture and of the abstract machine layered upon it; the abstract machine reflects the programming models provided. The main classes of hardware architecture are physically shared memory systems and distributed memory systems; distributed memory systems may be remote memory access architectures or message passing architectures. The major forms of abstract machine architecture are message passing systems and logically shared memory architectures. Three solutions for logically shared memory architectures are known: (1) distributed shared memory architectures, (2) multi-threaded architectures, and (3) virtual shared memory architectures. All three types are discussed in detail with respect to performance, programmability, and scalability, and their corresponding programming paradigms are characterized. The implications of the three concepts for node architecture and for the requirements of latency minimization or latency hiding are discussed and illustrated by examples taken from pioneering realizations of the three kinds of architecture, such as DASH, *T, and MANNA.
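
The distinction drawn above between the two abstract machine models can be made concrete with a small sketch that is not taken from the paper: a parallel reduction written against a logically shared address space, here emulated with POSIX threads on a single node. All names, sizes, and thread counts are illustrative assumptions only; a minimal sketch of the programming paradigm, not any machine's actual interface.

/*
 * Illustrative sketch only (not from the paper): the logically shared
 * memory programming model, emulated with POSIX threads.  Every worker
 * reads and writes the same address space directly; coordination uses a
 * lock rather than explicit messages.  Under the message passing model,
 * by contrast, each node would own a private partition of `data` and the
 * partial sums would be exchanged through explicit send/receive calls.
 * Compile with: cc -pthread sum.c
 */
#include <pthread.h>
#include <stdio.h>

#define N_THREADS 4
#define N_ELEMS   1000000

static double data[N_ELEMS];            /* logically shared array      */
static double global_sum = 0.0;         /* shared accumulator          */
static pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;

struct range { size_t lo, hi; };        /* half-open slice [lo, hi)    */

static void *partial_sum(void *arg)
{
    struct range *r = arg;
    double local = 0.0;

    /* Each thread reads its slice of the shared array directly; no
     * explicit communication is needed to reach "remote" data.      */
    for (size_t i = r->lo; i < r->hi; i++)
        local += data[i];

    /* The only synchronization is around the shared accumulator.    */
    pthread_mutex_lock(&sum_lock);
    global_sum += local;
    pthread_mutex_unlock(&sum_lock);
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    struct range slice[N_THREADS];

    for (size_t i = 0; i < N_ELEMS; i++)
        data[i] = 1.0;                  /* expected sum: N_ELEMS       */

    size_t chunk = N_ELEMS / N_THREADS;
    for (int t = 0; t < N_THREADS; t++) {
        slice[t].lo = t * chunk;
        slice[t].hi = (t == N_THREADS - 1) ? N_ELEMS : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, partial_sum, &slice[t]);
    }
    for (int t = 0; t < N_THREADS; t++)
        pthread_join(tid[t], NULL);

    printf("sum = %.0f\n", global_sum);
    return 0;
}

In a message passing formulation of the same reduction, the shared array and the lock disappear: each process keeps only its own partition and the partial results are combined by explicit communication, which is exactly the difference in programming paradigm the taxonomy highlights.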