Program transformation systems are always based on some underlying model of cost. In sequential computation, this cost model is intuitive and hence almost transparent. In parallel computation this is not the case, primarily because of properties of communication, such as latency and congestion. These are hard to model abstractly because they depend on details of the decomposition of computations into threads, their placement on processors, and the way in which communication is handled in the interconnect. We present a framework for classifying cost models and the transformation systems they induce, and illustrate it in the context of three parallel programming models. All of them restrict the flexibility of the programming system or limit the architectures to which they apply. The best trade-off between such restrictions and the expressiveness of the transformation system is an important open problem. The framework suggested here may be useful in the search for further practical cost models.