Irregular computation problems underlie many important scientific applications. Although these problems are computationally expensive, and so would seem appropriate for parallel machines, their irregular and unpredictable run-time behavior makes this type of parallel program difficult to write and adversely affects run-time performance. This paper explores three issues - partitioning, mutual exclusion, and data transfer - crucial to the efficient execution of irregular problems on distributed-memory machines. Unlike previous work, we studied the same programs running under three alternative systems on the same hardware base (a Thinking Machines CM-5): the CHAOS irregular application library, Transparent Shared Memory (TSM), and eXtensible Shared Memory (XSM). CHAOS and XSM performed equivalently for all three applications. Both systems were faster than TSM, by margins ranging from modest (13%) to substantial (991%).