THE ADVANTAGES OF MULTIPLE PARALLELIZATIONS IN COMBINATORIAL SEARCH

Citation
L. Crowl et al., THE ADVANTAGES OF MULTIPLE PARALLELIZATIONS IN COMBINATORIAL SEARCH, Journal of Parallel and Distributed Computing, 21(1), 1994, pp. 110-123
Citations number
18
Categorie Soggetti
Computer Sciences; Computer Science Theory & Methods
ISSN journal
0743-7315
Volume
21
Issue
1
Year of publication
1994
Pages
110 - 123
Database
ISI
SICI code
0743-7315(1994)21:1<110:TAOMPI>2.0.ZU;2-6
Abstract
Applications typically have several potential sources of parallelism, and in choosing a particular parallelization, the programmer must balance the benefits of each source of parallelism with the corresponding overhead. The trade-offs are often difficult to analyze, as they may depend on the hardware architecture, software environment, input data, and properties of the algorithm. An example of this dilemma occurs in a wide range of problems that involve processing trees, wherein processors can be assigned either to separate subtrees, or to parallelizing the work performed on individual tree nodes. We explore the complexity of the trade-offs involved in this decision by considering alternative parallelizations of combinatorial search, examining the factors that determine the best-performing implementation for this important class of problems. Using subgraph isomorphism as a representative search problem, we show how the density of the solution space, the number of solutions desired, the number of available processors, and the underlying architecture all affect the choice of an efficient parallelization. Our experiments, which span seven different shared-memory multiprocessors and a wide range of input graphs, indicate that relative performance depends on each of these factors. On some machines and for some inputs, a sequential depth-first search of the solution space, applying simple loop-level parallelism at each node in the search tree, performs best. On other machines or other inputs, parallel tree search performs best. In still other cases, a hybrid solution, containing both parallel tree search and loop parallelism, works best. We present a quantitative analysis that explains these results and present experimental data culled from thousands of program executions that validates the analysis.
From these experiences we conclude that there is no one "best" parallelization that suffices over a range of machines, inputs, and precise problem specifications. As a corollary, we provide quantitative evidence that programming environments and languages should not focus exclusively on flat data parallelism, since nested parallelism or hybrid forms of parallelism may be required for an efficient implementation of some applications. (C) 1994 Academic Press, Inc.
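The contrast the abstract draws between a sequential depth-first search and a parallel tree search can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the graph representation (adjacency sets), the function names, and the thread-based subtree parallelism are all assumptions, and Python threads here model the structure of the parallelization rather than its actual speedup on a shared-memory multiprocessor.

```python
from concurrent.futures import ThreadPoolExecutor

def count_embeddings(pattern, target, mapping=None):
    """Sequential depth-first search for subgraph isomorphism.

    Graphs are lists of adjacency sets; pattern vertices 0..n-1 are
    assigned in order, extending a partial vertex mapping at each node
    of the search tree and pruning mappings that violate an edge.
    """
    mapping = mapping or {}
    if len(mapping) == len(pattern):
        return 1  # complete embedding found
    v = len(mapping)  # next pattern vertex to assign
    total = 0
    for w in range(len(target)):
        if w in mapping.values():
            continue  # mapping must be injective
        # every pattern edge from v to an already-mapped vertex
        # must map onto a target edge
        if all(mapping[u] in target[w] for u in pattern[v] if u in mapping):
            total += count_embeddings(pattern, target, {**mapping, v: w})
    return total

def count_embeddings_tree_parallel(pattern, target, workers=4):
    """Parallel tree search: each top-level subtree (one candidate image
    for pattern vertex 0) is searched by a separate worker."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(count_embeddings, pattern, target, {0: w})
                   for w in range(len(target))]
        return sum(f.result() for f in futures)

# Embeddings of a triangle into the complete graph K4:
triangle = [{1, 2}, {0, 2}, {0, 1}]
k4 = [{1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}]
print(count_embeddings(triangle, k4))                # → 24
print(count_embeddings_tree_parallel(triangle, k4))  # → 24
```

The hybrid strategy the abstract mentions would additionally parallelize the candidate loop inside `count_embeddings` at each tree node; which combination wins depends, as the paper argues, on the machine, the input graphs, and the number of solutions desired.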