Logic programs are highly amenable to parallelization, and their level of abstraction relieves the programmer of many of the most difficult and error-prone details of parallel programming. However, tuning the performance of a parallel logic program is nontrivial. While working with programmers, we noticed that they evolved strategies based on observed parallel performance. This paper illustrates some pitfalls inherent in this approach, using simple examples whose behaviour does not depend upon a particular task scheduling algorithm, and which are mostly non-speculative and therefore of general interest. It has two aims: to make parallel logic programmers more aware of such pitfalls, and to pose a challenge to future runtime analysis tools.