Boppana (1989) proves a lower bound separating the PRIORITY and the COMMON PRAM models that is optimal to within a constant factor. However, an essential ingredient in his proof is a problem with an enormously large input domain. In this paper, I achieve the same lower bound with the improvement that it applies even when the computational problem is defined on a much more reasonably sized input domain. My new techniques provide a greater understanding of the partial information a processor learns about the input. In addition, I define a new measure of the dependency that a function has on a variable and develop new set-theoretic techniques to replace the use of Ramsey theory (which had forced the domain size to be large).