This paper is concerned with recurrence relations that arise frequently in the analysis of divide-and-conquer algorithms. In order to solve a problem instance of size x, such an algorithm invests an amount of work a(x) to break the problem into subproblems of sizes h_1(x), h_2(x), ..., h_k(x), and then proceeds to solve the subproblems. Our particular interest is in the case where the sizes h_i(x) are random variables; this may occur either because of randomization within the algorithm or because the instances to be solved are assumed to be drawn from a probability distribution. When the h_i are random variables, the running time of the algorithm on instances of size x is also a random variable T(x). We give several easy-to-apply methods for obtaining fairly tight bounds on the upper tails of the probability distribution of T(x), and present a number of typical applications of these bounds to the analysis of algorithms. The proofs of the bounds are based on an interesting analysis of optimal strategies in certain gambling games.
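To make the setup concrete, the following sketch simulates one instance of such a probabilistic recurrence: T(n) = n + T(H(n)) with T(0) = 0, where H(n) is uniform on {0, ..., n-1}, a simplified model of a randomized selection process. The recurrence, the helper names, and the Monte Carlo tail estimator are illustrative assumptions, not the paper's method (which gives analytic tail bounds rather than simulation).

```python
import random

def simulate_T(n):
    """Draw one sample of T(n) for the probabilistic recurrence
    T(n) = n + T(H(n)), T(0) = 0, where H(n) is uniform on
    {0, 1, ..., n-1} -- an assumed toy model of randomized selection."""
    total = 0
    while n > 0:
        total += n               # work a(n) = n to split the problem
        n = random.randrange(n)  # random subproblem size H(n)
    return total

def tail_estimate(n, threshold, trials=10000):
    """Monte Carlo estimate of Pr[T(n) > threshold]."""
    exceed = sum(simulate_T(n) > threshold for _ in range(trials))
    return exceed / trials

if __name__ == "__main__":
    n = 1000
    # For this recurrence E[T(n)] is roughly 2n; probe the upper tail.
    for c in (2, 4, 8):
        print(c * n, tail_estimate(n, c * n))
```

The paper's bounds would replace the simulation with closed-form estimates of how fast Pr[T(n) > c·n] decays in c.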