We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: the optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limit diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover, they allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information, and the financial cost of learning in the Bayesian problem.
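As an illustration of the discrete-time result, the following is a minimal sketch of a posterior-mean Kelly rule in the even-odds coin-tossing setting, where the classical Kelly fraction 2p - 1 is evaluated at the Beta-posterior mean of the unknown success probability; the function names, prior parameters, and simulation setup are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def bayes_kelly_fraction(wins, losses, a=1.0, b=1.0):
    """Posterior-mean Kelly fraction for an even-odds bet.

    With a Beta(a, b) prior on the unknown success probability p and
    `wins`/`losses` observed so far, the posterior mean of p is
    (a + wins) / (a + b + wins + losses).  For an even-odds bet the
    classical Kelly fraction is 2p - 1, so the Bayesian analogue bets
    the posterior mean increment 2*E[p | data] - 1 (floored at 0).
    """
    p_hat = (a + wins) / (a + b + wins + losses)
    return max(2.0 * p_hat - 1.0, 0.0)

def simulate(p_true=0.6, n_bets=1000, a=1.0, b=1.0, seed=0):
    """Simulate wealth growth under the posterior-mean Kelly rule."""
    rng = np.random.default_rng(seed)
    wealth, wins, losses = 1.0, 0, 0
    for _ in range(n_bets):
        f = bayes_kelly_fraction(wins, losses, a, b)
        if rng.random() < p_true:   # win: gain the staked fraction
            wealth *= 1.0 + f
            wins += 1
        else:                        # loss: lose the staked fraction
            wealth *= 1.0 - f
            losses += 1
    return wealth

if __name__ == "__main__":
    print(f"terminal wealth: {simulate():.2f}")
```

Under these assumptions the rule is self-correcting: early bets are small because the posterior mean is close to 1/2, and the staked fraction grows only as the data accumulate in favor of a profitable edge.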