This article describes a new Metropolis-like transition rule, the multiple-try Metropolis, for Markov chain Monte Carlo (MCMC) simulations. By using this transition rule together with adaptive direction sampling, we propose a novel method for incorporating local optimization steps into an MCMC sampler in a continuous state space. Numerical studies show that the new method performs significantly better than the traditional Metropolis-Hastings (M-H) sampler. With minor tailoring, the multiple-try method can also be exploited to achieve the effect of a griddy Gibbs sampler without having to resort to griddy approximations, and the effect of a hit-and-run algorithm without having to derive the required conditional distribution along a random direction.
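
To make the transition rule concrete, here is a minimal sketch of one multiple-try Metropolis step in Python. It assumes a symmetric Gaussian random-walk proposal, a common special case in which the weight function w(x, y) = pi(x)T(x, y)lambda(x, y) can be chosen to reduce to pi(x); the names mtm_step and log_pi and the parameters k and sigma are illustrative, not notation from the paper.

```python
import numpy as np

def mtm_step(x, log_pi, k=5, sigma=1.0, rng=None):
    """One multiple-try Metropolis transition with a symmetric Gaussian
    random-walk proposal, so the trial weight w(y, x) reduces to pi(y)."""
    if rng is None:
        rng = np.random.default_rng()
    # Draw k trial points y_1, ..., y_k from N(x, sigma^2 I).
    trials = x + sigma * rng.standard_normal((k, x.size))
    logw = np.array([log_pi(y) for y in trials])
    # Select one trial y with probability proportional to pi(y_j).
    p = np.exp(logw - logw.max())
    j = rng.choice(k, p=p / p.sum())
    y = trials[j]
    # Draw k-1 reference points from the proposal centered at y,
    # and take the current state x as the k-th reference point.
    refs = y + sigma * rng.standard_normal((k - 1, x.size))
    logw_ref = np.append([log_pi(z) for z in refs], log_pi(x))
    # Generalized acceptance ratio: sum of trial weights over the
    # sum of reference weights, evaluated on the log scale.
    log_ratio = np.logaddexp.reduce(logw) - np.logaddexp.reduce(logw_ref)
    return y if np.log(rng.uniform()) < min(0.0, log_ratio) else x

# Hypothetical usage: sampling from a 2-D standard normal target.
log_pi = lambda z: -0.5 * z @ z
x, draws = np.zeros(2), []
for _ in range(5000):
    x = mtm_step(x, log_pi, k=5, sigma=2.0)
    draws.append(x)
```

Because several trials are examined at each step, larger proposal steps (sigma) remain usable than in a plain M-H random walk, which is the intuition behind the improved performance reported above.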