State-dependent importance sampling for regularly varying random walks

Citation
Blanchet, J. and Liu, J., State-dependent importance sampling for regularly varying random walks, Advances in Applied Probability, 40(2), 2008, pp. 1104-1128
ISSN journal
00018678
Volume
40
Issue
2
Year of publication
2008
Pages
1104 - 1128
Database
ACNP
SICI code
Abstract
Consider a sequence (X_k: k ≥ 0) of regularly varying independent and identically distributed random variables with mean 0 and finite variance. We develop efficient rare-event simulation methodology associated with large deviation probabilities for the random walk (S_n: n ≥ 0). Our techniques are illustrated by examples, including large deviations for the empirical mean and path-dependent events. In particular, we describe two efficient state-dependent importance sampling algorithms for estimating the tail of S_n in a large deviation regime as n → ∞. The first algorithm takes advantage of large deviation approximations that are used to mimic the zero-variance change of measure. The second algorithm uses a parametric family of changes of measure based on mixtures. Lyapunov-type inequalities are used to appropriately select the mixture parameters in order to guarantee bounded relative error (or efficiency) of the estimator. The second example involves a path-dependent event related to a so-called knock-in financial option under heavy-tailed log returns. Again, the importance sampling algorithm is based on a parametric family of mixtures which is selected using Lyapunov bounds. In addition to the theoretical analysis of the algorithms, numerical experiments are provided in order to test their empirical performance.
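
The mixture-based approach summarized above can be illustrated, at a very rough level, with a short simulation sketch. The code below is not the authors' algorithm: it estimates P(S_n > b) for a mean-zero, regularly varying (centred Pareto) random walk using a two-component state-dependent mixture proposal in which the "big jump" component conditions the next increment on exceeding the remaining distance to the level b. The mixture weight p is a heuristic stand-in for the Lyapunov-tuned parameters described in the abstract, and all names and parameter values are illustrative assumptions.

```python
# Illustrative sketch only (not the algorithm from the paper): state-dependent
# importance sampling with a two-component mixture for P(S_n > b), where the
# increments are centred Pareto variables (regularly varying, mean 0, finite
# variance).  The mixture weight p below is a heuristic stand-in for the
# Lyapunov-tuned parameters described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

ALPHA = 2.5                      # Pareto tail index (> 2, so finite variance)
XM = (ALPHA - 1.0) / ALPHA       # scale chosen so that E[Pareto(XM, ALPHA)] = 1
SHIFT = 1.0                      # subtracting the mean centres the increment


def nominal_tail(x):
    """P(X > x) for the centred increment X = Pareto(XM, ALPHA) - SHIFT."""
    y = x + SHIFT
    return 1.0 if y <= XM else (XM / y) ** ALPHA


def sample_nominal():
    """Draw one centred Pareto increment under the nominal law."""
    return XM * rng.random() ** (-1.0 / ALPHA) - SHIFT


def sample_big_jump(gap):
    """Draw X from the nominal law conditioned on {X > gap} (single big jump)."""
    y0 = max(gap + SHIFT, XM)
    return y0 * rng.random() ** (-1.0 / ALPHA) - SHIFT


def estimate_tail_prob(n, b, reps=10_000):
    """Mixture importance-sampling estimate of P(S_n > b), with a std. error."""
    vals = np.empty(reps)
    for r in range(reps):
        s, lr = 0.0, 1.0
        for k in range(n):
            gap = b - s                        # distance left to the level b
            if gap > 0:
                # heuristic state-dependent mixture weight, capped away from 1
                p = min(0.5, (n - k) * nominal_tail(gap))
                x = sample_big_jump(gap) if rng.random() < p else sample_nominal()
                # likelihood ratio f(x) / q(x) for the mixture proposal
                #   q(x) = (1 - p) f(x) + p f(x) 1{x > gap} / P(X > gap)
                denom = (1.0 - p) + (p / nominal_tail(gap) if x > gap else 0.0)
                lr /= denom
            else:
                x = sample_nominal()           # level already crossed
            s += x
        vals[r] = lr if s > b else 0.0
    return vals.mean(), vals.std(ddof=1) / np.sqrt(reps)


if __name__ == "__main__":
    est, se = estimate_tail_prob(n=50, b=30.0, reps=20_000)
    print(f"estimated P(S_50 > 30) ~ {est:.3e}  (std. error {se:.1e})")
```

Under the naive (crude Monte Carlo) estimator, virtually no sample would hit the rare event, whereas the mixture proposal forces a big jump with state-dependent probability and corrects for it through the likelihood ratio; the paper's contribution is, in part, how to choose those mixture parameters via Lyapunov bounds so that the relative error stays bounded as n grows.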