Uniform convergence of exact large deviations for renewal reward processes

Authors
Chi, Zhiyi
Citation
Chi, Zhiyi, Uniform convergence of exact large deviations for renewal reward processes, Annals of Applied Probability, 17(3), 2007, pp. 1019-1048
ISSN journal
1050-5164
Volume
17
Issue
3
Year of publication
2007
Pages
1019 - 1048
Database
ACNP
SICI code
Abstract
Let (X_n, Y_n) be i.i.d. random vectors. Let W(x) be the partial sum of the Y_n just before the partial sum of the X_n exceeds x > 0. Motivated by stochastic models for neural activity, uniform convergence of the form sup_{c ∈ I} |a(c, x) Pr{W(x) ≥ cx} − 1| = o(1), as x → ∞, is established for probabilities of large deviations, with a(c, x) a deterministic function and I an open interval. To obtain this uniform exact large deviations principle (LDP), we first establish the exponentially fast uniform convergence of a family of renewal measures and then apply it to appropriately tilted distributions of X_n and to the moment generating function of W(x). The uniform exact LDP is obtained for cases where X_n has a subcomponent with a smooth density and Y_n is not a linear transform of X_n. An extension is also made to the partial sum at the first exceedance time.
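The quantity W(x) defined in the abstract can be illustrated with a short Monte Carlo sketch. This is only an illustration of the definition, not the paper's method: the distributional choices X_n ~ Exp(1) and Y_n ~ N(0, 1), and the function names sample_W and ldp_prob, are assumptions made here for the example.

```python
import random


def sample_W(x, rng):
    """Simulate W(x): the partial sum of the Y_n accumulated just
    before the partial sum of the X_n first exceeds x.

    X_n ~ Exp(1) and Y_n ~ N(0, 1), drawn independently, are
    illustrative choices, not taken from the paper.
    """
    s_x = 0.0  # running partial sum of the X_n
    s_y = 0.0  # running partial sum of the Y_n
    while True:
        x_n = rng.expovariate(1.0)
        if s_x + x_n > x:
            return s_y  # stop just BEFORE the exceedance of x
        s_x += x_n
        s_y += rng.gauss(0.0, 1.0)


def ldp_prob(x, c, n_trials=10_000, seed=0):
    """Crude Monte Carlo estimate of the large-deviation probability
    Pr{W(x) >= c x} appearing in the abstract."""
    rng = random.Random(seed)
    hits = sum(sample_W(x, rng) >= c * x for _ in range(n_trials))
    return hits / n_trials
```

For a fixed c in the relevant open interval I, the abstract's result says that a(c, x) * Pr{W(x) ≥ cx} → 1 uniformly in c as x → ∞; a simulation like the above can only probe small x, since the probability itself decays exponentially.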