Ergodic theory for controlled Markov chains with stationary inputs

Citation
Chen, Yue et al., Ergodic theory for controlled Markov chains with stationary inputs, Annals of Applied Probability, 28(2), 2018, pp. 79-111
ISSN journal
1050-5164
Volume
28
Issue
2
Year of publication
2018
Pages
79 - 111
Database
ACNP
Abstract
Consider a stochastic process $X$ on a finite state space $\mathsf{X} = \{1, \dots, d\}$. It is conditionally Markov, given a real-valued "input process" $\zeta$. This is assumed to be small, which is modeled through the scaling $\zeta_t = \varepsilon \zeta^1_t$, $0 \le \varepsilon \le 1$, where $\zeta^1$ is a bounded stationary process. The following conclusions are obtained, subject to smoothness assumptions on the controlled transition matrix and a mixing condition on $\zeta$:

(i) A stationary version of the process is constructed that is coupled with a stationary version of the Markov chain $X^\bullet$ obtained with $\zeta \equiv 0$. The triple $(X, X^\bullet, \zeta)$ is a jointly stationary process satisfying $\mathsf{P}\{X(t) \neq X^\bullet(t)\} = O(\varepsilon)$. Moreover, a second-order Taylor-series approximation is obtained: $\mathsf{P}\{X(t) = i\} = \mathsf{P}\{X^\bullet(t) = i\} + \varepsilon^2 \pi^{(2)}(i) + o(\varepsilon^2)$, $1 \le i \le d$, with an explicit formula for the vector $\pi^{(2)} \in \mathbb{R}^d$.

(ii) For any $m \ge 1$ and any function $f \colon \{1, \dots, d\} \times \mathbb{R} \to \mathbb{R}^m$, the stationary stochastic process $Y(t) = f(X(t), \zeta(t))$ has a power spectral density $S_f$ that admits a second-order Taylor series expansion: a function $S_f^{(2)} \colon [-\pi, \pi] \to \mathbb{C}^{m \times m}$ is constructed such that $S_f(\theta) = S_f^\bullet(\theta) + \varepsilon^2 S_f^{(2)}(\theta) + o(\varepsilon^2)$, $\theta \in [-\pi, \pi]$, in which the first term is the power spectral density obtained with $\varepsilon = 0$. An explicit formula for the function $S_f^{(2)}$ is obtained, based in part on the bounds in (i).

The results are illustrated with two general examples: mean field games, and a version of the timing channel of Anantharam and Verdu.
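
For intuition, the following is a minimal simulation sketch of the setup described above, not a construction from the paper: a chain on $d = 3$ states whose transition matrix $P(\zeta) = P_0 + \zeta E$ depends linearly on a small bounded stationary input $\zeta_t = \varepsilon \zeta^1_t$. The matrices P0 and E, the tanh-smoothed AR(1) input, and all parameter values are illustrative assumptions; the sketch only checks numerically that the empirical marginal distribution of the controlled chain stays close to the stationary distribution of the nominal chain ($\varepsilon = 0$), in line with the coupling bound in (i).

import numpy as np

# Illustrative sketch only -- the model below (states, P0, E, the AR(1)-type
# input) is a made-up example, not the construction used in the paper.
rng = np.random.default_rng(0)
d = 3

# Nominal transition matrix P0 (the eps = 0 chain) and a perturbation
# direction E with zero row sums, so that P(zeta) = P0 + zeta * E remains a
# stochastic matrix for |zeta| small.
P0 = np.array([[0.6, 0.3, 0.1],
               [0.2, 0.5, 0.3],
               [0.3, 0.3, 0.4]])
E = np.array([[-0.2,  0.1,  0.1],
              [ 0.1, -0.2,  0.1],
              [ 0.1,  0.1, -0.2]])

def stationary(P):
    # Left Perron eigenvector of P, normalized to a probability vector.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def empirical_marginal(eps, T=100_000):
    # Simulate X driven by zeta_t = eps * zeta1_t, where zeta1 is a bounded
    # stationary input (an AR(1) recursion squashed through tanh).
    x, zeta1 = 0, 0.0
    counts = np.zeros(d)
    for _ in range(T):
        zeta1 = np.tanh(0.8 * zeta1 + 0.5 * rng.standard_normal())
        P = P0 + eps * zeta1 * E
        x = rng.choice(d, p=P[x])
        counts[x] += 1
    return counts / T

pi0 = stationary(P0)  # stationary law of the nominal (eps = 0) chain
for eps in (0.0, 0.05, 0.1, 0.2):
    gap = np.abs(empirical_marginal(eps) - pi0).sum()
    print(f"eps = {eps:0.2f}: L1 gap between empirical marginal and pi0 ~ {gap:0.4f}")

According to result (i), the deviation of the marginal law is in fact of order $\varepsilon^2$ (the first-order term vanishes); resolving that rate numerically would require far longer runs than this sketch uses, so the output should be read only as the gap shrinking for small $\varepsilon$.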