This work develops asymptotically optimal controls for discrete-time singularly perturbed Markov decision processes (MDPs) with weak and strong interactions. The focus is on finite-state-space MDP problems. The state space of the underlying Markov chain can be decomposed into a number of recurrent classes, or into a number of recurrent classes together with a group of transient states. Using a hierarchical control approach, continuous-time limit problems are derived that are much simpler to handle than the original ones. Based on the optimal solutions of the limit problems, nearly optimal decisions for the original problems are constructed. The asymptotic optimality of such controls is proved, and the rate of convergence is provided. Infinite-horizon problems are considered; both discounted costs and long-run average costs are examined.
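The weak-and-strong-interaction structure described above can be illustrated with a toy example. The sketch below (a hypothetical instance, not taken from the paper) builds a 4-state transition matrix of the form P^eps = P + eps*Q, where the dominant part P is block diagonal with two recurrent classes and the perturbation Q weakly couples them; within each class the fast dynamics settle to that block's stationary distribution, which is the quantity the aggregated limit problem averages over.

```python
import numpy as np

eps = 0.01  # small perturbation parameter

# Dominant part: block diagonal, each block an irreducible recurrent class.
P = np.array([[0.3, 0.7, 0.0, 0.0],
              [0.6, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.2, 0.8]])

# Weak interaction: rows of Q sum to 0, so P + eps*Q remains stochastic
# for small eps; its off-diagonal blocks couple the two classes.
Q = np.array([[-1.0,  0.0,  1.0,  0.0],
              [ 0.0, -1.0,  0.0,  1.0],
              [ 1.0,  0.0, -1.0,  0.0],
              [ 0.0,  1.0,  0.0, -1.0]])

P_eps = P + eps * Q
assert np.allclose(P_eps.sum(axis=1), 1.0) and (P_eps >= 0).all()

def stationary(M):
    """Stationary distribution of an irreducible stochastic matrix M."""
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvector for eigenvalue 1
    return v / v.sum()

# Per-class stationary distributions of the fast (dominant) dynamics.
nu1 = stationary(P[:2, :2])
nu2 = stationary(P[2:, 2:])
print(nu1, nu2)
```

Over time spans of order 1/eps the chain behaves like a two-state aggregated process jumping between the classes, which is the kind of simpler continuous-time limit problem the hierarchical approach exploits.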