Consider a finite-state Markov chain in which the transition probabilities differ by orders of magnitude. This Markov chain has an "attractor state," i.e., from any state of the Markov chain there exists a sample path of significant probability to the attractor state. There also exists a "rare set," which is accessible from the attractor state only by sample paths of very small probability. The problem is to estimate the probability that, starting from the attractor state, the Markov chain hits the rare set before returning to the attractor state. Examples of this setting arise in reliability models with highly reliable components as well as in queueing networks with low traffic. Importance sampling is a commonly used simulation technique for the fast estimation of rare-event probabilities. It involves simulating the Markov chain under a new probability measure that emphasizes the most likely paths to the rare set. Previous research focused on developing importance-sampling schemes for a special case of Markov chains that did not include "high-probability cycles." We show, through examples, that the Markov chains used to model many commonly encountered systems do have high-probability cycles, and that existing importance-sampling schemes can lead to infinite variance in simulating such systems. We then develop the insight that, in the presence of high-probability cycles, care should be taken in allocating the new transition probabilities so that the variance accumulated over these cycles does not grow without bound. Based on this observation we develop two importance-sampling techniques that have the bounded relative error property, i.e., the simulation run-length required to estimate the rare-event probability to a fixed degree of accuracy remains bounded as the event of interest becomes more rare.
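The basic likelihood-ratio estimator described above can be sketched on a toy example. The three-state chain below is hypothetical (not one of the paper's models): state 0 is the attractor, state 2 is the rare set, and failure transitions have probability EPS under the original measure P, so the exact hitting probability is EPS². The change of measure Q boosts the failure transitions, and each sampled path is reweighted by the product of the per-step ratios P/Q; this is a minimal sketch of importance sampling in general, not of the two bounded-relative-error schemes the abstract announces.

```python
import random

# Toy 3-state chain: state 0 is the attractor, state 2 is the rare set.
# Under the original measure P, each "failure" transition has probability EPS,
# so P(hit state 2 before returning to state 0) = EPS * EPS exactly.
EPS = 1e-3
P = {
    0: {0: 1 - EPS, 1: EPS},  # attractor: mostly returns to itself
    1: {0: 1 - EPS, 2: EPS},  # intermediate: usually "repairs" back to 0
    2: {2: 1.0},              # rare set: absorbing for a single run
}
# Importance-sampling measure Q: failure transitions boosted to 1/2.
Q = {
    0: {0: 0.5, 1: 0.5},
    1: {0: 0.5, 2: 0.5},
    2: {2: 1.0},
}

def sample_step(dist, rng):
    """Draw the next state from a {state: probability} dict."""
    states, probs = zip(*dist.items())
    return rng.choices(states, weights=probs)[0]

def is_estimate(n_runs, seed=0):
    """Estimate P(hit rare set before returning to attractor) from state 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        state, weight, first = 0, 1.0, True
        # Take at least one step away from the attractor, then stop as soon
        # as the chain returns to state 0 (miss) or reaches state 2 (hit).
        while first or state not in (0, 2):
            first = False
            nxt = sample_step(Q[state], rng)
            weight *= P[state][nxt] / Q[state][nxt]  # likelihood ratio
            state = nxt
        if state == 2:
            total += weight  # unbiased: E_Q[L * 1{hit}] = P_P(hit)
    return total / n_runs
```

Note that this toy chain has no high-probability cycle away from the attractor, which is precisely why the naive change of measure behaves well here; the abstract's point is that when such cycles exist, the likelihood ratio accumulated around them can make the variance of this estimator infinite.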