We consider the problem of maximizing a weighted sum of expected steady-state rewards in multiclass loss networks under dynamic routing and admission control, with Poisson arrivals and exponentially distributed holding times. By aggregating the underlying Markov decision process, we derive linear programming relaxations that contain the achievable performance region under all admissible policies and lead to a series of progressively tighter upper bounds; the stronger bounds come at the expense of higher computation times. From the relaxations we further propose a series of routing and admission control policies that, in computational experiments, outperform other heuristic policies, such as dynamic alternative routing with trunk reservation and least busy alternative routing, variations of which are used in practice. The suboptimality guarantees, defined as the ratio of the best bound to the best policy, range from 0 to 2.5% under symmetric conditions (symmetric network topology, arrival rates, link capacities, and rewards), and from 4% to 10% under asymmetric conditions. We discuss the qualitative behavior of these policies and obtain insights into their performance.
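As a minimal illustration of the steady-state objective (a toy single-link case, not the paper's relaxations or policies), the sketch below computes the blocking probability of an M/M/C/C loss system via the standard Erlang B formula and the resulting reward rate for carried traffic; all function names are our own for illustration.

```python
from math import factorial

def erlang_b(offered_load, capacity):
    """Blocking probability of an M/M/C/C loss system (Erlang B formula)."""
    num = offered_load**capacity / factorial(capacity)
    den = sum(offered_load**k / factorial(k) for k in range(capacity + 1))
    return num / den

def expected_reward_rate(arrival_rate, mean_holding, capacity, reward):
    """Steady-state reward rate: per-call reward times the carried arrival rate."""
    load = arrival_rate * mean_holding  # offered load in Erlangs
    return reward * arrival_rate * (1.0 - erlang_b(load, capacity))
```

For example, a link of capacity 2 offered 2 Erlangs blocks 40% of calls, so only 60% of the arrival stream contributes to the reward; in the multiclass network setting of the paper, the corresponding steady-state probabilities are coupled across links and classes, which is what the LP relaxations bound from above.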