By now it is known that several problems in the robustness analysis and synthesis of control systems are NP-complete or NP-hard. These negative results force us to modify our notion of "solving" a given problem. If we cannot solve a problem exactly because it is NP-hard, then we must settle for solving it approximately. If we cannot solve all instances of a problem, we must settle for solving "almost all" instances. An approach that has recently been gaining popularity is the use of randomized algorithms. The notion of a randomized algorithm as defined here is somewhat different from that in the computer science literature, and it enlarges the class of problems that can be efficiently solved.
We begin with the premise that many problems in robustness analysis and synthesis can be formulated as the minimization of an objective function with respect to the controller parameters. It is argued that, in order to assess the performance of a controller as the plant varies over a prespecified family, it is better to use the average performance of the controller as the objective function to be minimized, rather than its worst-case performance, since the worst-case objective usually leads to rather conservative designs.
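To make the averaged objective concrete, here is a minimal Monte Carlo sketch of estimating a controller's average performance over the plant family; the callables sample_plant and cost, and the sample size m, are illustrative assumptions rather than constructs taken from the paper.

import random

def empirical_average_cost(theta, sample_plant, cost, m=1000, seed=0):
    """Estimate the average cost E_P[cost(theta, P)] of the controller
    parameters theta by averaging over m plants drawn at random from the
    prespecified family.  Assumed interfaces:
      sample_plant(rng) -> one plant drawn from the uncertainty model
      cost(theta, plant) -> performance measure, bounded in [0, 1]
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        total += cost(theta, sample_plant(rng))
    return total / m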
Then it is shown that a property from statistical learning theory known as uniform convergence of empirical means (UCEM) plays an important role in allowing us to construct efficient randomized algorithms for a wide variety of controller synthesis problems. In particular, whenever the UCEM property holds, there exists an efficient (i.e., polynomial-time) randomized algorithm.
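For reference, the UCEM property can be stated as follows (in standard statistical-learning notation, not quoted from the paper): a family $\mathcal{F}$ of measurable functions $f : X \to [0,1]$ has the UCEM property with respect to a probability measure $P$ if, for i.i.d. samples $x_1, \dots, x_m$ drawn from $P$,

$$ \sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^{m} f(x_i) - \mathbb{E}_P[f] \right| \to 0 \quad \text{almost surely as } m \to \infty. $$

When this holds, a single random sample of plants suffices to assess all candidate controllers simultaneously, which is the key step behind the efficiency claim.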
Using very recent results in VC-dimension theory, it is shown that the UCEM property holds in several problems, such as robust stabilization and weighted H-2/H-infinity-norm minimization. Hence it is possible to solve such problems efficiently using randomized algorithms.
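To illustrate how such a randomized algorithm can look end to end, the following is a minimal sketch for the special case of a finite set of candidate controllers with costs bounded in [0, 1]; in that case the Hoeffding inequality plus a union bound already gives a sample size that grows only logarithmically in the number of candidates. The names (hoeffding_sample_size, randomized_synthesis, sample_plant, cost) are hypothetical, and the finite-candidate setting is a simplification of the UCEM-based treatment.

import math
import random

def hoeffding_sample_size(eps, delta, n_candidates):
    """Smallest m such that, for costs in [0, 1], all n_candidates empirical
    averages lie within eps of their true means with probability >= 1 - delta
    (Hoeffding's inequality combined with a union bound)."""
    return math.ceil(math.log(2 * n_candidates / delta) / (2 * eps ** 2))

def randomized_synthesis(candidates, sample_plant, cost, eps, delta, seed=0):
    """Return the candidate controller with the smallest empirical average
    cost over a common random sample of plants.  With probability at least
    1 - delta, the returned controller's true average cost is within
    2 * eps of the best candidate's."""
    rng = random.Random(seed)
    m = hoeffding_sample_size(eps, delta, len(candidates))
    plants = [sample_plant(rng) for _ in range(m)]
    return min(candidates,
               key=lambda th: sum(cost(th, p) for p in plants) / m)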