The automation of complex systems can lead to situations of man-machine mismatch, conducive to human errors and accidents. Many types of counter-measures are foreseen, such as operator training, man-system interface improvement and the use of on-line expert systems. Improving the collaboration between operators and automation is another kind of measure. The philosophy of assistance considered in this article consists of preserving or supporting the operators' control and risk-management strategies and of maintaining opportunities for learning-by-doing, while developing safety envelopes aimed at filtering any erroneous command and allowing automated responses in the absence of a human command. The practice of such a philosophy requires a technological intelligence capable of adapting itself to operators. Systems with such properties belong to the class of adaptive systems. Without claiming to be exhaustive, this article reviews research and practical applications which use this paradigm.

Adaptive systems use models of their environment or of their users (including their behaviours or intentions) in order to adapt themselves dynamically to the actions and strategies of the users. Most of them also demonstrate anticipation capabilities. Adaptive assistance takes advantage of these properties by creating conditions conducive to a human-like collaboration between agents. This paradigm indeed features some properties of human interactions, in which each actor builds and uses models of his partners in order to understand and anticipate their activity. The intelligibility, the predictability and finally the reliability of the interactions, however, depend on the coherence and expressiveness of these models. As any assistance system is itself fallible or of limited application, the last part of the article discusses some side-effects of adaptive assistance, in terms of induced risks and user acceptance. It concludes by recognising the promising but also very demanding character of this line of research.
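The filtering role of a safety envelope, as described above, can be illustrated with a minimal sketch: operator commands are checked against safe bounds, and a default automated response is issued when no human command arrives. All names and limits below are illustrative assumptions, not taken from the article.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SafetyEnvelope:
    """Hypothetical safety envelope around a single scalar command."""
    min_value: float
    max_value: float
    fallback: float  # automated response when no human command is given

    def filter(self, command: Optional[float]) -> float:
        # No human command: produce an automated response instead.
        if command is None:
            return self.fallback
        # Erroneous (out-of-envelope) command: clamp it to the safe bounds,
        # preserving the operator's intent as far as safety allows.
        return max(self.min_value, min(self.max_value, command))


envelope = SafetyEnvelope(min_value=0.0, max_value=100.0, fallback=50.0)
print(envelope.filter(120.0))  # out-of-envelope command clamped to 100.0
print(envelope.filter(None))   # no command: automated fallback 50.0
```

Note that the envelope only intervenes at the boundary: in-envelope commands pass through unchanged, which is one way of preserving the operator's control strategies while still filtering erroneous commands.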