In many language processing tasks, most sentences convey rather simple meanings. Moreover, these tasks have a limited semantic domain that can be properly covered with a simple lexicon and a restricted syntax. Nevertheless, casual users are by no means expected to comply with any kind of formal syntactic restrictions, owing to the inherently "spontaneous" nature of human language. In this work, the use of error-correcting-based learning techniques is proposed to cope with the complex syntactic variability that natural language generally exhibits. In our approach, a complex task is modeled in terms of a basic finite-state model, F, and a stochastic error model, E. F should account for the basic (syntactic) structures underlying the task, which convey the meaning. E should account for general vocabulary variations, word omissions, superfluous words, and so on. Each "natural" user sentence is thus considered a corrupted version (according to E) of some "simple" sentence of L(F). Adequate bootstrapping procedures are presented that incrementally improve the structure of F while estimating the probabilities for the operations of E. These techniques have been applied to a practical task of moderately high syntactic variability, and results that show the potential of the proposed approach are presented.
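To make the F/E decomposition concrete, the sketch below shows stochastic error-correcting parsing against a toy finite-state model in Python. It is a minimal illustration, not the paper's implementation: the transition table, vocabulary, and log-probabilities are all hypothetical assumptions, and F is assumed to be a small acyclic automaton so that a simple fixpoint relaxation suffices.

```python
import math
from collections import defaultdict

# Hypothetical toy finite-state model F: transitions labeled with words,
# one initial state and a set of final states (all values illustrative).
TRANSITIONS = {
    (0, "i"): 1, (1, "want"): 2, (2, "a"): 3,
    (3, "ticket"): 4, (4, "to"): 5, (5, "madrid"): 6,
}
INITIAL, FINALS = 0, {6}

# Hypothetical stochastic error model E: log-probabilities for keeping a
# word, substituting it, inserting a superfluous one, or omitting one.
LOG_MATCH, LOG_SUBST = 0.0, math.log(0.05)
LOG_INS, LOG_DEL = math.log(0.02), math.log(0.02)

def ec_viterbi(sentence):
    """Best log-probability, under E, of the input sentence being a
    corrupted version of some sentence of L(F)."""
    n = len(sentence)
    # best[(i, q)]: best log-prob of aligning sentence[:i] with a path of F
    # ending in state q.
    best = defaultdict(lambda: -math.inf)
    best[(0, INITIAL)] = 0.0
    changed = True

    def relax(key, cand):
        nonlocal changed
        if cand > best[key]:
            best[key] = cand
            changed = True

    # Iterate to a fixpoint; terminates quickly for small acyclic F.
    while changed:
        changed = False
        for (i, q), lp in list(best.items()):
            if lp == -math.inf:
                continue
            if i < n:
                # Insertion: input word i is superfluous, F stays in q.
                relax((i + 1, q), lp + LOG_INS)
            for (q0, w), q1 in TRANSITIONS.items():
                if q0 != q:
                    continue
                # Deletion: expected word w is missing from the input.
                relax((i, q1), lp + LOG_DEL)
                if i < n:
                    # Match or substitution of the observed word for w.
                    cost = LOG_MATCH if sentence[i] == w else LOG_SUBST
                    relax((i + 1, q1), lp + cost)
    return max(best[(n, qf)] for qf in FINALS)

# "would" is an insertion and "to" an omission relative to L(F).
print(ec_viterbi("i would want a ticket madrid".split()))
```

The dynamic program runs over pairs (input position, automaton state), so a spontaneous sentence outside L(F) still receives a score; backpointers (omitted here for brevity) would additionally recover the nearest "simple" sentence and the error operations applied, which is what the bootstrapping procedures need to re-estimate E.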