Efficient learning of DFA is a challenging research problem in grammatical inference. It is known that both exact and approximate (in the PAC sense) identifiability of DFA is hard. Pitt has posed the following open research problem: "Are DFA PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution?" (Pitt, in Lecture Notes in Artificial Intelligence, 397, pp. 18-44, Springer-Verlag, 1989). We demonstrate that the class of DFA whose canonical representations have logarithmic Kolmogorov complexity is efficiently PAC learnable under the Solomonoff-Levin universal distribution (m). We prove that the class of DFA is efficiently learnable under the PACS (PAC learning with simple examples) model (Denis, D'Halluin & Gilleron, STACS'96, Proceedings of the 13th Annual Symposium on the Theoretical Aspects of Computer Science, pp. 231-242, 1996), wherein positive and negative examples are sampled according to the universal distribution conditional on a description of the target concept. Further, we show that any concept that is learnable under Gold's model of learning from characteristic samples, Goldman and Mathias' polynomial teachability model, and the model of learning from example-based queries is also learnable under the PACS model.
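
For reference, a sketch of the standard definitions behind these distributions (stated in the usual Kolmogorov-complexity notation; the normalizing constant \lambda_c below is our notation, not quoted from the paper): with K denoting prefix Kolmogorov complexity, the Solomonoff-Levin universal distribution assigns

\[
  \mathbf{m}(x) \;\approx\; 2^{-K(x)} \quad \text{(up to a multiplicative constant, by Levin's coding theorem),}
\]

and in the PACS setting examples are drawn from the universal distribution conditioned on a representation c of the target concept,

\[
  \mathbf{m}_c(x) \;=\; \lambda_c \, 2^{-K(x \mid c)}, \qquad \lambda_c \text{ a normalizing constant,}
\]

so strings of low conditional Kolmogorov complexity given the target ("simple examples") are exponentially more likely to be sampled.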