We solve an open problem of Maass and Turán, showing that the optimal mistake bound when learning a given concept class without membership queries is within a constant factor of the optimal number of mistakes plus membership queries required by an algorithm that can ask membership queries. Previously known results imply that the constant factor in our bound is best possible.
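Using assumed notation (the symbols below are illustrative, not taken from the source), the first result can be stated as a bound relating the two optimal complexities:

```latex
% Let \mathrm{opt}_{\mathrm{MB}}(\mathcal{C}) denote the optimal mistake bound
% for learning concept class \mathcal{C} without membership queries, and let
% \mathrm{opt}_{\mathrm{MB+MQ}}(\mathcal{C}) denote the optimal total number of
% mistakes plus membership queries for an algorithm that may ask such queries.
% The claimed result is that, for some universal constant $c$,
\[
  \mathrm{opt}_{\mathrm{MB}}(\mathcal{C})
  \;\le\; c \cdot \mathrm{opt}_{\mathrm{MB+MQ}}(\mathcal{C}),
\]
% and previously known results show this constant factor cannot be improved.
```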
We then show that, in a natural generalization of the mistake-bound model, the usefulness to the learner of arbitrary "yes-no" questions between trials is very limited. We show that several natural structural questions about relatives of the mistake-bound model can be answered through the application of this general result. Most of these results can be interpreted as saying that learning in apparently less powerful (and more realistic) models is not much more difficult than learning in more powerful models.