In active-learning models the value function is necessarily convex in the priors. In combination with a concave objective, the decision problem therefore need not be concave, so nonregularity problems are inherent. This paper considers an objective that unambiguously implies a quasi-convex decision problem and highlights the effect of the inherent nonregularities on active learning. A trigger policy for learning is shown to be optimal: the minimum amount of learning is optimal until uncertainty surpasses a critical value. At that trigger point the maximum amount of learning is chosen, uncertainty falls temporarily, and the cycle then repeats itself.
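The cyclical trigger policy described above can be illustrated with a minimal simulation sketch. All specifics here are hypothetical assumptions, not taken from the paper: uncertainty (`sigma`) drifts upward by a fixed increment under minimal learning, and a burst of maximum learning resets it to a low level once it crosses the trigger threshold `SIGMA_BAR`.

```python
# Hypothetical illustration of a trigger policy for learning.
# None of these parameter values come from the paper itself.
SIGMA_BAR = 1.0   # assumed trigger threshold for uncertainty
GROWTH = 0.15     # assumed per-period drift in uncertainty under minimal learning
RESET = 0.3       # assumed uncertainty level after a burst of maximum learning

def trigger_policy(sigma):
    """Choose minimal learning until uncertainty surpasses the trigger."""
    return "max" if sigma >= SIGMA_BAR else "min"

def simulate(periods=12, sigma=RESET):
    """Trace (uncertainty, action) pairs; the path cycles between drift and reset."""
    path = []
    for _ in range(periods):
        action = trigger_policy(sigma)
        path.append((round(sigma, 2), action))
        if action == "max":
            sigma = RESET       # maximum learning cuts uncertainty sharply
        else:
            sigma += GROWTH     # minimal learning lets uncertainty drift up
    return path

if __name__ == "__main__":
    for sigma, action in simulate():
        print(f"uncertainty={sigma:.2f} action={action}")
```

Running the sketch shows the cycle in the abstract: a stretch of minimal learning while uncertainty accumulates, a single period of maximum learning at the trigger, then a restart from low uncertainty.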