Research on bias in machine learning algorithms has generally been concerned with the impact of bias on predictive accuracy. We believe that other factors should also play a role in the evaluation of bias. One such factor is the stability of the algorithm; in other words, the repeatability of its results. If we obtain two sets of data from the same phenomenon, with the same underlying probability distribution, then we would like our learning algorithm to induce approximately the same concepts from both sets of data. This paper introduces a method for quantifying stability, based on a measure of the agreement between concepts. We also discuss the relationships among stability, predictive accuracy, and bias.
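The idea of stability as agreement between induced concepts can be illustrated with a minimal sketch (this is an illustration, not the paper's actual method). Here a trivial threshold learner is trained on two independent samples from the same noisy distribution, and stability is estimated as the fraction of evaluation points on which the two learned concepts make the same prediction; the learner, the distribution, and the agreement measure are all assumptions for the sake of the example.

```python
# Hedged sketch: estimate stability as the agreement between two concepts
# induced from independent samples of the same distribution.
import random

random.seed(0)

def sample(n):
    # Draw (x, label) pairs: label = 1 iff x > 0.5, flipped with 10% noise.
    data = []
    for _ in range(n):
        x = random.random()
        label = 1 if x > 0.5 else 0
        if random.random() < 0.1:
            label = 1 - label
        data.append((x, label))
    return data

def learn_threshold(data):
    # Trivial learner: pick the threshold with minimum training error.
    best_t, best_err = 0.0, float("inf")
    for t in (i / 100 for i in range(101)):
        err = sum((1 if x > t else 0) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def agreement(t1, t2, points):
    # Fraction of evaluation points where the two concepts agree.
    same = sum((x > t1) == (x > t2) for x in points)
    return same / len(points)

t_a = learn_threshold(sample(200))
t_b = learn_threshold(sample(200))
eval_points = [i / 1000 for i in range(1000)]
print(round(agreement(t_a, t_b, eval_points), 3))
```

An agreement near 1.0 indicates a stable learner on this distribution; an unstable learner would induce noticeably different thresholds from the two samples. Note that agreement is measured on a shared evaluation set, independent of either training sample, so it probes the concepts themselves rather than their training accuracy.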