The field of knowledge engineering has been one of the most visible successes of AI to date. Knowledge acquisition is the main bottleneck in the knowledge engineer's work, and machine-learning tools have contributed to easing, if not eliminating, it. But how do we know whether the field is progressing? How can we measure the progress made in any of its branches? How can we be sure an advance has occurred and take advantage of it? This article proposes a benchmark as a classificatory, comparative, and metric criterion for machine-learning tools. The benchmark takes the knowledge engineering viewpoint, covering the characteristics a knowledge engineer looks for in a machine-learning tool. The proposed model has been applied to a set of machine-learning tools, comparing expected against obtained results. Experimentation validated the model and yielded interesting results.