Existing methods for exploiting flawed domain theories depend on the use of a sufficiently large set of training examples for diagnosing and repairing flaws in the theory. In this paper, we offer a method of theory reinterpretation that makes only marginal use of training examples. The idea is as follows: often a small number of flaws in a theory can completely destroy the theory's classification accuracy. Yet valuable information is available even from such flawed theories. For example, an instance with several independent proofs in a slightly flawed theory is certainly more likely to be correctly classified as positive than an instance with only a single proof.
This idea can be generalized to a numerical notion of "degree of provedness," which measures the robustness of proofs or refutations for a given instance. This "degree of provedness" can be easily computed using a "soft" interpretation of the theory. Given a ranking of instances based on the values so obtained, all that is required to classify instances is to determine some cutoff threshold above which instances are classified as positive. Such a threshold can be determined on the basis of a small set of training examples.
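To make this concrete, the following Python sketch shows one way a "degree of provedness" and a cutoff threshold might be computed. The propositional Horn-clause encoding, the product/max soft semantics, and the toy theory are illustrative assumptions rather than the paper's exact formulation; the theory is assumed acyclic.

    def degree_of_provedness(theory, head, observed):
        """Soft evaluation: conjunction -> product, disjunction -> max.

        theory:   dict mapping each derived proposition to a list of
                  clause bodies (each body is a list of subgoals).
        observed: dict mapping observable propositions to values in [0, 1].
        Returns a degree in [0, 1] instead of a crisp True/False, so an
        instance with several strong proofs outscores one with a single
        marginal proof.
        """
        if head in observed:                  # observable: instance value
            return observed[head]
        if head not in theory:                # neither derived nor observed
            return 0.0
        scores = []
        for body in theory[head]:
            prod = 1.0
            for subgoal in body:              # soft AND over the clause body
                prod *= degree_of_provedness(theory, subgoal, observed)
            scores.append(prod)
        return max(scores)                    # soft OR over alternative proofs

    def pick_threshold(scored):
        """Pick the cutoff maximizing accuracy on a small labeled sample.

        scored: list of (degree, is_positive) pairs for training examples.
        """
        best_t, best_acc = 0.0, -1.0
        for t in sorted({d for d, _ in scored}):
            acc = sum((d >= t) == pos for d, pos in scored) / len(scored)
            if acc > best_acc:
                best_t, best_acc = t, acc
        return best_t

    # Toy theory: "pos" has two independent proofs (two clauses).
    theory = {"pos": [["a", "b"], ["c", "d"]]}
    inst = {"a": 1.0, "b": 0.8, "c": 1.0, "d": 0.0}
    print(degree_of_provedness(theory, "pos", inst))   # 0.8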
For theories with a few localized flaws, we improve the method by "rehardening": interpreting only parts of the theory softly, while interpreting the rest of the theory in the usual manner. Isolating those parts of the theory that should be interpreted softly can be done on the basis of a small number of training examples.
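Under the same assumed encoding, rehardening might look like the sketch below: only propositions placed in a soft set keep the soft semantics, while every other proposition is rehardened to a crisp 0/1 value. The hard-coded "pos" root, the choice to keep the root soft so that degrees can propagate to the classification, and the greedy search over candidate propositions are all illustrative assumptions, not the paper's selection procedure.

    import math

    def rehardened_degree(theory, head, observed, soft_set):
        """Evaluate softly inside soft_set; crisply (0 or 1) elsewhere."""
        if head in observed:
            v = observed[head]
        elif head not in theory:
            v = 0.0
        else:
            v = max(math.prod(rehardened_degree(theory, g, observed, soft_set)
                              for g in body)
                    for body in theory[head])
        return v if head in soft_set else float(v >= 1.0)

    def choose_soft_set(theory, train, candidates):
        """Greedily soften whichever proposition most improves accuracy.

        train:      small list of (observed_dict, is_positive) pairs.
        candidates: set of propositions suspected of harboring flaws.
        """
        def accuracy(soft_set):
            scored = [(rehardened_degree(theory, "pos", obs, soft_set), lab)
                      for obs, lab in train]
            t = pick_threshold(scored)        # reuse the earlier sketch
            return sum((d >= t) == lab for d, lab in scored) / len(scored)

        soft_set = {"pos"}                    # keep the root soft so degrees
        best = accuracy(soft_set)             # can reach the classification
        improved = True
        while improved:
            improved = False
            for p in candidates - soft_set:
                acc = accuracy(soft_set | {p})
                if acc > best:
                    soft_set, best, improved = soft_set | {p}, acc, True
        return soft_set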
Softening, with or without rehardening, can be used by itself as a quick way of handling theories with suspected flaws where few training examples are available. Additionally, softening and rehardening can be used in conjunction with other methods as a meta-algorithm for determining which theory revision methods are appropriate for a given theory.