Although neural networks have shown very good performance in many application domains, one of their main drawbacks is their inability to explain their underlying reasoning mechanisms.
This "explanation capability" can be provided by extracting symbolic knowledge from the network. In this paper, we present a new extraction method that captures nonmonotonic rules encoded in the network, and we prove that the method is sound.
We start by discussing some of the main problems of knowledge extraction methods and how these problems may be ameliorated. To this end, we define a partial ordering on the set of input vectors of a network, together with a number of pruning and simplification rules. The pruning rules are used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete.
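The ordering and pruning just described can be sketched as follows. This is a minimal illustration only: the {-1, 1} input encoding, the pointwise ordering, the reading of "regular" as "output monotone in the ordering", and all function names are assumptions for the sketch, not the paper's exact definitions.

```python
from itertools import product

def leq(u, v):
    # Pointwise partial order on {-1, 1}^n: u <= v iff v can be obtained
    # from u by flipping some -1 components to 1. For a network whose
    # output is monotone in this order (the regularity assumed here),
    # if u fires the output then so does every v >= u.
    return all(a <= b for a, b in zip(u, v))

def pedagogical_extract(net, n):
    """Query `net` as a black box on inputs in {-1, 1}^n, visiting vectors
    in order of increasing sum and pruning by monotonicity: once a vector
    fires the output, every pointwise-greater vector is subsumed and is
    neither queried nor reported."""
    minimal_positives = []
    for v in sorted(product([-1, 1], repeat=n), key=sum):
        if any(leq(p, v) for p in minimal_positives):
            continue  # pruned: subsumed by an already-extracted vector
        if net(v):
            # v is minimal among the firing vectors found so far; it yields
            # one rule whose body is the set of inputs that v sets to 1.
            minimal_positives.append(v)
    return minimal_positives
```

For example, with `net = lambda v: sum(v) >= 1` and `n = 3`, only the three minimal firing vectors are returned, and the all-ones vector is pruned without being turned into a redundant rule.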
We proceed to extend the extraction algorithm to the class of non-regular networks, the general case. We show that non-regular networks always contain regularities in their subnetworks. As a result, the extraction method for regular networks can still be applied, but now in a decompositional fashion. To combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method that preserves the soundness of the extraction algorithm.
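One simple way to combine per-subnetwork rule sets is to substitute, in each output-level rule, the bodies of the rules defining the hidden units it mentions. The sketch below assumes a rule is a `(head, body)` pair with the body a set of literals, and that each hidden literal maps to a disjunction of alternative bodies; this data representation is an illustration, not the paper's formalism.

```python
def combine(output_rules, hidden_rules):
    """Decompositional combination by substitution: replace every hidden-unit
    literal in an output rule's body with each alternative body defining it,
    producing one combined rule per choice of alternatives.

    output_rules: list of (head, frozenset_of_literals)
    hidden_rules: dict mapping a hidden literal to a list of alternative
                  bodies (frozensets); literals not in the dict are treated
                  as network inputs and kept as-is.
    """
    combined = []
    for head, body in output_rules:
        expansions = [frozenset()]
        for lit in body:
            alternatives = hidden_rules.get(lit, [frozenset({lit})])
            # Cross every partial expansion with every alternative body.
            expansions = [e | alt for e in expansions for alt in alternatives]
        combined.extend((head, e) for e in expansions)
    return combined
```

For instance, an output rule `out <- h1` combined with hidden rules `h1 <- a, b` and `h1 <- c` yields the two rules `out <- a, b` and `out <- c`.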
Finally, we present the results of an empirical analysis of the extraction system on traditional examples and real-world application problems. The results show that a very high fidelity between the extracted set of rules and the network can be achieved. (C) 2001 Elsevier Science B.V. All rights reserved.