Neural networks are noted for their learning and generalization capabilities. However, their advancement and applicability are severely limited by the low comprehensibility of their internal knowledge. Previously, the authors proposed a rule-mapped neural network model that incorporates domain knowledge at the outset. This paper introduces a tool named the 'knowledge matrix', which produces symbolic interpretations of the network's response to an input. These interpretations are shown to enhance the reasoning power of the system; moreover, they allow the system's knowledge to be refined in an explainable manner. The proposed approach is tested on a Chinese character structure recognition problem. This is an attempt to model a human learning process commonly observed in many situations: for example, a trainee may be given a set of provisional rules at the outset, which are then expected to be revised in light of subsequent experience.
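The abstract does not specify the internal workings of the knowledge matrix, but one plausible reading is that it links each symbolic rule to the network units that encode it, so that unit activations can be projected back into rule space to explain a response. The minimal Python sketch below illustrates this reading; the rule names, matrix entries, and thresholded scoring scheme are all assumptions introduced for illustration, not the paper's actual method.

```python
# Hypothetical illustration of the "knowledge matrix" idea. The paper's
# actual formulation is not given in the abstract, so the names, values,
# and scoring scheme below are assumptions made for this sketch.
import numpy as np

# Each row links one symbolic rule to the hidden units assumed to encode it
# (a rule-mapped network assigns rules to units a priori).
RULES = ["left-right structure", "top-bottom structure", "enclosure structure"]
knowledge_matrix = np.array([
    [0.9, 0.1, 0.0, 0.2],   # units supporting "left-right structure"
    [0.1, 0.8, 0.3, 0.0],   # units supporting "top-bottom structure"
    [0.0, 0.2, 0.1, 0.9],   # units supporting "enclosure structure"
])

def interpret(activations: np.ndarray, threshold: float = 0.5) -> list[str]:
    """Score each rule by how strongly its associated units fired and
    return symbolic interpretations of the network's response."""
    scores = knowledge_matrix @ activations          # rule-wise evidence
    return [f"{rule} (score={s:.2f})"
            for rule, s in zip(RULES, scores) if s > threshold]

# Example: activations of four hidden units for one input character.
print(interpret(np.array([0.8, 0.2, 0.1, 0.05])))
# -> ['left-right structure (score=0.75)']
```

Under this reading, refining the system's knowledge "explainably" would amount to adjusting individual matrix entries, each of which carries a symbolic meaning, rather than opaque network weights.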