The performance of neural networks is known to be sensitive to the initial weight setting and architecture (the number of hidden layers and neurons in these layers). This shortcoming can be alleviated if some approximation of the target concept in terms of a logical description is available. The paper reports a successful attempt to initialize neural networks using decision-tree generators. The TBNN (tree-based neural net) system compares very favourably with other learners in terms of classification accuracy for unseen data, and it is also computationally less demanding than the backpropagation algorithm applied to a randomly initialized multilayer perceptron. The behavior of the system is first studied for specially designed artificial data. Then, its performance is demonstrated by a real-world application.
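To make the tree-to-network idea concrete, the following is a minimal sketch of initializing a one-hidden-layer perceptron from a decision tree. It assumes a scikit-learn tree and the common mapping of one hidden unit per internal split and one output unit per class; this is an illustrative construction under those assumptions, not necessarily the exact TBNN mapping described in the paper.

```python
# Sketch: seed an MLP's first layer with a decision tree's split tests.
# Assumptions (not from the paper): scikit-learn tree, one hidden unit
# per internal node, one output unit per class, tanh hidden activations.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
n_features = X.shape[1]
n_classes = len(np.unique(y))

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
t = tree.tree_
internal = [i for i in range(t.node_count) if t.children_left[i] != -1]

# Hidden layer: each unit softly tests one split, i.e. x[feature] > threshold.
W1 = np.zeros((len(internal), n_features))
b1 = np.zeros(len(internal))
sharpness = 5.0  # larger weights make the soft test closer to the tree's hard threshold
for h, node in enumerate(internal):
    W1[h, t.feature[node]] = sharpness
    b1[h] = -sharpness * t.threshold[node]

# Output layer: small random initialization; backpropagation then refines
# all weights, starting from the tree-informed hidden layer instead of a
# fully random one.
rng = np.random.default_rng(0)
W2 = rng.normal(scale=0.1, size=(n_classes, len(internal)))
b2 = np.zeros(n_classes)

def forward(x):
    hidden = np.tanh(W1 @ x + b1)  # soft versions of the tree's node tests
    return W2 @ hidden + b2        # class scores

print(forward(X[0]))
```

The intended benefit, as the abstract suggests, is that training starts from a structure and weight setting already consistent with a logical approximation of the target concept, so fewer backpropagation iterations are needed than with a randomly initialized multilayer perceptron.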