Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge into the training formalism. In this paper, two types of such additional knowledge are examined: network-specific knowledge (associated with the neural network irrespective of the problem whose solution is sought) and problem-specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improved learning behaviour in neural networks using additional knowledge in the context of our constrained optimization framework. The two network-specific examples are designed to improve convergence and learning speed in the broad class of feedforward networks, while the third, problem-specific example concerns the efficient factorization of 2-D polynomials using suitably constructed sigma-pi networks.
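To make the general idea concrete, the following is a minimal sketch (not the paper's algorithm) of training as constrained optimization: a linear model is fitted to toy data while an illustrative constraint on the weights, ||w||^2 = 1, is enforced with a simple quadratic-penalty method. The data, the constraint, and the penalty coefficient are all assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch only: supervised learning posed as constrained
# optimization. We fit y = w.x on synthetic data subject to the
# (hypothetical) constraint g(w) = ||w||^2 - 1 = 0, handled by adding
# a quadratic penalty  penalty * g(w)^2  to the cost function.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([0.6, 0.8])   # unit norm, so the constraint is satisfiable
y = X @ true_w

w = np.array([1.0, 0.0])        # feasible starting point
lr, penalty = 0.01, 5.0

for step in range(2000):
    # Gradient of the mean-squared-error cost
    grad_cost = 2 * X.T @ (X @ w - y) / len(X)
    # Gradient of the penalty term  penalty * g(w)^2
    g = w @ w - 1.0
    grad_pen = 4 * penalty * g * w
    w -= lr * (grad_cost + grad_pen)

print(w)            # near [0.6, 0.8] once converged
print(float(w @ w)) # near 1.0: the constraint is (approximately) satisfied
```

With a fixed penalty coefficient the constraint is only satisfied approximately in general; here the unconstrained minimizer happens to lie on the constraint surface, so the iterates converge to it. The paper's framework replaces this ad-hoc penalty with a systematic constrained-optimization formulation.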