This paper introduces a constrained second-order network with a multiple-objective learning algorithm that forms closed hyperellipsoidal decision boundaries for one-class classification. The network architecture has uncoupled constraints that give independent control over each decision boundary's size, shape, position, and orientation. The architecture, together with the learning algorithm, guarantees positive definite eigenvalues and hence closed hyperellipsoidal decision boundaries. The learning algorithm incorporates two criteria: one that minimizes classification mapping error and another that minimizes the size of the decision boundaries. We consider both additive and multiplicative combinations of the individual criteria, and we present empirical evidence for selecting functional forms of the individual objectives that are bounded and normalized. The resulting multiple-objective criterion allows the decision boundaries to grow or shrink as necessary to achieve both within-class and out-of-class generalization without requiring non-target patterns in the training set. The resulting network learns compact closed decision boundaries when trained with target data only. We show results of applying the network to the Iris data set (Fisher (1936), Annals of Eugenics, 7(2), 179-188). Advantages of this approach include its inherent capacity for one-class generalization, freedom from characterizing the non-target class, and the ability to form closed decision boundaries for multi-modal classes that are more complex than hyperspheres, without requiring inversion of large matrices. Published by Elsevier Science Ltd.
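The abstract's core idea can be sketched numerically. The following is a minimal illustration, not the paper's formulation: a single hyperellipsoidal boundary (x - c)^T M (x - c) <= 1 whose matrix M = L L^T + eps*I has positive eigenvalues by construction (so the boundary is always closed), trained by gradient descent on an additive combination of a bounded mapping-error term (target points outside the boundary) and a bounded size term (shrinking the boundary). The data, loss forms, and learning rate are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one closed hyperellipsoidal one-class boundary in 2-D,
# (x - c)^T M (x - c) <= 1, with M = L L^T + eps*I positive definite by
# construction. The objective is an additive combination of two bounded
# terms: a hinge-style mapping error (targets should lie inside) and a
# size term 1/(tr(M)+1) whose minimization shrinks the ellipsoid.

rng = np.random.default_rng(0)
X = rng.normal([2.0, -1.0], [0.5, 0.2], size=(200, 2))  # synthetic target class

c = X.mean(axis=0).copy()  # center (position)
L = np.eye(2)              # factor controlling size/shape/orientation
eps = 1e-6

def loss_and_grads(c, L):
    M = L @ L.T + eps * np.eye(2)            # eigenvalues > 0, boundary closed
    d = X - c                                # offsets from center, shape (n, 2)
    q = np.einsum('ni,ij,nj->n', d, M, d)    # quadratic form per sample
    err = np.maximum(q - 1.0, 0.0)           # mapping error: points outside
    size = 1.0 / (np.trace(M) + 1.0)         # bounded size proxy (illustrative)
    mask = (q > 1.0).astype(float)           # subgradient of the hinge
    gM = np.einsum('n,ni,nj->ij', mask, d, d) / len(X)        # d(err)/dM
    gM += -np.eye(2) / (np.trace(M) + 1.0) ** 2               # d(size)/dM
    gL = (gM + gM.T) @ L                                      # chain rule M = L L^T
    gc = -2.0 * np.einsum('n,ij,nj->i', mask, M, d) / len(X)  # d(err)/dc
    return err.mean() + size, gc, gL

for _ in range(500):
    val, gc, gL = loss_and_grads(c, L)
    c -= 0.05 * gc
    L -= 0.05 * gL

M = L @ L.T + eps * np.eye(2)
inside = np.einsum('ni,ij,nj->n', X - c, M, X - c) <= 1.0
print(f"loss={val:.4f}, fraction of targets enclosed={inside.mean():.2f}")
```

The two terms pull in opposite directions: the size term grows M (shrinking the ellipsoid) while the mapping-error term resists whenever target points fall outside, so the boundary settles at a compact region around the target data without ever seeing non-target patterns.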