We identify processes for structuring neural networks by reference to two classes of interacting mappings, one generating provisional outcomes ("trial solutions") and the other generating idealized representations, which we call ghost images. These mappings create an evolution both of the provisional outcomes and the ghost images, which in turn influence a parallel evolution of the mappings themselves. The ghost image models may be conceived as a generalization of the self-organizing neural network models of Kohonen. Alternatively, they may be viewed as a generalization of certain relaxation/restriction procedures of mathematical optimization. Hence indirectly they also generalize aspects of penalty-based neural models, such as those proposed by Hopfield and Tank. Both avenues of generalization are "context free", without reliance on specialized theory such as models of perception or mathematical duality. From a neural network standpoint, the ghost image framework
makes it possible to extend previous Kohonen-based optimization approaches to incorporate components beyond a visually oriented frame of reference. This added level of abstraction yields a basis for solving optimization problems expressed entirely in symbolic ("non-visual") mathematical formulations. At the same time, it allows penalty function ideas in neural networks to be extended to encompass other concepts springing from a mathematical optimization perspective, including parametric deformation and surrogate constraints. This paper demonstrates the efficacy of ghost image processes as a foundation for creating new optimization approaches by providing specific examples of such methods for covering, packing, generalized covering, fixed charge, and multidimensional knapsack problems. Preliminary computational results for multidimensional knapsack problems are also presented.
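For concreteness, the multidimensional knapsack problem named above asks for a binary selection of items maximizing total value subject to several resource capacities at once. The following minimal brute-force sketch illustrates the problem class only; the function name and instance data are hypothetical and are not drawn from the paper or its computational experiments:

```python
from itertools import product

def solve_mkp(values, weights, capacities):
    """Brute-force 0-1 multidimensional knapsack:
    maximize sum(values[j] * x[j]) subject to, for each resource i,
    sum(weights[i][j] * x[j]) <= capacities[i], with x[j] in {0, 1}."""
    n = len(values)
    best_value, best_x = 0, (0,) * n
    for x in product((0, 1), repeat=n):  # enumerate all 2^n selections
        feasible = all(
            sum(w[j] * x[j] for j in range(n)) <= c
            for w, c in zip(weights, capacities)
        )
        if feasible:
            value = sum(values[j] * x[j] for j in range(n))
            if value > best_value:
                best_value, best_x = value, x
    return best_value, best_x

# Hypothetical 4-item, 2-resource instance for illustration.
values = [10, 7, 6, 3]
weights = [[4, 3, 2, 1],   # resource 1 consumption per item
           [2, 4, 3, 2]]   # resource 2 consumption per item
capacities = [7, 6]
print(solve_mkp(values, weights, capacities))  # → (17, (1, 1, 0, 0))
```

Exhaustive enumeration is tractable only for tiny instances; the paper's interest is precisely in heuristic processes (here, ghost image methods) for instances where such enumeration is impossible.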