Slow learning in neural-network function approximators can frequently be attributed to interference, which occurs when learning in one area of the input space causes unlearning in another area. To mitigate the effect of unlearning, this paper develops an algorithm that adjusts the weights of an arbitrary, nonlinearly parameterized network so that the potential for future interference during learning is reduced. This is accomplished by minimizing a bi-objective cost function that combines the approximation error with a term that measures interference. Analysis of the algorithm's convergence properties shows that learning with this algorithm reduces future unlearning. The algorithm can be used either during on-line learning or to condition a network for immunity to interference during a future learning stage. A simple example demonstrates how interference manifests itself in a network and how less interference can lead to more efficient learning. Simulations demonstrate how this new learning algorithm speeds training in various situations, owing to the extra cost-function term.
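The abstract does not specify how the interference term is measured, so the sketch below is only one plausible reading of the bi-objective idea: it penalizes alignment between the output gradient at the current training input and the output gradients at a set of stored reference inputs, so that a weight update at one point moves the outputs at other points as little as possible. The tiny tanh network, the weighting `lam`, and the reference-point scheme are illustrative assumptions, not the paper's method.

```python
import jax
import jax.numpy as jnp

# Tiny one-hidden-layer tanh network (scalar input and output),
# parameterized by a single flat weight vector w.
H = 8

def unpack(w):
    W1 = w[:H]          # input-to-hidden weights
    b1 = w[H:2 * H]     # hidden biases
    W2 = w[2 * H:3 * H] # hidden-to-output weights
    b2 = w[3 * H]       # output bias
    return W1, b1, W2, b2

def f(w, x):
    W1, b1, W2, b2 = unpack(w)
    return jnp.dot(W2, jnp.tanh(W1 * x + b1)) + b2

def approx_error(w, x, y):
    return 0.5 * (f(w, x) - y) ** 2

# Hypothetical interference measure: squared alignment between the output
# gradient at the training input and at stored reference inputs. When this
# is small, a gradient step at x barely moves the outputs at the x_refs.
def interference(w, x, x_refs):
    g = jax.grad(f)(w, x)
    g_refs = jax.vmap(lambda xr: jax.grad(f)(w, xr))(x_refs)
    return jnp.sum(jnp.square(g_refs @ g))

# Bi-objective cost: approximation error plus weighted interference term.
def cost(w, x, y, x_refs, lam):
    return approx_error(w, x, y) + lam * interference(w, x, x_refs)

@jax.jit
def step(w, x, y, x_refs, lr=0.05, lam=0.1):
    return w - lr * jax.grad(cost)(w, x, y, x_refs, lam)

# Usage: fit f near x = 1 while discouraging unlearning near the x_refs.
key = jax.random.PRNGKey(0)
w = 0.1 * jax.random.normal(key, (3 * H + 1,))
x_refs = jnp.linspace(-2.0, 0.0, 5)
for _ in range(200):
    w = step(w, 1.0, jnp.sin(1.0), x_refs)
```

In this sketch the weight `lam` trades off fit at the current sample against stability of the outputs elsewhere; setting it to zero recovers ordinary gradient descent on the approximation error alone.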