Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims, as well as a perceptron which is trained on the opposite of its own output, are examined analytically. An ensemble of competitive perceptrons is used as a decision-making algorithm in a model of a closed market (the El Farol Bar problem, or Minority Game, in which a set of agents has to make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.
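The mutual-training scenario can be illustrated numerically. The following is a minimal sketch, assuming a Hebbian-type update with unit-normalized weights and a common random input presented to both networks at each step; the learning rate, the normalization, and the sign flip used for the competitive variant are illustrative assumptions, not the analytical rules solved in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # input dimension (illustrative)
eta = 0.1      # learning rate (illustrative)
steps = 5000
mode = +1      # +1: identical training (learn the neighbor); -1: competitive variant

# Two perceptrons with random, unit-normalized weight vectors
w1 = rng.standard_normal(N); w1 /= np.linalg.norm(w1)
w2 = rng.standard_normal(N); w2 /= np.linalg.norm(w2)

for _ in range(steps):
    x = rng.standard_normal(N)              # common random input shown to both networks
    s1 = 1.0 if w1 @ x >= 0 else -1.0       # binary output of perceptron 1
    s2 = 1.0 if w2 @ x >= 0 else -1.0       # binary output of perceptron 2

    # Each network is trained on the output of its neighbor (Hebbian-type step);
    # mode = -1 flips the update, i.e. the networks try to disagree.
    w1 += mode * (eta / N) * s2 * x
    w2 += mode * (eta / N) * s1 * x
    w1 /= np.linalg.norm(w1)                # keep the weights on the unit sphere
    w2 /= np.linalg.norm(w2)

# The overlap R = w1 . w2 characterizes the symmetry of the stationary state
print("final overlap R =", w1 @ w2)
```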
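The market model can likewise be sketched as a simulation in which every agent is a perceptron fed the recent history of minority decisions. The window length M, the learning rate, and the error-driven (perceptron-rule) update below, where only agents who ended up in the majority adjust their weights toward the minority decision, are assumptions for illustration; the sketch only compares the variance of the attendance with the random benchmark and does not reproduce the paper's analytical treatment.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 51        # number of agents (odd, so the minority is always defined)
M = 10        # length of the history window of minority decisions (illustrative)
eta = 0.01    # learning rate (illustrative)
steps = 20000

W = rng.standard_normal((K, M))              # one perceptron (weight vector) per agent
h = rng.choice([-1.0, 1.0], size=M)          # initial history of minority decisions

attendance = []
for _ in range(steps):
    s = np.where(W @ h >= 0, 1.0, -1.0)      # each agent's binary decision
    A = s.sum()                              # attendance: excess of +1 decisions
    minority = -1.0 if A > 0 else 1.0        # the less-crowded side wins
    attendance.append(A)

    # Losing agents (those in the majority) are trained toward the minority
    # decision; winners keep their weights (perceptron rule, an assumption here).
    losers = (s != minority)
    W[losers] += (eta / M) * minority * h

    # Shift the window and record the new minority decision in the history
    h = np.roll(h, 1)
    h[0] = minority

A = np.array(attendance[steps // 2:])        # discard the transient
print("variance of attendance :", A.var())
print("random-guess benchmark :", K)         # independent coin flips give variance ~ K
```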