Neural networks with radial basis functions are considered, together with the Shannon information in their output about the input. The role of information-preserving input transformations is discussed when the network is specified by the maximum information principle and by the maximum likelihood principle. A transformation is found that simplifies the input structure in the sense that it minimizes the entropy within the class of all information-preserving transformations. Such a transformation need not be unique; under some assumptions it may be any minimal sufficient statistic.
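The connection between entropy reduction and sufficiency can be illustrated with a small discrete example (not taken from the paper, and purely for intuition): if the input X = (S, N) consists of a signal bit S and an independent noise bit N, and the output Y depends on X only through S, then T(x) = s is a sufficient statistic. It preserves the Shannon information I(X; Y) while strictly reducing the input entropy.

```python
from itertools import product
from math import log2

def entropy(pmf):
    """Shannon entropy (in bits) of a pmf given as {value: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def mutual_information(joint):
    """I(X; Y) in bits from a joint pmf given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Joint distribution of (X, Y): X = (s, n) with s, n independent uniform
# bits, and Y equal to s flipped with probability 0.1 (an arbitrary noisy
# channel chosen for illustration).
eps = 0.1
joint_xy = {}
for s, n in product((0, 1), repeat=2):
    for y in (0, 1):
        joint_xy[((s, n), y)] = 0.25 * ((1 - eps) if y == s else eps)

# Joint distribution of (T(X), Y) under the sufficient statistic T(x) = s.
joint_ty = {}
for ((s, n), y), p in joint_xy.items():
    joint_ty[(s, y)] = joint_ty.get((s, y), 0.0) + p

# Marginals of X and of T(X).
px, pt = {}, {}
for (x, y), p in joint_xy.items():
    px[x] = px.get(x, 0.0) + p
for (t, y), p in joint_ty.items():
    pt[t] = pt.get(t, 0.0) + p

h_x, h_t = entropy(px), entropy(pt)
i_xy, i_ty = mutual_information(joint_xy), mutual_information(joint_ty)
print(h_x)   # H(X) = 2 bits
print(h_t)   # H(T(X)) = 1 bit: entropy drops
print(i_xy)  # I(X; Y)
print(i_ty)  # I(T(X); Y): equal to I(X; Y), information is preserved
```

Here the entropy drops from 2 bits to 1 bit while the mutual information with the output is unchanged, which is exactly the sense in which a minimal sufficient statistic simplifies the input structure.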