We study a model of unsupervised learning in which the real-valued data vectors are isotropically distributed, except for a single symmetry-breaking binary direction B ∈ {-1, +1}^N, onto which the projections have a Gaussian distribution. We show that a candidate vector J undergoing Gibbs learning in this discrete space approaches the perfect match J = B exponentially. In addition to the second-order "retarded learning" phase transition for unbiased distributions, we show that first-order transitions can also occur. Extending the known result that the center of mass of the Gibbs ensemble has Bayes-optimal performance, we show that taking the sign of the components of this vector (clipping) leads to the vector with optimal performance in the binary space. These upper bounds are shown generally not to be saturated by the technique of transforming the components of a special continuous vector, except in asymptotic limits and in a special linear case. Simulations are presented which are in excellent agreement with the theoretical results.
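
As a rough illustration of the setting and of the clipping operation described above, the following is a minimal Python sketch, not the paper's precise model: the shifted-Gaussian data distribution, the posterior energy, the Metropolis sampler, and all parameter values (N, P, bias, beta) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 50, 400                      # dimension and number of examples (illustrative)
B = rng.choice([-1, 1], size=N)     # hidden symmetry-breaking binary direction

# Assumed data model: isotropic Gaussian components, with the projection onto
# B/sqrt(N) given a non-zero mean to break the symmetry (one simple choice of
# non-uniform Gaussian; the paper's family of distributions is more general).
bias = 1.0
xi = rng.standard_normal((P, N)) + bias * B / np.sqrt(N)

def energy(J):
    """Assumed Gibbs energy: minus the log-posterior of J for the shifted-
    Gaussian model, up to J-independent constants (|J|^2 = N is fixed)."""
    h = xi @ J / np.sqrt(N)         # projections of the examples onto J
    return -bias * h.sum()

# Metropolis sampling of the Gibbs ensemble over binary candidate vectors J.
beta = 1.0
J = rng.choice([-1, 1], size=N)
E = energy(J)
samples = []
for step in range(20000):
    i = rng.integers(N)
    J[i] *= -1                      # propose a single-spin flip
    E_new = energy(J)
    if rng.random() < np.exp(-beta * (E_new - E)):
        E = E_new                   # accept the flip
    else:
        J[i] *= -1                  # reject: undo the flip
    if step > 5000 and step % 50 == 0:
        samples.append(J.copy())    # collect samples after burn-in

center = np.mean(samples, axis=0)   # center of mass of the Gibbs ensemble
J_clip = np.sign(center)            # clipping: component-wise sign

print("overlap of one Gibbs sample with B :", samples[-1] @ B / N)
print("overlap of clipped center of mass  :", J_clip @ B / N)
```

Under these assumptions, the clipped center of mass typically shows a higher overlap with B than any single Gibbs sample, consistent with the claim that clipping yields the optimal vector in the binary space.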