Visual classification is the way we relate to different images in our environment as if they were the same, while relating differently to other collections of stimuli (e.g., human vs. animal faces). It is still not clear, however, how the brain forms such classes, especially when introduced to new or changing environments. To isolate a perception-based mechanism underlying class representation, we studied unsupervised classification of an incoming stream of simple images. Classification patterns were clearly affected by the stimulus frequency distribution, although subjects were unaware of this distribution. There was a common bias to locate class centers near the most frequent stimuli and class boundaries near the least frequent stimuli. Responses were also faster for more frequent stimuli. Using a minimal, biologically based neural-network model, we demonstrate that a simple, self-organizing representation mechanism based on overlapping tuning curves and slow Hebbian learning suffices to ensure classification. Combined behavioral and theoretical results predict large tuning overlap, implicating posterior inferotemporal cortex as a possible site of classification.
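The mechanism described above — broad, overlapping tuning curves feeding a competitive layer trained by slow Hebbian learning — can be illustrated with a minimal sketch. This is not the authors' model; all parameters (tuning width, learning rate, stimulus distribution) are hypothetical, chosen only to show how class boundaries self-organize from stimulus frequency alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stimulus space [0, 1] with a non-uniform frequency
# distribution: two frequently sampled regions, a sparse middle.
n_stimuli = 5000
stream = np.concatenate([
    rng.normal(0.25, 0.05, n_stimuli // 2),
    rng.normal(0.75, 0.05, n_stimuli // 2),
]).clip(0, 1)
rng.shuffle(stream)

# Input layer: units with broad, overlapping Gaussian tuning curves.
n_in = 40
centers = np.linspace(0, 1, n_in)
sigma = 0.15  # large tuning width -> large overlap between units

def tuning(s):
    """Population response of the input layer to stimulus s."""
    return np.exp(-0.5 * ((s - centers) / sigma) ** 2)

# Output layer: two class units, winner-take-all competition,
# slow Hebbian weight updates with norm renormalization.
n_out = 2
W = rng.uniform(0.4, 0.6, (n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # equalize initial norms
eta = 0.01  # slow learning rate

for s in stream:
    x = tuning(s)
    winner = np.argmax(W @ x)
    W[winner] += eta * x                          # Hebbian update
    W[winner] /= np.linalg.norm(W[winner])        # keep weights bounded

def classify(s):
    """Assign a stimulus to the class unit with the largest response."""
    return int(np.argmax(W @ tuning(s)))
```

With this setup the two units specialize on the two high-frequency regions, so the class boundary settles in the sparsely sampled middle — the frequency-dependent bias reported behaviorally.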