For over 30 years, researchers in computer vision have proposed new methods for performing low-level vision tasks such as detecting edges and corners. One key element shared by most methods is that they represent local image neighborhoods as constant in color or intensity, with deviations modeled as noise. These methods remain popular due to computational considerations that encourage the use of small neighborhoods, where this assumption holds. This research instead models a neighborhood as a distribution of colors. Our goal is to show that the increased accuracy of this representation translates into higher-quality results for low-level vision tasks on difficult, natural images, especially as neighborhood size increases. We emphasize large neighborhoods because small ones often do not contain enough information. We emphasize color because it subsumes gray scale as an image range and because it is the dominant form of human perception. We discuss distributions in the context of detecting edges, corners, and junctions, and we show results for each.
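
To make the contrast concrete, the following is a minimal Python sketch, not the paper's implementation, comparing the constant-color model of a neighborhood with a color-distribution model built from a normalized RGB histogram. The function names, the synthetic patch, and the bin count are assumptions chosen only for illustration.

```python
import numpy as np

def constant_model(patch):
    """Classic assumption: the neighborhood is one color; deviations are noise."""
    return patch.reshape(-1, 3).mean(axis=0)

def color_distribution(patch, bins=8):
    """Distribution model: a normalized 3-D histogram over RGB color space."""
    pixels = patch.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / hist.sum()

# Example: a neighborhood straddling a red/blue edge. The constant model
# averages to a purplish color that appears nowhere in the patch, while
# the distribution keeps both modes intact.
patch = np.zeros((9, 9, 3), dtype=np.uint8)
patch[:, :4] = (255, 0, 0)   # red half
patch[:, 4:] = (0, 0, 255)   # blue half
print(constant_model(patch))                        # misleading mean color
print(np.count_nonzero(color_distribution(patch)))  # two occupied histogram bins
```

The bimodal histogram is exactly the signal a distribution-based detector can exploit: a step edge shows up as two color modes, and a junction as three or more, information the constant-color model discards by averaging.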