Separating an object in an image from its background, a problem known as segmentation, is central to pattern recognition and computer vision. In this paper, we study the computational complexity of the segmentation problem under the assumption that the sought object forms a connected region in an intensity image. We show that the optimization problem of separating a connected region in a grid of M x N pixels is NP-hard under the interclass variance criterion, which is often used in discriminant analysis. More importantly, we consider the basic case in which the object is bounded by two x-monotone curves (i.e., the object itself is x-monotone) and present polynomial-time algorithms for computing the optimal segmentation. Our main algorithm for exact optimal segmentation by two x-monotone curves runs in O(N^4) time; it is based on several techniques, including a parametric optimization formulation, a hand-probing algorithm for the convex hull of an unknown planar point set, and dynamic programming using fast matrix searching. Our efficient approximation scheme obtains an epsilon-approximate solution in O(epsilon^{-1} N^2 log L) time, where epsilon is any fixed constant with 0 < epsilon < 1, and L is the total sum of the absolute values of the brightness levels of the image.
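To make the interclass variance criterion mentioned above concrete, here is a minimal sketch (not the paper's algorithm) of how a two-way split of pixel intensities is scored in Otsu-style discriminant analysis: the weighted squared difference of the two class means. The function name `interclass_variance` and the sample data are illustrative assumptions, not taken from the paper.

```python
def interclass_variance(values, in_object):
    """Between-class variance of a two-way split of pixel intensities.

    values     -- flat list of pixel brightness levels
    in_object  -- parallel list of booleans; True = pixel assigned to the object
    Returns w1 * w2 * (mu1 - mu2)^2, the classic discriminant-analysis score.
    """
    obj = [v for v, m in zip(values, in_object) if m]
    bg = [v for v, m in zip(values, in_object) if not m]
    if not obj or not bg:
        return 0.0  # degenerate split: everything in one class
    n = len(values)
    w1, w2 = len(obj) / n, len(bg) / n          # class weights
    mu1 = sum(obj) / len(obj)                   # object mean brightness
    mu2 = sum(bg) / len(bg)                     # background mean brightness
    return w1 * w2 * (mu1 - mu2) ** 2

# Toy example: a bright "object" against a dark background.
pixels = [10, 12, 11, 200, 198, 205]
mask = [False, False, False, True, True, True]
print(interclass_variance(pixels, mask))  # -> 9025.0
```

The NP-hardness result in the abstract concerns maximizing this score over all connected pixel regions, which is far harder than the per-threshold evaluation sketched here.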