To aid in the display, manipulation, and analysis of biomedical image data, they usually need to be converted to data of isotropic discretization through the process of interpolation. Traditional techniques consist of direct interpolation of the grey values [1]. When user interaction is called for in image segmentation, as a consequence of these interpolation methods, the user needs to segment a much greater (typically 4-10x) amount of data. To mitigate this problem, a method called shape-based interpolation of binary data was developed [2]. Besides significantly reducing user time, this method has been shown to provide more accurate results than grey-level interpolation [2]-[5].
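As a rough illustration of the binary shape-based idea in [2]: each binary slice is first converted to a signed distance map, the distance maps (rather than the grey values) are interpolated, and the result is re-thresholded to a binary shape. The sketch below is ours, not the published algorithm; it uses SciPy's exact Euclidean distance transform, whereas the original work used chamfer-style distances, and all function names are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(binary_slice):
    # Positive inside the object, negative outside.
    inside = distance_transform_edt(binary_slice)
    outside = distance_transform_edt(~binary_slice)
    return inside - outside

def shape_based_slice(slice_a, slice_b, t):
    # Interpolate the distance maps, not the grey values, then re-threshold.
    d_a = signed_distance(np.asarray(slice_a, dtype=bool))
    d_b = signed_distance(np.asarray(slice_b, dtype=bool))
    return ((1.0 - t) * d_a + t * d_b) >= 0.0
```

For two concentric circles of different radii on neighboring slices, this yields a circle of intermediate radius, which direct averaging of the binary values cannot produce.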
We proposed [6] an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data. This method has characteristics similar to those of the binary shape-based method. In particular, we showed preliminary evidence [6], [7] that it produced more accurate results than conventional grey-level interpolation methods.
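The generalization in [6] can be pictured as lifting an n-dimensional grey scene into an (n+1)-dimensional binary scene, with grey value playing the role of the extra spatial dimension, applying shape-based interpolation there, and collapsing back. The toy sketch below conveys only that lifting idea and is not the published algorithm; in particular, it ignores the scaling of the grey dimension and the distance computations the actual method specifies, and it assumes small non-negative integer grey values.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def lift(grey_slice, n_levels):
    # Binary (n+1)-D scene: voxel (g, y, x) is on when pixel (y, x) reaches level g.
    levels = np.arange(n_levels).reshape(-1, 1, 1)
    return np.asarray(grey_slice)[None, :, :] >= levels

def signed_distance(binary):
    return distance_transform_edt(binary) - distance_transform_edt(~binary)

def grey_shape_based_slice(slice_a, slice_b, t, n_levels=64):
    # Binary shape-based interpolation carried out in the lifted 3-D scene.
    # Illustration only: memory and time grow linearly with n_levels.
    lifted = ((1.0 - t) * signed_distance(lift(slice_a, n_levels))
              + t * signed_distance(lift(slice_b, n_levels))) >= 0.0
    # Collapse back: the grey value is the count of levels still "on" (level 0
    # is always on for non-negative inputs, hence the -1).
    return lifted.sum(axis=0) - 1
```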
In this paper, concentrating on the three-dimensional (3-D) interpolation problem, we compare statistically the accuracy of eight different methods: nearest-neighbor, linear grey-level, grey-level cubic spline [8], grey-level modified cubic spline [9], Goshtasby et al. [10], and three methods from the grey-level shape-based class [6].
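For concreteness, the two simplest competitors in this list operate directly on the grey values of a pair of adjacent known slices. A minimal sketch of both (ours, with illustrative names; t is the fractional position of the estimated slice between known slices a and b):

```python
def nearest_neighbor_slice(slice_a, slice_b, t):
    # Nearest-neighbor: copy whichever known slice is closer to position t.
    return slice_a.copy() if t < 0.5 else slice_b.copy()

def linear_grey_slice(slice_a, slice_b, t):
    # Linear grey-level interpolation: per-pixel weighted average.
    return (1.0 - t) * slice_a + t * slice_b
```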
A population of patient magnetic resonance and computed tomography images, corresponding to different parts of the human anatomy and coming from different 3-D imaging applications, is utilized for comparison. Each slice in these data sets is estimated by each interpolation method and compared to the original slice at the same location using three measures: mean-squared difference, number of sites of disagreement, and largest difference.
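A plain reading of these three measures might be implemented as follows (a sketch; the paper gives the exact definitions, and the zero tolerance used here to declare a "site of disagreement" is our assumption).

```python
import numpy as np

def compare_slices(estimated, original, tol=0.0):
    # tol = 0.0 is our assumption: any nonzero difference counts.
    diff = (np.asarray(estimated, dtype=np.float64)
            - np.asarray(original, dtype=np.float64))
    return {
        "mean_squared_difference": float(np.mean(diff ** 2)),
        "number_of_sites_of_disagreement": int(np.count_nonzero(np.abs(diff) > tol)),
        "largest_difference": float(np.max(np.abs(diff))),
    }
```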
The methods are statistically compared pairwise based on these measures. The shape-based methods statistically significantly outperformed all other methods in all measures in all applications considered here, with a statistical relevance ranging from 10% to 32% (mean = 15%) for mean-squared difference.
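The paper's statistical procedure and its "statistical relevance" figure are defined in the full text; purely as a generic illustration of a pairwise comparison on per-slice scores, one might use a paired test such as the following (SciPy's ttest_rel, not necessarily the authors' test).

```python
import numpy as np
from scipy.stats import ttest_rel

def pairwise(scores_a, scores_b):
    # scores_a, scores_b: one measure (e.g., mean-squared difference)
    # computed for the same slices under methods A and B.
    stat, p = ttest_rel(scores_a, scores_b)
    winner = "A" if float(np.mean(scores_a)) < float(np.mean(scores_b)) else "B"
    return winner, p  # lower score is better for all three measures
```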