We introduce a new wavelet image coding framework based on context-based zerotree quantization, in which a unique and efficient method for optimizing zerotree quantization is proposed. Because of the localization properties of wavelets, the best quantizer for a wavelet coefficient is one designed to match the statistics of the coefficients in its neighborhood; that is, the quantizer should be adaptive in both space and frequency. Previous image coders tended to design quantizers at the band or class level, which limited their performance because the localization properties of wavelets are difficult to exploit at that granularity. In contrast with previous coders, we propose to track these localization properties by combining tree-structured wavelet representations with adaptive models that vary spatially according to the local statistics. In this paper, we describe the proposed coding algorithm, in which the spatially varying models are estimated from the quantized causal neighborhoods and the zerotree pruning is based on a Lagrangian cost evaluated from the statistics near the tree. In this way, optimizing zerotree quantization is no longer a joint optimization problem as in SFQ. Simulation results demonstrate that the coding performance is competitive with, and sometimes superior to, the best zerotree-based coding results reported for SFQ.
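The Lagrangian pruning decision described above can be illustrated with a minimal sketch: each tree node compares the cost J = D + λR of coding its subtree against the cost of zeroing it, bottom-up. This is an assumed, simplified illustration, not the authors' implementation; the `Node` fields, the constant signalling rate for a zerotree flag, and the value of λ are all hypothetical.

```python
# Hedged sketch of bottom-up zerotree pruning by Lagrangian cost J = D + lambda*R.
# All names and costs here are illustrative, not taken from the paper.
from dataclasses import dataclass, field
from typing import List

LAMBDA = 0.1  # rate-distortion trade-off; in practice chosen per target bit rate


@dataclass
class Node:
    d_coded: float   # distortion if this coefficient is quantized and coded
    r_coded: float   # rate (bits) to code the quantized coefficient
    d_zeroed: float  # distortion if the coefficient is set to zero
    children: List["Node"] = field(default_factory=list)
    pruned: bool = False  # True => this node roots a zerotree


def subtree_zero_distortion(node: Node) -> float:
    """Total distortion incurred by zeroing the whole subtree."""
    return node.d_zeroed + sum(subtree_zero_distortion(c) for c in node.children)


def prune(node: Node) -> float:
    """Return the minimal Lagrangian cost of the subtree rooted at `node`,
    marking pruned subtrees along the way.  Zeroing a subtree costs its
    zeroed distortion plus (here, an assumed) one bit to signal the
    zerotree symbol.  If a parent is later pruned, the flags of its
    descendants become irrelevant."""
    r_zerotree_flag = 1.0  # assumed signalling cost in bits
    j_keep = node.d_coded + LAMBDA * node.r_coded
    j_keep += sum(prune(c) for c in node.children)
    j_zero = subtree_zero_distortion(node) + LAMBDA * r_zerotree_flag
    if j_zero <= j_keep:
        node.pruned = True
        return j_zero
    return j_keep
```

Because the decision at each node depends only on costs accumulated within its own subtree (with the spatially varying model supplying the local distortion and rate estimates), the pruning pass is a single bottom-up traversal rather than the joint optimization required in SFQ.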