T. Linder et al., Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding, IEEE Transactions on Information Theory, 40(6), 1994, pp. 1728-1740
Number of citations
30
Subject Categories
Information Science & Library Science; Engineering, Electrical & Electronic
Rate-of-convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R:

(1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mean-square error (MSE) with respect to m training vectors, then its MSE for the true source converges in expectation and almost surely to the minimum possible MSE as O(√(log m / m)).
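As a concrete illustration of the design step in (1), here is a minimal sketch that locally minimizes the empirical MSE over m training vectors using a plain Lloyd (k-means) iteration. Everything beyond the statement above is an assumption: the Uniform[0,1]^k training source, the codebook size 2^(kR), and the initialization. Note also that result (1) concerns the empirically optimal quantizer, whereas Lloyd's algorithm only finds a local optimum.

```python
# Sketch: empirical-MSE quantizer design via Lloyd (k-means) iterations.
# Assumptions (not from the paper): training data drawn from Uniform[0,1]^k,
# codebook size N = 2**(k*R), random codebook initialization.
import numpy as np

def design_quantizer(train, num_codewords, iters=50, seed=0):
    """Locally minimize the empirical MSE of a codebook over the training set."""
    rng = np.random.default_rng(seed)
    m = train.shape[0]
    # Initialize codewords as randomly chosen training vectors.
    codebook = train[rng.choice(m, size=num_codewords, replace=False)]
    for _ in range(iters):
        # Nearest-codeword (minimum-distortion) partition of the training set.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Centroid step: replace each codeword by the mean of its cell.
        for j in range(num_codewords):
            cell = train[labels == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)
    return codebook

def empirical_mse(data, codebook):
    """Per-vector MSE of nearest-codeword quantization."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

if __name__ == "__main__":
    k, R, m = 2, 2.0, 2000              # dimension, rate (bits/sample), training size
    N = int(2 ** (k * R))               # codebook size at rate R
    rng = np.random.default_rng(1)
    train = rng.uniform(0.0, 1.0, size=(m, k))    # bounded-support source (assumed)
    test = rng.uniform(0.0, 1.0, size=(10000, k)) # fresh samples estimate the true-source MSE
    cb = design_quantizer(train, N)
    print("training MSE:", empirical_mse(train, cb))
    print("true-source MSE estimate:", empirical_mse(test, cb))
```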
(2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(√(log k / k)).

(3) There exists a fixed-rate universal lossy source-coding scheme whose per-letter MSE on n real-valued source samples converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)).
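To see how slowly the bounds in (2) and (3) decay, the following sketch simply evaluates the rate expressions √(log k / k) and √(log log n / log n) for growing k and n. Natural logarithms and unit constants are assumptions here; the theorems only give order-of-magnitude bounds.

```python
# Sketch: evaluate the convergence-rate expressions from (2) and (3).
# Natural logs and unit constants are assumed; only the decay shape matters.
import math

for k in [4, 16, 64, 256]:
    print(f"k={k:4d}  sqrt(log k / k) = {math.sqrt(math.log(k) / k):.4f}")

for n in [10**3, 10**6, 10**9, 10**12]:
    rate = math.sqrt(math.log(math.log(n)) / math.log(n))
    print(f"n=1e{round(math.log10(n)):2d}  sqrt(log log n / log n) = {rate:.4f}")
```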
(4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m = ⌊n/k⌋ training vectors. Then the per-letter MSE of this quantizer for the true source converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)), if one chooses k = ⌊(1/R)(1 - ε) log n⌋ for any ε ∈ (0, 1).
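The dimension choice in (4) is easy to make concrete. The sketch below computes k = ⌊(1/R)(1 - ε) log n⌋ and m = ⌊n/k⌋ and blocks the samples accordingly. Taking the logarithm base 2 (so the codebook size 2^(kR) grows roughly like n^(1-ε)) and the uniform sample values are assumptions, since the abstract does not fix them.

```python
# Sketch: the blocking and dimension choice in (4).
# Assumption: log is base 2, matching a rate R in bits per sample, so the
# codebook size 2**(k*R) grows roughly like n**(1 - eps).
import math
import numpy as np

def block_training_set(samples, R, eps):
    """Choose k = floor((1/R)(1-eps) log2 n) and block n samples into m = floor(n/k) vectors."""
    n = len(samples)
    k = math.floor((1.0 / R) * (1.0 - eps) * math.log2(n))
    m = n // k
    train = np.asarray(samples[: m * k]).reshape(m, k)  # drop the leftover tail
    return k, m, train

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.uniform(0.0, 1.0, size=100_000)  # bounded-support source (assumed)
    k, m, train = block_training_set(samples, R=1.0, eps=0.25)
    # With n = 1e5, R = 1, eps = 0.25: k = floor(0.75 * log2(1e5)) = 12, m = 8333.
    print(k, m, train.shape, "codebook size 2**(k*R) =", 2 ** int(k * 1.0))
```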