PREDICTIVE RESIDUAL VECTOR QUANTIZATION

Citation
S.A. Rizvi and N.M. Nasrabadi, PREDICTIVE RESIDUAL VECTOR QUANTIZATION, IEEE Transactions on Image Processing, 4(11), 1995, pp. 1482-1495
Number of citations
28
Subject Categories
Engineering, Electrical & Electronic
ISSN journal
1057-7149
Volume
4
Issue
11
Year of publication
1995
Pages
1482 - 1495
Database
ISI
SICI code
1057-7149(1995)4:11<1482:PRVQ>2.0.ZU;2-W
Abstract
This paper presents a new vector quantization technique called predictive residual vector quantization (PRVQ). It combines the concepts of predictive vector quantization (PVQ) and residual vector quantization (RVQ) to implement a high-performance VQ scheme with low search complexity. The proposed PRVQ consists of a vector predictor, designed by a multilayer perceptron, and an RVQ that is designed by a multilayer competitive neural network. A major task in our proposed PRVQ design is the joint optimization of the vector predictor and the RVQ codebooks. In order to achieve this, a new design based on the neural network learning algorithm is introduced. This technique is basically a nonlinear constrained optimization where each constituent component of the PRVQ scheme is optimized by minimizing an appropriate stage error function with a constraint on the overall error. This technique makes use of a Lagrangian formulation and iteratively solves a Lagrangian error function to obtain a locally optimal solution. This approach is then compared to a jointly designed and a closed-loop design approach. In the jointly designed approach, the predictor and quantizers are jointly optimized by minimizing only the overall error. In the closed-loop design, however, a predictor is first implemented; then the stage quantizers are optimized for this predictor in a stage-by-stage fashion. Simulation results show that the proposed PRVQ scheme outperforms the equivalent RVQ (operating at the same bit rate) and the unconstrained VQ by 2 and 1.7 dB, respectively. Furthermore, the proposed PRVQ outperforms the PVQ in the rate-distortion sense with significantly lower codebook search complexity.
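To make the encode/decode flow concrete, the following is a minimal NumPy sketch of the PRVQ idea described above: predict each block from the previous reconstruction, quantize the prediction residual with a multi-stage RVQ, and reconstruct. The scaling predictor and the random codebooks here are illustrative stand-ins only; the paper uses a multilayer-perceptron predictor and jointly optimized codebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvq_encode(residual, codebooks):
    """Multi-stage residual VQ: each stage quantizes what the previous
    stages left over; the per-stage indices are the transmitted code."""
    indices, approx = [], np.zeros_like(residual)
    r = residual.copy()
    for cb in codebooks:
        # nearest codeword in this stage's small codebook
        i = int(np.argmin(((cb - r) ** 2).sum(axis=1)))
        indices.append(i)
        approx += cb[i]
        r = r - cb[i]
    return indices, approx

def prvq_encode_decode(blocks, codebooks, a=0.9):
    """PRVQ loop: predict each block from the previous *reconstruction*
    (a trivial scaling predictor stands in for the paper's MLP),
    quantize the prediction residual with the RVQ, then reconstruct.
    Because the predictor runs on reconstructed data, the decoder can
    form the identical prediction from the indices alone."""
    prev = np.zeros(blocks.shape[1])
    recon = []
    for x in blocks:
        pred = a * prev                      # placeholder linear predictor
        _, q = rvq_encode(x - pred, codebooks)
        prev = pred + q                      # decoder-side reconstruction
        recon.append(prev)
    return np.array(recon)

# Toy data: 16-dimensional blocks, two 8-codeword RVQ stages
# (sizes chosen for illustration, not taken from the paper).
blocks = rng.normal(size=(32, 16))
codebooks = [rng.normal(scale=s, size=(8, 16)) for s in (1.0, 0.5)]
recon = prvq_encode_decode(blocks, codebooks)
mse = float(((blocks - recon) ** 2).mean())
```

The low search complexity claimed for PRVQ comes from the RVQ structure visible here: the encoder searches a few small stage codebooks in sequence rather than one exponentially larger unconstrained codebook at the same bit rate.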