RESIDUAL VECTOR QUANTIZATION USING A MULTILAYER COMPETITIVE NEURAL NETWORK

Citation
S.A. Rizvi and N.M. Nasrabadi, "Residual Vector Quantization Using a Multilayer Competitive Neural Network," IEEE Journal on Selected Areas in Communications, 12(9), 1994, pp. 1452-1459
Citation count
17
Subject Categories
Telecommunications,"Engineering, Eletrical & Electronic
Journal ISSN
0733-8716
Volume
12
Issue
9
Year of publication
1994
Pages
1452 - 1459
Database
ISI
SICI code
0733-8716(1994)12:9<1452:RVQUAM>2.0.ZU;2-R
Abstract
This paper presents a new technique for designing a jointly optimized residual vector quantizer (RVQ). In the conventional stage-by-stage design procedure, each stage codebook is optimized for that particular stage's distortion and does not consider the distortion from the subsequent stages. However, the overall performance can be improved if each stage codebook is optimized by minimizing the distortion from the subsequent stage quantizers as well as the distortion from the previous stage quantizers. This can only be achieved when the stage codebooks are jointly designed for each other. In this paper, the proposed codebook design procedure is based on a multilayer competitive neural network, where each layer of this network represents one stage of the RVQ. The weights connecting these layers form the corresponding stage codebooks of the RVQ. The joint design problem of the RVQ's codebooks (the weights of the multilayer competitive neural network) is formulated as a nonlinearly constrained optimization task based on a Lagrangian error function. This Lagrangian error function includes all the constraints imposed by the joint optimization of the codebooks. The proposed procedure seeks a locally optimal solution by iteratively solving the equations for this Lagrangian error function. Simulation results show an improvement in the performance of an RVQ designed using the proposed joint optimization technique as compared to the stage-by-stage design, where both the generalized Lloyd algorithm (GLA) and the Kohonen learning algorithm (KLA) were used to design each stage codebook independently, as well as the conventional joint-optimization technique.
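
For a concrete picture of the stage-by-stage baseline that the abstract compares against, the sketch below implements conventional residual vector quantization in Python: each stage codebook is trained only on the residuals left by the previous stages (plain k-means stands in for the generalized Lloyd algorithm), and the reconstruction is the sum of the selected codewords across stages. This is not the paper's joint-optimization procedure based on a multilayer competitive neural network and a Lagrangian error function; the function names and parameters (design_rvq_stagewise, num_stages, codebook_size) are illustrative assumptions, not taken from the paper.

import numpy as np

def design_rvq_stagewise(training_vectors, num_stages=3, codebook_size=16,
                         iters=20, seed=0):
    # Design each stage codebook independently on the residuals of the
    # previous stages (k-means standing in for the GLA).
    rng = np.random.default_rng(seed)
    residual = np.asarray(training_vectors, dtype=float)
    codebooks = []
    for _ in range(num_stages):
        # Initialize codewords from randomly chosen training residuals.
        codebook = residual[rng.choice(len(residual), codebook_size, replace=False)]
        for _ in range(iters):
            # Nearest-codeword assignment minimizes this stage's distortion only.
            dists = np.linalg.norm(residual[:, None, :] - codebook[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Centroid update; empty cells keep their previous codeword.
            for k in range(codebook_size):
                members = residual[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        codebooks.append(codebook)
        # Quantize with the finished codebook and pass the error to the next stage.
        dists = np.linalg.norm(residual[:, None, :] - codebook[None, :, :], axis=2)
        residual = residual - codebook[dists.argmin(axis=1)]
    return codebooks

def rvq_encode(x, codebooks):
    # One codeword index per stage; each stage quantizes the remaining residual.
    residual = np.asarray(x, dtype=float)
    indices = []
    for codebook in codebooks:
        k = int(np.linalg.norm(codebook - residual, axis=1).argmin())
        indices.append(k)
        residual = residual - codebook[k]
    return indices

def rvq_decode(indices, codebooks):
    # The reconstruction is the sum of the selected codewords over all stages.
    return sum(cb[k] for k, cb in zip(indices, codebooks))

if __name__ == "__main__":
    # Toy data: 2000 random 16-dimensional vectors (e.g., flattened 4x4 blocks).
    data = np.random.default_rng(1).normal(size=(2000, 16))
    books = design_rvq_stagewise(data)
    idx = rvq_encode(data[0], books)
    err = float(np.mean((rvq_decode(idx, books) - data[0]) ** 2))
    print("stage indices:", idx, " reconstruction MSE:", err)

Because each codebook above is fit without regard to the quantizers that follow it, the per-stage optimum is not a joint optimum; the paper's contribution is to couple the stages through a Lagrangian error function so that all stage codebooks are updated with respect to the overall reconstruction error.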