Adaptive Encoding of a Videoconference Image Sequence Via Neural Networks

Document Type

Article

Publication Date

9-1-1992

Abstract

A new method for encoding a videoconference image sequence, termed adaptive neural net vector quantisation (ANNVQ), has been derived. It is based on Kohonen's self-organised feature maps, a neural network clustering algorithm. The new method differs in that, after the initial codebook has been trained, a modified form of adaptation resumes in order to respond to scene changes and motion. Its main advantages are high image quality at modest bit rates, effective adaptation to motion and scene changes, and the ability to adjust the instantaneous bit rate quickly so as to hold image quality constant. This makes it a good match for packet-switched networks, where a variable bit rate and uniform image quality are highly desirable. Simulation experiments have been carried out with 4 × 4 blocks of pixels from an image sequence consisting of 20 frames of 112 × 96 pixels each. With a codebook of size 512, ANNVQ yields high quality in the reconstructed images, with a peak signal-to-noise ratio (PSNR) of about 36 to 37 dB at coding rates of about 0.50 bit/pixel. This compares quite favourably with classical vector quantisation at a similar bit rate. Moreover, this PSNR remains approximately constant even when encoding image frames with considerable motion.
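The abstract outlines the method at a high level: a codebook is first trained with Kohonen's self-organising map, each 4 × 4 block is then encoded as the index of its nearest codeword, and a modified adaptation continues afterwards so the codebook tracks motion and scene changes. For orientation, fixed-length indexing of a 512-codeword codebook costs log2(512) = 9 bits per 16-pixel block, i.e. about 0.56 bit/pixel, in line with the reported rate of roughly 0.50 bit/pixel. The sketch below is a minimal NumPy illustration of that outline only, not the authors' ANNVQ algorithm: the 1-D map topology, the decay schedules, the adaptation step size, and all function names are assumptions chosen for brevity.

    import numpy as np

    def train_som_codebook(blocks, codebook_size=512, epochs=5, lr0=0.5, seed=0):
        # Kohonen SOM training of a VQ codebook on flattened 4 x 4 blocks
        # (shape (N, 16)); requires N >= codebook_size.
        rng = np.random.default_rng(seed)
        codebook = blocks[rng.choice(len(blocks), codebook_size, replace=False)].astype(float)
        total, step = epochs * len(blocks), 0
        for _ in range(epochs):
            for x in blocks[rng.permutation(len(blocks))].astype(float):
                frac = step / total
                lr = lr0 * (1.0 - frac)                               # decaying learning rate
                radius = max(1.0, codebook_size / 8.0 * (1.0 - frac)) # shrinking neighbourhood
                bmu = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))  # best-matching unit
                d = np.arange(codebook_size) - bmu                    # distance on the 1-D map
                h = np.exp(-(d ** 2) / (2.0 * radius ** 2))           # Gaussian neighbourhood
                codebook += lr * h[:, None] * (x - codebook)
                step += 1
        return codebook

    def encode_frame(blocks, codebook):
        # Each 16-dimensional block maps to the index of its nearest codeword
        # (9 bits per block for a 512-codeword codebook).
        d2 = ((blocks[:, None, :].astype(float) - codebook[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1)

    def adapt_codebook(blocks, codebook, lr=0.05):
        # Illustrative stand-in for the resumed adaptation: nudge only the
        # best-matching codeword toward each incoming block so the codebook
        # follows motion and scene changes; lr = 0.05 is an assumed value.
        for x in blocks.astype(float):
            bmu = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
            codebook[bmu] += lr * (x - codebook[bmu])
        return codebook

    # Synthetic usage: 20 frames of 96 x 112 pixels cut into 4 x 4 blocks.
    rng = np.random.default_rng(1)
    frames = rng.integers(0, 256, size=(20, 96, 112))
    blocks = frames.reshape(20, 24, 4, 28, 4).transpose(0, 1, 3, 2, 4).reshape(-1, 16)
    codebook = train_som_codebook(blocks[:1344])         # train on the first two frames
    indices = encode_frame(blocks[1344:2016], codebook)  # encode the third frame
    codebook = adapt_codebook(blocks[1344:2016], codebook)

In a scheme of this kind only the indices are transmitted; the decoder holds a copy of the codebook and would have to mirror the same adaptation to stay in step with the encoder.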
