On Modelling the Hybrid Video Coding for Analysis and Future Development


The Hong Kong Polytechnic University, Department of Electronic and Information Engineering. Prof. W.C. Siu, December 2007. ICICS 2007: 6th International Conference on Information, Communications and Signal Processing, 10-13 December 2007, Singapore.


  1. On Modelling the Hybrid Video Coding for Analysis and Future Development (Invited Paper)
     Invited Speaker: Prof. Wan-Chi Siu
     Wan-Chi Siu and Ko-Cheung Hui, Centre for Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University

     Outline
     1. Introduction
     2. Former Models of the Autocorrelation of Block-Based Motion Prediction Errors
     3. The Proposed Model
     4. Experimental Results
     5. Conclusion and Further Development

  2. 1. Introduction
     Most of the work on the design and optimization of hybrid video codecs is carried out experimentally. A proper theoretical treatment of the motion-compensated video coding system is therefore always desirable: it is extremely useful for analysing the codecs available nowadays and for designing new, efficient codecs for future applications. Such a treatment, however, requires many assumptions and simplifications, and a simple Markov model with the assumption of wide-sense stationary signals may not work well.

     Recall that hybrid video coding [1-2] is the most popular approach to video coding and has been adopted by most recent video coding standards. It makes use of efficient motion estimation algorithms [4-7] to form block-based motion-compensated frame difference (MCFD) signals, which are subsequently coded by the discrete cosine transform (DCT) or the integer cosine transform (ICT) [8].

  3. Hybrid Video Coding
     [Block diagram of the hybrid video codec: the incoming source frame is predicted by motion compensation from the frame memory; the prediction error is transformed by the 2D-DCT, quantized and entropy coded (VLC) into the compressed video bit stream, with a buffer and regulator controlling the rate; the quantized data are dequantized and inverse 2D-DCT transformed, then added to the predicted frame to update the frame memory; motion estimation supplies the motion vectors to the motion compensation unit and to the bit stream.]
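To make the diagram concrete, the following is a minimal NumPy sketch (not the authors' implementation) of one step of the hybrid loop for a single 8x8 block: full-search motion estimation with a SAD criterion, motion compensation, a 2D-DCT of the residual, uniform quantization and the decoder-side reconstruction. The search range, quantization step and function names are illustrative assumptions.

```python
import numpy as np

N = 8  # block size used throughout the slides

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def motion_estimate(cur_blk, ref, top, left, search=4):
    """Full-search block matching with the SAD criterion over a +/- search window."""
    best_sad, best_uv = np.inf, (0, 0)
    for u in range(-search, search + 1):
        for v in range(-search, search + 1):
            r, c = top + u, left + v
            if r < 0 or c < 0 or r + N > ref.shape[0] or c + N > ref.shape[1]:
                continue
            sad = np.abs(cur_blk - ref[r:r + N, c:c + N]).sum()
            if sad < best_sad:
                best_sad, best_uv = sad, (u, v)
    return best_uv

def code_block(cur, ref, top, left, qstep=16.0):
    """One pass of the hybrid loop for one block: ME -> MCFD -> 2D-DCT -> quantize -> reconstruct."""
    blk = cur[top:top + N, left:left + N].astype(np.float64)
    u, v = motion_estimate(blk, ref, top, left)
    pred = ref[top + u:top + u + N, left + v:left + v + N].astype(np.float64)
    mcfd = blk - pred                        # motion-compensated frame difference
    C = dct_matrix()
    coeffs = C @ mcfd @ C.T                  # 2D-DCT of the residual
    q = np.round(coeffs / qstep)             # uniform quantization (sent to the VLC coder)
    recon = pred + C.T @ (q * qstep) @ C     # dequantize + inverse 2D-DCT + add prediction
    return (u, v), q, recon
```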

  4. A proper theoretical treatment of motion-compensated video coding is valuable for the design and analysis of state-of-the-art video codecs, even though most research work has been carried out experimentally.
     The CP model: In 1987, the first comprehensive rate-distortion analysis of motion-compensated prediction (MCP) was presented [9]. After this initial analysis, a number of researchers investigated the subject in depth and developed many different techniques for efficiency improvement [10-21].

     2. Former Models of the Autocorrelation of Block-Based Motion Prediction Errors
     Let us first define the covariance estimates of an N x N square matrix S as follows:

        S = [ s_{v,0}, s_{v,1}, \ldots, s_{v,N-1} ] = [ s_{h,0}, s_{h,1}, \ldots, s_{h,N-1} ]^T

     where the rows s_{h,n} and the columns s_{v,n} of S are realizations of the vectors S_h and S_v, respectively, and usually N = 8, the block size. The covariance matrix in the horizontal direction is defined as C_h = E( S_h S_h^T ), and it can be estimated from the N rows of S as

        \hat{C}_h = \frac{1}{N} \sum_{n=0}^{N-1} s_{h,n} s_{h,n}^T = \frac{1}{N} S^T S
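As a sketch of how this estimate can be computed in practice, the NumPy helpers below average the row (and column) outer products of an N x N block; the function names are ours, and no mean removal is applied, matching the definition above.

```python
import numpy as np

def horizontal_covariance(block):
    """Estimate C_h by averaging the outer products of the N rows of an N x N block.

    Each row is treated as one realization s_{h,n} of the horizontal vector S_h.
    """
    S = np.asarray(block, dtype=np.float64)
    N = S.shape[0]
    return (S.T @ S) / N            # = (1/N) * sum_n s_{h,n} s_{h,n}^T

def vertical_covariance(block):
    """Same estimate in the vertical direction, using the N columns of the block."""
    S = np.asarray(block, dtype=np.float64)
    N = S.shape[1]
    return (S @ S.T) / N
```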

  5. Chen and Pang [16,19] proposed a theoretical model (the CP model) to represent the autocorrelation function of the residual errors of a motion-compensated frame. The residual errors were regarded as random variables in both the horizontal and vertical directions. (i) The probability density function (pdf) was assumed to be uniformly distributed over an interval, and (ii) an impulse at the origin was included; this impulse represents the finite probability that the motion vectors have zero absolute error. The compound covariance sequence of the prediction errors, C(I), was defined as

        C(I) = A \rho^{|I|} + (1 - A)\,\delta(I) = C_1(I) + C_2(I)        (1)

     where I is the pixel separation in the x-dimension and δ(I) is the Kronecker delta function, with δ(0) = 1 and δ(I) = 0 for I ≠ 0; A = 0.5 and ρ = 0.95 are used for the motion-compensated frame difference.

     This model assumes that the prediction errors of a block are the sum of two uncorrelated zero-mean WSS processes, C_I = C_{I1} + C_{I2}, or in matrix form

        C_I = A \begin{bmatrix}
                  1          & \rho       & \rho^2     & \cdots & \rho^{N-1} \\
                  \rho       & 1          & \rho       & \cdots & \rho^{N-2} \\
                  \vdots     &            &            & \ddots & \vdots     \\
                  \rho^{N-1} & \rho^{N-2} & \rho^{N-3} & \cdots & 1
                \end{bmatrix}
              + (1 - A) \begin{bmatrix}
                  1      & 0      & \cdots & 0 \\
                  0      & 1      & \cdots & 0 \\
                  \vdots & \vdots & \ddots & \vdots \\
                  0      & 0      & \cdots & 1
                \end{bmatrix}

     The first component, C_1(I), in eqn. (1) represents the autocorrelation of a first-order autoregressive process, AR(1), with ρ = 0.95. The second component represents white noise with a flat power spectrum [16]. However, this model deviates significantly from experimental results.
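A short sketch (helper name is ours) that builds the CP compound covariance matrix of eqn (1) from its two components, using the stated values A = 0.5 and ρ = 0.95:

```python
import numpy as np

def cp_covariance(N=8, A=0.5, rho=0.95):
    """CP-model compound covariance of eqn (1): C(I) = A*rho**|I| + (1 - A)*delta(I)."""
    I = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])   # pixel separations |I|
    ar1 = rho ** I          # C_1: autocorrelation of an AR(1) process
    white = np.eye(N)       # C_2: white noise with a flat power spectrum
    return A * ar1 + (1.0 - A) * white
```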

  6. In [17], Niehsen and Brünig confirmed that the statistical means and standard deviations of the errors may change significantly from block to block. Hence they proposed another compound covariance model (the NB model) empirically, which takes overlapped block motion estimation into account. The compound covariance of the prediction error, C_e(I), was defined with two correlation terms (a first-order term and a second-order term):

        C_e(I) = c\,\rho_0^{|I|} + (1 - c)\,\rho_1^{I^2}        (2)

     where c, ρ_0 and ρ_1 are model parameters. The parameters c = 0.17, ρ_0 = 0.91 and ρ_1 = 0.38 were chosen to fit their empirical covariance in the l_1-norm sense. According to their experimental results, the model closely fits the characteristics of practical signals. Its major disadvantage, however, is that it lacks a theoretical basis, and thus its use for other analytical purposes is limited.

     3. The Proposed Model
     For the sake of simplicity, our model is also based on the first-order autoregressive model, AR(1) [22], with the image correlation coefficient equal to ρ. Let us consider a block of pixels f_t(i,j) in a frame at time t. Block-based motion compensation uses a matched block f_{t-1}(i+u, j+v) in a reference frame at time t-1 for prediction. The motion prediction error is then given by

        e(i, j) = f_t(i, j) - f_{t-1}(i + u, j + v)        (3)

     where (u, v) is the motion vector of the block.
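For comparison, here is a small sketch (helper names are ours) of the NB covariance of eqn (2) with the reported parameters, together with the block prediction error of eqn (3):

```python
import numpy as np

def nb_covariance(I, c=0.17, rho0=0.91, rho1=0.38):
    """NB-model compound covariance of eqn (2): C_e(I) = c*rho0**|I| + (1 - c)*rho1**(I**2)."""
    I = np.abs(np.asarray(I))
    return c * rho0 ** I + (1.0 - c) * rho1 ** (I ** 2)

def prediction_error(cur, ref, top, left, u, v, N=8):
    """Block motion prediction error of eqn (3): e(i,j) = f_t(i,j) - f_{t-1}(i+u, j+v)."""
    blk = cur[top:top + N, left:left + N].astype(np.float64)
    pred = ref[top + u:top + u + N, left + v:left + v + N].astype(np.float64)
    return blk - pred
```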
