A Study on AVS-M Video Standard

Sahana Devaraju and K.R. Rao, IEEE Fellow
Electrical Engineering Department, University of Texas at Arlington, Arlington, TX
E-mail: (sahana.devaraju, rao)@uta.edu

Abstract

Audio Video Standard for Mobile (AVS-M) [1][9] is the seventh part of the most recent video coding standard developed by the AVS workgroup of China, aimed at mobile systems and devices with limited processing power and power consumption. This paper provides an insight into the AVS-M video standard: the features it offers, the data formats it supports, the profiles and tools used in the standard, and the architecture of the AVS-M codec. The key techniques are studied, including transform and quantization, intra prediction, quarter-pixel interpolation, motion compensation modes, entropy coding and the in-loop de-blocking filter. Simulation results are evaluated in terms of bitrate and SNR.

1. Introduction

Over the past 20 years, analog communication around the world has largely been supplanted by digital communication. The digital representation of information such as audio and video signals has been transformed in leaps and bounds. With the increase in commercial interest in video communications, the need for international image and video compression standards arose. Many successful audio-video coding standards [18] [19] have been released, advancing a plethora of applications, the largest of which is digital entertainment media. Products spanning a wide range of applications have been developed and enhanced by advances in other technologies such as the internet and digital media storage.

The Moving Picture Experts Group (MPEG) [3] was the first group to define such a format, which quickly became the standard for audio and video compression and transmission. MPEG-2 was released soon after; being broader in scope, it supported interlacing and high-definition video formats. Later, MPEG-4 introduced further coding tools with additional complexity to achieve higher compression factors than MPEG-2. MPEG-4 is very efficient in terms of coding, …

2.1.1. Sequence: The sequence layer consists of a set of mandatory and optional downloaded system parameters. The mandatory parameters are necessary to initialize decoder systems; the optional parameters are used for other system settings at the discretion of the network provider. User data may optionally be contained in the sequence header. The sequence layer provides an entry point into the coded video. Sequence headers should be placed in the bitstream to support user access appropriately for the given distribution medium, and repeated sequence headers may be inserted to support random access. Sequences are terminated with a sequence end code.

2.1.2. Picture: The picture layer provides the coded representation of a video frame [2] [4] [5]. It comprises a header with mandatory and optional parameters and, optionally, user data. Three types of pictures are defined by AVS.

2.1.3. Slice: The slice structure provides the lowest-layer mechanism for resynchronizing the bitstream in case of transmission errors. A slice comprises a series of macroblocks (MBs). Slices must not overlap, must be contiguous, and must begin and terminate at the left and right edges of the picture, respectively. A single slice may cover the entire picture. The slice structure is optional; slices are independently coded, and no slice can refer to another slice during the decoding process.

2.1.4. Macroblock: A picture is divided into MBs. A macroblock includes the luminance and chrominance component pixels that collectively represent a 16x16 region of the picture. In 4:2:0 mode, the chrominance pixels are subsampled by a factor of two in each dimension, so each chrominance component contains only one 8x8 block. In 4:2:2 mode, the chrominance pixels are subsampled by a factor of two in the horizontal dimension, so each chrominance component contains two 8x8 blocks [2] [4] [5]. The MB header contains information about the coding mode and the motion vectors, and may optionally contain the quantization parameter (QP). Macroblock partitioning and sub-macroblock partitioning [2] are shown in Figures 3 and 4. The partitioning is used for motion compensation; the number in each rectangle specifies the order of appearance of motion vectors and reference indices in the bitstream.

2.1.5. Block: The block is the smallest coded unit and contains the transform coefficient data for the prediction errors. For intra-coded blocks, intra prediction is performed from neighboring blocks.

4. AVS-M codec

5.2. Quantization

5.3. Intra prediction

5.3.1. Intra_4x4: In Intra_4x4 mode, each 4x4 block is predicted from spatially neighboring samples as shown in Figure 7. The 16 samples of the 4x4 block, labeled a-p, are predicted using previously decoded samples in the adjacent blocks, labeled A-D, E-H and X [11]. The up-right prediction pixels are extended from sample D; similarly, the down-left prediction pixels are extended from sample H. For each 4x4 block, one of the nine prediction modes shown in Figure 8 can be used to exploit spatial correlation: eight directional prediction modes [10] (such as Down Left, Vertical, etc.) and one non-directional prediction mode (DC). A small sketch of two of these modes follows.
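As an illustration only (the exact sample-availability and rounding rules are defined in the AVS-M specification, which this preview does not reproduce), the sketch below forms the Vertical and DC predictions for one 4x4 block from the neighboring samples A-D (above) and E-H (left). The function names and the rounding of the DC mean are assumptions for this sketch, not the reference software.

```python
import numpy as np

def intra4x4_vertical(above):
    """Vertical mode: each column repeats the reconstructed sample above it (A-D)."""
    above = np.asarray(above, dtype=np.int32)   # samples A, B, C, D
    return np.tile(above, (4, 1))               # 4x4 prediction block

def intra4x4_dc(above, left):
    """DC mode: every sample is the rounded mean of the available neighbors."""
    neighbors = np.concatenate([np.asarray(above), np.asarray(left)]).astype(np.int32)
    dc = (int(neighbors.sum()) + len(neighbors) // 2) // len(neighbors)
    return np.full((4, 4), dc, dtype=np.int32)

if __name__ == "__main__":
    above = [100, 102, 104, 106]   # A-D: samples from the block above
    left  = [98, 97, 96, 95]       # E-H: samples from the block to the left
    print(intra4x4_vertical(above))
    print(intra4x4_dc(above, left))
```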
5.3.2. Direct intra prediction: When direct intra prediction is used, a new method is applied to code the intra prediction mode information. With Intra_4x4, at least 1 bit is needed to represent the mode information for each block, so for a macroblock, even when the intra prediction modes of all 16 blocks equal their most probable mode (MPM), 16 bits are needed to signal the mode information. Since AVS-P7 is used for mobile applications with limited bandwidth, the QP is usually high, and the percentage of blocks whose best mode equals the most probable mode is therefore high [7]. Many MBs thus spend 16 bits on mode signaling even though every block in the MB is coded using its most probable mode. In direct intra prediction mode, a 1-bit flag indicates whether or not all of the blocks in the macroblock are coded using their most probable modes. The sketch below illustrates this signaling trade-off.
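As a back-of-the-envelope sketch (not the AVS-P7 entropy coding itself), the code below counts mode-signaling bits for one macroblock of 16 Intra_4x4 blocks. The cost assumed for a non-MPM mode (1 flag bit plus 3 bits to choose among the remaining eight modes) and the fallback behavior when the MB-level flag is not set are assumptions made for illustration; the preview does not specify them.

```python
# Illustrative bit counting for intra mode signaling in one macroblock
# (16 Intra_4x4 blocks). Assumed costs (not taken from the AVS-M spec):
# a block whose best mode equals its most probable mode (MPM) costs 1 flag
# bit; any other mode costs 1 flag bit plus 3 bits to select among the
# remaining eight modes.

def bits_per_block(best_mode: int, mpm: int) -> int:
    return 1 if best_mode == mpm else 1 + 3

def mb_bits_intra4x4(best_modes, mpms):
    """Per-block signaling: one mode decision per 4x4 block (16 total)."""
    return sum(bits_per_block(b, m) for b, m in zip(best_modes, mpms))

def mb_bits_direct_intra(best_modes, mpms):
    """Direct intra prediction: a 1-bit MB-level flag. If every block uses
    its MPM, no further mode bits are sent; otherwise (an assumption, the
    preview does not specify the fallback) per-block signaling follows."""
    all_mpm = all(b == m for b, m in zip(best_modes, mpms))
    return 1 if all_mpm else 1 + mb_bits_intra4x4(best_modes, mpms)

if __name__ == "__main__":
    mpms = [2] * 16                          # most probable mode of each block
    best = [2] * 16                          # every block happens to pick its MPM
    print(mb_bits_intra4x4(best, mpms))      # 16 bits with per-block signaling
    print(mb_bits_direct_intra(best, mpms))  # 1 bit with the MB-level flag
```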
Figure 10. (a) Original foreman sequence, (b) Decoded foreman sequence, (c) Original news sequence, (d) Decoded news sequence, (e) Original mobile sequence, (f) Decoded mobile sequence, (g) Original tempete sequence, (h) Decoded tempete sequence (CIF; images not reproduced in this preview).

References

[11] http://zhan.ma.googlepages.com/INTRA_CODING_AVS.PDF
