Fast Source-based Dithering for Networked Digital Video

Tuong Q. Nguyen*, Jonathan Kay, Joseph Pasquale
Computer Systems Laboratory
Department of Computer Science and Engineering
University of California, San Diego
San Diego, CA 92093-0114
{tnguyen, jkay, pasquale}@cs.ucsd.edu

ABSTRACT

Source-based dithering is a set of techniques designed to maximize the performance of real-time networked digital video systems that encode and decode video entirely in software. Usually frame grabber hardware presents frames in a 24 bit per pixel (bpp) format. However, most hosts are only equipped with single or eight bit deep displays, and thus the color depth of the video must be reduced at some point. If the encoder reduces the color depth, the bandwidth required to carry the video on the network is lowered by a factor of 24 or 3, respectively, and the computational load on the receiving hosts is lightened. The color depth reduction algorithm must be efficient, since the resulting frame rate, and thus the degree to which the illusion of motion is preserved, depends on how quickly a pixel can be processed. We use dithering algorithms chosen for efficiency and a contrast enhancement algorithm to improve image quality.

1.0 INTRODUCTION

Most modern real-time networked digital video systems are either completely unable to keep up with real-time video or require expensive special-purpose hardware in each video participant; thus, performance is very important to such systems.
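The 24x and 3x bandwidth factors claimed above follow directly from the per-pixel depths. A quick arithmetic sketch, assuming an illustrative 320x240 frame (the paper does not fix a frame size):

```python
# Per-frame sizes for an assumed 320x240 frame; the frame dimensions are
# purely illustrative, chosen only to make the 3x and 24x factors concrete.
WIDTH, HEIGHT = 320, 240
PIXELS = WIDTH * HEIGHT

bytes_24bpp = PIXELS * 24 // 8   # true color source frame
bytes_8bpp = PIXELS * 8 // 8     # color-mapped frame after color dithering
bytes_1bpp = PIXELS * 1 // 8     # monochrome bitmap after monochrome dithering

assert bytes_24bpp // bytes_8bpp == 3    # dither to 8 bpp: 3x less bandwidth
assert bytes_24bpp // bytes_1bpp == 24   # dither to 1 bpp: 24x less bandwidth
```

The factors are independent of the frame size, since every pixel shrinks by the same ratio.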
We discuss a series of techniques, collectively called source-based dithering, designed to improve the performance of, and minimize the loss of video quality in, real-time networked digital video systems that encode and decode video in software.

Usually frame grabber hardware presents individual images to a CPU in a relatively "deep" format such as "true color." The true color format typically requires at least 24 bits to represent each pixel: 8 bits for each of the red, green, and blue components. True color display hardware is expensive, so most hosts are only equipped to display images in an eight bit deep color format or a single bit deep monochrome format. The depth of a 24 bpp image must be reduced to either 8 bpp or 1 bpp for it to be displayed on such machines.

The pixel depth could in theory be reduced at either the source or destination hosts. Depth reduction at the source carries the advantages that the bandwidth required to carry the video on the network is reduced and the processing burden on receiving hosts is reduced. The source host is somewhat compensated for the work of dithering by a reduction in the volume of network output processing. Depending on whether the video is reduced to 8 bpp or to 1 bpp, this technique reduces bandwidth by a factor of either 3 or 24, respectively.

It is important that the color depth reduction algorithm be efficient, because the resulting frame rate, and thus the degree to which the illusion of motion is preserved, depends on how quickly a pixel can be processed. Thus, we use dithering algorithms chosen for efficiency. Dithering is a technique for reducing color depth by placing a combination of pixels with different colors within a small neighborhood so that, from a distance, the combination looks like the original color. We use a contrast enhancement algorithm to enhance the resultant picture.

This paper is organized as follows. Section 2 discusses related work.
Section 3 presents the dithering and contrast enhancement algorithms. Section 4 describes the performance of our system, including both the cost and quality of different dither methods. Section 5 concludes the paper.

2.0 RELATED WORK

A variety of techniques have been developed to reduce color depth, the most notable being quantization and dithering algorithms. Foley [5], Jarvis [8], Netravali [12], Stoffel [15], and Ulichney [17, 18] survey dithering techniques for monochrome displays. Heckbert [11] discusses some well-known quantization techniques, such as the popularity and median cut algorithms, for color displays.

The most common way of reducing bandwidth requirements for transmitting digital video is to use compression. The standard formats include JPEG [19], MPEG [7], and H.261 or px64 [10]. These standards are now used in a number of experimental research projects, including J-Video [2], Multimedia Multiparty Teleconference [3], the MPEG Software Decoder [14], and the INRIA Videoconferencing System [16]. However, non-standard formats for special applications are still being developed, such as that used in NV [6].

Many existing networked digital video applications require the receiver to do a significant amount of work to decode the received video data. Receiving JPEG or MPEG compressed video requires special-purpose hardware or significant processing power for decompression. Even NV leaves it to the receiver to reduce the color depth if necessary.

Our primary focus is on supporting digital video on inexpensive computers with limited processing and display capabilities. Source-based dithering reduces color depth at the source, greatly reducing the work required at the receiver and lowering the required network bandwidth. This makes video available to many low-end systems that would otherwise be unable to keep up with reasonable-quality digital video.

3.0 DITHERING AND CONTRAST ENHANCEMENT ALGORITHMS

3.1 System Overview

Processing video from an analog source, e.g.
a camera, to display in some reduced format on a workstation requires a number of stages (see Figure 1).

In our system, video is captured from an analog source such as a camera or a VCR by a RasterOps TX/PIP frame grabber. The frame grabber scales the video image to a desired size and digitizes it into a 24-bpp format with 8 bits for each of the red, green, and blue components.

[Figure 1: Block diagram of processing stages for encoding and decoding video in our system. Labels in the figure: original image; encoder at source (Capture & Scale, Preprocess, Monochrome Dither or Color Dither); decoder at receiver (Display); final image.]

For monochrome dithering, there is a preprocessing stage where the gray-scale intensity value is computed from the RGB components. These gray-scale intensity values are then dithered to produce a monochrome bit-map image.

For color dithering, the digitized image is reduced from the 24-bpp RGB format down to 8 bpp. Each resulting 8 bpp encoding actually represents an index into a color map. Using dithering
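The per-pixel encoder stages described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the integer BT.601 luma weights, the 4x4 ordered-dither (Bayer) matrix, and the 3-3-2 color map are all assumptions, since the preview does not give the paper's exact formulas or dither kernels.

```python
# Sketch of the per-pixel encoder stages. The luma weights, the Bayer
# matrix, and the 3-3-2 color map are illustrative assumptions.

# 4x4 ordered-dither thresholds, scaled to the 0..255 intensity range.
BAYER4 = [
    [0, 128, 32, 160],
    [192, 64, 224, 96],
    [48, 176, 16, 144],
    [240, 112, 208, 80],
]

def gray_intensity(r, g, b):
    """Preprocessing for monochrome dithering: the gray-scale intensity
    of a 24-bpp pixel, using integer BT.601 luma weights (an assumption;
    the paper does not state its formula)."""
    return (299 * r + 587 * g + 114 * b) // 1000

def mono_dither(gray, x, y):
    """Dither an 8-bit gray value at image position (x, y) to one bit.
    A flat mid-gray region becomes a fixed mix of on and off pixels that
    reads as gray from a distance."""
    return 1 if gray > BAYER4[y & 3][x & 3] else 0

def color_index_332(r, g, b):
    """Color-dithering target: reduce a 24-bpp pixel to an 8-bpp index
    into a 3-3-2 RGB color map (3 bits red, 3 green, 2 blue), keeping
    only the top bits of each component."""
    return (r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6)
```

For example, a flat mid-gray frame (R = G = B = 128) has intensity 128 and dithers to a half-on bit pattern within each 4x4 tile, while a white pixel maps to color-map index 255.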

