Parallel Image Processing
Western Michigan University
CS 6260 (Parallel Computations II)
Course Professor: Elise de Doncker
Presentation (2) Report
Student's Name: Abdullah Algarni
March 24, 2009

Presentation's Topic: Parallel Image Processing

Presentation's abstract and goal:

Image processing is gaining importance in a variety of application areas. Active vision, e.g. for autonomous vehicles, requires substantial computational power in order to operate in real time. Here, vision allows the development of more flexible and intelligent systems than any other sensor system. In addition, there is also a need to speed up non-critical image processing routines, e.g. in evaluating medical or satellite image data. The ideal concept of having one processor (ALU) per image pixel allows a very simple and natural definition of image operations. In my presentation, I will give an extensive overview of typical basic image processing operations, demonstrating how they can be processed in parallel. I will explain the most important image processing algorithms, such as edge detection, histograms, stereo vision, sharpening, motion detection, gray scale, smoothing filters, noise reduction, segmentation, and compression. My presentation will also go through some topics related to parallel image processing (PIP), such as applications of PIP, PIP techniques, PIP hardware architectures, and streaming with PIP.

I. Introduction

The tremendous amount of data required for image processing and computer vision applications presents a significant problem for conventional microprocessors. Processing a 640 x 480 full-color image at 30 Hz requires a data throughput of roughly 221 Mb/s (640 x 480 pixels x 24 bits per pixel x 30 frames/s = 221,184,000 bits/s). This does not include overhead from other essential processes such as the operating system, control loops, or the camera interface. While a conventional microprocessor such as the Pentium 4 has a clock speed of nearly 4 GHz, running a program at that speed depends heavily on continuous access to data in the processor's lowest-level cache. Access times to the system's main memory, usually synchronous DRAM, are an order of magnitude slower than the cache, so the large amount of data required for image processing will always be limited by memory access time rather than by the processor's clock speed.

The disparity between memory access times and processor clock speeds will only widen with time. While transistor counts and microprocessor clock speeds have traditionally scaled exponentially with Moore's Law, memory access times have scaled only linearly. This is not to say that a Pentium 4 is unable to handle many image processing algorithms in real time, but there is less room for growth as applications require greater resolutions. Medical imaging in particular requires the processing of images in the megapixel range. In this report we consider an expandable architecture that can adapt to these challenges.

II. Digital Image Processing

II.I What is a digital image?

An image is a continuous function that has been discretized in spatial coordinates, brightness, and color frequencies. Most often it is 2-D, with "pixels" carrying scalar or vector values. A digital image a[m,n] described in a 2D discrete space is derived from an analog image a(x,y) in a 2D continuous space through a sampling process frequently referred to as digitization. The mathematics of that sampling process will be described in Section 5. For now we will look at some basic definitions associated with the digital image.

The 2D continuous image a(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n], with m = 0, 1, 2, ..., M-1 and n = 0, 1, 2, ..., N-1, is a[m,n]. In fact, in most cases a(x,y) (which we might consider to be the physical signal that impinges on the face of a 2D sensor) is actually a function of many variables, including depth (z), color (λ), and time (t).

As an example, consider an image divided into N = 16 rows and M = 16 columns, where the value assigned to every pixel is the average brightness in the pixel rounded to the nearest integer value. The process of representing the amplitude of the 2D signal at a given coordinate as an integer value with L different gray levels is usually referred to as amplitude quantization, or simply quantization.
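To make the digitization and quantization steps concrete, here is a minimal sketch in C++ with OpenMP (not taken from the presentation): it samples a hypothetical continuous intensity function a(x, y) on an N x M grid and quantizes each sample to one of L = 256 integer gray levels. Because every pixel is computed independently, the loop is naturally data-parallel, which approximates the "one processor (ALU) per pixel" ideal by distributing pixels over a team of threads. The function analog_intensity below is an invented stand-in for the physical sensor signal.

    // Sketch: sampling ("digitization") and amplitude quantization in parallel.
    // analog_intensity() is a hypothetical stand-in for the continuous physical
    // signal a(x, y); a real system would read a sensor or frame buffer instead.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    static double analog_intensity(double x, double y) {
        // Smooth synthetic signal with values in [0, 1] (illustrative only).
        return 0.5 + 0.5 * std::sin(6.0 * x) * std::cos(4.0 * y);
    }

    int main() {
        const int N = 16, M = 16;   // rows and columns, as in the 16 x 16 example
        const int L = 256;          // number of gray levels
        std::vector<int> a(N * M);  // digital image a[m, n], stored row-major

        // Every pixel is independent, so sampling + quantization is data-parallel:
        // OpenMP distributes the N x M pixels across the available cores.
        #pragma omp parallel for collapse(2)
        for (int n = 0; n < N; ++n) {
            for (int m = 0; m < M; ++m) {
                const double x = (double)m / M;   // sample position in [0, 1)
                const double y = (double)n / N;
                const double s = analog_intensity(x, y);
                // Amplitude quantization: round to the nearest of L integer levels.
                a[n * M + m] = (int)std::lround(s * (L - 1));
            }
        }

        std::printf("a[0,0] = %d (of %d gray levels)\n", a[0], L);
        return 0;
    }

Compiled with OpenMP enabled (e.g., g++ -fopenmp), the pixels are processed concurrently; without it, the pragma is ignored and the same loop runs serially, which makes the speedup easy to measure by comparison.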
II.II Image processing embodies:

- Image Acquisition
- Image Generation
- Image Perception
- Image Display
- Image Compression
- Image Manipulation
- Image Analysis

II.III Why D.I.P.?

Reasons for compression:
- Image data need to be accessed at a different time or location
- Limited storage space and transmission bandwidth

Reasons for manipulation:
- Image data might experience non-ideal acquisition, transmission, or display (e.g., restoration, enhancement, and interpolation)
- Image data might contain sensitive content (e.g., the fight against piracy or forgery)
- To produce images with artistic effects (e.g., pointillism)

Reasons for analysis:
- Image data need to be analyzed automatically in order to reduce the burden on human operators
- To teach a computer to "see" in A.I. tasks

III. Image Compression

Image compression is the application of data compression to digital images. In effect, the objective is to reduce the redundancy of the image data in order to store or transmit the data in an efficient form.

Image compression can be lossy or lossless. Lossless compression is sometimes preferred for artificial images such as technical drawings, icons, or comics, because lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossless compression methods may also be preferred for high-value content, such as medical imagery or image scans made for archival purposes. Lossy methods are especially suitable for natural images such as photos, in applications where a minor (sometimes imperceptible) loss of fidelity is acceptable in exchange for a substantial reduction in bit rate.

Lossless Image Compression

Definition:
- The decompressed image is mathematically identical to the original (zero error)

Compression ratio:
- Highly dependent on the image type and content (synthetic images: > 10; photographic images: 1-3)

Applications:
- Storage and transmission of medical images

Popular lossless image compression techniques (see the sketch below):
- WinZip: based on the celebrated Lempel-Ziv algorithm, invented nearly 30 years ago
- GIF (Graphics Interchange Format)
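The techniques named above (WinZip's Lempel-Ziv compressor, GIF's LZW variant) are too involved to reproduce here, so the following sketch uses simple run-length encoding (RLE) as a stand-in lossless codec. It is not the algorithm WinZip or GIF actually use, but it demonstrates the defining property stated above: the decoded image is mathematically identical to the original, and the achieved ratio depends strongly on content.

    // Illustrative lossless codec: run-length encoding (RLE) of 8-bit pixels.
    // RLE is a simple stand-in, not the Lempel-Ziv scheme named above, but it
    // has the same defining property: decode(encode(img)) == img, zero error.
    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Encode the image as (run length, pixel value) byte pairs, runs capped at 255.
    std::vector<std::uint8_t> rle_encode(const std::vector<std::uint8_t>& img) {
        std::vector<std::uint8_t> out;
        for (std::size_t i = 0; i < img.size();) {
            std::size_t run = 1;
            while (i + run < img.size() && img[i + run] == img[i] && run < 255)
                ++run;
            out.push_back((std::uint8_t)run);
            out.push_back(img[i]);
            i += run;
        }
        return out;
    }

    std::vector<std::uint8_t> rle_decode(const std::vector<std::uint8_t>& code) {
        std::vector<std::uint8_t> out;
        for (std::size_t i = 0; i + 1 < code.size(); i += 2)
            out.insert(out.end(), code[i], code[i + 1]);  // expand each run
        return out;
    }

    int main() {
        // A "synthetic" 16 x 16 image: large flat regions compress extremely well.
        std::vector<std::uint8_t> img(16 * 16, 0);
        for (int i = 0; i < 64; ++i) img[i] = 255;  // one white stripe

        const std::vector<std::uint8_t> code = rle_encode(img);
        const std::vector<std::uint8_t> back = rle_decode(code);
        assert(back == img);  // lossless: mathematically identical, zero error

        std::printf("original %zu bytes, encoded %zu bytes, ratio %.0f:1\n",
                    img.size(), code.size(),
                    (double)img.size() / code.size());
        return 0;
    }

On this synthetic test the ratio is 64:1; on a photographic image, where adjacent pixels rarely repeat exactly, any lossless scheme achieves far less, consistent with the "> 10" versus "1-3" figures above.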