Artifacts and Textured Region Detection
Vishal Bangard
ECE 738 - Spring 2003

I. INTRODUCTION

Many transformations, when applied to images, introduce various artifacts. In particular, we address artifacts caused by compression algorithms. Some of the artifacts [1] caused by popular compression algorithms are:

• Blocking artifacts: Some compression algorithms divide an image into blocks of a fixed size; JPEG, for example, works on 8x8 blocks at a time. This gives the resulting compressed image a very "blocky" appearance.

• Color distortion: Human eyes are not as sensitive to color as to brightness. As a result, much of the detailed color (chrominance) information is discarded, while luminance information is retained. This process is called "chroma subsampling". The result is that compressed pictures have a "washed out" appearance, in which the colors do not look as bright as in the original image.

• Ringing artifacts: Compression algorithms that work in the spectral domain often exploit the fact that low-frequency information is visually more important than high-frequency information, and do not retain all of the high-frequency content. This leads to distortions around edges and other boundaries.

• Blurring artifacts: In the presence of these artifacts, the image looks smoother than the original. The general shape of objects is correctly retained, but texture information is lost in some areas.

A great deal of work has previously been done to tackle blocking artifacts, color distortion and ringing artifacts. To repair the effects of blurring artifacts, there are existing methods for texture synthesis, i.e. replication of texture in an area based on information from adjacent regions. However, most of the existing algorithms require the source and destination areas for texture synthesis to be marked out manually. The aim of this project is to identify regions near textured areas, as these have a higher probability of being subject to texture loss. A completely automated detection system is very hard to build due to the highly subjective nature of this problem.

II. HIGH FREQUENCY ANALYSIS

Textured areas generally contain far more high-frequency content than smooth areas. Hence, one of the approaches adopted was to analyze the high-frequency component of the image.

A standard wavelet decomposition was used and the sum of the squares of the high-frequency coefficients was computed. This gives an indication of how much energy is contributed by the high-frequency components; it was conjectured that a textured region would have a large amount of energy contributed by these high frequencies. The analysis was done on an 8x8 block basis, as this eased computation and also kept a direct link between the spectral and the spatial domains (a sketch of this block-energy computation is given at the end of this section). To see the potential of this method, the image was thresholded to show the best results.

Fig. 1. The image Barbara subjected to high frequency analysis. (a) Image: Barbara. (b) Image (a) compressed at 1:94; observe the loss of texture in the lady's clothing and the tablecloth. (c) Solid black areas give edges and regions of high texture. (d) An edge map of image (c) overlaid on the compressed image.

The results obtained with this method seemed encouraging. However, one of its primary drawbacks was low resolution: each 8x8 block is classified as rich in high frequencies or not, and going to smaller block sizes (i.e. 4x4 and 2x2) leaves too few high-frequency coefficients to obtain reliable results. Also, this technique by itself did not give good edge detection, which is an essential part of locating nearby regions. Another problem was that some small areas were not correctly detected as textured or not; the most likely reason for this is the 8x8 minimum resolution imposed by the choice of block size.
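To make the measure concrete, the listing below is a minimal Python sketch of the block-based high-frequency energy computation described above, assuming the PyWavelets library and a single-level Haar decomposition per block. The wavelet choice and the threshold value are illustrative assumptions; the report only states that a standard wavelet decomposition was used on an 8x8 block basis.

import numpy as np
import pywt  # PyWavelets


def high_freq_energy_map(image, block=8, wavelet="haar"):
    """Per-block energy of the detail (high-frequency) wavelet coefficients.

    `image` is a 2-D grayscale array. The Haar wavelet and the single
    decomposition level are assumptions made for illustration.
    """
    rows, cols = image.shape[0] // block, image.shape[1] // block
    energy = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = image[r * block:(r + 1) * block, c * block:(c + 1) * block]
            # One-level 2-D DWT: cA is the low-pass band, (cH, cV, cD) are the detail bands.
            _, (cH, cV, cD) = pywt.dwt2(tile, wavelet)
            energy[r, c] = np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2)
    return energy


# Blocks whose detail energy exceeds a threshold are marked as textured;
# the threshold value here is arbitrary and would be tuned per image.
# texture_mask = high_freq_energy_map(img.astype(float)) > 1e4

Each entry of the returned map corresponds to one 8x8 block, which is exactly the resolution limitation discussed above.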
III. GABOR WAVELET ANALYSIS

Another approach for texture detection was to compute the Gabor wavelet decomposition of the image. The Gabor elementary functions closely model the anisotropic two-dimensional receptive fields of neurons in the mammalian visual cortex. (The first experiment demonstrating this was conducted by J. P. Jones and L. A. Palmer on the visual cortex of a cat [2]. Later experiments have been performed on other mammals, including monkeys, and reinforce these results.)

In the 1-D case, Gabor wavelets are given by [3] and [4]

    \psi_j(\vec{x}) = \frac{k_j^2}{\sigma^2} \exp\left(-\frac{k_j^2 x^2}{2\sigma^2}\right) \left[ \exp(i\,\vec{k}_j \cdot \vec{x}) - \exp\left(-\frac{\sigma^2}{2}\right) \right]    (1)

An explanation of the terms is as follows:

• \vec{k}_j: wave vector
• k_j = |\vec{k}_j|: gives the centre frequency of the function
• k_j^2 / \sigma^2: scaling factor - compensates for the frequency-dependent decrease of the power spectrum usually found in natural images
• \exp(-k_j^2 x^2 / (2\sigma^2)): Gaussian envelope function
• \exp(i\,\vec{k}_j \cdot \vec{x}) = \cos(\vec{k}_j \cdot \vec{x}) + i \sin(\vec{k}_j \cdot \vec{x}): complex-valued plane wave
• \exp(-\sigma^2 / 2): makes the function DC-free

Fig. 2. Gabor filters: (a) real component, (b) imaginary component. (These were generated using a filter width of 41 and variances of 10 in the x and y directions.)

Fig. 3. The various orientations of the real (left) and imaginary (right) components of the Gabor filters, depicted as images. These are obtained by rotating the filters shown in Fig. (2).

The two-dimensional Gabor filters are as shown in Fig. (2). The Gabor wavelet expansion functions form a complete set; hence an exact representation of the signal in terms of the expansion functions is possible. However, they also form a non-orthogonal set. To obtain the Gabor decomposition, the Gabor filters shown in Fig. (3) were formed and convolved with the image (a sketch of this construction is given at the end of this section). The effect of convolving the Gabor filters with the image Barbara is shown in Fig. (4) and (5).

Fig. 4. The magnitude of the result of convolving the image with the Gabor filters at six different orientations and four different scales.

The six images on each row give the effect of convolving the Gabor filters with the image at six different orientations; this is done for four different scales. It is seen in Fig. (4) that the lady's clothing is captured very well at the first two scales, but the tablecloth barely shows up on them. At scales three and four, the tablecloth is detected well, but the clothing is barely detected. Hence, it is necessary to
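As a companion illustration, the following Python sketch builds a complex Gabor filter bank of the kind described in this section (a 2-D version of the DC-free kernel of Eq. (1)) at six orientations and four scales on a 41x41 support, and convolves it with an image. Only the filter width and the 6-orientation / 4-scale layout are taken from the text; the centre-frequency schedule and the value of sigma are assumptions chosen for illustration, and this parametrization follows Eq. (1) rather than the per-axis variances quoted in the Fig. 2 caption.

import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(scale, orientation, size=41, sigma=2 * np.pi, n_orient=6):
    """Complex 2-D Gabor kernel in the DC-free form of Eq. (1).

    The schedule k = (pi/2) * 2**(-scale) and sigma = 2*pi are illustrative
    assumptions; only the 41x41 support follows the report.
    """
    k = (np.pi / 2) * 2.0 ** (-scale)          # centre frequency |k_j|
    phi = orientation * np.pi / n_orient       # orientations spread over 180 degrees
    kx, ky = k * np.cos(phi), k * np.sin(phi)  # wave vector k_j
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Frequency-dependent scaling factor times the Gaussian envelope.
    envelope = (k ** 2 / sigma ** 2) * np.exp(-k ** 2 * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    # Complex plane wave with the DC-free correction term.
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier


def gabor_magnitudes(image, n_scales=4, n_orient=6):
    """Magnitude of the image convolved with every filter in the bank."""
    responses = np.empty((n_scales, n_orient) + image.shape)
    for s in range(n_scales):
        for o in range(n_orient):
            psi = gabor_kernel(s, o, n_orient=n_orient)
            responses[s, o] = np.abs(fftconvolve(image, psi, mode="same"))
    return responses

Displaying the 4 x 6 grid of response magnitudes reproduces the layout of Fig. (4): with parameters of this kind, fine texture such as the clothing tends to respond at the small-scale filters, while coarser texture such as the tablecloth responds at the larger scales.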

