UW-Madison ECE 738 - Detecting Artifacts and Textures in Wavelet Coded Images

Detecting Artifacts and Textures in Wavelet Coded Images
Rajas A. Sambhare
ECE 738, Spring 2003 - Final Project

Slide 2: Motivation
• Wavelet-based image coders like JPEG 2000 lead to new types of artifacts when used at low bit-rates:
  – Blocking artifacts
  – Color distortion
  – Ringing artifacts
  – Blurring artifacts
• Illustrated at http://ai.fri.uni-lj.si/~aleks/jpeg/artifacts.htm
• Ringing artifacts can be detected and removed easily with smoothing, e.g. [Hu, Tull, Yang and Nguyen, 2001], [Nosratinia, 2003].

Slide 3: Motivation
• Blurring artifacts can be removed using texture synthesis, as demonstrated by [Hu, Sambhare, 2003].
• Two image regions have to be identified manually:
  – Target regions (T): where the blurry artifact is visible.
  – Source regions (S): where the original texture is preserved.

Slide 4: Motivation
• Blurring artifacts are
  – easy to detect visually, but
  – difficult to detect automatically.
• For visual detection, humans use
  – contextual information,
  – color and contour cues.
• In this project we
  – segment the image using texture and color, and
  – attempt to identify potential source and target regions.

Slide 5: Algorithm
• Overview:
  – Feature extraction.
  – K-means segmentation.
  – Identification of potential source and target regions.
• The method is computationally intensive; reducing the computational requirements while maintaining high quality was another goal.
• MATLAB implementation (using the Image Processing Toolbox).

Slide 6: Feature Extraction
• An 11- or 13-dimensional feature vector is computed for each pixel (11 for grayscale and 13 for color images).
• Features:
  – Normalized pixel coordinates (row/maxRow, column/maxCol).
  – Low-passed intensity value (or RGB triplet); median filtering is used instead of linear low-pass filtering to preserve contour edges.
  – Texture features: 8 different texture features.

Slide 7: Feature Extraction - Texture Features
• Oriented Difference of Offset Gaussian (DOOG) and Difference of Gaussian (DOG) filters are used to extract the texture features [Malik, Perona, 1990].
• 2 DOG filters detect spotty regions.
• 6 DOOG filters at orientations from 0° to 180° detect barred regions.

Slide 8: Feature Extraction - Texture Features
• A magnitude operator ( abs() ) replicates the non-linear step in human texture recognition.
• A median filter applied to the result replicates lateral neuronal inhibition in humans and yields the final texture features. A sketch of this texture-feature pipeline is given below.
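As a rough illustration of slides 6-8, the MATLAB sketch below builds two DOG kernels and six simplified oriented DOOG kernels, filters the image, applies the magnitude operator, and median-filters each response. The input file name, kernel size, sigmas, offsets, and the simplified DOOG construction are illustrative assumptions, not the parameters used in the project.

    % Sketch of the texture-feature pipeline (slides 6-8).  Filter sizes,
    % sigmas, and offsets are illustrative assumptions only.
    I = im2double(imread('decoded.png'));             % hypothetical input file
    if size(I, 3) == 3
        I = rgb2gray(I);                              % work on intensity
    end

    % --- 2 DOG (center-surround) filters at two scales ---
    sigmas = [1 2];
    dog = cell(1, 2);
    for s = 1:2
        g1 = fspecial('gaussian', 25, sigmas(s));
        g2 = fspecial('gaussian', 25, 2 * sigmas(s));
        dog{s} = g1 - g2;                             % difference of Gaussians
    end

    % --- 6 oriented DOOG-style filters, 0 to 150 degrees in 30-degree steps ---
    g = fspecial('gaussian', 25, 1.5);
    doog0 = circshift(g, [0 3]) - circshift(g, [0 -3]);   % two offset Gaussians
    angles = 0:30:150;
    doog = cell(1, numel(angles));
    for a = 1:numel(angles)
        doog{a} = imrotate(doog0, angles(a), 'bilinear', 'crop');
    end

    % --- filter, rectify (abs), then median-filter each response ---
    filters  = [dog, doog];
    features = zeros([size(I) numel(filters)]);
    for k = 1:numel(filters)
        r = imfilter(I, filters{k}, 'replicate');     % linear filtering
        r = abs(r);                                   % non-linear magnitude step
        features(:, :, k) = medfilt2(r, [5 5]);       % lateral-inhibition stand-in
    end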
Slide 9: Segmentation
• K-means segmentation clusters the pixels into k regions.
• The complexity is O(cknd), where
  – n: number of data points,
  – d: dimensionality of the feature space,
  – c: number of iterations (depends sub-linearly on k, n, d).
• For a 256 x 256 image with 8 clusters, this requires more than 5 minutes to run in MATLAB.
• It is unusable for larger images, since the number of data points n grows as the square of the image dimension.

Slide 10: Segmentation - Complexity Reduction
• Reduce the feature dimensionality using Principal Component Analysis (reduce d).
• Modify the k-means algorithm (reduce n):
  – Randomly select 10% of the data points.
  – Classify them and compute the centroids.
  – Use the centroids to classify the remaining data points.
• A 256 x 256 image then takes 10-15 seconds to segment (including feature extraction) in MATLAB. A sketch of this reduced-complexity clustering step follows below.
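A possible MATLAB realization of the reduced-complexity clustering in slides 9-10 is sketched below. It reuses the features array from the previous sketch (the full feature vector would also include the normalized coordinates and median-filtered intensities), keeps enough principal components for roughly 95% of the variance, and clusters a random 10% of the pixels before assigning the rest to the nearest centroid. The use of svd, kmeans, and knnsearch (the latter two from the Statistics Toolbox) and the 95% cutoff are assumptions; the slides do not specify these details.

    % Sketch of the complexity-reduction step (slides 9-10).
    % features: rows-by-cols-by-8 array from the texture-feature sketch above.
    X = reshape(features, [], size(features, 3));     % n-by-D feature matrix
    k = 8;                                            % number of clusters

    % PCA: keep enough principal components for ~95% of the variance (reduce d)
    Xc = X - mean(X, 1);
    [~, S, V] = svd(Xc, 'econ');
    ev = diag(S).^2;
    d  = find(cumsum(ev) / sum(ev) >= 0.95, 1);
    Y  = Xc * V(:, 1:d);                              % reduced d-dimensional features

    % Modified k-means: cluster a random 10% subset, then assign the rest (reduce n)
    n   = size(Y, 1);
    idx = randperm(n, round(0.1 * n));
    [~, C] = kmeans(Y(idx, :), k);                    % centroids from the subset
    labels = knnsearch(C, Y);                         % nearest centroid per pixel
    segmentation = reshape(labels, size(features, 1), size(features, 2));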
Slide 11: Segmentation Results
(segmentation result images)

Slide 12: Identification of Source and Target Regions
• K-means may place non-adjacent segments in the same cluster, so these are first separated into connected regions.
• All texture features are combined and the textured regions are identified by thresholding (Otsu's method, MATLAB graythresh function).

Slide 13: Identification of Source and Target Regions
• Algorithm:
    for each textured region
        for each adjacent non-textured region
            if the adjacent region is "similar" to the textured region
                mark the regions as a potential source/target pair
• "Similar" means a small difference in the average gray levels (or average RGB levels) of the low-passed source and target regions.
• Comparing histograms instead of average gray levels might give better results. A sketch of this identification step is given below.
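The identification loop of slide 13 could look roughly like the following MATLAB sketch, continuing from the variables of the earlier sketches. Combining the texture features by summation, testing adjacency with a small dilation, and the similarity tolerance simThresh are assumptions made for illustration; only the Otsu thresholding via graythresh is named on the slides.

    % Sketch of the source/target identification step (slides 12-13).
    texEnergy = mat2gray(sum(features, 3));                     % combine texture features
    texMask   = imbinarize(texEnergy, graythresh(texEnergy));   % Otsu threshold

    lowpass      = medfilt2(I, [5 5]);                % low-passed intensity image
    texLabels    = bwlabel(texMask);                  % connected textured regions
    smoothLabels = bwlabel(~texMask);                 % connected non-textured regions
    simThresh    = 0.05;                              % assumed similarity tolerance

    pairs = [];
    for t = 1:max(texLabels(:))
        tMask = (texLabels == t);
        tMean = mean(lowpass(tMask));
        % regions touching the dilated textured region are treated as adjacent
        border = imdilate(tMask, strel('disk', 2)) & ~tMask;
        for s = unique(smoothLabels(border & smoothLabels > 0))'
            sMean = mean(lowpass(smoothLabels == s));
            if abs(tMean - sMean) < simThresh         % "similar" average gray level
                pairs = [pairs; t s];                 %#ok<AGROW> potential source/target pair
            end
        end
    end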
Slide 14: Results
• Image: 256 x 256, 8-bit. Time: 31 s. Dimensions after PCA: 6.
• Image: 228 x 228, 24-bit. Time: 17 s. Dimensions after PCA: 4.

Slide 15: Analysis and Future Work
• Results are quite promising for color images.
  – Why? The color cue is exploited in finding similarity.
• Grayscale results are not as good as color so far.
  – Why? It is an inherently more difficult problem.
  – Humans use implied and visible contour cues, which were ignored in this project.
• Possible improvements:
  – Include contour information while segmenting.
  – Use a better segmentation method than k-means.

Slide 16: References
• [Hu, Tull, Yang and Nguyen, 2001] S. Yang, Y. H. Hu, D. L. Tull, and T. Q. Nguyen, "Maximum likelihood parameter estimation for image ringing artifact removal," IEEE Trans. Circuits and Systems for Video Technology, vol. 11, no. 8, pp. 963-973, August 2001.
• [Nosratinia, 2003] A. Nosratinia, "Post-processing of JPEG-2000 images to remove compression artifacts," to appear in IEEE Signal Processing Letters.
• [Hu, Sambhare, 2003] Y. H. Hu and R. A. Sambhare, "Constrained texture synthesis for image post processing," to appear in ICME 2003, Baltimore, MD.
• [Malik, Perona, 1990] J. Malik and P. Perona, "Preattentive texture discrimination with early vision mechanisms," J. Opt. Soc. America A, vol. 7, no. 5, pp. 923-932, May 1990.