UCSB ECE 181B - Feature extraction

Feature extraction
January 2010

READING: Section 4.1.1

Today:
• Harris corner detector
• Basics of linear filtering (to continue on Thursday)

Monday, January 11, 2010

Feature extraction
• Why feature extraction?
• What features?

Today's illusion: "Rotating Rays," A. Kitaoka
http://www.ritsumei.ac.jp/~akitaoka/index-e.html

Consider a "stereo pair" (ignore the white lines and circles for now).

Another example (ignore the lines).

Correspondence problem (left image / right image)
• What is a point?
• How do we compare points in different images? (Similarity measure)

Correspondence problem: excerpt from Szeliski, Section 4.1 (Points)

[Figure 4.3: Image pairs with extracted patches below. Notice how some patches can be localized or matched with higher accuracy than others.]

...the region around detected keypoint locations is converted into a more compact and stable (invariant) descriptor that can be matched against other descriptors. The third, feature matching stage (§4.1.3) efficiently searches for likely matching candidates in other images. The fourth, feature tracking stage (§4.1.4) is an alternative to the third stage that only searches a small neighborhood around each detected feature and is therefore more suitable for video processing.

A wonderful example of all of these stages can be found in David Lowe's (2004) "Distinctive image features from scale-invariant keypoints" paper, which describes the development and refinement of his Scale Invariant Feature Transform (SIFT). Comprehensive descriptions of alternative techniques can be found in a series of survey and evaluation papers by Schmid, Mikolajczyk, et al., covering both feature detection (Schmid et al. 2000; Mikolajczyk et al. 2005; Tuytelaars and Mikolajczyk 2007) and feature descriptors (Mikolajczyk and Schmid 2005). Shi and Tomasi (1994) and Triggs (2004) also provide nice reviews of feature detection techniques.

4.1.1 Feature detectors

How can we find image locations where we can reliably find correspondences with other images, i.e., what are good features to track (Shi and Tomasi 1994; Triggs 2004)? Look again at the image pair shown in Figure 4.3 and at the three sample patches to see how well they might be matched or tracked. As you may notice, textureless patches are nearly impossible to localize. Patches with large contrast changes (gradients) are easier to localize.

Aperture problem

[Figure 4.4: Aperture problems for different image patches: (a) stable ("corner-like") flow; (b) classic aperture problem (barber-pole illusion); (c) textureless region. The two images I_0 (yellow) and I_1 (red) are overlaid. The red vector u indicates the displacement between the patch centers, and the w(x_i) weighting function (patch window) is shown as a dark circle.]

Straight line segments at a single orientation suffer from the aperture problem (Horn and Schunck 1981; Lucas and Kanade 1981; Anandan 1989), i.e., it is only possible to align the patches along the direction normal to the edge direction (Figure 4.4b). Patches with gradients in at least two (significantly) different orientations are the easiest to localize, as shown schematically in Figure 4.4a.

These intuitions can be formalized by looking at the simplest possible matching criterion for comparing two image patches, i.e., their (weighted) summed square difference,

    E_{WSSD}(u) = \sum_i w(x_i) \, [I_1(x_i + u) - I_0(x_i)]^2,    (4.1)

where I_0 and I_1 are the two images being compared, u = (u, v) is the displacement vector, w(x) is a spatially varying weighting (or window) function, and the summation i is over all the pixels in the patch.
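As a concrete illustration of Eq. (4.1), here is a minimal NumPy sketch (the function names and default parameters are my own, not from the text) that evaluates the weighted SSD between a patch of I_0 and a displaced patch of I_1 using a Gaussian window for w(x):

```python
import numpy as np

def gaussian_window(size, sigma):
    """Isotropic Gaussian weighting w(x) over a square patch, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return w / w.sum()

def wssd(I0, I1, center, u, size=7, sigma=1.5):
    """Weighted summed square difference E_WSSD(u) of Eq. (4.1).

    Compares the patch of I0 centered at `center` = (row, col) against
    the patch of I1 displaced by u = (du_row, du_col).
    """
    r, c = center
    h = size // 2
    w = gaussian_window(size, sigma)
    p0 = I0[r - h:r + h + 1, c - h:c + h + 1]
    p1 = I1[r - h + u[0]:r + h + 1 + u[0], c - h + u[1]:c + h + 1 + u[1]]
    return float(np.sum(w * (p1 - p0) ** 2))

# A textured patch matches itself only at zero displacement:
rng = np.random.default_rng(0)
I = rng.standard_normal((32, 32))
print(wssd(I, I, (16, 16), (0, 0)))  # 0.0 -- perfect match at u = 0
```

For a textured patch, the cost rises sharply as soon as u moves off the true match, which is exactly the stability property the auto-correlation analysis below measures.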
(Note that this is the same formulation we later use to estimate motion between complete images, §8.1, and that this section shares some material with that later section.)

When performing feature detection, we do not know which other image location(s) the feature will end up being matched against. Therefore, we can only compute how stable this metric is with respect to small variations in position Δu by comparing an image patch against itself, which is known as an auto-correlation function or surface,

    E_{AC}(\Delta u) = \sum_i w(x_i) \, [I_0(x_i + \Delta u) - I_0(x_i)]^2    (4.2)

(Figure 4.5). Note how the auto-correlation surface for the textured flower bed (Figure 4.5b, red ...

Footnote: Strictly speaking, the auto-correlation is the product of the two weighted patches; the term is used here in a more qualitative sense. The weighted sum of squared differences is often called an SSD surface (§8.1).

Ack: Szeliski, Chapter 4

The correspondence problem
• A classically difficult problem in computer vision
  – Is every point visible in both images?
  – Do we match points or regions or ...?
  – Are corresponding (L-R) image regions similar?
• The so-called "aperture problem"

Next week's lectures: Corner detection
• Helpful to come prepared:
  – Read up on basic linear algebra: eigenvalues of a 2x2 matrix, diagonalization, etc.
  – We will introduce some vision buzzwords: Gaussian kernels, linear filtering, convolution vs. correlation.
  – Try to read Ch. 3 and Ch. 4 to the extent possible (even if you ignore the math details in those chapters).

Comparing image patches

To analyze the auto-correlation surface for small displacements Δu, linearize the image around each pixel x_i (first-order Taylor expansion):

    f(x + a) ≈ f(x) + f'(x) \cdot a
    \nabla I(x_i) = (\partial I / \partial x, \, \partial I / \partial y)(x_i)
    I_0(x_i + \Delta u) ≈ I_0(x_i) + \nabla I_0(x_i) \cdot \Delta u
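Substituting this linearization into Eq. (4.2) gives the standard quadratic approximation E_AC(Δu) ≈ Δuᵀ A Δu, where A = Σ_i w(x_i) ∇I_0(x_i) ∇I_0(x_i)ᵀ is the 2x2 auto-correlation (structure) matrix that the Harris corner detector analyzes. A minimal NumPy sketch of this idea (my own code, not from the slides; it uses a box window in place of the usual Gaussian to stay short):

```python
import numpy as np

def smooth(a, r=2):
    """Apply a (2r+1) x (2r+1) box window w(x) separably
    (a stand-in for the Gaussian window used in practice)."""
    k1 = np.ones(2 * r + 1) / (2 * r + 1)
    a = np.apply_along_axis(np.convolve, 0, a, k1, mode="same")
    return np.apply_along_axis(np.convolve, 1, a, k1, mode="same")

def harris_response(I, k=0.06):
    """Corner response from the 2x2 auto-correlation matrix
    A = sum_i w(x_i) grad I(x_i) grad I(x_i)^T at every pixel:
    R = det(A) - k * trace(A)^2 (large positive R => corner)."""
    Iy, Ix = np.gradient(I.astype(float))   # central-difference gradients
    Axx = smooth(Ix * Ix)                   # entries of the structure tensor,
    Ayy = smooth(Iy * Iy)                   # summed over the window w
    Axy = smooth(Ix * Iy)
    det = Axx * Ayy - Axy ** 2
    trace = Axx + Ayy
    return det - k * trace ** 2

# Synthetic test image: one bright quadrant, hence exactly one corner.
img = np.zeros((21, 21))
img[10:, 10:] = 1.0
R = harris_response(img)
row, col = np.unravel_index(np.argmax(R), R.shape)  # peak lands near (10, 10)
```

Along a straight edge only one eigenvalue of A is large, so det(A) stays small and R is low; that is the aperture problem of Figure 4.4b expressed algebraically.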

