Face Detection using Template Matching

Deepesh Jain
Husrev Tolga Ilhan
Subbu Meiyappan
(Group 12)

EE368 Spring 2003
Stanford University
Demo: 05/26/03

Table of Contents

1. Features
2. Assumptions
3. Architecture
4. Investigated Methods
5. Theory
   5.1. Color Segmentation
   5.2. Template Matching
   5.3. Principal Component Analysis (PCA)
   5.4. Reconstruction
6. Results
7. Conclusion
8. Work Distribution
9. References

1. FEATURES

• Color segmentation in YCbCr and HSV space
• Dual detection procedure using Template Matching and Principal Component Analysis

2. ASSUMPTIONS

• The image set can contain many faces and non-faces
• The image set contains only frontal-view faces
• The image set can have a few faces that are occluded (partly hidden)
• The image resolution and size are the same as the original image (1856x1392 pixels)

3. ARCHITECTURE

The architecture of the system that was implemented is shown in Figure 1.

[Figure 1: System Architecture. Input RGB JPEG Image -> Color Segmentation (RGB to YCbCr; threshold to determine skin regions) -> Skin Pixels -> Template Matching (using normxcorr2; average face) -> Classifier (threshold) -> Face/Non-Face]

4. INVESTIGATED METHODS

Several methods for face detection were studied and investigated. The methods we explored are documented below.

• Template Matching: An average face was computed from a selected set of faces in the ground truth data for the image set. This average face was then cross-correlated with the skin-segmented image, and a threshold detector decided whether each candidate region was a face. Template matching was tested at several levels:
   o Average face template
   o Average left-eye/right-eye templates
   o Average eyes template
   o Average mouth template
   o Average nose template
Of all these templates, only the face template gave acceptable results, and hence it was the one retained.
• EigenFaces: From a set of faces in the ground truth data, eigenimages were obtained using the Sirovich and Kirby method. A set of these eigenimages was then used as the basis for reconstructing input images, and the mean squared error between the reconstructed face and the original was used as the metric for deciding whether an input image belonged to the face class. Other classification methods that were investigated include Mahalanobis distance metrics. Eigenimages of the non-faces were also computed, to determine whether a reconstructed image was closer to the face space or the non-face space, but this approach was eventually dropped. An eigeneyes-based approach was investigated as an alternative for detecting occluded faces, and the eigenface basis was also used to investigate gender classification. Because of face occlusion, these methods did not work very well and hence were dropped.

• Wavelets and Neural Nets: The use of wavelets for multiresolution analysis and of artificial neural networks (in particular, Learning Vector Quantization) for classification was investigated.

5. THEORY

Template matching and eigenimage methods were implemented for face detection. This section describes the theoretical and implementation aspects of these two methods.

5.1. COLOR SEGMENTATION

We segment out the skin portions of the image by filtering out the pixels with non-skin color. Skin color is detected using the color distribution of skin pixels in (cR, cB, hue) space, a combination of the YCbCr and HSV color spaces. A pixel is labeled as a skin pixel if it satisfies the following thresholds:

   140 ≤ cR ≤ 160
   100 ≤ cB ≤ 150
   hue ≤ 0.1 or hue ≥ 0.9

The boundaries were determined empirically. Satisfying these conditions is not enough to completely segment the face pixels: a face has to cover a certain area and should have a certain height and width. We assumed that the area of a face should be more than 800 pixels and that its height and width should lie in the range [80, 160] pixels; a sketch of this segmentation step is given after Section 5.2 below. Figure 2 shows the results of our color segmentation algorithm for one of the training data sets. It is not possible to separate faces if they are very close to each other (especially when one face is occluded by another); in such cases we applied erosion and dilation repeatedly. Still, in some cases a frame returned by the segmentation algorithm included more than one face.

[Figure 2: Skin Color Segmentation]

5.2. TEMPLATE MATCHING

We first tried the template matching method. As seen in Figure 3, our template is the average of the faces in the 7 training sets.

[Figure 3: Average Face]

For each frame output by the color segmentation block, we formed an image pyramid with 6 scales (Figure 4(a)). At each scale, the cross correlation of the template with the image is computed.
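To make the segmentation rule of Section 5.1 concrete, the following MATLAB sketch applies the (cR, cB, hue) thresholds and the area/size constraints described above. It is only an illustrative sketch, not the project's code: the function name detect_skin_regions, the disk structuring element, the single erode/dilate pass, and the use of regionprops are assumptions.

    function boxes = detect_skin_regions(rgb)
    % Illustrative sketch of the skin-color segmentation of Section 5.1.
    ycbcr = rgb2ycbcr(rgb);              % YCbCr space for the cB/cR thresholds
    hsv   = rgb2hsv(rgb);                % HSV space for the hue threshold
    cb    = double(ycbcr(:,:,2));
    cr    = double(ycbcr(:,:,3));
    hue   = hsv(:,:,1);

    % Empirical skin thresholds in (cR, cB, hue)
    skin = (cr >= 140) & (cr <= 160) & ...
           (cb >= 100) & (cb <= 150) & ...
           ((hue <= 0.1) | (hue >= 0.9));

    % Erosion/dilation to help separate faces that touch each other
    se   = strel('disk', 5);             % assumed structuring element
    skin = imdilate(imerode(skin, se), se);

    % Keep only regions that are plausibly face-sized
    stats = regionprops(bwlabel(skin), 'Area', 'BoundingBox');
    boxes = [];
    for k = 1:numel(stats)
        w = stats(k).BoundingBox(3);
        h = stats(k).BoundingBox(4);
        if stats(k).Area > 800 && w >= 80 && w <= 160 && h >= 80 && h <= 160
            boxes = [boxes; stats(k).BoundingBox];   % [x y width height]
        end
    end
    end

In the actual system the erosion/dilation was applied repeatedly when faces touched; a single pass is shown here for brevity.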
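The template-matching step of Section 5.2 can be sketched in the same spirit. The sketch below correlates the average-face template with a candidate frame over a 6-scale image pyramid using normxcorr2, as in Figure 1, and applies a score threshold. The per-level scale factor of 0.8, the default threshold of 0.5, and the function name match_face_template are assumptions made for illustration.

    function [isFace, bestScore] = match_face_template(frameGray, avgFace, threshold)
    % Illustrative sketch of multi-scale template matching (Section 5.2).
    % frameGray : grayscale frame returned by the color segmentation block
    % avgFace   : grayscale average-face template (Figure 3)
    % threshold : score above which the frame is declared a face
    if nargin < 3, threshold = 0.5; end      % assumed value, not the report's

    bestScore = -Inf;
    scale = 1.0;
    for level = 1:6                          % 6-scale image pyramid, as in the report
        scaled = imresize(frameGray, scale);
        if any(size(scaled) < size(avgFace))
            break;                           % template no longer fits at this scale
        end
        c = normxcorr2(avgFace, scaled);     % normalized cross correlation
        bestScore = max(bestScore, max(c(:)));
        scale = scale * 0.8;                 % assumed per-level downscaling factor
    end

    isFace = bestScore >= threshold;
    end

In the full system the peak location at each scale would also be kept so that a detected face can be localized; the sketch only returns the detection decision.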

