Stanford EE 368 - Face detection

Inseong Kim, Joon Hyung Shim, and Jinkyu Yang

Introduction

In recent years, face recognition has attracted much attention, and research on it has expanded rapidly among not only engineers but also neuroscientists, since it has many potential applications in computer vision, communication, and automatic access control systems. Face detection in particular is an important part of face recognition, as it is the first step of automatic face recognition. However, face detection is not straightforward, because face images vary widely in appearance: pose (frontal, non-frontal), occlusion, image orientation, illumination conditions, and facial expression.

Many novel methods have been proposed to handle each of the variations listed above. For example, template-matching methods [1], [2] are used for face localization and detection by computing the correlation of an input image with a standard face pattern. Feature-invariant approaches are used to detect features [3], [4] such as eyes, mouth, ears, and nose. Appearance-based methods perform face detection with eigenfaces [5], [6], [7], neural networks [8], [9], and information-theoretic approaches [10], [11]. Nevertheless, implementing all of these methods together remains a great challenge. Fortunately, the images used in this project have some degree of uniformity, so the detection algorithm can be simpler: first, all of the faces are vertical and frontal; second, they are photographed under nearly identical illumination conditions. This project presents a face detection technique based mainly on color segmentation, image segmentation, and template matching.

Color Segmentation

Detection of skin color in color images is a popular and useful technique for face detection. Many techniques [12], [13] have been reported for locating skin-color regions in an input image.
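As a concrete sketch of this kind of skin-color classifier, the following Python/NumPy code (the project itself used Matlab) converts RGB to YCbCr with the standard coefficients and thresholds a rectangular Cb/Cr window. The function names and the default window values are illustrative placeholders, not the window derived from the training faces later in this section.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an H x W x 3 RGB image (values in [0, 255]) into Y, Cb, Cr
    planes using the standard conversion coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.332 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def skin_mask(rgb, cb_range=(-25.0, 5.0), cr_range=(5.0, 35.0)):
    """Mark a pixel as skin when its (Cb, Cr) pair falls inside a
    rectangular detection window.  The default ranges here are
    placeholders; a real detector would derive them from the mean and
    standard deviation of training faces."""
    _, cb, cr = rgb_to_ycbcr(np.asarray(rgb, dtype=float))
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Because only Cb and Cr are thresholded and Y is discarded, the classification is largely insensitive to overall brightness, which is the point of leaving the RGB space.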
While the input color image is typically in the RGB format, these techniques usually work with components of another color space, such as HSV or YIQ. This is because the RGB components are sensitive to lighting conditions, so face detection may fail when the lighting changes. Among the many available color spaces, this project used YCbCr, since the conversion is provided by an existing Matlab function and therefore saves computation time. In the YCbCr color space, the luminance information is contained in the Y component, and the chrominance information is in Cb and Cr; the luminance can therefore be easily separated out. The RGB components were converted to YCbCr using the following formulas:

Y  =  0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.332G + 0.500B
Cr =  0.500R - 0.419G - 0.081B

In the skin-color detection process, each pixel was classified as skin or non-skin based on its color components. The detection window for skin color was determined from the mean and standard deviation of the Cb and Cr components of 164 training faces taken from 7 input images. The Cb and Cr components of the 164 faces are plotted in the color space in Fig. 1; their histogram distributions are shown in Fig. 2.

Fig. 1. Skin pixels in the YCbCr color space.
Fig. 2. (a) Histogram distribution of Cb. (b) Histogram distribution of Cr.

The color segmentation was applied to a training image, and its result is shown in Fig. 3. Some non-skin objects inevitably appear in the result, because their colors fall inside the skin-color window.

Fig. 3. Color segmentation result for a training image.

Image Segmentation

The next step is to separate the blobs in the color-filtered binary image into individual regions. The process consists of three steps. The first step fills in isolated black holes and removes isolated white regions that are smaller than the minimum face area in the training images. The threshold (170 pixels) is set conservatively.
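The small-region removal half of this first step can be sketched as follows (an illustrative pure-Python/NumPy version with a hand-rolled BFS labelling; the report's Matlab implementation would use built-in morphological functions instead):

```python
import numpy as np
from collections import deque

def remove_small_regions(mask, min_area=170):
    """Delete white (True) connected components smaller than min_area
    pixels, mirroring the report's conservative 170-pixel threshold.
    Uses 4-connectivity and a plain BFS to label each component."""
    mask = np.asarray(mask, dtype=bool)
    out = mask.copy()
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # Collect one connected component with BFS.
                comp, q = [(i, j)], deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q.append((ny, nx))
                # Erase the component if it is below the face-area threshold.
                if len(comp) < min_area:
                    for y, x in comp:
                        out[y, x] = False
    return out
```

Hole filling (the black-hole half of the step) is the dual operation: apply the same idea to the background components that are not connected to the image border.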
The filtered image, after an initial erosion, leaves only the white regions with reasonable areas, as illustrated in Fig. 4.

Fig. 4. Image with small regions eliminated.

Second, to separate merged regions into individual faces, the Roberts cross edge detection algorithm is used. The Roberts cross operator performs a simple, quick-to-compute 2-D spatial gradient measurement on an image, and thus highlights regions of high spatial gradient, which often correspond to edges (Fig. 5). The highlighted regions are converted into black lines and eroded to connect diagonally separated pixels.

Fig. 5. Edges detected by the Roberts cross operator.

Finally, the previous images are combined into one binary image, and relatively small black and white areas are removed. The difference between this step and the initial small-area elimination is that edges connected to black areas survive the filtering, and those edges play an important role as boundaries between face regions after erosion. Fig. 6 shows the final binary image, and Fig. 7 marks the candidate spots that will be compared with the representative face templates in the next step.

Fig. 6. Integrated binary image.
Fig. 7. Preliminary face detection (red marks).

Image Matching

Eigenimage Generation

A set of eigenimages was generated from 106 face images that were manually cut from 7 test images and edited in Photoshop to capture the exact location of each face in a square crop. The cropped images were converted to grayscale, and the eigenimages were then computed from these 106 images. To obtain a generalized face shape, the 10 largest eigenimages, in terms of their energy densities, were retained, as shown in Fig. 8. To save computing time, the information in the eigenimages was compacted into a single image, obtained by averaging eigenimages 2 through 10, i.e., excluding eigenimage 1, the highest-energy one.
Eigenimage 1 was excluded because its excessive energy concentration would wash out the facial details contributed by eigenimages 2 through 10. The averaged eigenimage is shown in Fig. 9.

Fig. 8. Eigenimages 1-10.
Fig. 9. Average image built from the eigenimages.

Building Eigenimage Database

To save the time needed to magnify or shrink an eigenimage to match the size of the test image, a group of
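The eigenimage computation described in the Eigenimage Generation subsection can be sketched with a plain SVD (Python/NumPy here rather than the project's Matlab; the function names are illustrative):

```python
import numpy as np

def eigenimages(faces, k=10):
    """Compute the k highest-energy eigenimages from a stack of equally
    sized grayscale face crops (N x H x W): flatten each crop, subtract
    the mean face, and take the top right-singular vectors."""
    n, h, w = faces.shape
    x = faces.reshape(n, -1).astype(float)
    mean = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    return vt[:k].reshape(k, h, w)

def averaged_template(eigs):
    """Average all eigenimages except the first (highest-energy) one,
    as the report does with eigenimages 2-10."""
    return eigs[1:].mean(axis=0)
```

The singular vectors come out ordered by singular value, so `eigs[0]` is the highest-energy eigenimage that the report excludes, and the remaining nine are averaged into a single face template.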

