CMU CS 15463 - Manipulating Facial Appearance through Shape and Color


IEEE Computer Graphics and Applications, Vol. 15, No. 5, September 1995, pp. 70-76. 0272-1716/95/$4.00 © 1995 IEEE

Manipulating Facial Appearance through Shape and Color

Duncan A. Rowland, St. Andrews University
David I. Perrett, St. Andrews University

A technique for defining facial prototypes supports transformations along quantifiable dimensions in "face space." Examples illustrate the use of shape and color information to perform predictive gender and age transformations.

Applications of facial imaging are widespread. They include videophones, automated face recognition systems, stimuli for psychological studies [1, 2], and even visualization of multivariate data. Most work thus far has addressed solely the manipulation of facial shape, either in two dimensions [3, 4] or in three [1]. In this article, we examine both shape and color information, and document transformational techniques in these domains by applying them to perceived "dimensions" of the human face.

The examples considered here are restricted to manipulations of the facial dimensions of age and gender, but the transformational processes apply to instances of any homogeneous object class. For example, Knuth's idea of a Metafont [5] can be thought of as a template of feature points mapped onto letter shapes in order to parameterize specific base typefaces. This enables the creation of new fonts by interpolating between or extrapolating away from the base fonts.

We consider objects to form a visually homogeneous class if and only if a set of template points can be defined which maps onto all instances of that class. By "template points" we mean a set defining salient component features or parts. In the processes we describe here for faces, the eyes, lips, nose, cheekbones, and so on are used as features for template points. The template points delineate major feature "landmarks," following previous conventions (notably Brennan [3]).
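For point templates, the Metafont-style interpolation and extrapolation mentioned above reduces to a per-point linear blend between corresponding feature positions. A minimal sketch in NumPy (the function name and the toy point sets are ours, invented for illustration):

```python
import numpy as np

def blend_templates(a, b, t):
    """Linearly blend two template point sets with matching layouts.

    a, b : (n_points, 2) arrays of corresponding feature points.
    t    : blend fraction; 0 gives a, 1 gives b, and values outside
           [0, 1] extrapolate away from the base shapes.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (1.0 - t) * a + t * b

# Two toy "letter shapes" delineated with corresponding points.
narrow = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0]])
wide   = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])

halfway     = blend_templates(narrow, wide, 0.5)  # interpolation
exaggerated = blend_templates(narrow, wide, 1.5)  # extrapolation past "wide"
```

Because the blend is applied point by point, it only makes sense when every instance carries the same template layout, which is exactly the homogeneity condition defined above.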
Automating the selection of template points would be a boon to the process, since we could then generate a transform automatically for any two groups within a homogeneous object class (for example, airplanes or letters). However, the present level of understanding of qualitative object features does not support this automation. Creating a class template requires understanding what about an object is "essential" (in the Platonic sense of the word). As several authors have discussed, humans have this ability, but so far computers do not [6]. After selecting the feature template set, we manually record (delineate) the point positions on an exemplar, though this process may soon be automated [7, 8].

The processes we describe begin with the creation of a facial prototype. Generally, a prototype can be defined as a representation containing the attributes that are consistent across a class of objects [9, 10]. Once we obtain a class prototype, we can take an exemplar that has some information missing and augment it with the prototypical information. In effect this "adds in" the average values for the missing information. We use this notion to transform gray-scale images into full color by including the color information from a relevant prototype. It is also possible to deduce the difference between two groups within a class (for example, male and female faces). Separate prototypes can be formed for each group. These can subsequently be used to define a transformation that maps instances of one group onto the domain of the other.

Methods

The following sections detail the procedure we use to transform facial images and show how it can be used to alter perceived facial attributes.

Feature encoding and normalization

In our derivations of prototypes we have used facial images from a variety of sources. However, we constrained the image selection to frontal views with the mouth closed. (Separate prototypes could be made for profile views, mouths open with teeth visible, and so forth.)
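On the shape side, the two operations just described can be sketched directly: a group's shape prototype is the mean of its aligned point sets, and a between-group transform adds the difference of two prototypes to an individual face's points. A minimal sketch, assuming pre-aligned delineations (function names and toy coordinates are ours, not the paper's):

```python
import numpy as np

def shape_prototype(faces):
    """Average corresponding template points across a group of faces.

    faces : (n_faces, n_points, 2) array of pre-aligned delineations.
    Returns the (n_points, 2) prototype shape for the group.
    """
    return np.asarray(faces, dtype=float).mean(axis=0)

def transform_shape(points, proto_src, proto_dst, amount=1.0):
    """Shift one face along the source-to-destination prototype axis.

    amount=1.0 adds the full prototype difference, mapping the face
    into the destination group's domain; smaller values give a partial
    transform, larger ones an exaggerated (extrapolated) version.
    """
    return np.asarray(points, dtype=float) + amount * (proto_dst - proto_src)

# Toy two-point delineations for two groups (invented for illustration).
group_a = np.array([[[0.0, 0.0], [2.0, 0.0]],
                    [[0.0, 0.0], [4.0, 0.0]]])
group_b = np.array([[[0.0, 2.0], [2.0, 2.0]],
                    [[0.0, 2.0], [4.0, 2.0]]])

proto_a = shape_prototype(group_a)
proto_b = shape_prototype(group_b)

face = np.array([[0.0, 0.0], [3.0, 1.0]])
transformed = transform_shape(face, proto_a, proto_b)
```

The same prototype-difference idea underlies the color transforms discussed later; only the representation being averaged changes.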
We also omitted images that showed adornments such as jewelry, or other items obscuring facial features, such as glasses and beards. The faces used here came from two distinct collections:

1. Faces photographed under the same lighting, with identical frontal views, neutral facial expression, and no makeup.
2. Faces taken from magazines (various lighting, slight differences in facial orientation and expression, and various amounts of makeup).

Originally, we thought it would be important to tightly control the factors mentioned for collection 1. We found, however, that if we use enough faces to form the prototype (n > 30), the inconsistencies disappear in the averaging process and the resulting prototype still typifies the group.

The faces forming collection 1 consisted of 300 male and female Caucasians (ages 18 to 65). The images (for example, Figure 1a) were frame-grabbed in 24-bit color at an average interpupillary distance of 142 pixels (horizontal) and a full-image resolution of 531 (horizontal) by 704 (vertical) pixels. Impartial subjects rated the images for the strength of traits such as attractiveness and distinctiveness. We also recorded objective values such as age and gender. We then used this information to divide the population into classes from which prototypes could be derived. The faces forming collection 2 (average interpupillary distance of 115 pixels) were also rated for similar traits, although objective values (except gender) were unavailable.

Figure 1. Deriving a facial prototype. Original shape of an individual face depicted (a) in color and (b) in black and white with superimposed feature delineation points; (c) four face shapes delineated, surrounding a prototype shape made by averaging the feature positions across 60 female faces (ages 20 to 30); original face warped into prototypical shape (d) with and (e) without corresponding feature points marked; (f) color prototype made by blending the 60 original faces after they had been warped into the prototype shape.

The choice of feature points was described previously [3, 4, 10]. We allocate 195 points to a face such that point 1 refers to the center of the left eye, points 2 to 10 refer to the outer circle around the left iris, and so on, as shown in Figure 1b. Using a mouse, an operator delineates these points manually for each face. (Operators practiced a standard delineation procedure to maintain high interoperator reliability.) If a feature is occluded (for example, an ear or eyebrow hidden by hair), the operator places the points so that the feature is either of average size and shape or symmetrical with visible features. An additional 13 feature
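The two collections differ in average interpupillary distance (142 versus 115 pixels), so delineations must be brought into a common frame before their points can be averaged into a prototype. The text does not spell out its normalization step; the sketch below assumes a simple translation-and-scale alignment on the eye centers (no rotation correction), and the eye-point indices are placeholders rather than the paper's actual template numbering:

```python
import numpy as np

TARGET_IPD = 142.0  # pixels; the collection-1 average from the text

def normalize_by_eyes(points, left_eye=0, right_eye=1, target_ipd=TARGET_IPD):
    """Translate and scale a delineation so that the eye centers sit a
    fixed distance apart, centered on the midpoint between the eyes.

    points : (n_points, 2) delineated feature positions in pixels.
    Note: this handles translation and scale only; a head tilt would
    additionally require a rotation to level the eye line.
    """
    pts = np.asarray(points, dtype=float)
    left, right = pts[left_eye], pts[right_eye]
    ipd = np.linalg.norm(right - left)
    scale = target_ipd / ipd
    midpoint = (left + right) / 2.0
    return (pts - midpoint) * scale

# A toy three-point delineation at collection 2's average IPD of 115 px.
raw = np.array([[0.0, 0.0], [115.0, 0.0], [57.5, 100.0]])
aligned = normalize_by_eyes(raw)
```

After this step, corresponding points from both collections live on a common scale and can be fed into the prototype averaging described above.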

