TAMU CSCE 483 - face-progress4

Embodied Speech and Facial Expression Avatar
Progress Report 4

Presented to Professor Ricardo Gutierrez-Osuna on April 26, 2004
by Dan Harbin, Evan Zoss, Jaclyn Tech, and Brent Sicking

Table of Contents

Accomplishments
    Motor Power
    Yano Emotions
    Eye Motor Tweaking
    Parameterized Expressions
    Sound Analysis Software
Time Schedule
    Figure 1: Gantt chart
Goals for the Next Two Weeks
    Printed Circuit Board
    Finalizing of Expression Presets
    Finalizing of Sound Analysis Software
    Re-assemble Yano
    Final Communication

Accomplishments

Motor Power

The problem of getting enough power to Yano's motors has been resolved. Given our time constraints, we chose to stack multiple TC4424 H-bridges in parallel, which lets the motors draw as much current as they need. Although the motors will run without getting stuck on only two H-bridges, three in parallel let them move much faster and more reliably.

Yano Emotions

We created the presets for Yano's emotions and achieved convincing results for most of them. The angrier emotions are quite hard to create on Yano's face because he is naturally smiling and happy. Our biggest problems were keeping the expressions consistent over time and creating them quickly. To improve consistency, we let the eye motors draw more power by adding an additional H-bridge in front of them, and we did a lot of tuning of the eye motors' software control. The additional power to the eye motors helped the speed of creating expressions somewhat, but we have mostly addressed speed by adding the ability to reverse the eye motor's direction when that path is shorter.

Eye Motor Tweaking

The problem with Yano's eyes is that they have the largest movement range, so they take the longest to position and fall out of sync most easily at runtime. For comparison, the eyes take about 55 pulses to make a complete cycle, whereas the cheeks take 5 pulses and the mouth 3. The first thing we noticed was that the pulses came faster at runtime than during calibration. This could cause inconsistencies between calibration settings and runtime behavior because of the difference in "spin down" time between pulses. We added a small software delay to the runtime control, not noticeable when watching, but enough to keep the runtime in sync with calibration.
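The runtime delay described above can be sketched as follows. This is a minimal illustration, not the team's actual firmware: the timing constants and the `send_pulse` callback are hypothetical, since the report does not give exact figures.

```python
import time

# Hypothetical timing constants; the report does not state exact values.
RUNTIME_GAP = 0.002  # seconds between pulses at runtime (naturally faster)
SYNC_DELAY = 0.001   # small added delay so runtime matches calibration timing

def drive_motor(send_pulse, n_pulses, extra_delay=SYNC_DELAY):
    """Send n_pulses pulses, padding each gap with a small delay so the
    motor's "spin down" between pulses matches the slower calibration run."""
    for _ in range(n_pulses):
        send_pulse()
        time.sleep(RUNTIME_GAP + extra_delay)

# Example: count pulses with a stub callback instead of real motor hardware.
pulses = []
drive_motor(lambda: pulses.append(1), 55)  # eyes take ~55 pulses per cycle
```

The delay is deliberately far below perceptual thresholds, so the motion looks identical while the pulse cadence stays consistent with the calibrated settings.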
The next problem was that we wanted to be able to reverse the eye motor when needed so it takes the shortest path to its destination (we can do this because the eye motor, unlike the others, has no limits and can always run in either direction). We created special procedures for the eye motors that allow them to wrap around past MAX to 0, and past 0 to MAX in the other direction. When we did this, however, switching directions caused the eye motor to fall out of sync and miss its calibrated values. We found that, because of Yano's hardware, when the direction is reversed it takes a few pulses before the gears engage and the eyes actually move. We accounted for this in the software, and we now have consistent, fast control over Yano's eye motors.

Parameterized Expressions

We have taken a close look at the parameterized expressions page for Yano. The goal was to come up with a translation from the expression variables (valence, arousal, and stance) to the motor variables (eyes, cheeks, mouth). We experimented with many different functions, from linear to quadratic and exponential, along many axes. We did notice some good correlations between the expression variables and the motor variables indicating that this translation was possible. However, we had two big problems. First, the non-linear nature of the eye motors would make any functional translation from the expression variables to that motor variable nearly impossible. Second, the very low resolution of the other two motors (5 positions for the cheeks and 3 for the mouth) would introduce significant rounding error when deriving their values from the expression variables. Taking these problems into consideration, and given that our project's goal is to be visually convincing rather than mathematically faithful to the expression graph of valence, arousal, and stance, we decided a slightly different method would be appropriate.
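The wrap-around and gear-backlash handling described for the eye motor can be sketched as below. The 55-pulse cycle comes from the report; the backlash compensation count and function names are assumptions for illustration.

```python
MAX_POS = 55         # eye motor: ~55 pulses per full cycle (from the report)
BACKLASH_PULSES = 2  # hypothetical: pulses lost re-engaging gears on reversal

def shortest_move(current, target, last_direction):
    """Return (direction, pulses) for the shortest path on a circular
    0..MAX_POS-1 scale, padding with extra pulses when direction flips.
    The eye motor has no end stops, so wrapping past MAX to 0 (and back)
    is always allowed."""
    forward = (target - current) % MAX_POS
    backward = (current - target) % MAX_POS
    if forward <= backward:
        direction, pulses = 1, forward
    else:
        direction, pulses = -1, backward
    # Reversing takes a few pulses before the gears actually engage,
    # so add compensation pulses that produce no eye movement.
    if pulses and direction != last_direction:
        pulses += BACKLASH_PULSES
    return direction, pulses
```

For example, moving from position 5 to 50 while last travelling forward would run 10 pulses backward plus the compensation, rather than 45 pulses forward.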
Basically, we paired random inputs from the expression variables with random outputs in the motor variables, with no functional correlation, but with a one-to-one mapping of every possible input to every possible output. This gives us convincing emotions on Yano while still retaining parameterized control of him.

Sound Analysis Software

We got the mouth to move "realistically" with speech by adding oscillation to our algorithm. For nearly all sound
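The one-to-one mapping described under Parameterized Expressions might be built as follows. The motor resolutions (55 eye positions, 5 cheek, 3 mouth) come from the report; the discretization of valence, arousal, and stance into a small grid, and the use of a seeded shuffle for repeatability, are assumptions, since the report does not say how the inputs were enumerated.

```python
import itertools
import random

EXPR_LEVELS = (5, 5, 3)    # valence, arousal, stance steps (assumed)
MOTOR_LEVELS = (55, 5, 3)  # eyes, cheeks, mouth positions (from the report)

def build_expression_map(seed=2004):
    """Pair each discretized (valence, arousal, stance) triple with a
    distinct (eyes, cheeks, mouth) triple: no functional correlation,
    but a consistent one-to-one mapping, reproducible via the seed."""
    inputs = list(itertools.product(*(range(n) for n in EXPR_LEVELS)))
    outputs = list(itertools.product(*(range(n) for n in MOTOR_LEVELS)))
    rng = random.Random(seed)
    chosen = rng.sample(outputs, len(inputs))  # distinct outputs: one-to-one
    return dict(zip(inputs, chosen))

table = build_expression_map()
```

Because the mapping is fixed once built, the same expression-variable input always produces the same motor pose, which is what preserves parameterized control despite the lack of a functional relationship.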