From RoboNova to HUBO: Platforms for Robot Dance

David Grunberg1, Robert Ellenberg1,2, Dr. Youngmoo E. Kim1, Dr. Paul Y. Oh2
Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104
{[email protected], [email protected], [email protected], [email protected]}
1 Electrical Engineering; 2 Mechanical Engineering

Abstract. A robot with the ability to dance in response to music could lead to novel and interesting interactions with humans. For example, such a robot could be used to augment live performances alongside human dancers. This paper describes a system enabling humanoid robots to move in synchrony with music. A small robot, the Hitec RoboNova, was initially used to develop smooth sequences of complex gestures used in human dance. The system uses a real-time beat prediction algorithm so that the robot's movements are synchronized with the audio. Finally, we implemented the overall system on a much larger robot, HUBO, to establish the validity of the smaller RoboNova as a useful prototyping platform.

Keywords: Gestures, robots, robotics, dance, motion.

1 Introduction

Several recent artistic productions have incorporated robots as performers. For example, in 2007 Toyota unveiled robots that could play the trumpet and violin in orchestras [1]. In 2008, the Honda robot ASIMO conducted the Detroit Symphony Orchestra [2]. And in 2009, robot actors were used in a theater production in Osaka, Japan [3]. As yet, however, the use of humanoid robots in dance has received little attention from robotics researchers and even less from the dance community. Approached through research in human-robot interaction, this topic offers a unique opportunity to explore the nature of human creative movement.

A major problem in developing a dancing robot is that full-sized humanoid robots remain very expensive. This makes it risky to test new algorithms on these robots, because an error that damages the robot could be costly.
Thus, a less costly prototyping platform on which to test algorithms for larger dance robots would be useful to researchers. Additional challenges include using signal processing algorithms to predict musical beats in real time and designing robot gestures that appear humanlike. We are exploring solutions to these problems using the Hitec RoboNova as a prototyping platform for the larger HUBO robot (Figure 1). The physical configuration of the RoboNova is similar to that of the HUBO, and both robots allow sufficient control to produce smooth gestures. This builds on our previous work in this area, in which we used the RoboNova to produce movements in response to audio with an update rate of approximately 5 Hz [4].

Figure 1. The RoboNova and HUBO (left and right).

2 Prior Work

Two robots that incorporate movement with music are Keepon [5] and Haile [6]. Keepon bobs its head in time with audio, and it has been used to influence others to dance. This work offers evidence that human-dancing robot interactions can be constructive and influential. Keepon, however, does not detect beats in music; its movement is pre-programmed to be synchronized with the audio. Haile, on the other hand, is able to identify beats and rhythms and synthesizes complementary ones on a drum, but its gestures are designed for percussion performance, not dance. The Ms DanceR [7] was built to solve a related problem: creating a robot able to dance with a human partner. Although it has some understanding of dance styles, it cannot locate the beats in music and relies on a human partner for dance control information.

The problem of identifying musical beats from the acoustic signal has been studied by many researchers in music information retrieval (e.g., [8] and [9]). Recent beat tracking methods are able to accurately identify the strong impulses that define a song's tempo. The best performing systems, however, operate offline and are not suitable for real-time dancing in response to live audio.
Our prior work on this project is detailed in [4].

3 Beat Predictor

The beat predictor used in our system is based on a beat-identification algorithm proposed by Scheirer [8], which we modified to operate in real time. Our configuration uses an outboard computer for audio processing that communicates wirelessly with the robot. Our algorithm is depicted in Figure 2 and functions as follows:

- Each audio frame (92.9 ms) is sent through a cochlear filterbank, which splits the audio into frequency subbands similar to the perceptual resolution of the human ear (Figure 2b).
- Each subband is downsampled, half-wave rectified, and smoothed with a lowpass filter (Figure 2c).
- The resulting signals are passed through a set of comb filters of varying delays. When audio passes through a comb filter, resonance results if the delay of that filter matches the periodicity of the audio. The filter that produces the greatest resonance across all subbands determines the current tempo estimate (Figure 2d).
- The phase of the audio is determined from the delay states of this filter, and the beat location is determined from the tempo estimate and the change in phase.

Figure 2. Description of the beat prediction algorithm. Flow chart (left); frame of audio from "Fire Wire" by Cosmic Gate (right, a); 120-160 Hz subband of the audio frame (right, b); smoothed subband (right, c); tempogram over several audio frames (right, d).

4 Robot Platforms

We chose the RoboNova (14'') as our prototyping platform for the following reasons:

- Its low cost allows us to test algorithms without fear of expensive damage.
- Its humanoid shape and large number of Degrees of Freedom (DoFs) approximate HUBO's, so its gestures will be similar to those of the larger robot.
- Its wireless communication abilities allow an external processor to assume some of the computational burden.

Our initial RoboNova implementation used the RoboBasic programming environment [4].
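The comb-filter stage of the beat predictor in Section 3 can be sketched in a few lines. This is a simplified, single-subband illustration, assuming an onset envelope already produced by the filterbank, rectification, and smoothing steps; the function name and parameter choices are ours for illustration, not taken from the paper, which sums resonance across all subbands:

```python
import numpy as np

def tempo_estimate(envelope, fs, bpm_range=(60, 180)):
    """Pick the tempo whose comb filter resonates most with an onset envelope.

    envelope: smoothed, rectified subband energy (one band, for simplicity).
    fs: sample rate of the envelope signal in Hz.
    """
    min_delay = int(round(fs * 60.0 / bpm_range[1]))  # fastest tempo -> shortest lag
    max_delay = int(round(fs * 60.0 / bpm_range[0]))  # slowest tempo -> longest lag
    best_delay, best_energy = None, -np.inf
    for delay in range(min_delay, max_delay + 1):
        if delay >= len(envelope):
            break
        # Gain chosen so every comb loses half its energy per second,
        # keeping filters with different lags comparable.
        alpha = 0.5 ** (delay / fs)
        # Feedback comb filter y[n] = x[n] + alpha * y[n - delay]:
        # output energy accumulates when the envelope repeats every `delay` samples.
        y = np.zeros_like(envelope)
        for n in range(len(envelope)):
            y[n] = envelope[n] + (alpha * y[n - delay] if n >= delay else 0.0)
        energy = float(np.sum(y ** 2))
        if energy > best_energy:
            best_delay, best_energy = delay, energy
    return 60.0 * fs / best_delay  # winning lag converted to BPM
```

For example, an envelope sampled at 100 Hz with an impulse every 0.5 s (a 120 BPM pulse) resonates most strongly in the comb with a 50-sample lag. The real-time system would additionally read the winning filter's delay states to recover phase and predict the next beat time.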
RoboBasic provided a convenient platform for generating gestures, as it is simple to have the robot interpolate between any two points. The limitations of this environment were:

- The robot could only linearly interpolate between start and end points. Gestures moved at the same speed all the way through, which appeared unnatural.
- RoboBasic is relatively slow; the RoboNova's update rate with this environment was on the order of 5 Hz, which caused timing inaccuracies.
- Because of a variable loop speed, a gesture would occasionally be sent after the beat occurred, and the robot would
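The first limitation can be illustrated with a short sketch. Linear interpolation drives every joint at constant speed; an ease-in/ease-out profile such as smoothstep ramps velocity up and then down, which tends to read as more natural. The smoothstep curve here is a common illustrative alternative, not necessarily the method the authors adopted:

```python
import numpy as np

def linear_gesture(start, end, n_steps):
    """Constant-speed interpolation between joint-angle vectors,
    as in the RoboBasic environment."""
    t = np.linspace(0.0, 1.0, n_steps)
    return start + (end - start) * t[:, None]

def eased_gesture(start, end, n_steps):
    """Ease-in/ease-out interpolation: the smoothstep curve
    s = 3t^2 - 2t^3 starts and ends with zero velocity."""
    t = np.linspace(0.0, 1.0, n_steps)
    s = t * t * (3.0 - 2.0 * t)  # smoothstep easing curve
    return start + (end - start) * s[:, None]
```

Both trajectories hit the same start and end poses, but the eased version covers less distance near the endpoints and more in the middle, avoiding the abrupt starts and stops of the constant-speed sweep.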