Revision as of 11:11, 7 July 2011
Human-Robot Gestural Interaction
Short Description: Gestural interaction with people at an exhibition
Coordinator: AndreaBonarini (andrea.bonarini@polimi.it)
Tutor: AndreaBonarini (andrea.bonarini@polimi.it)
Collaborator:
Students: DeborahZamponi (deborahzamponi@gmail.com), CristianMandelli (cristianmandelli@gmail.com)
Research Area: Robotics
Research Topic: Robot development
Start: 2011/07/01
End: 2012/03/31
Status: Active
The aim of this project is to develop effective interaction between the robot E-2? and people at an exhibition, in order to convince them to be escorted to the home booth. Since the robot can barely speak and cannot understand speech at all, given the application context and current technology, most of the interaction relies on the robot's gestures and movements.
E-2? has a Kinect sensor on its head, and this is the main source of information for this project.
The implementation will include libraries from the ROS community, together with reasoning modules both to make the collection of information more robust than with the vision algorithms alone, and to select the best action to take in each situation.
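One simple way such a reasoning module could make per-frame vision output more robust is temporal filtering: a single noisy detection from the Kinect pipeline should not immediately trigger a reaction. The sketch below is purely illustrative and not part of the project's actual code; the class and parameter names are assumptions, and it uses a plain sliding-window majority vote over boolean "person detected" flags.

```python
from collections import deque

class DetectionFilter:
    """Hypothetical sketch: sliding-window majority vote over noisy
    per-frame person detections, to smooth vision-only output."""

    def __init__(self, window=5, threshold=3):
        self.window = deque(maxlen=window)  # most recent frame verdicts
        self.threshold = threshold          # positive frames needed to confirm

    def update(self, detected):
        """Feed one frame's raw detection; return the filtered verdict."""
        self.window.append(bool(detected))
        return sum(self.window) >= self.threshold

f = DetectionFilter(window=5, threshold=3)
readings = [True, False, True, True, True]   # one dropout mid-stream
verdicts = [f.update(r) for r in readings]
# The filter only confirms a person after enough recent positive frames,
# so isolated detections (or isolated dropouts) do not flip the verdict.
```

In a ROS setup this filter would sit between the vision node's output topic and the action-selection module; the window size trades responsiveness against robustness.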
Gestural interaction will play an important role in all three phases: approaching a visitor, convincing them, and escorting them to the booth.
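The three phases form a natural sequence that could be organized as a small state machine. The following is a minimal sketch under stated assumptions, not the project's implementation: phase names and the "restart on failure" policy (a visitor walking away returns the robot to the approaching phase) are hypothetical.

```python
class InteractionPhases:
    """Hypothetical sketch of the approach/convince/escort sequence
    as a minimal state machine."""

    PHASES = ("approach", "convince", "escort", "done")

    def __init__(self):
        self.phase = "approach"

    def step(self, success):
        """Advance to the next phase on success; restart from
        'approach' when the visitor loses interest (assumed policy)."""
        if not success:
            self.phase = "approach"   # visitor walked away: start over
        elif self.phase != "done":
            nxt = self.PHASES.index(self.phase) + 1
            self.phase = self.PHASES[nxt]
        return self.phase

fsm = InteractionPhases()
fsm.step(True)          # approach succeeded -> convince
fsm.step(True)          # convincing succeeded -> escort
final = fsm.step(True)  # escorted to the booth -> done
```

Each phase would trigger its own gesture repertoire (e.g. beckoning while convincing), with the Kinect-based perception deciding whether a step succeeded.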