E2GoHome

Short Description: The goal of the project is to develop a system to bring E-2 home while maintaining a relationship with the people it interacts with.
Coordinator: AndreaBonarini (andrea.bonarini@polimi.it)
Tutor: AndreaBonarini (andrea.bonarini@polimi.it)
Collaborator:
Students: LorenzoRipani (ripani.lorenzo@gmail.com)
Research Area: Robotics
Research Topic: Robot development
Start: 5/05/2013
End: 20/12/2013
Status: Closed
Level: Ms
Type: Thesis
E-2

The robot was created for exhibition environments: its purpose is to attract people's attention and bring them to a particular stand after carefully assessing the person's interest. This kind of action requires a high level of interaction with the person, so the human-robot experience had to be enriched; for this reason the robot has been equipped with a neck capable of complex movements and with a set of behaviours that enable communication with the person it is talking to.

This thesis builds on a set of projects developed over time within the AIRLab. For a complete list see E-2 - A robot for exhibitions.


Hardware & Components

The robot consists of three main sections: a mobile base, a neck, and a mechanical head able to show different facial expressions. The head accommodates all the mechanics controlling the eyes, eyebrows, and mouth, and supports the Kinect sensor, used both for the artificial vision system and as a navigation sensor.

The neck is designed to ensure better rigidity of the whole structure and to increase the set of reproducible movements. It is made up of 5 servomotors that provide different degrees of freedom, allowing the robot to make even very complex movements.

The base of the robot is an omni-directional base (see Triskar2) made up of three pairs of electric motors that drive the robot's wheels, a Zotac mini-PC, and three pairs of 12V batteries that supply all the components of the robot.

All control is achieved through a set of ROS nodes that communicate with each other to handle the robot's tasks.
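
As an illustration of this node-based structure, the following is a minimal sketch of a single ROS node written with rospy. The node name, topic, and velocity values are hypothetical, not taken from the E-2 code; the pattern shown (nodes exchanging messages over topics) is what the control architecture relies on.

  #!/usr/bin/env python
  # Minimal sketch of a ROS node: publish base velocity commands on a topic
  # that another node (e.g. the base driver) would subscribe to.
  import rospy
  from geometry_msgs.msg import Twist

  def main():
      rospy.init_node('behavior_sketch')        # hypothetical node name
      pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
      rate = rospy.Rate(10)                     # publish at 10 Hz
      while not rospy.is_shutdown():
          cmd = Twist()
          cmd.linear.x = 0.2                    # drive forward at 0.2 m/s
          pub.publish(cmd)
          rate.sleep()

  if __name__ == '__main__':
      main()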


Behavior

The robot can perform different actions in response to external inputs. The following list describes some of the main behaviors the robot is able to perform.

  • GoHome: When the robot realizes that it has attracted the user's attention, E-2 plans its trajectory towards the AIRLab stand. The robot relies on facial analysis from the Kinect sensor to determine the person's interest.
  • UserFollowing: The robot can determine whether a user is following it. It periodically checks that the person with whom it started an interaction is still nearby: it takes frames of the environment, detects people's faces, and compares them with the face stored in memory (a minimal sketch of this check appears after this list).
  • Wait: The robot can perceive its distance from people. When it detects that the user has fallen too far behind, it first tries to re-engage them by calling out; if this does not work, the robot approaches the user to check whether they still want to follow it.
  • RecoverUser: When the robot loses track of the user, it plans a backward trajectory towards the last detection point to retrieve the person. During navigation it periodically checks for known faces.
  • AvoidObstacles: The robot avoids obstacles in the environment by transforming the Kinect's depth images into LaserScan messages, which give a planar representation of the workspace. With this information E-2 can compute new trajectories when an obstacle is detected in front of it (see the sketch after this list).
  • KeepUserConnection: While following a path that leads to the stand, if the robot has perceived the presence of the user, E-2 starts an interaction with the person, talking about projects and news from the robotics field to keep their interest high.
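
A rough sketch of the UserFollowing face check: detect faces in the current frame and compare them against a model of the user the interaction started with. It uses OpenCV's Haar cascade detector and LBPH recognizer (from opencv-contrib); the model file, label, and threshold are assumptions, and the source does not state which face-recognition method E-2 actually uses.

  # Sketch of the UserFollowing check with OpenCV (requires opencv-contrib).
  import cv2

  detector = cv2.CascadeClassifier(
      cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
  recognizer = cv2.face.LBPHFaceRecognizer_create()
  recognizer.read('current_user.yml')  # hypothetical model trained on the user

  def user_still_present(frame, max_distance=60.0):
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
          label, distance = recognizer.predict(gray[y:y+h, x:x+w])
          if label == 0 and distance < max_distance:
              return True   # a detected face matches the stored user
      return False          # nobody matching the user is in view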
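
The AvoidObstacles behaviour rests on converting depth images into LaserScan messages. The sketch below shows the idea in a deliberately simplified form, republishing the central row of the Kinect depth image as a planar scan; in practice a package such as ROS's depthimage_to_laserscan does this properly, and the topic names, field of view, and depth encoding here are assumptions.

  # Simplified depth-image-to-LaserScan node: one horizontal image row
  # becomes one planar scan (column-to-angle mapping is approximated).
  import math
  import rospy
  from sensor_msgs.msg import Image, LaserScan
  from cv_bridge import CvBridge

  bridge = CvBridge()
  FOV = math.radians(57.0)   # approximate Kinect horizontal field of view

  def depth_callback(msg):
      depth = bridge.imgmsg_to_cv2(msg)     # assumes 32FC1 depth in meters
      row = depth[depth.shape[0] // 2, :]   # central horizontal row
      scan = LaserScan()
      scan.header = msg.header
      scan.angle_min = -FOV / 2.0
      scan.angle_max = FOV / 2.0
      scan.angle_increment = FOV / len(row)
      scan.range_min, scan.range_max = 0.45, 8.0
      scan.ranges = [float(r) for r in row]
      pub.publish(scan)

  if __name__ == '__main__':
      rospy.init_node('depth_to_scan_sketch')
      pub = rospy.Publisher('scan', LaserScan, queue_size=1)
      rospy.Subscriber('camera/depth/image', Image, depth_callback)
      rospy.spin()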


Video

The integrated functionalities


References

ROS wiki: http://wiki.ros.org