Learning Robogames

Tutor: AndreaBonarini (andrea.bonarini@polimi.it), MarcelloRestelli (restelli@elet.polimi.it)
Students: DavideTateo (davide.tateo@polimi.it)
Research Area: Robotics
Research Topic: Robot learning
Start: 1/11/2014


Abstract

Physical Interactive Robogames (PIRGs) are among the most challenging tasks in robotics, as they involve complex interaction with both the environment and people. They are a good benchmark for home robotics applications: their domain is limited and well defined (the game rules), yet they expose most of the problems robots face in domestic environments while interacting with people. To obtain an engaging interaction between the player and the robot, the robot must be able to adapt to the many scenarios it can encounter, such as different game configurations and different kinds of players. Furthermore, a purely reactive agent is not well suited to the task: the robot tends to act in a naive way that the player can easily exploit, making the game too easy and boring. Designing and implementing a more complex robot policy by hand is hard, particularly for complex games, and harder still if the robot has to face novel situations and different types of players. The design of complex games featuring advanced interaction between the robot and the players is therefore strongly limited by the robot's ability to act in a credible and reliable way.

Reinforcement learning can overcome these limitations and makes it easier to adapt the robot to new environments. In particular, recent advances in policy search algorithms and inverse reinforcement learning have made it possible for robots to learn complex tasks that are very hard to model and solve with classical approaches. However, policy search may fail when the policy is too complex and general, when the number of robot behavior samples needed to learn the policy becomes intractable, or when the reward signal is sparse. Hierarchical reinforcement learning approaches address this by decomposing the task into subgoals that are easier to learn.

In this research we will focus on extending hierarchical reinforcement learning approaches to allow the learning of complex policies. The key idea is to give the robot a set of tasks of increasing complexity, so that it incrementally learns skills useful for solving more complex tasks. We will investigate techniques to automatically discover subgoals and to transfer subgoal policies to other tasks and different environments; inverse reinforcement learning techniques will be used to learn the structural properties of a subgoal. The ultimate goal is a robot that, given a sufficient set of skills, can effectively engage any kind of player, learning only from its own experience and from expert demonstrations in a simplified version of the game.
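To make the term "policy search" concrete, the following minimal sketch applies the REINFORCE policy-gradient rule to a two-armed bandit: policy parameters are nudged along the gradient of expected reward estimated from sampled behavior. It is purely illustrative and not part of the project code; the bandit environment, the reward values, and all names are invented for the example.

```python
# Hedged sketch of policy search (REINFORCE, no baseline) on a toy
# two-armed bandit. The environment and all constants are hypothetical.
import math, random

theta = [0.0, 0.0]        # one preference parameter per action
alpha = 0.1               # learning rate
mean_reward = (0.2, 0.8)  # hidden mean reward per action (toy data)

def softmax(prefs):
    e = [math.exp(p - max(prefs)) for p in prefs]
    z = sum(e)
    return [x / z for x in e]

for _ in range(2000):
    probs = softmax(theta)
    a = random.choices((0, 1), weights=probs)[0]   # sample an action
    r = random.gauss(mean_reward[a], 0.1)          # sample a noisy reward
    # REINFORCE update: theta_i += alpha * r * d/dtheta_i log pi(a)
    # For a softmax policy, d/dtheta_i log pi(a) = 1{i==a} - pi_i.
    for i in range(2):
        grad_log = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += alpha * r * grad_log

print("learned action probabilities:", softmax(theta))  # should favor action 1
```

If the reward here were sparse or delayed instead of available at every step, the same update would receive almost no signal, which is exactly the failure mode that motivates the hierarchical decomposition sketched next.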
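The hierarchical idea can be sketched in the same spirit: a task whose flat reward is sparse (success only at the very end) is decomposed into subgoals with dense rewards, one policy is learned per subgoal with ordinary tabular Q-learning, and the policies are chained. The corridor "fetch" game below and every name in it are hypothetical, chosen only to keep the example self-contained.

```python
# Hedged sketch of hierarchical decomposition on a toy 1-D "fetch" game:
# reach the ball, then bring it back to the player. Each subgoal gets its
# own dense reward and its own Q-table; the flat task would only reward
# the final delivery. All names and constants are illustrative.
import random

N = 10               # corridor cells 0..N-1 (hypothetical environment)
BALL, PLAYER = 7, 0  # subgoal locations
ACTIONS = (-1, +1)   # move left / move right

def q_learn(goal, episodes=500, alpha=0.5, gamma=0.95, eps=0.1):
    """Learn a greedy policy that reaches `goal` from any cell (one subgoal)."""
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = random.randrange(N)
        while s != goal:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda a: q[(s, a)]))
            s2 = min(N - 1, max(0, s + a))
            r = 1.0 if s2 == goal else -0.01   # dense per-subgoal reward
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return lambda s: max(ACTIONS, key=lambda a: q[(s, a)])

# Learn one policy per subgoal, then execute them in sequence.
reach_ball = q_learn(BALL)
reach_player = q_learn(PLAYER)
s = 4
for policy, goal in ((reach_ball, BALL), (reach_player, PLAYER)):
    while s != goal:
        s = min(N - 1, max(0, s + policy(s)))
print("task solved: ball fetched and delivered to the player")
```

In the same spirit, a subgoal policy learned on a short corridor could warm-start learning on a longer one, which is the sample-efficiency argument behind giving the robot tasks of increasing complexity.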