First Level Theses

From AIRWiki
Revision as of 12:29, 29 September 2008 by MarcelloRestelli (Talk | contribs)


Machine Learning

Title: Reinforcement Learning in Poker
Description: In recent years, Artificial Intelligence research has shifted its attention from fully observable environments, such as Chess, to more challenging partially observable ones, such as Poker.

Up to now, research on this kind of environment, which can be formalized as Partially Observable Stochastic Games, has mostly taken a game-theoretic point of view, focusing on the pursuit of optimality and equilibrium rather than on payoff maximization, which may be more interesting in many real-world contexts.

Reinforcement Learning techniques, on the other hand, have proven successful in solving fully observable problems, both single-agent and multi-agent, as well as single-agent partially observable ones, but they have so far seen little application to the partially observable multi-agent framework.

This research aims at solving Partially Observable Stochastic Games by investigating how the Opponent Modeling concept can be combined with well-proven Reinforcement Learning solution techniques, adopting Poker as a testbed.
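The difference between equilibrium play and payoff maximization can be illustrated with a toy sketch (not part of the thesis itself; the game, the opponent bias, and all names below are illustrative assumptions). In matching pennies, the game-theoretic equilibrium strategy guarantees an expected payoff of 0, but against a biased opponent a simple frequency-based opponent model lets an agent earn a positive average payoff by best-responding to the model:

```python
import random

def biased_opponent(bias=0.7):
    """Hypothetical opponent that plays 'H' with probability `bias`."""
    return "H" if random.random() < bias else "T"

def play(rounds=10000, seed=0, bias=0.7):
    """Play matching pennies as the matcher (+1 for a match, -1 otherwise),
    best-responding to a frequency model of the opponent's actions."""
    random.seed(seed)
    counts = {"H": 1, "T": 1}  # opponent model with Laplace smoothing
    total = 0
    for _ in range(rounds):
        # Best response to the modeled opponent: pick its most frequent action.
        guess = "H" if counts["H"] >= counts["T"] else "T"
        opp = biased_opponent(bias)
        total += 1 if guess == opp else -1
        counts[opp] += 1  # update the opponent model after observing the action
    return total / rounds
```

With a 0.7-biased opponent the modeling agent averages roughly +0.4 per round, whereas the equilibrium (uniform random) strategy would average 0. The thesis setting is far harder, since in Poker the opponent's state is only partially observed, but the exploitation principle is the same.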

Tutor: Marcello Restelli
Start: Anytime
Number of students: 2
CFU: 5