<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://airwiki.elet.polimi.it/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=EwertonLopes</id>
		<title>AIRWiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://airwiki.elet.polimi.it/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=EwertonLopes"/>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php/Special:Contributions/EwertonLopes"/>
		<updated>2026-04-04T09:34:32Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.25.6</generator>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18074</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18074"/>
				<updated>2016-03-14T15:35:04Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Learning behaviors and user models to optimise the player's experience in robogames &lt;br /&gt;
|short_descr=Focused on the use of machine learning for supporting player modelling and behavior/strategy adjustment towards maintaining (or conversely, improving) human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=Francesco Amigoni; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Given the steady progress in interactive systems and robots, a natural evolution of the gaming experience is to eliminate screens and devices, presenting users with the possibility to physically interact with autonomous agents without the need to produce an entire virtual reality, as in classical videogames. This new style of game interaction, known as Physically Interactive Robogames (PIRG), exploits the real world (in both its unstructured and structured dynamical aspects) as the environment and real, physical, autonomous entities as game companions. In this scenario, the present PhD research investigates the use of machine learning techniques for developing complex behavior in PIRG autonomous robots, specifically to support the development of on-line player modelling (which should also include an approach to intention detection), envisioning in-game behavior/strategy adjustment aimed at maintaining (or improving) the human player’s engagement. The planned methodology also explores mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames and in the robotics community at large in order to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRGs in view of their relationship with ML-based techniques and Human-Robot Interaction. Moreover, it should add a new layer of exploration to the problem of creating playing robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to be accepted as opponents or teammates, and thus more likely to reach the mass market as a new robotic product.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
The explosion of advancements in computing power, artificial intelligence (AI) and hardware has resulted in steady progress in intelligent systems, entertainment devices and robots. Interactive systems that perceive, act and communicate are increasingly able to perform and occupy a growing number of roles in today’s society. Taking advantage of this, video game companies, operating in a mass entertainment market, have recently aimed at establishing a new paradigm in which players actively move in front of the screen, pick up objects around them, and interact with the game in a more realistic fashion, with or without ad-hoc intelligent devices. This scenario, although enabling impressive gaming experiences through virtual reality, usually poses limitations regarding, for example, price, movement constraints, or even the need to assemble a specific playing-environment structure (which directly affects price).&lt;br /&gt;
&lt;br /&gt;
A natural evolution of the game-playing experience, though, is to eliminate screens and devices in order to present users with the possibility to physically interact with autonomous agents in their homes, without the need to produce an entire virtual reality. This relatively new style of games is defined as Physically Interactive RoboGames (PIRG); its main objective is to exploit the real world (in both its unstructured and structured dynamical aspects) as the environment, with one or more real, physical, autonomous robots as game opponents/companions for the human player(s) [25].&lt;br /&gt;
&lt;br /&gt;
As with commercial virtual games, the main goal of PIRGs is to produce a sense of entertainment and pleasure that can be &amp;quot;consumed&amp;quot; by a large number of users. Furthermore, autonomous robots and systems are commonly expected to exhibit rational behavior during the game and, in this sense, must be capable enough to play the role of opponents or teammates effectively, since in practice people tend not to play with or against a dull companion/opponent [25]. The ability of the AI to adapt is thus strongly important, since it may help maintain the user’s enjoyment by responding appropriately (or at least adequately) to their skills and emotions, producing a more realistic appearance of rationality. This observation has a psychological foundation, given that productivity and/or satisfaction can be raised by a proper alignment between personality and environment [4]. Indeed, it has been shown that when virtual AI-controlled game characters play too weakly against the human player, the human player loses interest in the game. Conversely, the human player often gets frustrated and wants to quit playing when AI-controlled characters play too strongly against them [5, 23]. Similar observations have been made during experiments involving PIRGs as well [25].&lt;br /&gt;
&lt;br /&gt;
Motivated by this, researchers often implement a model of the human player in order to inform the action selection of the AI engine and related components, allowing some adaptation to happen. A very popular approach in this domain is Dynamic Difficulty Adjustment (DDA), where the difficulty level of the game is adjusted dynamically to better fit the individual player. In recent years, player modeling has been a prominent topic in the computer science community and, especially, in commercial game development, giving rise to several sophisticated models grouped into different taxonomies [5, 29, 31]. Proposed approaches exploit different aspects, such as actions, strategies, tactics, profiling, emotional traits, and the method of data extraction [5]. At the core of most recent attempts is the application of machine learning (ML) techniques that can explore the large amount of data generated by the game and find useful patterns for reasoning, also under uncertainty.&lt;br /&gt;
&lt;br /&gt;
Apart from the success of virtual games, research on the practical development of PIRGs is still in its &amp;quot;infancy&amp;quot;, and despite initial &amp;quot;proof-of-concept&amp;quot; progress [25], the design and implementation of player modeling to support strategy selection aimed at preserving the player’s enjoyment remains a poorly addressed problem. Based on this, the present research investigates how to develop efficient player-modeling abilities in autonomous robots for the purpose of increasing the player’s enjoyment in PIRGs. From an ML-based perspective, I focus on the ability to assess player features in support of strategy adjustment. To some extent, this can be viewed as a first attempt at implementing DDA in robogames. Additionally, I seek to test research results both in simulated environments and on mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are interesting constraints currently addressed in robogames and in the Robotics community at large, which may enable the spread of robots in society and bring them to market.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18073</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18073"/>
				<updated>2016-03-14T15:32:34Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Learning behaviors and user models to optimise the player's experience in robogames &lt;br /&gt;
|short_descr=Focused on the development of player modelling (which should include an approach to intention detection) for strategy adjustment with the aim of maintaining (or conversely, increasing) the human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Given the steady progress in interactive systems and robots, a natural evolution of the gaming experience is to eliminate screens and devices, presenting users with the possibility to physically interact with autonomous agents without the need to produce an entire virtual reality, as in classical videogames. This new style of game interaction, known as Physically Interactive Robogames (PIRG), exploits the real world (in both its unstructured and structured dynamical aspects) as the environment and real, physical, autonomous entities as game companions. In this scenario, the present PhD research investigates the use of machine learning techniques for developing complex behavior in PIRG autonomous robots, specifically to support the development of on-line player modelling (which should also include an approach to intention detection), envisioning in-game behavior/strategy adjustment aimed at maintaining (or improving) the human player’s engagement. The planned methodology also explores mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames and in the robotics community at large in order to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRGs in view of their relationship with ML-based techniques and Human-Robot Interaction. Moreover, it should add a new layer of exploration to the problem of creating playing robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to be accepted as opponents or teammates, and thus more likely to reach the mass market as a new robotic product.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
The explosion of advancements in computing power, artificial intelligence (AI) and hardware has resulted in steady progress in intelligent systems, entertainment devices and robots. Interactive systems that perceive, act and communicate are increasingly able to perform and occupy a growing number of roles in today’s society. Taking advantage of this, video game companies, operating in a mass entertainment market, have recently aimed at establishing a new paradigm in which players actively move in front of the screen, pick up objects around them, and interact with the game in a more realistic fashion, with or without ad-hoc intelligent devices. This scenario, although enabling impressive gaming experiences through virtual reality, usually poses limitations regarding, for example, price, movement constraints, or even the need to assemble a specific playing-environment structure (which directly affects price).&lt;br /&gt;
&lt;br /&gt;
A natural evolution of the game-playing experience, though, is to eliminate screens and devices in order to present users with the possibility to physically interact with autonomous agents in their homes, without the need to produce an entire virtual reality. This relatively new style of games is defined as Physically Interactive RoboGames (PIRG); its main objective is to exploit the real world (in both its unstructured and structured dynamical aspects) as the environment, with one or more real, physical, autonomous robots as game opponents/companions for the human player(s) [25].&lt;br /&gt;
&lt;br /&gt;
As with commercial virtual games, the main goal of PIRGs is to produce a sense of entertainment and pleasure that can be &amp;quot;consumed&amp;quot; by a large number of users. Furthermore, autonomous robots and systems are commonly expected to exhibit rational behavior during the game and, in this sense, must be capable enough to play the role of opponents or teammates effectively, since in practice people tend not to play with or against a dull companion/opponent [25]. The ability of the AI to adapt is thus strongly important, since it may help maintain the user’s enjoyment by responding appropriately (or at least adequately) to their skills and emotions, producing a more realistic appearance of rationality. This observation has a psychological foundation, given that productivity and/or satisfaction can be raised by a proper alignment between personality and environment [4]. Indeed, it has been shown that when virtual AI-controlled game characters play too weakly against the human player, the human player loses interest in the game. Conversely, the human player often gets frustrated and wants to quit playing when AI-controlled characters play too strongly against them [5, 23]. Similar observations have been made during experiments involving PIRGs as well [25].&lt;br /&gt;
&lt;br /&gt;
Motivated by this, researchers often implement a model of the human player in order to inform the action selection of the AI engine and related components, allowing some adaptation to happen. A very popular approach in this domain is Dynamic Difficulty Adjustment (DDA), where the difficulty level of the game is adjusted dynamically to better fit the individual player. In recent years, player modeling has been a prominent topic in the computer science community and, especially, in commercial game development, giving rise to several sophisticated models grouped into different taxonomies [5, 29, 31]. Proposed approaches exploit different aspects, such as actions, strategies, tactics, profiling, emotional traits, and the method of data extraction [5]. At the core of most recent attempts is the application of machine learning (ML) techniques that can explore the large amount of data generated by the game and find useful patterns for reasoning, also under uncertainty.&lt;br /&gt;
&lt;br /&gt;
Apart from the success of virtual games, research on the practical development of PIRGs is still in its &amp;quot;infancy&amp;quot;, and despite initial &amp;quot;proof-of-concept&amp;quot; progress [25], the design and implementation of player modeling to support strategy selection aimed at preserving the player’s enjoyment remains a poorly addressed problem. Based on this, the present research investigates how to develop efficient player-modeling abilities in autonomous robots for the purpose of increasing the player’s enjoyment in PIRGs. From an ML-based perspective, I focus on the ability to assess player features in support of strategy adjustment. To some extent, this can be viewed as a first attempt at implementing DDA in robogames. Additionally, I seek to test research results both in simulated environments and on mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are interesting constraints currently addressed in robogames and in the Robotics community at large, which may enable the spread of robots in society and bring them to market.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18072</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18072"/>
				<updated>2016-03-14T15:31:55Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Learning behaviors and user models to optimise the player's experience in robogames &lt;br /&gt;
|short_descr=Focused on the development of player modelling (which should include an approach to intention detection) for strategy adjustment with the aim of maintaining (or conversely, increasing) the human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Given the steady progress in interactive systems and robots, a natural evolution of the gaming experience is to eliminate screens and devices, presenting users with the possibility to physically interact with autonomous agents without the need to produce an entire virtual reality, as in classical videogames. This new style of game interaction, known as Physically Interactive Robogames (PIRG), exploits the real world (in both its unstructured and structured dynamical aspects) as the environment and real, physical, autonomous entities as game companions. In this scenario, the present PhD research investigates the use of machine learning techniques for developing complex behavior in PIRG autonomous robots, specifically to support the development of on-line player modelling (which should also include an approach to intention detection), envisioning in-game behavior/strategy adjustment aimed at maintaining (or improving) the human player’s engagement. The planned methodology also explores mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames and in the robotics community at large in order to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRGs in view of their relationship with ML-based techniques and Human-Robot Interaction. Moreover, it should add a new layer of exploration to the problem of creating playing robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to be accepted as opponents or teammates, and thus more likely to reach the mass market as a new robotic product.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
The explosion of advancements in computing power, artificial intelligence (AI) and hardware has resulted in steady progress in intelligent systems, entertainment devices and robots. Interactive systems that perceive, act and communicate are increasingly able to perform and occupy a growing number of roles in today’s society. Taking advantage of this, video game companies, operating in a mass entertainment market, have recently aimed at establishing a new paradigm in which players actively move in front of the screen, pick up objects around them, and interact with the game in a more realistic fashion, with or without ad-hoc intelligent devices. This scenario, although enabling impressive gaming experiences through virtual reality, usually poses limitations regarding, for example, price, movement constraints, or even the need to assemble a specific playing-environment structure (which directly affects price).&lt;br /&gt;
&lt;br /&gt;
A natural evolution of the game-playing experience, though, is to eliminate screens and devices in order to present users with the possibility to physically interact with autonomous agents in their homes, without the need to produce an entire virtual reality. This relatively new style of games is defined as Physically Interactive RoboGames (PIRG); its main objective is to exploit the real world (in both its unstructured and structured dynamical aspects) as the environment, with one or more real, physical, autonomous robots as game opponents/companions for the human player(s) [25].&lt;br /&gt;
&lt;br /&gt;
As with commercial virtual games, the main goal of PIRGs is to produce a sense of entertainment and pleasure that can be &amp;quot;consumed&amp;quot; by a large number of users. Furthermore, autonomous robots and systems are commonly expected to exhibit rational behavior during the game and, in this sense, must be capable enough to play the role of opponents or teammates effectively, since in practice people tend not to play with or against a dull companion/opponent [25]. The ability of the AI to adapt is thus strongly important, since it may help maintain the user’s enjoyment by responding appropriately (or at least adequately) to their skills and emotions, producing a more realistic appearance of rationality. This observation has a psychological foundation, given that productivity and/or satisfaction can be raised by a proper alignment between personality and environment [4]. Indeed, it has been shown that when virtual AI-controlled game characters play too weakly against the human player, the human player loses interest in the game. Conversely, the human player often gets frustrated and wants to quit playing when AI-controlled characters play too strongly against them [5, 23]. Similar observations have been made during experiments involving PIRGs as well [25].&lt;br /&gt;
&lt;br /&gt;
Motivated by this, researchers often implement a model of the human player in order to inform the action selection of the AI engine and related components, allowing some adaptation to happen. A very popular approach in this domain is Dynamic Difficulty Adjustment (DDA), where the difficulty level of the game is adjusted dynamically to better fit the individual player. In recent years, player modeling has been a prominent topic in the computer science community and, especially, in commercial game development, giving rise to several sophisticated models grouped into different taxonomies [5, 29, 31]. Proposed approaches exploit different aspects, such as actions, strategies, tactics, profiling, emotional traits, and the method of data extraction [5]. At the core of most recent attempts is the application of machine learning (ML) techniques that can explore the large amount of data generated by the game and find useful patterns for reasoning, also under uncertainty.&lt;br /&gt;
&lt;br /&gt;
Apart from the success of virtual games, research on the practical development of PIRGs is still in its &amp;quot;infancy&amp;quot;, and despite initial &amp;quot;proof-of-concept&amp;quot; progress [25], the design and implementation of player modeling to support strategy selection aimed at preserving the player’s enjoyment remains a poorly addressed problem. Based on this, the present research investigates how to develop efficient player-modeling abilities in autonomous robots for the purpose of increasing the player’s enjoyment in PIRGs. From an ML-based perspective, I focus on the ability to assess player features in support of strategy adjustment. To some extent, this can be viewed as a first attempt at implementing DDA in robogames. Additionally, I seek to test research results both in simulated environments and on mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are interesting constraints currently addressed in robogames and in the Robotics community at large, which may enable the spread of robots in society and bring them to market.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18071</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=18071"/>
				<updated>2016-03-14T15:28:44Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Robogame Strategy&lt;br /&gt;
|short_descr=Focused on the development of player modelling (which should include an approach to intention detection) for strategy adjustment with the aim of maintaining (or conversely, increasing) the human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Given the steady progress in interactive systems and robots, a natural evolution of the gaming experience is to eliminate screens and devices, presenting users with the possibility to physically interact with autonomous agents without the need to produce an entire virtual reality, as in classical videogames. This new style of game interaction, known as Physically Interactive Robogames (PIRG), exploits the real world (in both its unstructured and structured dynamical aspects) as the environment and real, physical, autonomous entities as game companions. In this scenario, the present PhD research investigates the use of machine learning techniques for developing complex behavior in PIRG autonomous robots, specifically to support the development of on-line player modelling (which should also include an approach to intention detection), envisioning in-game behavior/strategy adjustment aimed at maintaining (or improving) the human player’s engagement. The planned methodology also explores mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames and in the robotics community at large in order to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRGs in view of their relationship with ML-based techniques and Human-Robot Interaction. Moreover, it should add a new layer of exploration to the problem of creating playing robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to be accepted as opponents or teammates, and thus more likely to reach the mass market as a new robotic product.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
The explosion of advancements in computing power, artificial intelligence (AI) and hardware has resulted in steady progress in intelligent systems, entertainment devices and robots. Interactive systems that perceive, act and communicate are increasingly able to perform and occupy a growing number of roles in today’s society. Taking advantage of this, video game companies, operating in a mass entertainment market, have recently aimed at establishing a new paradigm in which players actively move in front of the screen, pick up objects around them, and interact with the game in a more realistic fashion, with or without ad-hoc intelligent devices. This scenario, although enabling impressive gaming experiences through virtual reality, usually poses limitations regarding, for example, price, movement constraints, or even the need to assemble a specific playing-environment structure (which directly affects price).&lt;br /&gt;
&lt;br /&gt;
A natural evolution of the game-playing experience, though, is to eliminate screens and devices in order to present users with the possibility to physically interact with autonomous agents in their homes, without the need to produce an entire virtual reality. This relatively new style of games is defined as Physically Interactive RoboGames (PIRG); its main objective is to exploit the real world (in both its unstructured and structured dynamical aspects) as the environment, with one or more real, physical, autonomous robots as game opponents/companions for the human player(s) [25].&lt;br /&gt;
&lt;br /&gt;
As with commercial virtual games, the main goal of PIRGs is to produce a sense of entertainment and pleasure that can be &amp;quot;consumed&amp;quot; by a large number of users. Furthermore, autonomous robots and systems are commonly expected to exhibit rational behavior during the game and, in this sense, must be capable enough to play the role of opponents or teammates effectively, since in practice people tend not to play with or against a dull companion/opponent [25]. The ability of the AI to adapt is thus strongly important, since it may help maintain the user’s enjoyment by responding appropriately (or at least adequately) to their skills and emotions, producing a more realistic appearance of rationality. This observation has a psychological foundation, given that productivity and/or satisfaction can be raised by a proper alignment between personality and environment [4]. Indeed, it has been shown that when virtual AI-controlled game characters play too weakly against the human player, the human player loses interest in the game. Conversely, the human player often gets frustrated and wants to quit playing when AI-controlled characters play too strongly against them [5, 23]. Similar observations have been made during experiments involving PIRGs as well [25].&lt;br /&gt;
&lt;br /&gt;
Using that as a motivation, researchers often implement a model of the human player to supplement the action decision making of the AI engine and related components, allowing some adaptation to take place. A very popular approach in this domain is Dynamic Difficulty Adjustment (DDA), where the difficulty level of the game is adjusted dynamically to better fit the individual player. In recent years, player modeling has been a very active topic in the computer science community and, especially, in commercial game development, giving rise to several sophisticated models grouped into different taxonomies [5, 29, 31]. Proposed approaches exploit different aspects, such as actions, strategies, tactics, profiling, emotional traits and methods of data extraction [5]. At the core of the most recent attempts is the application of machine learning (ML) techniques, which can explore the large amount of data generated by the game and find patterns useful for reasoning, also under uncertainty.&lt;br /&gt;
&lt;br /&gt;
Apart from the success of virtual games, research interest in the practical development of PIRGs is still in its &amp;quot;infancy&amp;quot;, and despite initial &amp;quot;proof-of-concept&amp;quot; progress [25], the design and implementation of player modeling to support strategy selection aimed at keeping the player’s enjoyment is still a poorly addressed problem. Based on this, the present research investigates how to develop efficient player modeling abilities in autonomous robots for the purpose of increasing the player’s enjoyment in PIRGs. From an ML-based perspective, I focus on the ability to assess player features in support of strategy adjustment. To some extent, this can be viewed as a first attempt to implement DDA in robogames. Additionally, I seek to test the research results both in simulated environments and on mobile robot bases with cheap sensors and algorithms requiring little power to be executed in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are interesting constraints currently addressed in robogames and in the whole Robotics community, which may enable the spread of robots in society and help them reach the market.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17935</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17935"/>
				<updated>2015-09-21T16:35:11Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: /* General Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Robogame Strategy&lt;br /&gt;
|short_descr=Focused on the development of player modelling (which should include an approach to intention detection) for strategy adjustment with the aim of maintaining (or conversely, increasing) the human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Short abstract ==&lt;br /&gt;
&lt;br /&gt;
Due to steady progress in interactive systems and robots, a natural evolution of the gaming experience is to eliminate screens and devices and present users with the possibility to physically interact with autonomous agents in their homes, without the need to produce an entire virtual reality (such as that of classical videogames). This new style of robogames, known as Physically Interactive RoboGames (PIRG), exploits the real world (in both its unstructured and structured dynamic aspects) as the environment and real, physical, autonomous devices as game companions. Considering this, this PhD research project proposes to investigate, through machine learning (ML) techniques, how to develop complex strategy-based abilities in autonomous robots for the purpose of PIRG design. Specifically, the ability of intention detection for strategy adjustment will be targeted. The planned methodology aims to explore mobile robot bases with cheap sensors and algorithms requiring little power to be executed in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are interesting constraints currently addressed in robogames, and in the whole Robotics community, to enable the spread of robots in society and help them reach the market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRGs in view of their relationship with ML-based techniques and human-robot interaction. Moreover, it will be possible to tackle the need for robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to play the role of effective opponents or teammates, and thus more likely to reach the mass market as a new robotic product. 
As a consequence, this proposal directly contributes to the advancement of different scientific fields, such as Artificial Intelligence, Machine Learning and Robotics. The results obtained in robogames will be made available for use in all other applications involving Human-Robot Interaction.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
The explosion of advancements in computing power, artificial intelligence (AI) and hardware has resulted in steady progress in intelligent systems, entertainment devices and robots. Interactive systems that perceive, act and communicate are more and more able to perform, and occupy a growing number of roles in today’s society. Recently, taking advantage of that, video game companies, operating in a mass entertainment market, have aimed at establishing a new paradigm that involves players actively moving in front of the screen, picking up objects around them, and actually interacting with the game in a more realistic fashion, with or without ad-hoc intelligent devices. This scenario, although enabling an impressive gaming experience through virtual reality, usually poses some limitations regarding, for example, price, movement constraints, or even the requirement to assemble a specific playing environment structure (which directly affects price).&lt;br /&gt;
&lt;br /&gt;
What seems to be a natural evolution of the game playing experience, though, is to eliminate screens and devices and present users with the possibility to physically interact with autonomous agents in their homes, without the need to produce an entire virtual reality. This new style of games is called Physically Interactive RoboGames (PIRG); its main objective is to exploit the real world (in both its unstructured and structured dynamic aspects) as the environment, and one or more real, physical, autonomous robots as game opponents/companions for the human player(s) [25].&lt;br /&gt;
&lt;br /&gt;
Like commercial virtual games, the main aim of PIRGs is to produce a sense of entertainment and pleasure that can be &amp;quot;consumed&amp;quot; by a large number of users. Furthermore, the autonomous robots and systems involved are commonly expected to exhibit rational behavior during the game: they must be capable enough to play the role of opponents or teammates effectively, since in practice people tend not to play with or against a dull companion/opponent [25]. The ability to adapt the AI is therefore strongly important, since it can help maintain the user’s enjoyment by responding appropriately (or at least ideally) to his or her skills and emotions, producing a more realistic appearance of rationality. This observation has a psychological foundation, given that productivity and/or satisfaction can be raised by a proper alignment between personality and environment [4]. Indeed, it has been shown that when virtual AI-controlled game characters play too weakly against the human player, the player loses interest in the game; conversely, the player often gets frustrated and wants to quit when AI-controlled characters play too strongly [5, 23]. Similar observations have been made in experiments involving PIRGs as well [25].&lt;br /&gt;
&lt;br /&gt;
Using that as a motivation, researchers often implement a model of the human player to supplement the action decision making of the AI engine and related components, allowing some adaptation to take place. A very popular approach in this domain is Dynamic Difficulty Adjustment (DDA), where the difficulty level of the game is adjusted dynamically to better fit the individual player. In recent years, player modeling has been a very active topic in the computer science community and, especially, in commercial game development, giving rise to several sophisticated models grouped into different taxonomies [5, 29, 31]. Proposed approaches exploit different aspects, such as actions, strategies, tactics, profiling, emotional traits and methods of data extraction [5]. At the core of the most recent attempts is the application of machine learning (ML) techniques, which can explore the large amount of data generated by the game and find patterns useful for reasoning, also under uncertainty.&lt;br /&gt;
&lt;br /&gt;
Apart from the success of virtual games, research interest in the practical development of PIRGs is still in its &amp;quot;infancy&amp;quot;, and despite initial &amp;quot;proof-of-concept&amp;quot; progress [25], the design and implementation of player modeling to support strategy selection aimed at keeping the player’s enjoyment is still a poorly addressed problem. Based on this, the present research investigates how to develop efficient player modeling abilities in autonomous robots for the purpose of increasing the player’s enjoyment in PIRGs. From an ML-based perspective, I focus on the ability to assess player features in support of strategy adjustment. To some extent, this can be viewed as a first attempt to implement DDA in robogames. Additionally, I seek to test the research results both in simulated environments and on mobile robot bases with cheap sensors and algorithms requiring little power to be executed in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are interesting constraints currently addressed in robogames and in the whole Robotics community, which may enable the spread of robots in society and help them reach the market.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17934</id>
		<title>Talk:Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17934"/>
				<updated>2015-09-04T16:06:35Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: /* Game Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Game Description==&lt;br /&gt;
Click [https://docs.google.com/document/d/1WyVIm6Kz1FuK7JLO8RM41h3G8KnhdUziYo1M6lyEUOo/edit?usp=sharing here] to access the editable Google Docs version.&lt;br /&gt;
*Current version: 5.&lt;br /&gt;
&lt;br /&gt;
== Architectural assumptions ==&lt;br /&gt;
* Omnidirectional motion&lt;br /&gt;
* Omnidirectional onboard camera?&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17923</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17923"/>
				<updated>2015-08-26T13:11:38Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Robogame Strategy&lt;br /&gt;
|short_descr=Focused on the development of player modelling (which should include an approach to intention detection) for strategy adjustment with the aim of maintaining (or conversely, increasing) the human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== General Description ==&lt;br /&gt;
&lt;br /&gt;
Due to steady progress in interactive systems and robots, a natural evolution of the gaming experience is to eliminate screens and devices and present users with the possibility to physically interact with autonomous agents in their homes, without the need to produce an entire virtual reality (such as that of classical videogames). This new style of robogames, known as Physically Interactive RoboGames (PIRG), exploits the real world (in both its unstructured and structured dynamic aspects) as the environment and real, physical, autonomous devices as game companions. Considering this, this PhD research project proposes to investigate, through machine learning (ML) techniques, how to develop complex strategy-based abilities in autonomous robots for the purpose of PIRG design. Specifically, the ability of intention detection for strategy adjustment will be targeted. The planned methodology aims to explore mobile robot bases with cheap sensors and algorithms requiring little power to be executed in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are interesting constraints currently addressed in robogames, and in the whole Robotics community, to enable the spread of robots in society and help them reach the market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRGs in view of their relationship with ML-based techniques and human-robot interaction. Moreover, it will be possible to tackle the need for robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to play the role of effective opponents or teammates, and thus more likely to reach the mass market as a new robotic product. 
As a consequence, this proposal directly contributes to the advancement of different scientific fields, such as Artificial Intelligence, Machine Learning and Robotics. The results obtained in robogames will be made available for use in all other applications involving Human-Robot Interaction.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17922</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17922"/>
				<updated>2015-08-26T13:10:59Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=ewertonprofilepic.jpg&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Hi. I'm a PhD student at Politecnico di Milano (POLIMI). I hold a Master of Science degree in Informatics from [http://www.ufpb.br Universidade Federal da Paraíba], Brazil (2015), and a licentiate degree in Computer Science from the same university (2013). At POLIMI, I am supported by the [http://www.cnpq.br Brazilian National Council for Scientific and Technological Development (CNPq)].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Research Interests ==&lt;br /&gt;
My main research interests are:&lt;br /&gt;
*Artificial Intelligence and bio-inspired computational models;&lt;br /&gt;
*Probabilistic reasoning and Machine Learning (specially classification models);&lt;br /&gt;
*Intelligent autonomous agents;&lt;br /&gt;
*Robogames&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== PhD Research ==&lt;br /&gt;
My [[Robogame_Strategy| PhD research project]] investigates how to develop complex strategy-based abilities in autonomous robots, through machine learning (ML) techniques, for the purpose of designing better Physically Interactive Robogames (PIRG). Specifically, I tackle the development of player modelling (which should also include an approach to intention detection) for strategy adjustment, with the aim of keeping (or raising) the human player's engagement.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17921</id>
		<title>Talk:Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17921"/>
				<updated>2015-08-26T13:07:50Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: Replaced content with &amp;quot;== Game Description== Click [https://docs.google.com/document/d/1WyVIm6Kz1FuK7JLO8RM41h3G8KnhdUziYo1M6lyEUOo/edit?usp=sharing here] to access the google docs editable vers...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Game Description==&lt;br /&gt;
Click [https://docs.google.com/document/d/1WyVIm6Kz1FuK7JLO8RM41h3G8KnhdUziYo1M6lyEUOo/edit?usp=sharing here] to access the editable Google Docs version.&lt;br /&gt;
*Current version: 5.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17920</id>
		<title>Talk:Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17920"/>
				<updated>2015-08-26T13:05:47Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Google Docs version==&lt;br /&gt;
Click [https://docs.google.com/document/d/1WyVIm6Kz1FuK7JLO8RM41h3G8KnhdUziYo1M6lyEUOo/edit?usp=sharing here] to access the editable Google Docs version.&lt;br /&gt;
*Current version: 5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Reference definitions ==&lt;br /&gt;
&lt;br /&gt;
'''State variable:''' a variable that is needed to identify a state of the game (player, robot, environment); e.g., the distance between robot and player, the relative direction, the time passed when a time triggering event occurs, etc.&lt;br /&gt;
&lt;br /&gt;
'''Constraint:''' boundaries on variables that limit the state space.&lt;br /&gt;
&lt;br /&gt;
'''Game actions:''' the legal actions in the game, possibly discretised.&lt;br /&gt;
&lt;br /&gt;
'''Game rule:''' a description of what is possible to do and the possible consequences; e.g., actions that lead to states where points can be given, or constraints (or effects of actions) change.&lt;br /&gt;
&lt;br /&gt;
'''Strategy:''' a criterion to select an action over another one.&lt;br /&gt;
&lt;br /&gt;
'''Strategy support variable:''' a variable, computed from available data, used to provide the strategic module with information useful to select a strategy (e.g., timing of player's activity, usual kind of actions, etc.)&lt;br /&gt;
&lt;br /&gt;
== Game Description ==&lt;br /&gt;
&lt;br /&gt;
The game consists of a three-round interaction between the human player and a robot within an arena (possibly with some predefined obstacles), where the robot tries to reach a specific target and the human player attempts to prevent that by pressing a touch sensor attached to the robot. &lt;br /&gt;
&lt;br /&gt;
There exist three regions, defined by how far the robot detects the player to be from itself, namely A, B and C, where:&lt;br /&gt;
*Region A: the critical region, from which the human player can attempt to press the robot's sensor.&lt;br /&gt;
*Region B: the medium-distance region, meaning that the player is searching for an opportunity to perform a strike.&lt;br /&gt;
*Region C: the farthest region; the human player's remaining in it has some side effects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Game Rules ==&lt;br /&gt;
&lt;br /&gt;
*Rule 0: at the beginning of a round the players are apart from each other and the robot detects the human player’s position within region C.&lt;br /&gt;
&lt;br /&gt;
*Rule 1: at each round the human player must try to prevent the robot from reaching its target by pressing its touch sensor.&lt;br /&gt;
**Consequence: if (at any time) the human player succeeds in pressing the robot’s touch sensor, he wins the current round and Rule 3 is applied. On the other hand, if the robot accomplishes Rule 2, the human player loses the round and Rule 3 is also applied.&lt;br /&gt;
&lt;br /&gt;
*Rule 2: at each round the robot must try to reach a specific spot in the arena.&lt;br /&gt;
**Consequence: if the robot reaches its target spot, it wins the round and Rule 3 is applied. On the other hand, if its sensor is pressed (at any time), it loses the round, followed by the application of Rule 3. &lt;br /&gt;
&lt;br /&gt;
*Rule 3: at the end of each round except the last one, the players start a new round apart from each other, with the robot detecting the player’s position within region C (Rule 0). &lt;br /&gt;
**Consequence: the beginning of a new round.&lt;br /&gt;
&lt;br /&gt;
*Rule 4: at any time, if the human is detected inside region A and stays there for more than 2 seconds, the robot earns the right to a 5-second free run.&lt;br /&gt;
**Consequences: the touch sensor is disabled (it becomes functional again after the free run); the human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds.&lt;br /&gt;
&lt;br /&gt;
*Rule 5: after entering region A, the player has to wait 5 seconds without touching the robot; only after that period can he make a new attempt to press the robot’s touch sensor.&lt;br /&gt;
**Consequence: the human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds. &lt;br /&gt;
&lt;br /&gt;
*Rule 6: at any time, if the human player stays in region C for more than 10 seconds, the robot earns a 7-second free run.&lt;br /&gt;
**Consequence: same as Rule 4.&lt;br /&gt;
&lt;br /&gt;
*Rule 7: the human player cannot be closer than a certain range T to the robot’s target.&lt;br /&gt;
**Consequence: if this rule is broken, the robot earns a 3-second free run with its touch sensor disabled (the sensor becomes functional again after the free run).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Winning Conditions ==&lt;br /&gt;
&lt;br /&gt;
The game ends after three rounds and the winner is the player who has won the most rounds.&lt;br /&gt;
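
The free-run rules (Rules 4, 6 and 7) and the winning condition above can be sketched compactly. The following Python fragment is only an illustration: the function names, signatures and region encoding are hypothetical assumptions, not part of the project code.

```python
# Hypothetical sketch of Rules 4, 6 and 7 and of the winning condition.

def free_run_seconds(region, dwell_time, dist_to_target, t_range):
    """Seconds of free run the robot earns, or 0 if none.

    region         -- player's current region: 'A', 'B' or 'C'
    dwell_time     -- seconds the player has stayed in that region
    dist_to_target -- player's distance from the robot's target spot
    t_range        -- forbidden radius T around the target (Rule 7)
    """
    if t_range > dist_to_target:           # Rule 7: player too close to the target
        return 3
    if region == 'A' and dwell_time > 2:   # Rule 4: lingering in region A
        return 5
    if region == 'C' and dwell_time > 10:  # Rule 6: lingering in region C
        return 7
    return 0                               # no free run earned


def winner(robot_rounds, human_rounds):
    """Winning condition: whoever won more of the three rounds."""
    if robot_rounds > human_rounds:
        return 'robot'
    if human_rounds > robot_rounds:
        return 'human'
    return 'draw'
```

Under this sketch, a player lingering in region A for 3 seconds would grant the robot a 5-second free run, while breaking the target-range constraint grants 3 seconds regardless of region.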
&lt;br /&gt;
&lt;br /&gt;
== State Variables ==&lt;br /&gt;
&lt;br /&gt;
*Robot distance to the human player;&lt;br /&gt;
*Player time constraint;&lt;br /&gt;
*Robot touch sensor status;&lt;br /&gt;
*Direction the human player is currently facing w.r.t. the robot;&lt;br /&gt;
*Robot distance to obstacles;&lt;br /&gt;
*Rounds left;&lt;br /&gt;
*Number of rounds won by the robot&lt;br /&gt;
*Distance to the target&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Strategy support variable ==&lt;br /&gt;
&lt;br /&gt;
*Estimated time for the player to reach the robot's touch sensor given his current position;&lt;br /&gt;
*Frequency with which the player crosses into region A;&lt;br /&gt;
*Estimated player's reaction time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Future Extensions ==&lt;br /&gt;
&lt;br /&gt;
*Exploit planning for obstacles;&lt;br /&gt;
*Exploit learning strategies (with the aim of keeping the player's interest high);&lt;br /&gt;
*Add other robot players to have team strategies;&lt;br /&gt;
*Allow more than one human player;&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17919</id>
		<title>Talk:Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17919"/>
				<updated>2015-08-26T13:01:06Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Google Docs version==&lt;br /&gt;
Click [https://docs.google.com/document/d/1WyVIm6Kz1FuK7JLO8RM41h3G8KnhdUziYo1M6lyEUOo/edit?usp=sharing here] to access the editable Google Docs version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Reference definitions ==&lt;br /&gt;
&lt;br /&gt;
'''State variable:''' a variable that is needed to identify a state of the game (player, robot, environment); e.g., the distance between robot and player, the relative direction, the time passed when a time triggering event occurs, etc.&lt;br /&gt;
&lt;br /&gt;
'''Constraint:''' boundaries on variables that limit the state space.&lt;br /&gt;
&lt;br /&gt;
'''Game actions:''' the legal actions in the game, possibly discretised.&lt;br /&gt;
&lt;br /&gt;
'''Game rule:''' a description of what is possible to do and the possible consequences; e.g., actions that lead to states where points can be given, or constraints (or effects of actions) change.&lt;br /&gt;
&lt;br /&gt;
'''Strategy:''' a criterion to select an action over another one.&lt;br /&gt;
&lt;br /&gt;
'''Strategy support variable:''' a variable, computed from available data, used to provide the strategic module with information useful to select a strategy (e.g., timing of player's activity, usual kind of actions, etc.)&lt;br /&gt;
&lt;br /&gt;
== Game Description ==&lt;br /&gt;
&lt;br /&gt;
The game consists of a three-round interaction between the human player and a robot within an arena (possibly with some predefined obstacles), where the robot tries to reach a specific target and the human player attempts to prevent that by pressing a touch sensor attached to the robot. &lt;br /&gt;
&lt;br /&gt;
There exist three regions, defined by how far the robot detects the player to be from itself, namely A, B and C, where:&lt;br /&gt;
*Region A: the critical region, from which the human player can attempt to press the robot's sensor.&lt;br /&gt;
*Region B: the medium-distance region, meaning that the player is searching for an opportunity to perform a strike.&lt;br /&gt;
*Region C: the farthest region; the human player's remaining in it has some side effects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Game Rules ==&lt;br /&gt;
&lt;br /&gt;
*Rule 0: at the beginning of a round the players are apart from each other and the robot detects the human player’s position within region C.&lt;br /&gt;
&lt;br /&gt;
*Rule 1: at each round the human player must try to prevent the robot from reaching its target by pressing its touch sensor.&lt;br /&gt;
**Consequence: if (at any time) the human player succeeds in pressing the robot’s touch sensor, he wins the current round and Rule 3 is applied. On the other hand, if the robot accomplishes Rule 2, the human player loses the round and Rule 3 is also applied.&lt;br /&gt;
&lt;br /&gt;
*Rule 2: at each round the robot must try to reach a specific spot in the arena.&lt;br /&gt;
**Consequence: if the robot reaches its target spot, it wins the round and Rule 3 is applied. On the other hand, if its sensor is pressed (at any time), it loses the round, followed by the application of Rule 3. &lt;br /&gt;
&lt;br /&gt;
*Rule 3: at the end of each round except the last one, the players start a new round apart from each other, with the robot detecting the player’s position within region C (Rule 0). &lt;br /&gt;
**Consequence: the beginning of a new round.&lt;br /&gt;
&lt;br /&gt;
*Rule 4: at any time, if the human is detected inside region A and stays there for more than 2 seconds, the robot earns the right to a 5-second free run.&lt;br /&gt;
**Consequences: the touch sensor is disabled (it becomes functional again after the free run); the human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds.&lt;br /&gt;
&lt;br /&gt;
*Rule 5: after entering region A, the player has to wait 5 seconds without touching the robot; only after that period can he make a new attempt to press the robot’s touch sensor.&lt;br /&gt;
**Consequence: the human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds. &lt;br /&gt;
&lt;br /&gt;
*Rule 6: at any time, if the human player stays in region C for more than 10 seconds, the robot earns a 7-second free run.&lt;br /&gt;
**Consequence: same as Rule 4.&lt;br /&gt;
&lt;br /&gt;
*Rule 7: the human player cannot be closer than a certain range T to the robot’s target.&lt;br /&gt;
**Consequence: if this rule is broken, the robot earns a 3-second free run with its touch sensor disabled (the sensor becomes functional again after the free run).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Winning Conditions ==&lt;br /&gt;
&lt;br /&gt;
The game ends after three rounds and the winner is the player who has won the most rounds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== State Variables ==&lt;br /&gt;
&lt;br /&gt;
*Robot distance to the human player;&lt;br /&gt;
*Player time constraint;&lt;br /&gt;
*Robot touch sensor status;&lt;br /&gt;
*Direction the human player is currently facing w.r.t. the robot;&lt;br /&gt;
*Robot distance to obstacles;&lt;br /&gt;
*Rounds left;&lt;br /&gt;
*Number of rounds won by the robot&lt;br /&gt;
*Distance to the target&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Strategy support variable ==&lt;br /&gt;
&lt;br /&gt;
*Estimated time for the player to reach the robot's touch sensor given his current position;&lt;br /&gt;
*Frequency with which the player crosses into region A;&lt;br /&gt;
*Estimated player's reaction time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Future Extensions ==&lt;br /&gt;
&lt;br /&gt;
*Exploit planning for obstacles;&lt;br /&gt;
*Exploit learning strategies (with the aim of keeping the player's interest high);&lt;br /&gt;
*Add other robot players to have team strategies;&lt;br /&gt;
*Allow more than one human player;&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17918</id>
		<title>Talk:Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17918"/>
				<updated>2015-08-26T13:00:21Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Google Docs version==&lt;br /&gt;
Click [https://docs.google.com/document/d/1WyVIm6Kz1FuK7JLO8RM41h3G8KnhdUziYo1M6lyEUOo/edit?usp=sharing here] to access the editable Google Docs version.&lt;br /&gt;
&lt;br /&gt;
== Reference definitions ==&lt;br /&gt;
&lt;br /&gt;
'''State variable:''' a variable that is needed to identify a state of the game (player, robot, environment); e.g., the distance between robot and player, the relative direction, the time passed when a time triggering event occurs, etc.&lt;br /&gt;
&lt;br /&gt;
'''Constraint:''' boundaries on variables that limit the state space.&lt;br /&gt;
&lt;br /&gt;
'''Game actions:''' the legal actions in the game, possibly discretised.&lt;br /&gt;
&lt;br /&gt;
'''Game rule:''' a description of what is possible to do and the possible consequences; e.g., actions that lead to states where points can be given, or constraints (or effects of actions) change.&lt;br /&gt;
&lt;br /&gt;
'''Strategy:''' a criterion to select an action over another one.&lt;br /&gt;
&lt;br /&gt;
'''Strategy support variable:''' a variable, computed from available data, used to provide the strategic module with information useful to select a strategy (e.g., timing of player's activity, usual kind of actions, etc.)&lt;br /&gt;
&lt;br /&gt;
== Game Description ==&lt;br /&gt;
&lt;br /&gt;
The game consists of a three-round interaction between the human player and a robot within an arena (possibly with some predefined obstacles), where the robot tries to reach a specific target and the human player attempts to prevent that by pressing a touch sensor attached to the robot.&lt;br /&gt;
&lt;br /&gt;
There exist 3 specific regions, defined by how far the robot detects the player in relation to itself, namely A, B and C, where&lt;br /&gt;
*Region A: denotes the critical region in which the human player can attempt to push the robot's sensor.&lt;br /&gt;
*Region B: defines the medium distance, meaning that the player is searching for an opportunity to perform a strike.&lt;br /&gt;
*Region C: is the furthest one; the human's prevalence in it has some side effects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Game Rules ==&lt;br /&gt;
&lt;br /&gt;
*Rule 0: at the beginning of a round the players are apart from each other and the robot detects the human player’s position within region C.&lt;br /&gt;
&lt;br /&gt;
*Rule 1: at each round the human player must try to prevent the robot from reaching its target by pressing its touch sensor.&lt;br /&gt;
**Consequence: if (at any time) the human player succeeds in pushing the robot's touch sensor, then he wins the current round and Rule 3 is applied. On the other hand, if the robot accomplishes Rule 2, the human loses the round and Rule 3 is also applied.&lt;br /&gt;
&lt;br /&gt;
*Rule 2:  at each round the robot must try to reach a specific spot in the arena.&lt;br /&gt;
**Consequence: if the robot reaches its target spot, it wins the round and Rule 3 is applied. On the other hand, if its sensor is pressed (at any time), it loses the round, followed by the application of Rule 3.&lt;br /&gt;
&lt;br /&gt;
*Rule 3: at the end of each round except the last one, the players start a new round apart from each other, and the robot must detect the player’s position within region C (Rule 0).&lt;br /&gt;
**Consequence: the beginning of a new round.&lt;br /&gt;
&lt;br /&gt;
*Rule 4: at any time, if the human is detected inside region A and stays there for more than 2 seconds, then the robot earns the right to a 5-second free run.&lt;br /&gt;
**Consequences: the touch sensor is disabled (it returns to operation after the free run); the human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds.&lt;br /&gt;
&lt;br /&gt;
*Rule 5: after entering region A, the player has to wait 5 seconds without touching the robot; only after that period of time can he make a new attempt to push the robot’s touch sensor.&lt;br /&gt;
**Consequence: The human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds. &lt;br /&gt;
&lt;br /&gt;
*Rule 6: at any time, if the human player stays in region C for more than 10 seconds, then the robot earns a 7-second free run.&lt;br /&gt;
**Consequence: same as Rule 4.&lt;br /&gt;
&lt;br /&gt;
*Rule 7: the human player cannot come closer than a certain range T to the robot’s target.&lt;br /&gt;
**Consequence: if this rule is broken, the robot earns a 3-second free run with its touch sensor disabled (the sensor returns to operation after the free run).&lt;br /&gt;
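The timing rules above (Rules 4, 6 and 7) can be summarised as a mapping from the detected situation to the free-run bonus it grants; a minimal sketch, with hypothetical function and argument names and an assumed precedence among the rules (the rules themselves do not specify which applies first when several hold):

```python
def free_run_seconds(region, seconds_in_region, within_target_range):
    """Map the detected situation to the free-run bonus it grants:
    Rule 7 (too close to the target)       -> 3-second free run
    Rule 4 (more than 2 s in region A)     -> 5-second free run
    Rule 6 (more than 10 s in region C)    -> 7-second free run
    Thresholds come from the rules; the check order is an assumption."""
    if within_target_range:                        # Rule 7
        return 3
    if region == "A" and seconds_in_region > 2:    # Rule 4
        return 5
    if region == "C" and seconds_in_region > 10:   # Rule 6
        return 7
    return 0                                       # no rule triggered
```

During a free run the touch sensor is disabled and re-enabled afterwards, as stated in the consequences above.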
&lt;br /&gt;
&lt;br /&gt;
== Winning Conditions ==&lt;br /&gt;
&lt;br /&gt;
The game ends after three rounds and the winner is the player who has won the most rounds.&lt;br /&gt;
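Since there are exactly three rounds, the winning condition amounts to a simple majority count; a minimal sketch, with hypothetical function and argument names:

```python
def game_winner(robot_rounds_won, total_rounds=3):
    """Decide the overall winner after all rounds: whoever won the
    majority of the rounds takes the game. With an odd number of
    rounds and no draws per round, a tie is impossible."""
    human_rounds_won = total_rounds - robot_rounds_won
    if robot_rounds_won > human_rounds_won:
        return "robot"
    if human_rounds_won > robot_rounds_won:
        return "human"
    return "draw"  # unreachable when total_rounds is odd
```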
&lt;br /&gt;
&lt;br /&gt;
== State Variables ==&lt;br /&gt;
&lt;br /&gt;
*Robot distance to the human player;&lt;br /&gt;
*Player time constraint;&lt;br /&gt;
*Robot touch sensor status;&lt;br /&gt;
*Direction the human player is currently facing w.r.t. the robot;&lt;br /&gt;
*Robot distance to obstacles;&lt;br /&gt;
*Rounds left;&lt;br /&gt;
*Number of rounds won by the robot;&lt;br /&gt;
*Distance to the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Strategy support variable ==&lt;br /&gt;
&lt;br /&gt;
*Estimated time for the player to reach the robot's touch sensor given his current position;&lt;br /&gt;
*Frequency with which the player crosses into Region A;&lt;br /&gt;
*Estimated player’s reaction time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Future Extensions ==&lt;br /&gt;
&lt;br /&gt;
*Exploit planning for obstacles;&lt;br /&gt;
*Exploit learning strategies (with the aim of keeping the player's interest high);&lt;br /&gt;
*Add other robot players to have team strategies;&lt;br /&gt;
*Allow more than one human player;&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17917</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17917"/>
				<updated>2015-08-26T11:47:50Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Robogame Strategy&lt;br /&gt;
|short_descr=Focused on the development of player modelling (which includes a minimalist approach to intention detection) for strategy adjustment with the aim of maintaining (or conversely, increasing) the human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== General Description ==&lt;br /&gt;
&lt;br /&gt;
Due to steady progress in interactive systems and robots, a natural evolution of the gaming experience is the elimination of screens and devices, presenting users with the possibility of physically interacting with autonomous agents in their homes without the need to produce an entire virtual reality (such as that of classical videogames). This new style of robogame, known as Physically Interactive RoboGames (PIRG), exploits the real world (in both its dynamic unstructured and structured aspects) as the environment, and real, physical, autonomous devices as game companions. Considering this, this PhD research project proposes to investigate how to develop complex strategy-based abilities in autonomous robots for the purpose of PIRG design through machine learning (ML) techniques. Specifically, the ability of intention detection for strategy adjustment will be targeted. The planned methodology aims to explore mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames, and in the Robotics community at large, to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRG in view of their relationship with ML-based techniques and human-robot interaction. Moreover, it will be possible to tackle the need to create robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to play the role of effective opponents or teammates, and thus more likely to reach the mass market as a new robotic product.
As a consequence, this proposal directly contributes to the advancement of different scientific fields, such as Artificial Intelligence, Machine Learning and Robotics. The results obtained in robogames will be made available for use in all other applications involving Human-Robot Interaction.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17916</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17916"/>
				<updated>2015-08-26T11:45:18Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: /* PhD Research */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=ewertonprofilepic.jpg&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Hi. I'm a PhD student at Politecnico di Milano (POLIMI). I hold a Master of Science degree in Informatics from [http://www.ufpb.br Universidade Federal da Paraíba], Brazil (2015), and obtained my first degree (licentiate) in Computer Science from the same university in 2013. At POLIMI, I am supported by the [http://www.cnpq.br Brazilian National Council for Scientific and Technological Development (CNPq)].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Research Interests ==&lt;br /&gt;
My main research interests are:&lt;br /&gt;
*Artificial Intelligence and bio-inspired computational models;&lt;br /&gt;
*Probabilistic reasoning and Machine Learning (especially classification models);&lt;br /&gt;
*Intelligent autonomous agents;&lt;br /&gt;
*Robogames&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== PhD Research ==&lt;br /&gt;
My [[Robogame_Strategy| PhD research project]] proposes to investigate how to develop complex strategy-based abilities in autonomous robots, for the purpose of designing better Physically Interactive Robogames (PIRG), through machine learning (ML) techniques. Specifically, I tackle the development of player modelling (which includes a minimalist approach to intention detection) for strategy adjustment, with the aim of keeping (or raising) the human player's engagement.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17913</id>
		<title>Talk:Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17913"/>
				<updated>2015-08-25T16:48:24Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Reference definitions ==&lt;br /&gt;
&lt;br /&gt;
'''State variable:''' a variable that is needed to identify a state of the game (player, robot, environment); e.g., the distance between robot and player, the relative direction, the time passed when a time triggering event occurs, etc.&lt;br /&gt;
&lt;br /&gt;
'''Constraint:''' boundaries on variables that limit the state space.&lt;br /&gt;
&lt;br /&gt;
'''Game actions:''' the legal actions in the game, possibly discretised.&lt;br /&gt;
&lt;br /&gt;
'''Game rule:''' a description of what is possible to do and the possible consequences; e.g., actions that lead to states where points can be given, or constraints (or effects of actions) change.&lt;br /&gt;
&lt;br /&gt;
'''Strategy:''' a criterion to select an action over another one.&lt;br /&gt;
&lt;br /&gt;
'''Strategy support variable:''' a variable, computed from available data, used to provide the strategic module with information useful to select a strategy (e.g., timing of player's activity, usual kind of actions, etc.)&lt;br /&gt;
&lt;br /&gt;
== Game Description ==&lt;br /&gt;
&lt;br /&gt;
The game consists of a three-round interaction between the human player and a robot within an arena (possibly with some predefined obstacles), where the robot tries to reach a specific target and the human player attempts to prevent that by pressing a touch sensor attached to the robot.&lt;br /&gt;
&lt;br /&gt;
There exist 3 specific regions, defined by how far the robot detects the player in relation to itself, namely A, B and C, where&lt;br /&gt;
*Region A: denotes the critical region in which the human player can attempt to push the robot's sensor.&lt;br /&gt;
*Region B: defines the medium distance, meaning that the player is searching for an opportunity to perform a strike.&lt;br /&gt;
*Region C: is the furthest one; the human's prevalence in it has some side effects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Game Rules ==&lt;br /&gt;
&lt;br /&gt;
*Rule 0: at the beginning of a round the players are apart from each other and the robot detects the human player’s position within region C.&lt;br /&gt;
&lt;br /&gt;
*Rule 1: at each round the human player must try to prevent the robot from reaching its target by pressing its touch sensor.&lt;br /&gt;
**Consequence: if (at any time) the human player succeeds in pushing the robot's touch sensor, then he wins the current round and Rule 3 is applied. On the other hand, if the robot accomplishes Rule 2, the human loses the round and Rule 3 is also applied.&lt;br /&gt;
&lt;br /&gt;
*Rule 2:  at each round the robot must try to reach a specific spot in the arena.&lt;br /&gt;
**Consequence: if the robot reaches its target spot, it wins the round and Rule 3 is applied. On the other hand, if its sensor is pressed (at any time), it loses the round, followed by the application of Rule 3.&lt;br /&gt;
&lt;br /&gt;
*Rule 3: at the end of each round except the last one, the players start a new round apart from each other, and the robot must detect the player’s position within region C (Rule 0).&lt;br /&gt;
**Consequence: the beginning of a new round.&lt;br /&gt;
&lt;br /&gt;
*Rule 4: at any time, if the human is detected inside region A and stays there for more than 2 seconds, then the robot earns the right to a 5-second free run.&lt;br /&gt;
**Consequences: the touch sensor is disabled (it returns to operation after the free run); the human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds.&lt;br /&gt;
&lt;br /&gt;
*Rule 5: after entering region A, the player has to wait 5 seconds without touching the robot; only after that period of time can he make a new attempt to push the robot’s touch sensor.&lt;br /&gt;
**Consequence: The human loses the round if he makes any contact with the robot or blocks its trajectory for more than 2 seconds. &lt;br /&gt;
&lt;br /&gt;
*Rule 6: at any time, if the human player stays in region C for more than 10 seconds, then the robot earns a 7-second free run.&lt;br /&gt;
**Consequence: same as Rule 4.&lt;br /&gt;
&lt;br /&gt;
*Rule 7: the human player cannot come closer than a certain range T to the robot’s target.&lt;br /&gt;
**Consequence: if this rule is broken, the robot earns a 3-second free run with its touch sensor disabled (the sensor returns to operation after the free run).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Winning Conditions ==&lt;br /&gt;
&lt;br /&gt;
The game ends after three rounds and the winner is the player who has won the most rounds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== State Variables ==&lt;br /&gt;
&lt;br /&gt;
*Robot distance to the human player;&lt;br /&gt;
*Player time constraint;&lt;br /&gt;
*Robot touch sensor status;&lt;br /&gt;
*Direction the human player is currently facing w.r.t. the robot;&lt;br /&gt;
*Robot distance to obstacles;&lt;br /&gt;
*Rounds left;&lt;br /&gt;
*Number of rounds won by the robot;&lt;br /&gt;
*Distance to the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Strategy support variable ==&lt;br /&gt;
&lt;br /&gt;
*Estimated time for the player to reach the robot's touch sensor given his current position;&lt;br /&gt;
*Frequency with which the player crosses into Region A;&lt;br /&gt;
*Estimated player’s reaction time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Future Extensions ==&lt;br /&gt;
&lt;br /&gt;
*Exploit planning for obstacles;&lt;br /&gt;
*Exploit learning strategies (with the aim of keeping the player's interest high);&lt;br /&gt;
*Add other robot players to have team strategies;&lt;br /&gt;
*Allow more than one human player;&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17912</id>
		<title>Talk:Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Talk:Robogame_Strategy&amp;diff=17912"/>
				<updated>2015-08-25T16:39:44Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: Created page with &amp;quot;Prova&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Prova&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User_talk:EwertonLopes&amp;diff=17911</id>
		<title>User talk:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User_talk:EwertonLopes&amp;diff=17911"/>
				<updated>2015-08-25T16:39:02Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User_talk:EwertonLopes&amp;diff=17910</id>
		<title>User talk:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User_talk:EwertonLopes&amp;diff=17910"/>
				<updated>2015-08-25T16:38:32Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: Created page with &amp;quot;Ewerton is smart&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Ewerton is smart&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17909</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17909"/>
				<updated>2015-08-25T15:42:29Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Robogame Strategy&lt;br /&gt;
|short_descr=Focused on the development of an ability of intention detection for strategy adjustment with the aim of maintaining the human player engagement in PIRGs.&lt;br /&gt;
|collaborator=Tiago Nascimento;&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== General Description ==&lt;br /&gt;
&lt;br /&gt;
Due to steady progress in interactive systems and robots, a natural evolution of the gaming experience is the elimination of screens and devices, presenting users with the possibility of physically interacting with autonomous agents in their homes without the need to produce an entire virtual reality (such as that of classical videogames). This new style of robogame, known as Physically Interactive RoboGames (PIRG), exploits the real world (in both its dynamic unstructured and structured aspects) as the environment, and real, physical, autonomous devices as game companions. Considering this, this PhD research project proposes to investigate how to develop complex strategy-based abilities in autonomous robots for the purpose of PIRG design through machine learning (ML) techniques. Specifically, the ability of intention detection for strategy adjustment will be targeted. The planned methodology aims to explore mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames, and in the Robotics community at large, to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRG in view of their relationship with ML-based techniques and human-robot interaction. Moreover, it will be possible to tackle the need to create robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to play the role of effective opponents or teammates, and thus more likely to reach the mass market as a new robotic product.
As a consequence, this proposal directly contributes to the advancement of different scientific fields, such as Artificial Intelligence, Machine Learning and Robotics. The results obtained in robogames will be made available for use in all other applications involving Human-Robot Interaction.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17908</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17908"/>
				<updated>2015-08-25T15:35:16Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: /* General Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Project&lt;br /&gt;
|title=Robogame Strategy&lt;br /&gt;
|short_descr=Aim of this project is to produce autonomous robots able to play on stage together with human actors, possibly improvising, or in any case facing the casualities occurring on the scene.&lt;br /&gt;
|coordinator=AndreaBonarini&lt;br /&gt;
|tutor=AndreaBonarini; &lt;br /&gt;
|students=EwertonLopes&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|restopic=Robogames;&lt;br /&gt;
|start=2015/01/05&lt;br /&gt;
|end=2018/12/31&lt;br /&gt;
|status=Active&lt;br /&gt;
|level=PhD&lt;br /&gt;
|type=Thesis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== General Description ==&lt;br /&gt;
&lt;br /&gt;
Due to steady progress in interactive systems and robots, a natural evolution of the gaming experience is the elimination of screens and devices, presenting users with the possibility of physically interacting with autonomous agents in their homes without the need to produce an entire virtual reality (such as that of classical videogames). This new style of robogame, known as Physically Interactive RoboGames (PIRG), exploits the real world (in both its dynamic unstructured and structured aspects) as the environment, and real, physical, autonomous devices as game companions. Considering this, this PhD research project proposes to investigate how to develop complex strategy-based abilities in autonomous robots for the purpose of PIRG design through machine learning (ML) techniques. Specifically, the ability of intention detection for strategy adjustment will be targeted. The planned methodology aims to explore mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames, and in the Robotics community at large, to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRG in view of their relationship with ML-based techniques and human-robot interaction. Moreover, it will be possible to tackle the need to create robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to play the role of effective opponents or teammates, and thus more likely to reach the mass market as a new robotic product.
As a consequence, this proposal directly contributes to the advancement of different scientific fields, such as Artificial Intelligence, Machine Learning and Robotics. The results obtained in robogames will be made available for use in all other applications involving Human-Robot Interaction.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17907</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17907"/>
				<updated>2015-08-25T15:31:02Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: /* General Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General Description ==&lt;br /&gt;
&lt;br /&gt;
Due to steady progress in interactive systems and robots, a natural evolution of the gaming experience is the elimination of screens and devices, presenting users with the possibility of physically interacting with autonomous agents in their homes without the need to produce an entire virtual reality (such as that of classical videogames). This new style of robogame, known as Physically Interactive RoboGames (PIRG), exploits the real world (in both its dynamic unstructured and structured aspects) as the environment, and real, physical, autonomous devices as game companions. Considering this, this PhD research project proposes to investigate how to develop complex strategy-based abilities in autonomous robots for the purpose of PIRG design through machine learning (ML) techniques. Specifically, the ability of intention detection for strategy adjustment will be targeted. The planned methodology aims to explore mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames, and in the Robotics community at large, to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRG in view of their relationship with ML-based techniques and human-robot interaction. Moreover, it will be possible to tackle the need to create robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to play the role of effective opponents or teammates, and thus more likely to reach the mass market as a new robotic product.
As a consequence, this proposal directly contributes to the advancement of different scientific fields, such as Artificial Intelligence, Machine Learning and Robotics. The results obtained in robogames will be made available for use in all other applications involving Human-Robot Interaction.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17906</id>
		<title>Robogame Strategy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=Robogame_Strategy&amp;diff=17906"/>
				<updated>2015-08-25T15:28:06Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: Created page with &amp;quot;== General Description ==  Due to a steady progress in interactive systems and robots, a natural evolution in the context of gaming experience is to bring the elimination of s...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General Description ==&lt;br /&gt;
&lt;br /&gt;
Due to steady progress in interactive systems and robots, a natural evolution of the gaming experience is the elimination of screens and devices, presenting users with the possibility of physically interacting with autonomous agents in their homes without the need to produce an entire virtual reality (such as that of classical videogames). This new style of robogame, known as Physically Interactive RoboGames (PIRG), exploits the real world (in both its dynamic unstructured and structured aspects) as the environment, and real, physical, autonomous devices as game companions. Considering this, the PhD research project proposes to investigate how to develop complex strategy-based abilities in autonomous robots for the purpose of PIRG design through machine learning (ML) techniques. Specifically, the ability of intention detection for strategy adjustment will be targeted. The planned methodology aims to explore mobile robot bases with cheap sensors and algorithms requiring little power to run in real time (&amp;quot;green algorithms&amp;quot;) in non-structured environments, since these are constraints currently addressed in robogames, and in the Robotics community at large, to enable the spread of robots in society and bring them to market. As a formal contribution to the scientific community, the proposed research will open up new methods and approaches for designing PIRG in view of their relationship with ML-based techniques and human-robot interaction. Moreover, it will be possible to tackle the need to create robots ever more capable of being perceived as rational agents, i.e., possibly smart enough to play the role of effective opponents or teammates, and thus more likely to reach the mass market as a new robotic product.
As a consequence, this proposal directly contributes to the advancement of different scientific fields, such as Artificial Intelligence, Machine Learning and Robotics. The results obtained in robogames will be made available for use in all other applications involving Human-Robot Interaction.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17903</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17903"/>
				<updated>2015-08-25T15:00:45Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=[[File:ewertonprofilepic.jpg]]&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Hi. I'm a PhD student at Politecnico di Milano (POLIMI). I hold a Master of Science degree in Informatics from [http://www.ufpb.br Universidade Federal da Paraíba], Brazil (2015), and obtained my first degree (licentiate) in Computer Science from the same university in 2013. At POLIMI, I am supported by the [http://www.cnpq.br Brazilian National Council for Scientific and Technological Development (CNPq)].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Research Interests ==&lt;br /&gt;
My main research interests are:&lt;br /&gt;
*Artificial Intelligence and bio-inspired computational models;&lt;br /&gt;
*Probabilistic reasoning and Machine Learning (especially classification models);&lt;br /&gt;
*Intelligent autonomous agents;&lt;br /&gt;
*Robogames&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== PhD Research ==&lt;br /&gt;
My [http://airwiki.ws.dei.polimi.it/index.php/Robogame_Strategy PhD research project] investigates how to develop complex strategy-based abilities in autonomous robots, using machine learning (ML) techniques, for the purpose of designing better Physically Interactive Robogames (PIRG). Specifically, I tackle the development of intention detection for strategy adjustment, with the aim of keeping (or raising) human player engagement.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=File:Ewertonprofilepic.jpg&amp;diff=17902</id>
		<title>File:Ewertonprofilepic.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=File:Ewertonprofilepic.jpg&amp;diff=17902"/>
				<updated>2015-08-25T15:00:01Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: EwertonLopes uploaded a new version of &amp;amp;quot;File:Ewertonprofilepic.jpg&amp;amp;quot;: Ewerton's profile picture.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Ewerton's profile picture.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=File:Ewertonprofilepic.jpg&amp;diff=17901</id>
		<title>File:Ewertonprofilepic.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=File:Ewertonprofilepic.jpg&amp;diff=17901"/>
				<updated>2015-08-25T14:57:59Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: Ewerton's profile picture.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Ewerton's profile picture.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17900</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17900"/>
				<updated>2015-08-25T14:43:12Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Hi. I'm a PhD student at Politecnico di Milano (POLIMI). I hold a Master of Science degree in Informatics from [http://www.ufpb.br Universidade Federal da Paraíba], Brazil (2015), and received my major degree (licentiate) in Computer Science from the same university in 2013. At POLIMI, I am supported by the [http://www.cnpq.br Brazilian National Council for Scientific and Technological Development (CNPq)].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Research Interests ==&lt;br /&gt;
My main research interests are:&lt;br /&gt;
*Artificial Intelligence and bio-inspired computational models;&lt;br /&gt;
*Probabilistic reasoning and Machine Learning (especially classification models);&lt;br /&gt;
*Intelligent autonomous agents;&lt;br /&gt;
*Robogames&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== PhD Research ==&lt;br /&gt;
My [http://airwiki.ws.dei.polimi.it/index.php/Robogame_Strategy PhD research project] investigates how to develop complex strategy-based abilities in autonomous robots, using machine learning (ML) techniques, for the purpose of designing better Physically Interactive Robogames (PIRG). Specifically, I tackle the development of intention detection for strategy adjustment, with the aim of keeping (or raising) human player engagement.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17899</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17899"/>
				<updated>2015-08-25T14:40:35Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Hi. I'm a PhD student at Politecnico di Milano (POLIMI). I hold a Master of Science degree in Informatics from [http://www.ufpb.br Universidade Federal da Paraíba], Brazil (2015), and received my major degree (licentiate) in Computer Science from the same university in 2013. At POLIMI, I am supported by the [http://www.cnpq.br Brazilian National Council for Scientific and Technological Development (CNPq)].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Research Interests ==&lt;br /&gt;
My main research interests are:&lt;br /&gt;
*Artificial Intelligence and bio-inspired computational models;&lt;br /&gt;
*Probabilistic reasoning and Machine Learning (especially classification models);&lt;br /&gt;
*Intelligent autonomous agents;&lt;br /&gt;
*Robogames&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== PhD Research ==&lt;br /&gt;
My PhD research project investigates how to develop complex strategy-based abilities in autonomous robots, using machine learning (ML) techniques, for the purpose of designing better Physically Interactive Robogames (PIRG). Specifically, I tackle the development of intention detection for strategy adjustment, with the aim of keeping (or raising) human player engagement.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17898</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17898"/>
				<updated>2015-08-25T14:40:14Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: /* Research Interests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Hi. I'm a PhD student at Politecnico di Milano (POLIMI). I hold a Master of Science degree in Informatics from [http://www.ufpb.br Universidade Federal da Paraíba], Brazil (2015), and received my major degree (licentiate) in Computer Science from the same university in 2013. At POLIMI, I am supported by the [http://www.cnpq.br Brazilian National Council for Scientific and Technological Development (CNPq)].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Research Interests ==&lt;br /&gt;
My main research interests are:&lt;br /&gt;
*Artificial Intelligence and bio-inspired computational models;&lt;br /&gt;
*Probabilistic reasoning and Machine Learning (especially classification models);&lt;br /&gt;
*Intelligent autonomous agents;&lt;br /&gt;
*Robogames&lt;br /&gt;
&lt;br /&gt;
== PhD Research ==&lt;br /&gt;
My PhD research project investigates how to develop complex strategy-based abilities in autonomous robots, using machine learning (ML) techniques, for the purpose of designing better Physically Interactive Robogames (PIRG). Specifically, I tackle the development of intention detection for strategy adjustment, with the aim of keeping (or raising) human player engagement.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17897</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17897"/>
				<updated>2015-08-25T14:39:45Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Hi. I'm a PhD student at Politecnico di Milano (POLIMI). I hold a Master of Science degree in Informatics from [http://www.ufpb.br Universidade Federal da Paraíba], Brazil (2015), and received my major degree (licentiate) in Computer Science from the same university in 2013. At POLIMI, I am supported by the [http://www.cnpq.br Brazilian National Council for Scientific and Technological Development (CNPq)].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Research Interests ==&lt;br /&gt;
My main research interests are:&lt;br /&gt;
*Artificial Intelligence and bio-inspired computational models;&lt;br /&gt;
*Probabilistic reasoning and Machine Learning (especially classification models);&lt;br /&gt;
*Intelligent autonomous agents;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== PhD Research ==&lt;br /&gt;
My PhD research project investigates how to develop complex strategy-based abilities in autonomous robots, using machine learning (ML) techniques, for the purpose of designing better Physically Interactive Robogames (PIRG). Specifically, I tackle the development of intention detection for strategy adjustment, with the aim of keeping (or raising) human player engagement.&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17896</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17896"/>
				<updated>2015-08-25T13:15:02Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17895</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17895"/>
				<updated>2015-08-25T13:14:45Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Artificial Intelligence and Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	<entry>
		<id>https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17894</id>
		<title>User:EwertonLopes</title>
		<link rel="alternate" type="text/html" href="https://airwiki.elet.polimi.it/index.php?title=User:EwertonLopes&amp;diff=17894"/>
				<updated>2015-08-25T13:13:30Z</updated>
		
		<summary type="html">&lt;p&gt;EwertonLopes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Ewerton&lt;br /&gt;
|lastname=Lopes&lt;br /&gt;
|photo=[]&lt;br /&gt;
|email=ewerton.lopes@polimi.it&lt;br /&gt;
|resarea=Robotics&lt;br /&gt;
|advisor=AndreaBonarini;&lt;br /&gt;
|projectpage= Robogame Strategy&lt;br /&gt;
|status=active&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>EwertonLopes</name></author>	</entry>

	</feed>