Working notes about Robogame design

From AIRWiki

These are intended as working notes, to be shared and contributed to by all the people involved in this activity, with the aim of defining a basic framework for designing interactive games with autonomous robots.

Introduction and conceptual considerations

A game involving interaction with an autonomous robot is, in a sense, a computer game, since the behavior of at least one of the players is managed by a computer. The main difference w.r.t. a computer game is the physical interaction between the players: all of them will probably have to move in some way and, in order to do so, they have to perceive signals from the other players.

As for any game, it is important to define, as initial specifications, the target users and the target environment to address.

Users have to be involved in the game. This means that the game should stimulate interest, which is usually obtained with something challenging, but not too difficult, for the human player (E. L. Deci, R. Flaste, Why We Do What We Do: Understanding Self-Motivation; see also the work on flow by Csikszentmihalyi).

In this project we may consider users differing by age (6-9 years, 10-15, 15-18, 18-25, 25-45, over 45) or also by gender (most videogames do not address potential female users).

The environment we consider is indoor, typically a home or a school. This avoids the problems of outdoor environments and makes the game more usable.

Robot features

A central component is the autonomous robot. To have a realistic game we should keep the robot as cheap as possible (as a reference, consider the cost of a game console: 200-300 Euros).

To characterize the robot we may consider the following aspects:

  • movement: the robot should move as needed by the game, considering also safety aspects (it should not be dangerous in any case, and it should not be perceived as dangerous)
    • speed: up to 0.5-1 m/s
    • acceleration
    • kind of movement
      • on the ground
        • holonomic: can move in any direction without limitations (e.g., Rovio)
        • non-holonomic: has some limitations, like a car or a tank (e.g., Spyke)
      • fixed (like an arm): the degrees of freedom and how they are distributed have to be considered
    • power and batteries: for moving robots, it's important to consider the weight, the power needed to move the robot as expected, and the relative battery capacity (which also influences the weight)
  • sensors
    • internal (on board)
      • contact sensors (e.g., micro-switches): give signals when triggered by contact
      • proximity sensors (e.g., infrared): give a signal when they detect objects in a given range (typically a few centimeters); infrared proximity sensors are not fully reliable (they may not detect objects that absorb infrared light, and may be affected by strong heat or light sources, such as a powerful lamp or a heater)
      • range sensors (e.g., sonar): give signals proportional to the distance to the closest object detected in the range of detection (usually approximated by a cone with a 10-40 degree opening, spanning 2-4 meters)
      • magnetic field (e.g., compass): gives a direction w.r.t. the magnetic North; may be influenced by local magnetic fields (such as the iron frame of a bed, or loudspeakers)
      • encoder: gives the angular movement of the shaft on which it is mounted (e.g., the motor shaft)
      • accelerometers, gyros: give estimates of the acceleration along, or the angular speed about, a given axis
      • camera: provides an image that can be processed. Some processing can be done by electronics connected directly to the camera (e.g., the Wiimote has a camera and processing hardware that directly provides the image-plane coordinates of up to four infrared spots; any optical mouse estimates movement by processing up to 2000 images per second, looking at differences between subsequent images). Image elements relatively simple to extract (e.g., with the OpenCV libraries) are color blobs (dimension, shape, position in image coordinates) and edges (strong differences in light intensity, typically on the borders of objects or on specifically designed markers, such as those used in the Lurch project). Any image processing done without dedicated hardware may require relatively high computational power; when possible, exploiting embedded image processing is a cheap, effective solution.
    • external
      • camera (see above): can be used to localize objects
      • RFID: in Italy, at the moment, we can only use passive RFID tags, which can be detected only from a close distance (a few centimeters) or with large antennas. Each RFID tag can provide different information (possibly useful to identify objects)
      • accelerometers, gyros (see above), possibly used to track the movement of other players holding the device
  • computation (to decide the movement and interpret sensor data)
    • on board: issues to be considered are computational power and the energy needed. If a full-featured computer is not an option, boards such as the Rabbit or Fox can be considered, as well as higher-end ARM or PIC microcontrollers
    • external: PC connected with a fast enough link
  • communication
    • Bluetooth: usually short range, widely available
    • XBee, ZigBee: cheap, relatively small bandwidth, short-medium range (indoors up to 20 m)
    • Wi-Fi: long range, widely available, large bandwidth
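The color-blob extraction mentioned above for the camera sensor can be sketched in a few lines. This is a minimal pure-Python illustration on a tiny synthetic frame; a real robot would process camera images, e.g., with OpenCV (cv2.inRange plus cv2.findContours), and all names and values here are illustrative.

```python
# Minimal color-blob detection sketch on a synthetic frame.
# In practice this would run on camera images via OpenCV; the
# function name and the test frame below are illustrative.

def find_blob(image, lower, upper):
    """Return the centroid (x, y) and bounding box (x0, y0, x1, y1) of the
    pixels whose RGB value falls inside [lower, upper], or None if no
    pixel matches. image: list of rows, each a list of (r, g, b) tuples."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if all(lo <= c <= hi for c, lo, hi in zip(pixel, lower, upper)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
    bbox = (min(xs), min(ys), max(xs), max(ys))
    return centroid, bbox

# Synthetic 5x5 frame: black background with a red 2x2 blob
black, red = (0, 0, 0), (255, 0, 0)
frame = [[black] * 5 for _ in range(5)]
for y in (1, 2):
    for x in (2, 3):
        frame[y][x] = red

centroid, bbox = find_blob(frame, lower=(200, 0, 0), upper=(255, 60, 60))
print(centroid, bbox)  # (2.5, 1.5) (2, 1, 3, 2)
```

The blob centroid, in image coordinates, is the kind of low-cost percept (position of a colored marker, of the ball, of the player's shirt) that many of the game mechanics below can be built on.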

Robot functionality

Among the functionalities we may expect to obtain from the robot are:

  • self-localization
    • continuous: at any time the robot knows its position w.r.t. some reference point (e.g., a base station); this is usually hard to obtain at a low cost
      • radio
      • internal camera and markers
      • dead-reckoning: estimating position from movement (encoders) or accelerometers (errors usually accumulate in a short time)
    • discontinuous: the robot knows its position only in specific points
      • RFID (20 cm)
      • camera (2-3 meters)
      • magnetic detector (20 cm)
      • infrared (or light)
  • localization from external points
    • camera and markers (on the robot)
    • triangulation with multiple cameras
  • ability to detect specific points or objects, or the players
    • camera
    • RFID
  • fine movement and/or manipulation
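The dead-reckoning option listed above can be made concrete for a differential-drive robot: the pose is integrated step by step from the wheel displacements measured by the encoders. A minimal sketch, where the wheel base of 0.3 m and the step sizes are illustrative values, not taken from a specific robot:

```python
# Dead-reckoning (odometry) sketch for a differential-drive robot.
# d_left and d_right are the distances (m) covered by the two wheels
# since the last update, as derived from encoder ticks.
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """One odometry step; returns the new pose (x, y, theta)."""
    d = (d_left + d_right) / 2.0                # distance of the robot centre
    d_theta = (d_right - d_left) / wheel_base   # heading change (rad)
    # midpoint integration: move along the average heading of the step
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Straight run: ten steps of 0.1 m per wheel
x, y, theta = 0.0, 0.0, 0.0
for _ in range(10):
    x, y, theta = update_pose(x, y, theta, 0.1, 0.1, wheel_base=0.3)
print(round(x, 3), round(y, 3))  # 1.0 0.0
```

Since each step adds both measurement and integration error, the estimated pose drifts over time, which is why the notes above flag dead-reckoning as error-prone and why discontinuous fixes (RFID, markers) are useful to reset it.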

Kind of interaction

The games we are considering are interactive games.

The interaction can be:

  • competitive: player and robot play against each other (e.g., a hunting game)
  • cooperative: player and robot interact to reach a given state (e.g., they help each other to find an object)
  • direct: the interaction signals are exchanged directly among the players (e.g. shooting at the opponent, or guiding it)
  • indirect: the interaction signals come indirectly from the environment (e.g. finding a marker left by the other player)
  • in real-time
  • in deferred time
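A competitive, direct, real-time interaction reduces, at its simplest, to a sense-act loop: at each game tick the robot perceives the player's position and moves toward (or away from) it. A toy sketch on a grid, with all names and the clamped one-cell-per-axis motion model being illustrative:

```python
# Toy sense-act loop for a competitive "hunting" interaction:
# at each tick the robot moves at most one grid cell per axis
# toward the perceived player position.

def chase_step(robot, player, speed=1):
    """One tick: return the robot's new (x, y), each axis clamped to speed."""
    rx, ry = robot
    px, py = player
    dx = max(-speed, min(speed, px - rx))
    dy = max(-speed, min(speed, py - ry))
    return rx + dx, ry + dy

robot, player = (0, 0), (3, 1)
for _ in range(3):          # three game ticks; player stands still here
    robot = chase_step(robot, player)
print(robot)  # (3, 1)
```

In a real game the player position would come from one of the sensing options above (camera blob, RFID, external camera), and the same loop structure covers the cooperative case by replacing the chase target with a shared goal.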

Among the signals that can be used to interact we have:

  • position
  • sound
  • explicit signal (to/from the computer)
  • gesture
  • ...