ViRHaS
Short Description: Omnidirectional robot, drivable in virtual reality
Coordinator: Andrea Bonarini (andrea.bonarini@polimi.it)
Tutor: Andrea Bonarini (andrea.bonarini@polimi.it)
Collaborator:
Students: Michele Bertoni (michele2.bertoni@mail.polimi.it)
Research Area: Robotics
Research Topic: Robot development
Start: 2016/10/20
End: 2016/07/07
Status: Completed
Level: Bs
Type: Course
ViRHaS is an omnidirectional robot, based on the Triskar model and equipped with two Raspberry Pi cameras. The cameras can be used either to watch the video stream from multiple devices or to drive the robot in 3D mode; the latter is available both on a smartphone (inside a VR box) and on a PC connected to a 3D-ready screen.
Requirements
ViRHaS is structured into three parts:
- Raspberry Pis are the hub of the entire robot and handle both the video and the control streams: the Raspberry with local address 192.168.1.3 is the master, while the other (192.168.1.4) is a slave device and only handles video transmission;
- Arduino Mega controls the motors as a slave of the master Raspberry Pi: the Raspberry sends a byte vector containing all speed components over USB serial, and the Arduino computes the PID output and drives the motors via PWM;
- User device: any modern electronic device (laptop, smartphone, desktop computer, etc.); it receives the video stream and sends controls.
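As a sketch of the serial link described above, the master Raspberry Pi could pack the three speed components into single bytes like this. The exact byte layout is an assumption for illustration, not taken from the project sources; here each component in [-1, 1] is mapped to 127 + round(v * 100), which also keeps encoded bytes away from the terminator value.

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Hypothetical encoding (an assumption, not the project's actual layout):
// each speed component (strafe, forward, angular) is clamped to [-1, 1]
// and mapped to 127 + round(v * 100), i.e. the range [27, 227], so an
// encoded byte can never collide with the '\n' terminator (byte 10).
std::array<uint8_t, 4> packSpeeds(double strafe, double forward, double angular) {
    auto enc = [](double v) {
        if (v > 1.0) v = 1.0;       // clamp to the valid range
        if (v < -1.0) v = -1.0;
        return static_cast<uint8_t>(127 + std::lround(v * 100.0));
    };
    // Three encoded components followed by the terminator byte.
    return { enc(strafe), enc(forward), enc(angular), static_cast<uint8_t>('\n') };
}
```

The resulting four bytes would be written to the USB serial port as-is; keeping the terminator out of the data range makes the Arduino-side framing trivial.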
Components
All software components may be divided into four parts:
- Video capture: performed only by the Raspberry Pis; it uses the v4l2 libraries to get an MJPG stream from the cameras and stores all data in ByteArrays;
- Communication: performed by all parts of the robot, using WebSockets between the Raspberries and the user device (in both directions) and USB serial communication from the Raspberry to the Arduino Mega;
- Video visualization: only the user device displays the video stream: once it receives a new ByteArray containing a frame, it loads the JPG image and displays it in the right position on the screen (the user can change this in the settings), using OpenGL and GPU hardware acceleration if present;
- Control handling: each device, except the slave Pi, handles control:
- the user device is directly connected to the controller, which can be any kind the device supports; the gamepad is handled transparently by the user device, which maps each key to a default command (for example throttle, direction, etc.); this mapping can be changed by the user;
- the master Raspberry receives the default commands and computes from them the strafe, forward and angular components of the speed vector, then reduces each component to a single byte for simpler parsing by the Arduino;
- the Arduino receives a 4-byte string, composed of the 3 values and the terminator '\n'; after parsing, it computes the inverse kinematics to estimate each wheel speed; finally, the target speeds enter the PID loop and the resulting commands are sent to each wheel via PWM.
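The inverse kinematics step on the Arduino can be sketched as follows. The wheel placement (three wheels at 120° intervals) and the base radius R are assumptions typical of a Triskar-style base, not measurements from ViRHaS: each wheel's linear speed is the projection of the body velocity onto the wheel's drive direction, plus the rotational contribution R * omega.

```cpp
#include <array>
#include <cmath>

// Illustrative inverse kinematics for a three-wheel omnidirectional base.
// The wheel angles and base radius below are assumptions, not taken from ViRHaS.
std::array<double, 3> inverseKinematics(double vx, double vy, double omega) {
    constexpr double kPi = 3.14159265358979323846;
    constexpr double R = 0.15;  // assumed distance from center to each wheel [m]
    // Wheels placed symmetrically, 120 degrees apart around the body.
    constexpr double thetas[3] = { 0.0, 2.0 * kPi / 3.0, 4.0 * kPi / 3.0 };
    std::array<double, 3> wheel{};
    for (int i = 0; i < 3; ++i) {
        // Project the body velocity (vx = strafe, vy = forward) onto the
        // wheel's drive direction and add the rotation term.
        wheel[i] = -std::sin(thetas[i]) * vx + std::cos(thetas[i]) * vy + R * omega;
    }
    return wheel;
}
```

Each returned value would then be the target speed fed into that wheel's PID loop; for a pure rotation all three wheels get the same speed, while for a pure translation the three wheel speeds sum to zero.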
Development
The first part of the project I developed was the user application. Written in C++ with the Qt libraries, it is multiplatform, reliable and user-friendly: the Qt toolchain can build an executable from the source code for almost every device. Here is the list of compatible and tested OSes:
- Linux (any distribution): perfectly working
- MacOS: perfectly working, since this is a Unix OS
- Android: perfectly working
- iOS: the software should work in theory but, since this is a closed OS, I could not test it
- Windows: unfortunately this OS has two known issues, both caused by Windows itself: the first is the incompatibility of the QtGamepad library with this OS (N.B. from Qt 5.9, QtGamepad should work fine, but I have not tested it yet); the second is a bug concerning the window update immediately after stopping OpenGL (to work around it, press CTRL+ALT+DEL and then Cancel, immediately after stopping the video streaming).