Point cloud SLAM with Microsoft Kinect

From AIRWiki
Title: Point cloud SLAM with Microsoft Kinect
Image:PointCloudKinect.jpg

Description: Simultaneous Localization and Mapping (SLAM) is one of the basic functionalities required of an autonomous robot. In the past we have developed a framework for building SLAM algorithms based on the Extended Kalman Filter and vision sensors. A recently available vision sensor with tremendous potential for autonomous robots is the Microsoft Kinect RGB-D sensor. The thesis aims at integrating the Kinect sensor into this framework in order to develop a point-cloud-based SLAM system.

Material:

  • Kinect sensor and libraries
  • A framework for multisensor SLAM
  • PCL 2.0 library for dealing with point clouds

Expected outcome:

  • Algorithm able to build 3D point cloud representation of the observed scene
  • Point cloud processing could also be used to improve the accuracy of the filter

Required skills or skills to be acquired:

  • Basic background in computer vision
  • Basic background in Kalman filtering
  • C++ programming under Linux
Tutor: MatteoMatteucci (matteo.matteucci@polimi.it)
Start: 2015/01/01
Students: 1 - 2
CFU: 10 - 20
Research Area: Computer Vision and Image Analysis
Research Topic: none
Level: Ms
Type: Thesis
Status: Active