Positioning and mapping of a robot
Positioning a robot within its environment is a key issue in mobile robotics: it enables autonomous navigation and the acquisition of geolocated measurements. In constrained environments, absolute positioning systems (beacons, GPS…) cannot be used. Only relative positioning methods can be considered, but these generally drift because small errors accumulate as the robot moves. To address this, algorithms based on different sensors allow robots to map their environment while locating themselves within the map being built.
Simultaneous localization and mapping
Simultaneous localization and mapping (SLAM) algorithms are the subject of much research because they offer many advantages in terms of functionality and robustness. However, they depend on a multitude of factors that make them difficult to implement, so each implementation must be tailored to the system being designed. It must take into account the characteristics of the system model, the noise acting on it, the desired accuracy of the results, the execution speed and the memory requirements.
Innovation for our sensor robots
At INNOWTECH, we are developing such an algorithm to make our robots autonomous and to geolocate the measurements they carry out in hostile environments.
A thorough review of the literature reveals two main approaches. The first is based on point cloud matching and requires no odometry information. It analyses segments constructed from the multitude of points acquired by a LIDAR (Light Detection And Ranging) mounted at a fixed height on a mobile robot. Each detection operation therefore yields a set of points expressed in polar coordinates, which must be transcribed into a set of segments. Since the method uses no odometry information, it relies exclusively on the geometry of the scans and on the detection of “geometric landmarks”, on which the matching process is based. The method works well provided that consecutive scans overlap and share at least one common geometric landmark.
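To make the transcription of a polar scan into segments concrete, here is a minimal sketch of the classic "split" step from the split-and-merge family of segment-extraction algorithms. The function names, the threshold value and the wall layout in the example are illustrative assumptions, not a description of any particular product.

```python
import numpy as np

def polar_to_cartesian(ranges, angles):
    """Convert LIDAR returns from polar (r, theta) to Cartesian (x, y)."""
    return np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

def split_into_segments(points, threshold):
    """Recursively split a run of scan points into line segments wherever
    the farthest point lies more than `threshold` from the chord joining
    the run's endpoints (the 'split' half of split-and-merge)."""
    p0, p1 = points[0], points[-1]
    if len(points) < 3:
        return [(p0, p1)]
    dx, dy = p1 - p0
    # Perpendicular distance of every point to the chord p0 -> p1
    dist = np.abs(dy * (points[:, 0] - p0[0])
                  - dx * (points[:, 1] - p0[1])) / np.hypot(dx, dy)
    i = int(np.argmax(dist))
    if dist[i] > threshold:
        return (split_into_segments(points[: i + 1], threshold)
                + split_into_segments(points[i:], threshold))
    return [(p0, p1)]

# Two perpendicular walls seen as a single run of scan points:
# the corner is a geometric landmark that splits the run in two.
xs = np.linspace(0.0, 1.0, 11)
wall_a = np.column_stack((xs, np.zeros(11)))      # along the x axis
wall_b = np.column_stack((np.ones(10), xs[1:]))   # up the line x = 1
segments = split_into_segments(np.vstack((wall_a, wall_b)), threshold=0.05)
```

In practice the split step is followed by a merge pass that joins collinear neighbouring segments, and the resulting segment endpoints and corners serve as the geometric landmarks used for matching.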
The second approach fuses the data from the odometry sensors (wheel encoders, inertial measurement units, etc.) to reconstruct the robot’s trajectory in an absolute reference frame using one of the existing trajectory reconstruction methods, and then registers the LIDAR data against this trajectory to produce the map.
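To illustrate the trajectory-reconstruction idea, here is a minimal dead-reckoning step for a differential-drive robot, assuming the per-wheel travel distances come from the encoders. The function name and parameter values are illustrative assumptions.

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Advance a differential-drive pose (x, y, theta) by one encoder step.

    d_left, d_right: distances travelled by each wheel since the last step.
    wheel_base: distance between the two wheels.
    """
    x, y, theta = pose
    d = (d_left + d_right) / 2.0               # travel of the robot centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Using the mid-step heading reduces the discretisation error
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

# Accumulating successive steps reconstructs the trajectory in the
# absolute frame defined by the initial pose -- and also accumulates
# the small encoder errors responsible for the drift discussed above.
pose = (0.0, 0.0, 0.0)
for d_l, d_r in [(0.10, 0.10), (0.10, 0.12), (0.10, 0.12)]:
    pose = integrate_odometry(pose, d_l, d_r, wheel_base=0.30)
```

Each LIDAR scan can then be placed on the map at the pose the robot held when the scan was taken.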
Trajectory reconstruction methods
Existing trajectory reconstruction methods include the Kalman filter and its derivatives, and the particle filter. These methods differ in the assumptions and principles on which they are based, as well as in accuracy, speed and memory requirements. In the table below, we compare four recognised methods used in robotics.
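As a toy illustration of the predict/update principle these filters share, a one-dimensional Kalman filter for a constant state fits in a few lines. The noise variances and measurement values below are arbitrary assumptions chosen for the example.

```python
def kalman_1d(measurements, x0, p0, q, r):
    """Minimal 1-D Kalman filter for a constant-state model.

    x0, p0: initial estimate and its variance.
    q: process noise variance (uncertainty added at each predict step).
    r: measurement noise variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows
        k = p / (p + r)          # Kalman gain: trust in the measurement
        x = x + k * (z - x)      # update: correct towards the measurement
        p = (1.0 - k) * p        # update: uncertainty shrinks
        estimates.append(x)
    return estimates

# With repeated measurements of a constant true value, the estimate
# converges on that value while the gain settles to a steady state.
estimates = kalman_1d([5.0] * 20, x0=0.0, p0=1.0, q=0.01, r=0.1)
```

The real filters used for trajectory reconstruction apply the same two steps to a multidimensional state (position, heading, velocity) with matrix-valued gains, and the particle filter replaces the Gaussian estimate with a weighted cloud of state hypotheses.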
The choice of method depends on the nature of the system’s evolution model, how well the noise affecting the system is characterised, the volume of data processed, the desired processing speed and the accuracy of the positioning information the SLAM algorithm must provide, bearing in mind the trade-offs between these characteristics: higher accuracy generally comes at the cost of speed and memory.