How do you perceive your environment?
Two main techniques are used to digitise an environment:
- Cameras capture colour information projected onto the camera's image plane.
- LiDAR samples the environment by measuring the positions of points relative to the sensor.
Hybrid sensors are now available that combine visual and distance (depth) information, drawing on the strengths of each technique in terms of accuracy and repeatability, ease of processing, and hardware cost.
The arrival of 3D ToF (time-of-flight) sensors in recent smartphones is a testament to the capabilities of this technology.
From the information extracted by these sensors, combined with other proprioceptive or exteroceptive sensors, the environment is generally represented as a point cloud in 3D space.
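As an illustration of how a depth sensor's output becomes such a point cloud, the sketch below back-projects a depth image into 3D camera coordinates using a standard pinhole model. The intrinsic values (`fx`, `fy`, `cx`, `cy`) and the toy depth map are hypothetical, not taken from any particular sensor:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud
    in the camera frame, using pinhole intrinsics: focal lengths fx, fy
    and principal point cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    # Drop invalid (zero-depth) samples, since real sensors report holes.
    return points[points[:, 2] > 0]

# Toy 2x2 depth map: every pixel sees a surface 1 m away.
depth = np.ones((2, 2))
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```

A hybrid RGB-D sensor would additionally attach the colour sampled at each pixel to the corresponding 3D point.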
Why perceive your environment?
Current computing capacities make it possible to visualise and update this 3D space in real time in graphics engines, which then allow interaction by the user (measuring a distance or a surface in the environment, issuing movement commands, etc.) or direct control of the robot.
Indeed, the model of the environment thus constructed allows the robot to plan optimal paths without colliding with its surroundings. It also supports mapping strategies adapted to the complexity of the environment and the quantities being measured, for example dynamically refining a measurement mesh to resolve an ambiguity and meet accuracy targets.
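A minimal sketch of collision-free path planning on such a model: here the environment is reduced to a hypothetical 2D occupancy grid (0 = free, 1 = obstacle) and a breadth-first search finds a shortest obstacle-avoiding route. Real systems work on richer 3D representations and cost functions, but the principle is the same:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns a shortest list of (row, col) cells from start to goal,
    moving in 4 directions, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []             # rebuild the route by walking back-pointers
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None

# A wall on the middle row forces a detour around its open end.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (2, 0))
print(path)
```

The adaptive-mesh idea mentioned above amounts to re-running such planning on a locally refined grid wherever the measured quantities demand higher resolution.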
The perception of the environment is therefore a major asset to assist the user in the use of a remote robot or to make a robot fully autonomous in a complex environment.
When several robots perceive the environment and work together, more tasks can be performed more quickly.
A supervisory HMI:
INNOWTECH offers an application that allows the management and control of its various robots for a multitude of missions, guaranteeing simplicity, safety and reliability of operation.
The user can thus retrieve photographs or video footage from an inspection, as well as a map of the environment with the associated measurement gradients in 3D space.