How sensors and intelligent algorithms pave the way for our autonomous robotic helpers

27 November 2020
Flurin, Robotics Software Engineer
As humans, we use our senses to navigate the world. Similarly, a robot must rely on its sensors. To meet the high quality requirements of professional greenkeepers, keep the turf healthy, and maintain centimeter-level localization at all times, different components must work together towards a common goal. As in life, communication is key.
 
Satellite navigation systems can determine a robot’s position to meter-level accuracy, and multi-band receivers make it possible to combine the signals of all visible satellites. By including information from nearby fixed base stations and further analyzing the satellite signals, even better position estimates are achieved through what we call real-time kinematics (RTK). By comparing current measurements to past ones, it is also possible to determine the velocity and direction in which the robot has moved. A mathematical model can additionally take into account the speed of the wheels, known as odometry, as well as the measurements from accelerometers and gyroscopes, known as inertial measurement units (IMUs); these provide further information on the robot’s direction, velocity and orientation. A camera can even track salient features in its field of view, which reveal how the robot is moving with respect to other objects in space.
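To make the combination of wheel odometry and IMU data a little more concrete, here is a minimal dead-reckoning sketch in Python. It assumes a differential-drive robot; the wheel speeds, yaw rate and wheel base are hypothetical values chosen purely for illustration, not our actual parameters.

```python
import math

def dead_reckon(x, y, heading, v_left, v_right, gyro_z, wheel_base, dt):
    """Advance a 2D pose estimate by one time step using wheel odometry
    for distance travelled and a gyroscope for the change in heading.

    x, y          -- current position estimate in meters
    heading       -- current heading estimate in radians
    v_left/right  -- measured wheel speeds in m/s (hypothetical values)
    gyro_z        -- measured yaw rate in rad/s from the IMU
    wheel_base    -- distance between the wheels in meters
    dt            -- time step in seconds
    """
    v = 0.5 * (v_left + v_right)  # forward speed from wheel odometry
    # Heading change: here we simply trust the gyro. Odometry alone would
    # give (v_right - v_left) / wheel_base; a filter would blend the two.
    heading += gyro_z * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading

# Hypothetical readings: driving gently to the left at about 0.5 m/s.
pose = dead_reckon(0.0, 0.0, 0.0, 0.45, 0.55, 0.1, 0.4, 0.1)
print(pose)
```

Each sensor enters the estimate where it is strongest: the wheels for distance travelled, the gyro for rotation. The next paragraph looks at what happens when those readings are not as trustworthy as we would like.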
 
The issue with sensors is that measurements are all wrong. Always, even if just slightly. A measurement is only a representation of a physical quantity. In a perfect world, we would have perfect measurements, but what happens when a wheel slips? IMU sensors drift all the time. What does this mean for the measurements? What happens when the camera shakes on its fixture, or a visible tree moves in the wind? It means the system will have errors in its measurements. Our task is to identify, quantify and minimize these errors. Through intelligent algorithms such as Kalman filtering, we can fuse the insights from different sensors, reduce the uncertainties inherent in each measurement, and achieve better results than any single sensor could on its own. Unity is strength.
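As a rough illustration of how such a fusion works, here is a minimal one-dimensional Kalman filter sketch in Python. It blends a noisy motion prediction (think odometry) with a noisy position measurement (think RTK-GNSS); all numbers are made up for the example, and a real system would of course estimate a full 3D state.

```python
def kalman_1d(position_est, variance_est, motion, motion_var, measurement, meas_var):
    """One predict/update cycle of a 1D Kalman filter.

    position_est, variance_est -- current belief about position and its uncertainty
    motion, motion_var         -- predicted displacement (e.g. from odometry) and its noise
    measurement, meas_var      -- new observation (e.g. from RTK-GNSS) and its noise
    All values here are illustrative.
    """
    # Predict: move the estimate by the odometric motion.
    # Uncertainty grows, because the motion itself is noisy.
    position_est += motion
    variance_est += motion_var

    # Update: blend prediction and measurement, weighted by their variances.
    gain = variance_est / (variance_est + meas_var)
    position_est += gain * (measurement - position_est)
    variance_est *= (1.0 - gain)
    return position_est, variance_est

# Hypothetical example: odometry says we moved 1.0 m, GNSS reports 0.9 m.
# The fused estimate lies in between, with a smaller variance than either
# the prediction or the measurement alone.
pos, var = kalman_1d(0.0, 0.04, 1.0, 0.02, 0.9, 0.05)
print(pos, var)
```

The key point is the gain: the filter automatically leans towards whichever source is currently more trustworthy, which is exactly why the fused result beats each sensor on its own.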
 
The same concept applies to us humans and our senses, by the way. Take optical illusions, spatial disorientation, vertigo, Shepard tones, and other sensory illusions. They all emerge from incomplete information or approximations our brains make.
 
I recently started working at Ronovatec AG, and I will be making the most of our sensors. My task is to establish good connections for transferring sensor measurements, so that our algorithms are fed with juicy data. Our algorithms detect minimal deviations from the perfect result and correct for them. Our software will also inform the user about the hedgehog family walking through at night, or send a picture of the sunbathing lizard that the robot had to avoid by adapting its path. Our robots will help with the tedious tasks, performing them more reliably and freeing up your time for more meaningful, satisfying and fun jobs. I, too, want to have fun at my job, and I look forward to working with you towards perfect results in the years to come.