Golfi, as the team has dubbed their creation, uses a 3D camera to take a snapshot of the green, which it then feeds into a physics-based model to simulate thousands of random shots from different positions. These are used to train a neural network that can then predict exactly how hard and in what direction to hit a ball to get it in the hole, from anywhere on the green.
Like even the best pros, it doesn’t get a hole in one every time. The goal isn’t really to build a tournament-winning golf robot, though, says Junker, but to demonstrate the power of hybrid approaches to robotic control. “We try to combine data-driven and physics-based methods and we searched for a nice example, which everyone can easily understand,” she says. “It’s only a toy for us, but we hope to see some advantages of our approach for industrial applications.”
So far, the researchers have only tested their approach on a small mock-up green inside their lab. The robot, which is described in a paper due to be presented at the IEEE International Conference on Robotic Computing in Italy next month, navigates its way around the two-meter-square space on four wheels, two of which are powered. Once in position, it uses a belt-driven gear shaft with a putter attached to the end to strike the ball toward the hole.
First, though, it needs to work out what shot to play given the position of the ball. The researchers begin by using a Microsoft Kinect 3D camera mounted on the ceiling to capture a depth map of the green. This data is then fed into a physics-based model, alongside other parameters like the rolling resistance of the turf, the weight of the ball, and its starting velocity, to simulate 3,000 random shots from various starting points.
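The simulation step can be sketched in miniature. The snippet below is a deliberately simplified, hypothetical stand-in for the paper's physics model: it assumes a flat two-meter-square green where rolling resistance decelerates the ball at a constant rate, and all parameter values (hole position, resistance coefficient, shot speeds) are illustrative rather than taken from the paper, whose model also accounts for the slopes in the Kinect depth map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants, not the paper's values.
HOLE = np.array([1.0, 1.5])   # assumed hole position on a 2 m x 2 m green
HOLE_RADIUS = 0.054           # regulation hole radius in meters
DECEL = 0.131 * 9.81          # rolling-resistance coefficient x gravity (m/s^2)

def simulate_putt(start, speed, angle):
    """Return where the ball stops on a flat green.

    Under constant deceleration a, a ball hit at speed v rolls a
    distance of v^2 / (2a) before stopping.
    """
    direction = np.array([np.cos(angle), np.sin(angle)])
    return start + (speed**2 / (2 * DECEL)) * direction

# Simulate 3,000 random shots from random positions; each record
# (start, speed, angle, holed?) becomes one training example.
shots = []
for _ in range(3000):
    start = rng.uniform(0.1, 1.9, size=2)
    speed = rng.uniform(0.2, 2.0)
    angle = rng.uniform(0.0, 2 * np.pi)
    end = simulate_putt(start, speed, angle)
    shots.append((start, speed, angle, np.linalg.norm(end - HOLE) < HOLE_RADIUS))

print(f"{sum(s[-1] for s in shots)} of {len(shots)} random putts were holed")
```

Most random shots miss, of course; the value of the simulated dataset is that it labels which combinations of position, speed, and direction succeed, which is what the network learns from.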
This data is used to train a neural network that can predict how hard and in what direction to hit the ball to get it in the hole from anywhere on the green. While it’s possible to solve this problem by combining the physics-based model with classical optimization, says Junker, it’s far more computationally expensive. And training the robot on simulated golf shots takes just five minutes, compared to around 30 to 40 hours if they collected data on real-world strokes, she adds.
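The idea of distilling the simulator into a network can be illustrated with a toy regression. This sketch is not the authors' architecture: it assumes the flat-green physics above (where the holing velocity actually has a closed form, used here only to generate synthetic labels) and fits a small off-the-shelf network to map ball position to the initial velocity vector that sinks the putt. On a real green, the slope-dependent mapping has no such closed form, which is what makes the learned model useful.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
HOLE = np.array([1.0, 1.5])   # assumed hole position (m)
DECEL = 0.131 * 9.81          # illustrative constant deceleration (m/s^2)

# Synthetic training set: for each random ball position, compute the
# holing shot analytically (speed v = sqrt(2*a*d), aimed at the hole).
positions = rng.uniform(0.1, 1.9, size=(3000, 2))
to_hole = HOLE - positions
dist = np.linalg.norm(to_hole, axis=1, keepdims=True)
speeds = np.sqrt(2 * DECEL * dist)
targets = to_hole / dist * speeds   # initial velocity vectors (vx, vy)

# A small multilayer perceptron learns position -> holing velocity.
net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
net.fit(positions, targets)

# Query: what velocity sinks the putt from (0.3, 0.3)?
pred = net.predict([[0.3, 0.3]])[0]
print(f"putt from (0.3, 0.3) with velocity ({pred[0]:.2f}, {pred[1]:.2f}) m/s")
```

Once trained, a query is a single forward pass, which is why the learned model is so much cheaper at run time than re-running the optimizer against the physics model for every shot.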
Before it can make its shot, though, the robot first has to line its putter up with the ball just right, which requires it to work out where on the green both itself and the ball are. To do so, it uses a neural network that has been trained to spot golf balls and a hard-coded object detection algorithm that picks out colored dots on the top of the robot to work out its orientation. This positioning data is then combined with a physical model of the robot and fed into an optimization algorithm that works out how to control its wheel motors to navigate to the ball.
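A minimal sketch of the localization and steering logic, under assumptions that are not from the paper: two markers (one front, one rear) whose image coordinates the overhead camera has already extracted, and a basic differential-drive proportional controller standing in for the paper's optimization-based one.

```python
import numpy as np

def robot_pose(front_dot, rear_dot):
    """Estimate robot position and heading from two colored markers.

    Hypothetical marker layout: one dot over the front of the chassis,
    one over the rear. The midpoint gives the position; the angle of
    the rear-to-front vector gives the heading.
    """
    front = np.asarray(front_dot, dtype=float)
    rear = np.asarray(rear_dot, dtype=float)
    center = (front + rear) / 2
    delta = front - rear
    return center, np.arctan2(delta[1], delta[0])

def wheel_speeds(center, heading, target, v=0.2, k_turn=1.5):
    """Proportional differential-drive steering toward the ball.

    A simple stand-in for the optimization-based controller: the wheel
    speed difference is proportional to the heading error, wrapped to
    (-pi, pi] so the robot always takes the shorter turn.
    """
    delta = np.asarray(target, dtype=float) - center
    desired = np.arctan2(delta[1], delta[0])
    err = (desired - heading + np.pi) % (2 * np.pi) - np.pi
    return v - k_turn * err, v + k_turn * err   # (left, right) speeds

center, heading = robot_pose((1.0, 0.0), (0.0, 0.0))  # facing along +x
left, right = wheel_speeds(center, heading, (2.0, 0.0))  # ball dead ahead
```

With the ball dead ahead the two wheels turn at the same speed; a ball off to the left speeds up the right wheel, turning the robot toward it.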
Junker admits that the approach isn’t flawless. The current setup relies on a bird’s-eye view, which would be hard to replicate on a real golf course, and switching to cameras on the robot would present major challenges, she says. The researchers also didn’t report how often Golfi successfully sinks the putt in their paper, because the figures were thrown off by the fact that it occasionally drove over the ball, knocking it out of position. When that didn’t happen, though, Junker says it was successful six or seven times out of ten, and since they submitted the paper a colleague has reworked the navigation system to avoid the ball.
Golfi isn’t the first machine to try its hand at the sport. In 2016, a robot called LDRIC hit a hole-in-one at Arizona’s TPC Scottsdale course, and several devices have been built to test out golf clubs. But Noel Rousseau, a golf coach with a PhD in motor learning, says that typically they require an operator painstakingly setting them up for each shot, and any adjustments take considerable time. “The most impressive part to me is that the golf robot is able to find the ball, sight the hole and move itself into position for an accurate stroke,” he says.
Beyond mastering putting, the hope is that the underlying techniques the researchers have developed could translate to other robotics problems, says Niklas Fittkau, a doctoral student at Paderborn University and co-lead author of the paper. “You can also transfer that to other problems, where you have some knowledge about the system and could model parts of it to obtain some data, but you can’t model everything,” he says.