Google has its own robot that plays table tennis like a pro

As if it weren’t enough to have artificial intelligence that imagines what Lady Di, Elvis Presley, Michael Jackson, Heath Ledger, Paul Walker, or Amy Winehouse would look like today, Google AI has now created an AI-powered robot with the goal of beating anyone who faces it at table tennis (also known as ping-pong).
For now the team emphasizes that the robot is “cooperative”, but given the skill it shows in the video below, you could believe it will be taking on professionals before long.
The project, called i-Sim2Real, isn’t just about ping-pong; it’s about building a robotic system that can work with, and around, fast-paced and relatively unpredictable human behavior. Table tennis has the advantage of being one-on-one (unlike basketball or cricket), yet it still strikes a balance between complexity and simplicity.
How the robot that plays table tennis works
Sim2Real describes an AI development process in which a model is trained via machine learning in a virtual environment or simulation, and that knowledge is then applied in the real world.
It takes years of testing (and, inevitably, mistakes) to arrive at a working model; doing it in simulation allows years of real-time training to happen in a few minutes or hours.
But it’s not always possible to simulate everything; for instance, what if a robot needs to interact with a human? That’s not easy to simulate, so you need real-world data to get started.
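The sim-to-real idea can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than Google's actual code: the "simulator" is just a deterministic scoring function over ball positions, and the "policy" is a single gain parameter for a paddle that tracks the ball.

```python
import random

def evaluate(gain):
    """Deterministic stand-in for a physics simulator: score a policy
    (paddle position = gain * ball position) over a fixed grid of
    incoming ball positions. Higher (closer to 0) is better."""
    balls = [i / 10.0 - 1.0 for i in range(21)]  # ball positions in [-1, 1]
    return -sum(abs(b - gain * b) for b in balls)

def train_in_sim(iters=500, step=0.1):
    """Stochastic hill-climbing in simulation: propose a nearby gain,
    keep it if it scores better. Thousands of simulated rallies run in
    milliseconds, standing in for years of real-world practice."""
    random.seed(0)
    gain = 0.0
    for _ in range(iters):
        candidate = gain + random.gauss(0.0, step)
        if evaluate(candidate) > evaluate(gain):
            gain = candidate
    return gain

# Train entirely in simulation; the learned controller would then be
# deployed on the real robot (the "real" step of Sim2Real).
learned_gain = train_in_sim()
```

The learned gain converges toward 1.0 (paddle exactly tracking the ball), which is the sketch's analogue of a policy that works once transferred to hardware.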

This leaves the creators of the robot with a real problem: they don’t have human data, because generating it would require the robot to already be interacting with humans. The Google researchers escaped this pitfall by starting simple and building a feedback loop:
“i-Sim2Real uses a simple model of human behavior as a rough starting point and alternates between simulation training and real-world implementation. In each iteration, both the model of human behavior and the policy are refined,” the tech giant explains.
It’s okay to start with a poor approximation of human behavior, because the robot is also starting to learn. More real human data is collected with each game, improving accuracy and allowing the AI to learn more.
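The alternating loop can be sketched as a toy example. Everything here is an illustrative assumption, not Google's implementation: the "human" is reduced to a one-dimensional return position, the human model is a running average seeded with a rough prior, and "training the policy" just means aiming at the modeled position.

```python
import random

def fit_human_model(real_data):
    """Fit a crude opponent model: the mean observed return position.
    A rough prior guess (0.0) stands in for the poor initial
    approximation; it is washed out as real games accumulate."""
    samples = [0.0] + real_data
    return sum(samples) / len(samples)

def train_policy_in_sim(human_model):
    """Hypothetical stand-in for simulation training: the policy
    simply learns to aim where the modeled human returns the ball."""
    return human_model

def play_real_games(policy, n=5):
    """Deploy on the 'robot' and record where the human actually
    returns the ball (simulated here with a fixed true position)."""
    true_human = 0.7
    return [random.gauss(true_human, 0.1) for _ in range(n)]

random.seed(1)
real_data, policy = [], 0.0
for iteration in range(10):
    human_model = fit_human_model(real_data)   # refine the human model
    policy = train_policy_in_sim(human_model)  # refine the policy in sim
    real_data += play_real_games(policy)       # collect more real data
```

After a few iterations the policy homes in on the true human behavior (0.7 in this toy), mirroring how each real game improves the human model, which in turn improves the policy trained against it.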
The approach was successful enough that the team’s table tennis robot was able to sustain a 340-stroke rally, as shown in the video below:
It is also capable of returning the ball to different regions; not with pinpoint accuracy, but well enough to start executing a strategy. The team also tried a more hit-oriented approach, such as returning the ball to a very specific spot from a variety of positions.
Again, it’s not about creating the ultimate table tennis machine (although that’s a likely consequence), but about finding ways to train efficiently with, and for, human interactions without making people repeat the same action thousands of times.
TechMarkup readers can learn more about the techniques used by the Google team in the following summary video: