Tech News

Robots can now peel bananas too

Robots can already serve in restaurants, perform acrobatics and dance, but one of the biggest challenges is getting them to carry out activities that require fine motor skills.


That’s why I was surprised by the model presented by researchers at the University of Tokyo, in which a robot picks up and peels a banana with both arms in three minutes.

Although the two-armed machine succeeds only 57% of the time, that rate is quite good considering how difficult this type of task is for a robot.


The most interesting thing about this development is not that artificial intelligence can successfully peel a fruit, but that it opens up a large number of possibilities for the future, since this kind of motor skill can help robots carry out tasks that require meticulous care, such as moving small pieces from one place to another, picking up and storing delicate objects, and so on.

Researchers Heecheol Kim, Yoshiyuki Ohmura, and Yasuo Kuniyoshi trained the robot using a machine learning process. In this type of training, several demonstrations are recorded to produce data that the robot then uses to replicate the action.
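The learn-from-demonstration idea described above is often called behavioral cloning. As a minimal sketch (not the Tokyo team's actual system, and with made-up demonstration data), a simple linear fit can stand in for the neural network that maps recorded robot states to demonstrated actions:

```python
import numpy as np

# Hypothetical demonstration data: each sample pairs a robot state
# (e.g. joint angles) with the action a human demonstrator took.
rng = np.random.default_rng(0)
states = rng.normal(size=(50, 4))          # 50 demonstrations, 4 state features
true_policy = np.array([[1.0, 0.0],        # unknown state->action mapping to recover
                        [0.5, -1.0],
                        [0.0, 2.0],
                        [-1.5, 0.3]])
actions = states @ true_policy             # 2 action dimensions per sample

# Behavioral cloning: fit a policy that imitates the demonstrations.
# A linear least-squares fit stands in for the deep network here.
policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The learned policy can now propose an action for a state it never saw.
new_state = rng.normal(size=4)
predicted_action = new_state @ policy
```

With enough demonstrations, the fitted policy reproduces the demonstrated behavior; the real research replaces the linear map with a deep model and noisy, high-dimensional sensor data.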


Kuniyoshi reckons his training method could help AI systems perform all sorts of tasks that are simple for humans but require a lot of coordination and motor skill. This would favor the use of this kind of technology in homes, factories and all sorts of environments.

In recent years, several developments have emerged that aim to enhance the capabilities of robots so that these machines can take over many repetitive or routine activities. The focus has been placed, as in this case, on training coordination, stability and fine motor skills.

Such is the case of researchers at the University of California, Berkeley, who created the Motion2Vec algorithm, which aims to make a robot capable of suturing patients with the precision of a human.


With this objective, they used a semi-supervised deep learning system with which the robot learns by watching videos of surgical procedures in which sutures are performed. With that information, the AI system learns the movements of health professionals in order to mimic them accurately.

The developers used a Siamese neural network, which consists of two identical networks that receive two sets of data separately, process them, compare the results and output a final score.

On the one hand, the system receives the video of the doctor making the sutures and, on the other, the recordings of the robot practicing. It compares these two clips and learns how to improve the precision of its movements.
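The two-branch comparison described above can be sketched in a few lines. This is a toy numpy illustration of the Siamese idea, not Motion2Vec itself: the same shared weights embed both inputs (here, random vectors standing in for encoded video clips), and the distance between embeddings says how similar the two motions are.

```python
import numpy as np

def embed(x, w):
    """Shared embedding branch: the same weights process both inputs."""
    return np.tanh(x @ w)

def siamese_distance(a, b, w):
    """Pass both inputs through identical networks, then compare the outputs."""
    return float(np.linalg.norm(embed(a, w) - embed(b, w)))

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 3))                # one set of weights, used by both branches

expert_clip = rng.normal(size=8)           # stand-in for an encoded surgeon video
robot_clip_close = expert_clip + 0.01 * rng.normal(size=8)  # robot motion near the expert's
robot_clip_far = rng.normal(size=8)        # unrelated robot motion

# A smaller distance means the robot's motion is closer to the expert's.
d_close = siamese_distance(expert_clip, robot_clip_close, w)
d_far = siamese_distance(expert_clip, robot_clip_far, w)
```

In training, the network's weights would be adjusted so that matching expert/robot pairs land close together and mismatched pairs land far apart; here the comparison alone is shown.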

The videos used in the training are part of the JIGSAWS database, which gathers information on surgical activity for modeling human movement. The data in JIGSAWS was collected through a collaboration between Johns Hopkins University (JHU) and Intuitive Surgical, Inc. (Sunnyvale, CA; ISI) within an IRB-approved study.

Robot butlers follow the same line. There are already models capable of picking objects up off the floor and tidying up the house, as well as robot chefs to have as allies in the kitchen. There are options for whatever you can imagine, but the truth is that this technology has not yet become part of everyday life, in part because it still needs to mature, optimize some functions and also come down in price, something that will happen as these devices see widespread use.

