How Learning Robots are Adapting to Their Tasks | EBV Elektronik

How Learning Robots are Adapting to Their Tasks and Environment

Smart cameras and image processing combined

While robots used to require complicated programming by experts, future systems will be able to teach themselves how to carry out their tasks. This will allow robots to adapt autonomously to changing conditions and optimize themselves.

Intuitive cooperation

One example of this is the BionicCobot concept from Festo: the robot is connected to IT systems from the field of Artificial Intelligence that can understand and interpret spoken questions people ask, so operator and robot can collaborate intuitively. The learning system can also process and link images from connected camera systems with positioning data and other information from devices in the working environment. The result is a semantic map that grows continually thanks to machine learning. The system then distributes tasks logically among the robots and other tools in order to support people optimally in their work.
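Festo has not published how its semantic map is implemented. As a rough sketch, assuming observations from several devices are merged by object label, a growing map could look like the following (all class and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One sensor reading: what was seen, where, and by which device."""
    label: str        # e.g. "wrench"
    position: tuple   # (x, y, z) in a shared workspace frame
    source: str       # e.g. "camera_1"

@dataclass
class SemanticMap:
    """Grows as observations arrive; later readings refine earlier ones."""
    entries: dict = field(default_factory=dict)

    def update(self, obs: Observation) -> None:
        # Merge by label: keep the newest position, remember every source.
        entry = self.entries.setdefault(obs.label, {"sources": set()})
        entry["position"] = obs.position
        entry["sources"].add(obs.source)

    def locate(self, label: str):
        entry = self.entries.get(label)
        return entry["position"] if entry else None

smap = SemanticMap()
smap.update(Observation("wrench", (0.40, 0.1, 0.0), "camera_1"))
smap.update(Observation("wrench", (0.42, 0.1, 0.0), "lidar"))
print(smap.locate("wrench"))  # -> (0.42, 0.1, 0.0), the latest fused position
```

The map keeps growing as more devices report in, which is what allows the system to distribute tasks based on an up-to-date picture of the workspace.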

Learning to grip like a baby

A special challenge in robotics is gripping different objects: how should an object be held, and what force is needed to grip it? Robotic hands developed at Bielefeld University for this purpose familiarize themselves autonomously with unknown objects. The system operates without knowing the properties of objects, such as fruit or tools, in advance. “Our system learns through trial and self-discovery – like babies when they are exploring new objects,” says Professor Helge Ritter, the neuroinformatics scientist heading up the project. On the basis of Artificial Intelligence, the system learns how everyday objects such as fruit, crockery or even soft toys can vary in color and shape, and what matters when gripping these different objects: a banana can be grasped, while a button has to be pressed. “The system uses the properties it learns to recognize these options and develops an interaction and recognition-based model for itself,” says Ritter. The gripping system is part of fundamental research – the results should benefit future self-learning robots both in the home and in industry.
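The Bielefeld system itself is not described in implementation detail here. A minimal stand-in for "learning through trial and self-discovery" is an epsilon-greedy search over candidate grip forces; the toy object model and the force values below are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical grip-force candidates (newtons). The robot tries forces,
# keeps success statistics per force, and explores occasionally
# (epsilon-greedy) - like an infant probing an unfamiliar object.
FORCES = [1.0, 2.0, 4.0, 8.0]

def simulated_grip(force: float) -> bool:
    """Toy world: the object needs at least 3 N but crushes above 7 N."""
    return 3.0 <= force <= 7.0

def learn_grip(trials: int = 200, epsilon: float = 0.1) -> float:
    successes = {f: 0 for f in FORCES}
    attempts = {f: 0 for f in FORCES}
    for _ in range(trials):
        if random.random() < epsilon:
            force = random.choice(FORCES)  # explore a random force
        else:
            # Exploit: untried forces get an optimistic score of 1.0.
            force = max(FORCES, key=lambda f:
                        successes[f] / attempts[f] if attempts[f] else 1.0)
        attempts[force] += 1
        successes[force] += simulated_grip(force)
    # Report the force with the best observed success rate.
    return max(FORCES, key=lambda f: successes[f] / max(attempts[f], 1))

print(learn_grip())  # -> 4.0, the only workable force in this toy world
```

Nothing here is specific to Bielefeld's approach; it only illustrates how repeated trials alone, with no prior object model, can converge on a workable grip.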

Sharing experiences between robots 

The Japanese company Fanuc is also working on reducing the training effort for gripping tasks through deep learning. The company demonstrated this at the Hannover Messe in 2017 using a so-called bin-picking cell: two robots equipped with 3D camera sensors face a bin of parts that they have to retrieve without being specifically taught to do so. Each robot saves the experience it gains, for example in the local cloud layer referred to as the fog. Once stored there, this information is also available to other robots. If four robots are working at the bin, they benefit from each other's “experiences” and empty the bin more quickly as a result. The learning curve shows that after 1,000 attempts the robot has a success rate of 60 per cent; after 5,000 attempts it already picks up 90 per cent of all parts – without a single line of program code having been written.
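Fanuc's implementation is proprietary; the sketch below only illustrates the pooling idea. The learning curve is hand-fitted to the article's two data points (60 per cent at 1,000 attempts, 90 per cent at 5,000), and every name is hypothetical:

```python
import random

class SharedExperienceStore:
    """Stand-in for the 'fog' layer: every robot's attempts land here."""
    def __init__(self):
        self.attempts = 0
        self.successes = 0

    def record(self, success: bool) -> None:
        self.attempts += 1
        self.successes += int(success)

class PickingRobot:
    """Each robot reads the pooled experience count, not just its own."""
    def __init__(self, store):
        self.store = store

    def success_probability(self) -> float:
        # Illustrative curve fitted to the article's figures:
        # ~60 % at 1,000 pooled attempts, ~90 % at 5,000.
        n = self.store.attempts
        return min(0.95, 0.6 * (n / 1000) ** 0.252) if n else 0.05

random.seed(1)
store = SharedExperienceStore()
robots = [PickingRobot(store) for _ in range(4)]  # four robots, one bin

for _ in range(250):  # 250 rounds x 4 robots = 1,000 pooled attempts
    for robot in robots:
        store.record(random.random() < robot.success_probability())

print(store.attempts, round(robots[0].success_probability(), 2))  # -> 1000 0.6
```

Because all four robots write to (and read from) the same store, each one reaches the 1,000-attempt mark on the learning curve in a quarter of the time a lone robot would need.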

Continually improving movements 

Nonetheless, mobile systems represent the most demanding discipline in robotics: robots that can move independently in complex environments continue to pose major challenges for research and development. To act autonomously, a robot has to perceive its own motion and its environment through sensors, process the sensor data and compute new action commands to execute. The result of these actions is then monitored in turn by the sensors. This continuous feedback allows the robot, for example, to balance itself or walk.
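The sense-process-act cycle just described can be sketched as a minimal feedback loop. The plant model (an inverted-pendulum-like tilt) and the controller gains below are invented for illustration:

```python
def balance_loop(steps: int = 200, dt: float = 0.01) -> float:
    """Minimal sense-process-act cycle: a proportional-derivative
    controller drives a simulated tilt angle back toward zero."""
    angle, velocity = 0.2, 0.0  # initial tilt (rad) and tilt rate
    kp, kd = 40.0, 8.0          # illustrative controller gains
    for _ in range(steps):
        # Sense: read the current state (here, the simulated tilt).
        error = -angle
        # Process: compute a corrective command from the feedback.
        command = kp * error - kd * velocity
        # Act: apply the command while gravity (9.81 * angle) disturbs
        # the tilt; the next loop iteration senses the result again.
        acceleration = command + 9.81 * angle
        velocity += acceleration * dt
        angle += velocity * dt
    return angle

print(abs(balance_loop()) < 0.01)  # -> True: the feedback keeps it upright
```

Real walking robots close many such loops at once, at kilohertz rates and with noisy sensors, but the structure - sense, process, act, then sense the result - is the same.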
Stefan Schaal and his team at the Autonomous Motion Department of the Max Planck Institute in Tübingen have developed a “continuous motion optimization and control technology” that enables robots to see what they are manipulating. It uses a new algorithm that continually optimizes a robot's motions and improves its hand-eye coordination. Robots can therefore shape their behavior according to the environment and adapt to the unpredictable interactions of human-robot cooperation.
American robotics specialist Lula Robotics is now continuing to develop the technology and plans to integrate it fully into existing robotics platforms. “Our system continuously optimizes its behavior and reacts to changes, giving a life-like quality that promotes close human-robot collaboration,” explains Nathan Ratliff, co-inventor and CEO of Lula Robotics. “Today, we are concentrating on the collaborative man-machine interaction in the area of industrial manufacturing and assembly, but the technology might even be the basis for robots used in the home or for healthcare in the future.”

 


This article is also available in German.

 
