Thoughtful machines

Web special Fraunhofer magazine 3.2024

It still has a somewhat alarmingly wobbly gait at this point. But Optimus can take a raw egg out of a carton and place it in an egg cooker. There are high hopes for the broad-shouldered metal bot: Elon Musk, head of Tesla and the official father of Optimus, has predicted – in his usual grandiose style – that humanoid AI robots like Optimus will completely revolutionize the economy.

Optimus is not the only one of its kind. Figure 02, built by the American robotics company Figure in collaboration with OpenAI, can talk and load a dishwasher, while Atlas, from Boston Dynamics, can turn a somersault and complete even a difficult obstacle course. Armar-7, created at the Karlsruhe Institute of Technology (KIT) to provide day-to-day support, can open doors, interact with humans, and prepare small meals. These achievements are spurring higher public expectations for AI-based robotics. Some of today’s visionaries make it sound as though the age of the Terminator is already upon us.

Many experts at Fraunhofer take a different view. “Developments like Optimus, at Tesla, garner a lot of attention, of course. But the actual innovation is occurring elsewhere. A welding robot might not be as sexy as Optimus, but it already plays a much larger role in industrial production today,” says Dr. Werner Kraus, a mechatronics engineer and head of research on automation and robots at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart. “We have big and well-founded doubts that humanoid robots will make a significant contribution to value creation in the next two to five years.” Even so, Kraus agrees that smart robots are a must: “AI needs robotics – and robotics needs AI,” Kraus says.

This represents the perfect marriage of two strands of research. Until now, robots didn’t have the intelligence needed to do more than execute a fixed set of pre-programmed motions. And artificial intelligence, for its part, lacked the body it would need to take action in the real world. “The vision is to have a robot that can really think and take action proactively at some point,” Kraus says. “So far, though, we’re just happy if we can give a robot a certain level of flexibility in carrying out its tasks, such as enabling it to grasp even objects it has never seen before.”

Bin picking: a well-known issue

A team of researchers at Fraunhofer IPA has been working for several years now on teaching robots to pick things up. Automating the process of “bin picking” is considered a core problem in robotics. Many areas of industrial manufacturing generate large volumes of bulk items that need to be sorted and separated as accurately as possible. It is a monotonous, physically demanding, high-cost task, which makes it a perfect candidate to be assigned to a robot instead. But it is also a huge challenge for robots: Most of the industrial robots currently used for these jobs rely on laser scanning and can, at best, distinguish objects they have been taught in advance.

Artificial intelligence attempts to mimic human cognitive abilities by recognizing and sorting incoming information the way Homo sapiens does. Crucially, the algorithm is not told how to solve the problem. In machine learning, it works out for itself how to execute a task correctly, often initially within a simulation. Through training on very large volumes of data, neural networks – a subdiscipline of machine learning – can recognize patterns and connections and use them as a basis for making decisions and predictions. And that means they improve over time.
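The learning principle described here can be made concrete with a toy example. The sketch below trains a miniature neural network, in plain Python, to recognize the XOR pattern from just four data points. It stands in for the general idea of learning from data; it is not any institute’s actual software.

```python
# A toy neural network that learns the XOR pattern by gradient descent.
# Pure Python, no frameworks -- a didactic sketch of "learning from data."
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Four training examples: XOR is not linearly separable,
# so the network needs a hidden layer to solve it.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # number of hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Compute hidden activations and the network's output for input x."""
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    """Sum of squared errors over the training set."""
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

def train(epochs=5000, lr=1.0):
    """Backpropagation: nudge every weight to reduce the error."""
    global b2
    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            dy = (y - t) * y * (1 - y)
            for j in range(H):
                dh = dy * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh * x[0]
                w1[j][1] -= lr * dh * x[1]
                b1[j] -= lr * dh
            b2 -= lr * dy

loss_before = loss()
train()
loss_after = loss()
predictions = [round(forward(x)[1]) for x, _ in DATA]
```

After training, the network’s error has dropped sharply, and its rounded outputs typically reproduce the XOR pattern [0, 1, 1, 0] – the network has “recognized the pattern” without ever being given an explicit rule for XOR.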

Machine learning can also help deal with unknown objects. In the Deep Grasping research project, neural networks were trained in a virtual simulation environment with the aim of recognizing objects and then transferred to real-world robots. The robot system is now even able to recognize components that are hooked together and plan its grasping motions so it can unhook them. The researchers are also working on automation systems that configure themselves, for example through automatic selection of grippers and gripping poses – “automation of automation,” so to speak.

Picking things out of a bin and setting them down somewhere else might sound underwhelming in light of the hopes raised by Optimus and similar robots. But these deceptively small jobs represent huge advances in robotics. Kraus points to what is known in the field as Moravec’s paradox: Tasks that seem incredibly simple to us as humans are actually extremely difficult for robots. Or, as Canadian researcher Hans Moravec put it, “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

The goal of incorporating AI into robotics is to help overcome these challenges, and this is viewed as one of the groundbreaking trends in the digital transformation of industrial manufacturing. Market research firm Mordor Intelligence, for example, forecasts an annual growth rate of 29 percent for the robotics market between now and 2029. Smart industrial robots can enhance production speed, accuracy, and security, facilitate troubleshooting, and make production more resilient through predictive maintenance.

To support the industrial sector on the way to Industry 5.0, Fraunhofer IPA teamed up with the Fraunhofer Institute for Industrial Engineering IAO in Stuttgart in 2019 to establish the AI Innovation Center “Learning Systems and Cognitive Robotics”, an applied branch of Cyber Valley, Europe’s biggest research consortium in the field of AI. The goal is to conduct practical research projects as a way to bridge the gap between the technologies involved in cutting-edge AI research and SMEs.

In Magdeburg, the Fraunhofer Institute for Factory Operation and Automation IFF has partnered with companies to create use case labs where the manufacturing sector can present its automation requirements and devise customized smart robotics solutions. The cutting-edge Lamarr Institute, one of five university AI competence centers across Germany to receive ongoing funding as part of the German federal government’s AI strategy, is designing a new generation of artificial intelligence that will be powerful, sustainable, trustworthy and secure and contribute to resolving key challenges in industry and society. The Fraunhofer Institute for Material Flow and Logistics IML is contributing in various ways, including with its PACE Lab research infrastructure.

This past July also saw the launch of the Robotics Institute Germany (RIG), which is to become a central point of contact for all aspects of robotics in Germany. The competence network is led by the Technical University of Munich (TUM) and has received 20 million euros in funding from the German Federal Ministry of Education and Research (BMBF). Three Fraunhofer institutes — IPA, IML, and the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB — are all involved. The goals of the RIG are to establish internationally competitive research on AI-based robotics in Germany, use research resources effectively, provide targeted support for talent in the field of robotics, and simplify and advance the transfer of research findings to industry, logistics companies, and the service sector.

Developing purpose-built robots

Robots that are newly joining the team do not necessarily have to walk in on two legs, which is actually not the most effective form of movement according to the current state of technological advancement: too energy-intensive, too slow, too high a risk of falling. “Overall, a humanoid robot would be completely overdeveloped for use in industry and logistics: too many actuators, too many degrees of freedom, too many motors that aren’t even needed for the relevant use case,” argues Leon Siebel-Achenbach, an electrical engineer and deputy head of the IoT and Embedded Systems department at Fraunhofer IML. “Instead, we need to build more robots that can deliver significant performance, especially for industries that are already feeling the squeeze from the shortage of skilled workers.”

A creative project team at Fraunhofer IML developed a robot they call the LoadRunner for the logistics sector back in 2019: an autonomous high-speed vehicle that looks like an oversized robot vacuum, moves safely and surely at up to ten meters per second – even as part of a swarm – thanks to smart vehicle coordination, and can join forces with other robots as needed. The LoadRunner can transport loads of up to 30 kilograms, making it perfect for sorting and distribution tasks.

Under the leadership of Patrick Klokowski, the LoadRunner was followed in 2021 by the evoBOT®: “The project’s objective is to develop an agile robot for use in logistics that can independently pick up, transport, and actively deliver loads at heights that humans also work at,” Siebel-Achenbach explains. Instead of walking on two legs, the evoBOT® rolls along on two wheels – and is quite nimble at it, too. The evoBOT® is a type of robot known as an autonomous mobile robot, or AMR. It moves based on the inverted pendulum principle familiar from the Segway self-balancing two-wheeled electric scooter. It has “arms” equipped with grippers to the left and right that it can use to lift and transport objects.

At the end of the project, the goal is for the evoBOT® to be able to harness artificial intelligence to recognize and chart its environment and the surface it is moving on so it can move freely around a defined space and avoid obstacles. Camera systems help it to identify and classify load items, so it can lift them correctly, balance appropriately according to their weight, and set the goods down in a different location. “At that point, the evoBOT® could be used completely autonomously as a service robot,” Siebel-Achenbach explains. “This would be interesting for fields such as the logistics sector or even in hospital settings, where the robot could be used to transport beds or distribute medications within the facility. We deliberately developed it with a modular approach so that it is maximally scalable in size while also covering as many use cases as possible.”

 

© Sven Döring / laif
Where does the path lead? Leon Siebel-Achenbach and his team of researchers at Fraunhofer IML deliberately designed the evoBOT® to be modular so it would cover many possible use cases.

© Sven Döring / laif
How close is too close? Prof. Elkmann and Magnus Hanses also focus on the issue of safety during human-robot collaboration in their research at Fraunhofer IFF.
© Sven Döring / laif
Little helper: Pepper from SoftBank Robotics can respond to gestures, facial expressions, and speech.

The new learning paradigm

“AI will let us find new and better solutions for various complex issues, even though we can’t understand in detail just how the AI works,” says logistics expert Prof. Michael ten Hompel, who retired from his position as head of Fraunhofer IML in April. “That also means we’re moving toward a new paradigm in learning.”

One visible example of this is the iDEAR project at the Fraunhofer Institute for Factory Operation and Automation IFF in Magdeburg. The project’s primary aim is to enhance sustainability by reusing and recycling resources derived from the approximately 54 million metric tons of e-waste generated per year (figures as of 2019). Another central question is what role robots might play in this development. After all, AI-supported robots would seem to be a perfect fit for dealing with end-of-life electronics.

However, high-tech devices are much more complicated to disassemble than to assemble, as there is more to it than joining a known number of parts together according to specified work steps. “Computers are built differently depending on the manufacturer, and there are no longer any instructions for many old devices,” explains Prof. Norbert Elkmann, head of the Robotic Systems department at Fraunhofer IFF. “On top of that, you can’t see what’s going on inside the computer from the outside. That means the robot has to autonomously generate the action for the next step during disassembly.”

The researchers are thus on the cusp of a paradigm shift: “Working with a lot of unknowns can’t be done with a static program. It requires an adaptive approach,” says Magnus Hanses, group manager for cognitive robotics at Fraunhofer IFF. “Whenever you have an activity where a person says they decide on the next action situationally and based on intuition, it gets tough for programmers. But AI can help with that.” The objective of iDEAR is to develop automation systems that can respond not only flexibly, but also intelligently – from product identification and evaluation to dynamic cost-effectiveness assessment and through to planning and executing disassembly.

So how do you teach a robot all that? “Training on the real-world system wouldn’t be cost-effective, because it takes too long and is also too risky,” Hanses explains. “Our approach involves modeling the dismantling process in a simulation. Any number of virtual robots can work in digital space at the same time and at a much faster pace without any safety concerns.” This makes it possible to automatically find solution strategies for subprocesses with high variance. To achieve this, data from a digital twin flows continuously into the automated disassembly process in the real world, just as information from the dismantling process is reported back to the digital twin. Human experience is also fed in to further enhance the level of automation. However, Hanses says, AI should only be used where it actually provides added value: “Analytical methods are a much more efficient way to tackle many subprocesses.”

 

Bridging the gap between simulation and reality

There are many advantages to learning in the digital simulation – but it also has one vulnerability. The virtual learning environment is never 100 percent the same as the real world. “The challenge for researchers lies in minimizing this ‘reality gap,’” computer scientist Christian Jestel explains. There are two possible approaches here. The simulation can either be designed to be as realistic as possible, or it can encompass many possible versions of reality so the neural network learns to generalize and can then find its way around even unfamiliar environments later on.
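The second approach Jestel describes, exposing the learner to many randomized versions of reality so that the real world later looks like just one more variant, is known in the field as domain randomization. A minimal sketch of the idea in Python – the parameter names and ranges are invented for illustration, not taken from Fraunhofer IML’s simulator:

```python
# Sketch of "domain randomization": rather than one maximally realistic
# simulation, the learner trains across many randomized variants of the
# world. All parameters and ranges here are illustrative assumptions.
import random

def randomized_world(rng):
    """Sample one variant of the simulated environment."""
    return {
        "floor_friction": rng.uniform(0.4, 1.0),   # slippery to grippy
        "sensor_noise":   rng.uniform(0.0, 0.05),  # lidar noise, meters
        "obstacle_count": rng.randint(0, 10),
        "lighting":       rng.uniform(0.2, 1.0),   # relative brightness
    }

def train_policy(episodes, rng):
    """Placeholder training loop: one episode per randomized world."""
    seen = []
    for _ in range(episodes):
        world = randomized_world(rng)
        # ... run the simulator and update the neural network here ...
        seen.append(world)
    return seen

rng = random.Random(0)
worlds = train_policy(100, rng)
```

Because no single variant matches reality exactly, the network cannot overfit to one environment; it is forced to learn strategies that hold up across the whole range, which is precisely what narrows the “reality gap.”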

© Sven Döring / laif
How do AI robots learn? Christian Jestel uses rewards and punishments to train the RoboMaster during simulation exercises at Fraunhofer IML.

Jestel’s work at Fraunhofer IML involves what are known as “RoboMasters”: robots from a Chinese manufacturer that he strips down to just the chassis and wheels. These elements are used as a research platform for deep reinforcement learning – essentially, a system of rewards and punishments. In one simulation developed by Jestel, the AI scores more points the faster it reaches a predetermined destination with no disruptions. Points are deducted if it moves away from the endpoint or collides with an obstacle.

“At the start, the AI just tries everything. But in the very next training rounds, it has already learned to avoid the actions that cost it points the last time,” Jestel explains. Equipped with the trained AI, the RoboMasters can then make real-world directional decisions autonomously, without any centralized input, based on readings of their surroundings from laser scanners. This training approach could also change the logistics industry not long from now. “Sometimes people will say a RoboMaster is just a toy,” Jestel says. “But this project isn’t about the vehicle itself. It’s about the idea behind it. And that idea has the potential to transform many applications for industrial mobile robots.”
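The reward scheme Jestel outlines can be written down as a small function: points for closing in on the destination, deductions for retreating or colliding. The numeric values below are illustrative assumptions, not the project’s actual reward parameters.

```python
# Sketch of the reward logic in deep reinforcement learning: the agent
# gains points for progress toward the goal and loses points for
# collisions or for backing away. All values are illustrative only.

def step_reward(prev_dist, new_dist, collided, reached_goal):
    """Reward for one simulation step of a navigating robot."""
    if collided:
        return -100.0            # hard penalty: avoid obstacles at all costs
    if reached_goal:
        return +100.0            # big bonus for arriving
    progress = prev_dist - new_dist
    if progress > 0:
        return 10.0 * progress   # the faster it closes the distance, the better
    return 10.0 * progress - 1.0 # moving away costs extra

# A tiny episode: the robot closes in, drifts back once, then arrives.
distances = [5.0, 4.0, 4.5, 2.0, 0.0]
total = 0.0
for prev, new in zip(distances, distances[1:]):
    total += step_reward(prev, new, collided=False, reached_goal=(new == 0.0))
# total == 129.0
```

During training, the learning algorithm adjusts the network so that actions leading to high cumulative reward become more likely – which is exactly why, after a few rounds, the AI stops repeating the moves that cost it points.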

 

Plug-in intelligence

Thanks to artificial intelligence, robots are becoming more and more independent of humans. Right now, the potential uses for robots like these are impossible to foresee, but that is exactly what computer scientist Sebastian Hoose, who works as a research scientist at Fraunhofer IML, finds so fascinating about this subject. “Everything is so marvelously complicated, with change coming thick and fast and bringing new and exciting aspects with it,” he says. To help small and medium-sized enterprises keep up, he is currently working to develop software that can be trained and used as generically as possible. Remote AI, or RAI, can be thought of as AI to go: artificial intelligence in a box that can simply be dropped into conventional transportation vehicles, quickly and easily adding autonomous capabilities and upgrading them with specific attributes. The algorithms in the AI box allow the robot to move around in defined spaces and handle transportation-related tasks. Another advantage of the remote approach is that what individual vehicles learn can easily be shared with other robots or even an entire fleet.

Upgrading instead of buying new equipment: “RAI makes the technology economically interesting for SMEs,” Hoose says. As an additional module, the AI box acts as a kind of bridge between conventional and AI-based robotics. Right now, the remote approach is still limited by the challenge that every transportation robot has a different interface depending on the manufacturer, which means the box cannot simply operate as a plug-and-play solution. “To help with that, we’ve developed a standard from the RAI side,” Hoose explains. Now all that needs to be done during installation is to implement a short piece of code. “There’s no way around that unless and until there are industry standards,” he says.
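The “short piece of code” per vendor is, in software terms, an adapter: each manufacturer-specific interface is wrapped once so that the AI box only ever speaks one standard. A hypothetical sketch of that pattern – all class and method names are invented for illustration:

```python
# Sketch of the adapter idea behind a plug-in AI box: the box is
# programmed against one standard interface, and each vendor-specific
# robot API is wrapped exactly once. Names are invented for illustration.
from abc import ABC, abstractmethod

class StandardDrive(ABC):
    """The single interface the AI box is programmed against."""
    @abstractmethod
    def drive(self, speed_mps: float, turn_rate: float) -> None: ...

class VendorABot:
    """A hypothetical vendor API that expects millimeters per second."""
    def __init__(self):
        self.log = []
    def move(self, mm_per_s, heading_change):
        self.log.append((mm_per_s, heading_change))

class VendorAAdapter(StandardDrive):
    """The 'short piece of code' per vendor: unit and call translation."""
    def __init__(self, bot: VendorABot):
        self.bot = bot
    def drive(self, speed_mps, turn_rate):
        self.bot.move(speed_mps * 1000.0, turn_rate)

bot = VendorABot()
ai_box = VendorAAdapter(bot)
ai_box.drive(1.5, 0.2)   # the AI box issues one standard command
```

Adding a robot from another manufacturer then means writing one more small adapter class, while everything the AI box has learned stays untouched – the bridge between conventional and AI-based robotics that Hoose describes.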

 

© Sven Döring / laif
Care for some AI with that? Computer scientist Sebastian Hoose developed a portable system called “RAI — Remote AI” at Fraunhofer IML. It equips robots with cognitive abilities.

Robots for dangerous missions

The AI-based approach to robotics is interesting for two main reasons. First, there are the industrial sectors that are experiencing a shortage of skilled workers. Robot systems can fill the gap there by taking on certain jobs. Second, there are also tasks that are too difficult, cumbersome, or dangerous for humans to perform. These include working on the ground in crisis or disaster situations, at landfills and contaminated sites, or in terrain riddled with land mines. The ROBDEKON competence center focuses on developing smart robot systems for these types of environments that are risky to people. Initiated by the German Federal Ministry of Education and Research (BMBF) in 2018, the center has received about 20 million euros in funding. Fraunhofer IOSB is responsible for the overall coordination of the network.

© Sven Döring / laif
What does a robot see? Exploring the terrain with sensors and optical cameras. Fraunhofer IOSB develops the sensor technologies and algorithms to help heavy vehicles move around autonomously.
Dr. Janko Petereit works at Fraunhofer IOSB, getting heavy vehicles ready for autonomous work in environments that are dangerous for humans.

Researchers there have been working for some time to develop various base functions that, much like the RAI box, can add a certain level of autonomy and intelligence to existing robot systems and vehicles, says Dr. Janko Petereit, the group manager for autonomous robot systems at Fraunhofer IOSB, who is also the coordinator of ROBDEKON. Areas of focus include excavators that can be used to retrieve and dispose of hazardous materials or remove contaminated layers of soil without any human operator at all. Petereit comments: “Thanks to various smart algorithms for localization, mapping, obstacle recognition, and movement planning, the robot systems can move around independently, even in unfamiliar terrain, and complete tasks. This enhances efficiency, relieves some of the burden on specialists, and above all, it can reduce accident and health risks when there are jobs to do in hard-to-reach or dangerous environments.”

Petereit demonstrated just how well this already works at the BMBF Innovation Forum in Berlin in May. Participants were invited to assign tasks to ALICE, a 24-ton excavator located some 500 kilometers away in Karlsruhe, having it perform missions like retrieving potentially contaminated barrels. ALICE used AI to sense and interpret its environment and perform the tasks completely autonomously.

 

Fascination for many fields

What many Fraunhofer researchers find especially fascinating about AI-based robotics is the interdisciplinary aspect: “Developing an intelligent overall system requires expertise from many different fields,” Petereit explains. And those fields include more than just computer science, electrical engineering, and mechatronics. Psychology, law, and ethics are also involved. After all, intelligent robots, many of them already capable of speech, will increasingly be a fixture of everyday life in the workplace, at home, and in public settings, and that will change the way we look at high-tech devices and how we perceive them and interact with them. The more humanoid their appearance, for example, the more we tend to believe robots are not only intelligent, but also have human characteristics and feelings. In June 2024, a robot administrative officer operated by the Gumi City Council in South Korea ran in circles for a while and then flung itself down a six-foot flight of stairs, immediately sparking theories that the technical failure was actually a robot suicide caused by overwork.

© Sven Döring / laif
How will we work tomorrow? Fraunhofer helps companies harness the power of AI for robotics.
© Sven Döring / laif
What does the future hold? Engineering psychologist Selina Layer conducts research at Fraunhofer IAO on human-robot interactions.

Selina Layer studied engineering psychology, a relatively new discipline. She now works as a research scientist at Fraunhofer IAO, studying the underlying issues in human-machine interactions and how to optimize them. While working on the NIKA project, which focuses on user-centered interaction design for context-sensitive and acceptable robots, Layer wrote her bachelor’s thesis on the topic of what robots need to be able to do and how they should behave in order to be accepted by older people as well as younger groups and offer them added value. The project’s results were stored as “interaction patterns” in a pattern library, a collection that goes back to research done by Dr. Kathrin Pollmann from Fraunhofer IAO. In the long term, the goal is for the library to help with selecting the right behaviors for a robot and transmitting them to the device, based on the user and situation. “We plan for the pattern library to grow into a kind of basis for designing social interactions between humans and machines,” Layer explains.

Layer’s current research project revolves around three service robots that are intended for use not in the home, but rather in public settings, where they could be used for tasks such as street sweeping or transportation. Right now, Layer says, machines like these are used primarily overnight in industrial production facilities – when few people see them. Even so, both robots and people need to be prepared for these encounters. “Initial studies conducted as part of our ZEN-MRI project showed, for example, that pedestrians are not very effective at anticipating how and where a machine is going to move next,” Layer explains. The risk of “unplanned proximity” is especially high when people are distracted, for instance by conversation or looking at a cell phone. “To prevent collisions, robots need to call more attention to themselves in these situations – but without being annoying due to the noise they make,” Layer says.

The robot’s motivations should also be clear so people will accept them as they go about their tasks, especially in the early years of the growing “robotization” of the world around us: Why is this machine here, and what is it doing? “If we don’t make it clear just what kind of socially desirable job the robot is doing – which can be in the form of a sign posted where it is working or a sticker on the robot itself – there’s an elevated risk of attacks, up to and including property damage,” Layer explains. Seventy-three percent of those surveyed for the ZEN-MRI project were afraid they might be injured in a fall after colliding with a robot, for example. Just under half were worried about potential security vulnerabilities caused by problems with a robot’s hardware or software, and nearly one in three viewed robots as potential obstacles. These kinds of fears and perceptions contribute to the phenomenon known as “robot bullying,” when people attack a robot.

Make it cute – but not too cute

To prevent these kinds of situations and make it easier for people to interact with machines, the RAI box from Fraunhofer IML was given two stylized eyes and two speakers, positioned to the left and right, that look like ears. Two dots vaguely reminiscent of eyes give the evoBOT® a cute, somewhat human appearance. “The point of the friendly design is to lower people’s inhibitions about interacting with the robot,” researcher Siebel-Achenbach comments. “We achieved better than expected results with the design. Although it’s a prototype that still has a few development cycles ahead of it before it’s ready to interact properly with people, we’re surprised by how open people are when approaching the robot.” The principle also works in reverse. The mobile cleaning units used in the ZEN-MRI project were deliberately designed to look nothing like humans or animals to keep from prompting an unconscious desire for interaction.

“Robots aren’t all that established in people’s everyday lives yet, which is what I think makes research in this area so exciting and valuable,” Layer explains. “It’s about shaping standards: How will we communicate with robots – and them with us – down the road?”

A question of safety

Industrial robots still operate primarily behind a protective grate or bars. With cobots, which can be used without a protective fence, personal safety and safety certification are already major factors. Intelligent autonomous mobile robots (AMRs) and humanoid robots take the question of safety and compliance with safety requirements to a whole new level. “The safety question has proven de facto to be one of the most significant challenges when it comes to collaboration between humans and robots in shared work settings,” explains Prof. Norbert Elkmann from Fraunhofer IFF. “It is a key reason why reality has fallen short of industry’s hopes for fast implementation of robots and cobots in production operations.”

A 2018 study by the World Economic Forum, for example, predicted that robots would be responsible for some 52 percent of all hours worked – the majority – by 2025. Elkmann says we are still far from achieving that. Factors include the time, effort and expense involved in planning and implementation and the fact that the process of CE marking for applications involving human-robot collaboration is often very intricate. Mobile robots give rise to a whole new type of challenge: the actions and scenarios that come into play where humans and robots meet are much harder to plan for than is the case with stationary robots. With this in mind, Fraunhofer IFF is taking a new approach: every action performed by a robot undergoes digital risk analysis and then gets a CE mark. “AI-supported robots that plan and execute their movements autonomously are diametrically opposed to the safety and certification principles we have today, which are the product of deterministic systems,” Elkmann explains. “But we’ll get there.”

Fraunhofer IFF works on safety standards for human-robot collaboration that are then reflected in official standards worldwide. For example, researchers at the institute have developed a test system with a pendulum that – with the ethics committee’s approval – will enable collision trials with human test subjects to determine the pain threshold: At what point does a collision start to hurt, and where on the body do people experience pain? The levels of stress in relation to force, pressure, and impact energy determined in studies like these can then be translated to verified limits, which will be critically important to the process of designing safe human-robot collaboration. The study’s results act as a snapshot of the current state of technological advancement worldwide. The data also enable ultra-precise simulation of physical contact between people and robots. This is a key prerequisite in ensuring that in the future, AI-controlled robots move in such a way that people are not harmed if the two sides come into contact. “Industry, and people in general, have high expectations when it comes to intelligent and possibly also humanoid robots – in some cases actually quite excessive expectations owing to the latest news from the U.S.,” Elkmann says. “Robotics developers still have a number of fundamental tasks ahead of them before products that are truly versatile and make financial sense hit the market. The safety questions will need to be resolved, too. It’s really fun to be here supporting and contributing to this development.”                         

 

Fraunhofer Strategic Research Field Artificial Intelligence (AI)

Artificial intelligence (AI), cognitive systems and machine learning have a key role to play in the transformation of our society and economy. The Fraunhofer-Gesellschaft is developing key AI technologies and applications at a number of its institutes. Our research contributes significantly to the development of safe, trusted and resource-efficient AI technologies that closely match the real-world needs of companies and society as a whole.

 

Fraunhofer Group for Production

The Fraunhofer Group for Production is a cooperative venture by a number of Fraunhofer Institutes, created with the aim of collaborating on production-oriented research and development in order to be able to offer customers in the manufacturing, commercial and service sectors comprehensive single-source solutions derived from the pooling of the wide-ranging expertise and experience of the individual institutes.