What if you could just get out at your destination and let your car find itself a parking space? Dr. Michael Mock, an associate professor and research fellow at the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, estimates that this dream could become a reality in another five years thanks to artificial intelligence (AI). But who would leave their vehicle with an AI they can't trust? After all, you want to be sure your car won't get damaged as it autonomously makes its way through a city on the hunt for a parking space, and that it won't cause any damage, either.

Plenty of skepticism still

According to a survey by the German digital industry association Bitkom in September 2020, nine out of ten respondents agreed that AI systems must be tested "particularly thoroughly" before being approved. This shows there is concern. However, there is hope, too: two thirds felt the opportunities associated with the technology were enormous.

Dr. Mock and his colleagues at Fraunhofer IAIS are working on improving trust in AI. "Artificial intelligence is the key technology we can use to significantly advance digitalization, so long as we manage to foster trust in it," Dr. Mock says confidently.

While conventional software works on the basis of algorithms provided by developers, AI systems are fed large amounts of sample data from which they learn and derive rules on their own ("machine learning"). But what those rules are, no one knows. Even in retrospect, they can often only be partly determined, or cannot be determined at all; the user must trust that the AI will come to the right conclusions.

As part of the Certified AI project, Dr. Mock and the Fraunhofer IAIS team are developing methods and measures to make AI systems more secure. In June, the researchers published a guide to designing trustworthy artificial intelligence, which is available for free on the Internet.

The EU also sees the need for action. In April, it proposed the world's first legal framework for AI. "When it comes to artificial intelligence, trust is a must, not an optional extra," stressed Margrethe Vestager, Vice-President of the European Commission for a Europe Fit for the Digital Age. AI that threatens the safety, livelihoods and rights of people should be banned, and systems that pose high risks must meet strict requirements before they are launched on the market.

"The EU has put forward a very good proposal for regulating AI," Dr. Mock believes. "It sets out sensible requirements, although it mentions hardly any measures for meeting these requirements. Our guide is much more specific on that front."

The 160-page guide aims to help developers consider all relevant criteria from the outset. "There are a lot of risks to consider, but there are also ways to deal with them," says Dr. Mock. Every test begins with a risk analysis, which is carried out systematically based on essential requirements, i.e. the six dimensions of trust: fairness, reliability, autonomy and control, transparency, safety and data protection.

"Not every dimension is relevant to every application," Dr. Mock emphasizes. For an AI-controlled paint-mixing machine in an industrial production context, for example, the dimension of fairness, i.e. the question of whether the AI treats all those involved fairly, is not relevant; it is very relevant, however, to an automated candidate selection process. In that case, it is crucial to provide the AI with the right samples during the training phase to avoid discrimination based on gender, age, religion, skin color or ethnicity. In 2018, for example, Amazon made it public that its AI system had favored male applicants because women were underrepresented in the underlying sample data. Twitter's automatic image-cropping feature favored white women who conformed to certain beauty standards; the company recently shut down the software as a consequence. "That shows how important it is to carefully select training data for AI. It often conveys human prejudices," Dr. Mock warns. In addition, there are mathematical methods that can be used to ensure all affected groups are considered fairly; a simple version of such a check is sketched at the end of this piece. These methods are described in the recently published guide, as are methods for reviewing them.

In the case of autonomous driving, all six dimensions of trustworthiness come into play. Together with Volkswagen, Mock is deputy head of a project for creating safe AI for automated mobility, and is also responsible for its scientific coordination. As part of this project, large German car manufacturers, suppliers, technology companies and research institutes have joined forces, funded by the German Federal Ministry for Economic Affairs and Energy with a total budget of 41 million euros.

Reliability is central to the trustworthiness of AI in cars, and it is also Mock's area of expertise. The samples used to train the AI are important here, too. For this reason, the Fraunhofer IAIS team first defines the conditions under which the perception modules, such as pedestrian detection, are to function. Like every software system, AI requires a precisely defined operating range; in the context of autonomous driving, this is known as the operational design domain, or ODD for short. "You specify which use cases the AI has been tested for and where it works
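To make the ODD idea concrete: conceptually, an ODD can be thought of as a machine-readable set of validated operating conditions that is checked at runtime, with control handed back to the driver whenever the vehicle leaves that envelope. The Python sketch below is purely illustrative; the class names, conditions and thresholds (DrivingConditions, OperationalDesignDomain, the 10 km/h limit) are assumptions of ours, not taken from the project or the guide.

# Hypothetical sketch of an ODD (operational design domain) check.
# All names, conditions and thresholds are illustrative assumptions,
# not taken from the project described in the article.
from dataclasses import dataclass

@dataclass
class DrivingConditions:
    speed_kmh: float        # current vehicle speed
    visibility_m: float     # estimated visibility range
    is_daylight: bool       # simple day/night flag
    road_type: str          # e.g. "parking_garage", "highway"

@dataclass
class OperationalDesignDomain:
    max_speed_kmh: float
    min_visibility_m: float
    daylight_only: bool
    allowed_road_types: frozenset

    def contains(self, c: DrivingConditions) -> bool:
        """Return True only if the current conditions lie inside the
        domain the perception modules were validated for."""
        return (
            c.speed_kmh <= self.max_speed_kmh
            and c.visibility_m >= self.min_visibility_m
            and (c.is_daylight or not self.daylight_only)
            and c.road_type in self.allowed_road_types
        )

# An automated parking function might only be validated for slow
# maneuvering in parking garages and lots:
parking_odd = OperationalDesignDomain(
    max_speed_kmh=10.0,
    min_visibility_m=20.0,
    daylight_only=False,
    allowed_road_types=frozenset({"parking_garage", "parking_lot"}),
)

now = DrivingConditions(speed_kmh=8.0, visibility_m=35.0,
                        is_daylight=False, road_type="parking_garage")
if not parking_odd.contains(now):
    print("Outside ODD: hand control back to the driver")

The point of such an explicit envelope is that the AI's guarantees only hold inside it: the check itself is ordinary, fully specified software, even if the perception module it guards is learned.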
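Finally, the sketch promised above: the mathematical fairness methods mentioned in connection with candidate selection can be illustrated by a simple group-fairness check. The code below computes per-group selection rates of a hypothetical hiring model and flags groups that fall below the common "four-fifths" demographic-parity rule of thumb. The data, function names and the 0.8 threshold are assumptions for illustration, not taken from the Fraunhofer IAIS guide.

# Hypothetical sketch of a demographic-parity check for a candidate
# selection model; data and threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs.
    Returns the fraction of selected candidates per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def violates_four_fifths(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the 'four-fifths rule')."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Toy example: model decisions on a held-out test set
decisions = [("women", True)] * 12 + [("women", False)] * 88 \
          + [("men", True)] * 25 + [("men", False)] * 75

rates = selection_rates(decisions)   # women: 0.12, men: 0.25
print(violates_four_fifths(rates))   # {'women': 0.12} -> flagged

A check like this only measures one narrow notion of fairness; which metric is appropriate depends on the application, which is exactly why the guide ties every test back to a systematic risk analysis.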