Use Cases for Trustworthy AI

How can AI technology be used safely and securely? Discover our examples of powerful and trustworthy AI systems from various application areas.

There is a high demand for AI solutions in industrial applications. These solutions need to be efficient, trustworthy and secure in order to be used in series production or quality control, for example. However, the new possibilities opened up by generative AI also raise questions. Can users rely on what a chatbot says? How can vulnerabilities in an AI model be detected early on during development? 

We at Fraunhofer have therefore developed solutions and tools that are powerful, trustworthy and reliable, as well as compliant with European data protection standards.

Are you interested in working with us? Please contact us!

ZERTIFIZIERTE KI: Safety and Trustworthiness

How can companies ensure that their AI applications are powerful, trustworthy, and safe enough for use in series production or quality control? Our AI Assessment Catalog and our AI assessment tools are two key instruments for achieving this.

Trustworthy AI thanks to the AI Assessment Catalog: Guidelines for companies

Fraunhofer IAIS has developed the AI Assessment Catalog, a detailed and extensive guide that helps companies operationalize requirements relating to the trustworthiness of their AI applications. The catalog sets out a four-stage procedure for assessing AI applications and supports both developers in designing and assessors in evaluating and assuring the quality of AI applications. These guidelines, which have already been tested successfully in numerous pilot projects, are available free of charge. Our goal is to establish standards for AI assessments and pave the way for independent AI certification.

Assessment tools for trustworthy AI from the company, developer, auditor and assessor perspective

Fraunhofer IAIS supports companies in the mechanical and plant engineering sector in making their use of AI safe and reliable. Our AI assessment tools provide support with systematic quality assurance for AI systems, enabling companies and developers to identify and remedy vulnerabilities. Systematic tests show, for example, whether AI models have learned errors or weak points. Systematic weaknesses point to biases the models have picked up from the training data, such as a model that only recognizes a certain kind of quality defect in certain scenarios. Humans cannot identify these kinds of systematic errors without technological support, and our assessment tools offer suitable instruments for doing so. They support developers, auditors, and assessors alike, making them useful both during the development of AI systems within companies and during quality assessments by testing and certification bodies.
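
To illustrate the idea behind such systematic tests, here is a minimal sketch that evaluates a model scenario by scenario rather than only in aggregate, so that slices with conspicuously poor performance stand out. It is an illustration only, not part of the Fraunhofer assessment tools; the model interface, the scenario tags, and the tolerance threshold are assumptions.

    # Minimal sketch: surface systematic weaknesses by checking accuracy
    # per scenario ("slice") instead of only on the aggregate test set.
    # `model.predict`, the scenario tags, and `tolerance` are assumptions.
    from collections import defaultdict

    def slice_accuracies(model, samples):
        """samples: iterable of (features, label, scenario_tag) triples."""
        hits, totals = defaultdict(int), defaultdict(int)
        for features, label, scenario in samples:
            totals[scenario] += 1
            if model.predict(features) == label:
                hits[scenario] += 1
        return {s: hits[s] / totals[s] for s in totals}

    def flag_weak_slices(model, samples, tolerance=0.10):
        """Flag scenarios whose accuracy falls clearly below the mean."""
        per_slice = slice_accuracies(model, samples)
        mean_acc = sum(per_slice.values()) / len(per_slice)
        return {s: a for s, a in per_slice.items() if a < mean_acc - tolerance}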

Your partner for safe and trustworthy AI technologies

As part of the KI.NRW flagship project "ZERTIFIZIERTE KI" ("Certified AI"), the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS is working with high-profile partners such as the German Federal Office for Information Security (BSI) and the German Institute for Standardization (DIN) to make the future of artificial intelligence (AI) safe and trustworthy.


Uncertainty Wrapper: Managing Uncertainty

Dealing with uncertainty plays a crucial role in AI-based systems, especially in security- and safety-critical domains such as medicine and autonomous driving. The uncertainty wrapper from Fraunhofer IESE is a method for managing uncertainty in AI. Our approach, coupled with dedicated tool support, streamlines the creation of uncertainty wrappers tailored to specific AI applications. These wrappers utilize the same inputs as the AI component, along with pertinent contextual data, to assign a confidence value to each output. This enables informed uncertainty management in AI systems.
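
The following minimal sketch illustrates this wrapper pattern in code. It is a simplified illustration of the idea, not Fraunhofer IESE's actual implementation; the quality_model used to derive the confidence value is a hypothetical, separately fitted estimator over context features.

    # Sketch of the wrapper idea: the wrapper sees the same inputs as the
    # AI component plus contextual data and attaches a confidence value
    # to each output. `quality_model` is a hypothetical estimator that
    # maps context features (e.g., image quality) to expected reliability.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class WrappedOutput:
        prediction: Any
        confidence: float  # in [0, 1]: how much the output can be trusted

    class UncertaintyWrapper:
        def __init__(self, component: Callable, quality_model: Callable):
            self.component = component          # the wrapped AI component
            self.quality_model = quality_model  # context -> confidence

        def __call__(self, inputs, context) -> WrappedOutput:
            prediction = self.component(inputs)
            confidence = max(0.0, min(1.0, self.quality_model(context)))
            return WrappedOutput(prediction, confidence)

Downstream logic can then act on the confidence value, for example by escalating a low-confidence diagnosis to a human expert instead of acting on it automatically.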

Effective uncertainty management allows for more accurate, more reliable, and safer behavior of AI-based systems, which in turn fosters greater acceptance of and trust in AI systems.

Reliable predictions in medicine

In medicine, effective uncertainty management can help adapt AI models more accurately to various patient groups and medical conditions. AI models can use MRI scans, for example, to make diagnoses or predict how certain treatments might affect a patient. However, these kinds of predictions always carry a certain amount of uncertainty, stemming from factors such as image quality or the complexity of the model. The uncertainty wrapper can be used to quantify the uncertainty surrounding a diagnosis, thereby supporting decision making.

Handling uncertainty in autonomous driving

In the field of autonomous driving, the uncertainty wrapper helps identify and evaluate uncertainties related to obstacle detection and monitoring. In adverse weather conditions, for instance, street signs can be hard to recognize for both human drivers and AI systems. Here, the uncertainty wrapper quantifies the uncertainty associated with the AI-based classifications, enabling informed management and decision making and contributing to safety and trustworthiness.

Your partner for safe and dependable AI

The benefits of AI can only be unlocked when AI-based systems are also dependable. This is particularly true for critical domains, where safety needs to be ensured and perhaps even certified. With over two decades of expertise in safety and security engineering, coupled with deep knowledge of data science, Fraunhofer IESE specializes in dependable AI solutions. We offer a comprehensive suite of services and cutting-edge technologies spanning the entire engineering lifecycle, providing our partners with the dependability assurance they need.


OpenGPT-X: Large Language Models for Companies

Artificial intelligence (AI) is coming to play an increasingly important role across almost all industries and processes. Generative AI (GenAI) models in particular will take center stage in the future. Safety and trustworthiness are crucial in this development, but they are not the only issues. Training data and algorithms are also becoming key competitive factors. For example, European languages are significantly underrepresented in the data used to train models from American providers.

Open-source AI model

The OpenGPT-X consortium project, which is receiving funding from the German Federal Ministry for Economic Affairs and Climate Action (BMWK), is developing large language models under the leadership of the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS and the Fraunhofer Institute for Integrated Circuits IIS. In contrast to the American market leaders, OpenGPT-X takes into account a large number of European languages, including German, Spanish, and English. The objective is to make OpenGPT-X available to European companies with open source code, thereby helping to ensure the digital sovereignty of Germany and Europe.

Data sovereignty and data protection

Companies looking to implement generative AI are forced to choose between cloud-based and locally operated infrastructures. Especially in sensitive areas such as healthcare and public administration, safeguarding data sovereignty and ensuring transparency in relation to the use of data are crucial. The open-source approach taken by OpenGPT-X allows companies to adjust algorithms to reflect their specific needs while also maintaining control over their data at all times.
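
As a rough sketch of what this looks like in practice, an open-weight model can be run entirely on local infrastructure using standard open-source tooling, so prompts and documents never leave the company. The model identifier below is a placeholder, not an official OpenGPT-X release name.

    # Local inference with an open-weight model via Hugging Face
    # transformers. MODEL_ID is a placeholder; substitute the
    # checkpoint you actually intend to deploy.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "path/or/hub-id-of-an-open-model"  # placeholder

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # German-language prompt, e.g. summarizing a claims report in-house.
    prompt = "Fasse den folgenden Schadensbericht zusammen: ..."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))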

Use cases from the OpenGPT-X project

The AI model can be used for automatic claims adjustment in motor vehicle insurance, for example: AI-assisted document analysis makes it easier to settle claims, and a digital assistant can help customers get their claims processed quickly and fairly. In the automotive industry, too, a conversational AI system allows vehicle users to ask questions and receive answers via an interface, with large language models (LLMs) processing the user input and providing domain-specific information.

Your partner for a trustworthy large language model

Fraunhofer IAIS is contributing its expertise to the OpenGPT-X project as a leading European research institution in the field of artificial intelligence and big data, and Fraunhofer IIS is doing the same for the domain of audio and voice signal processing. 


DigiWeld – AI Solution for Error-Free Industrial Processes

At a time when specialists are in short supply and businesses increasingly have to rely on inexperienced personnel, the likelihood of operator errors is rising rapidly. This means not only more waste in the form of defectively manufactured products but also unnecessary consumption of materials and energy. But there is a solution: artificial intelligence (AI) can identify these operator errors and defects early on and thus reduce them, all with the help of energy data.

Federated learning

DigiWeld, an AI solution developed by the Fraunhofer Institute for Manufacturing Engineering and Automation IPA and the University of Stuttgart, uses an innovative approach known as federated learning for this. This method makes it possible to utilize data from various users in industry without any need to store the data centrally. This is achieved through decentralized training of individual AI models and joint use of model parameters instead of the actual data. In this way, sensitive production data stays safely in the hands of users while the AI receives effective training.
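
The following schematic sketch shows the core of one federated round under the assumptions described above: each site trains on its own data, and only the resulting model parameters are shared and combined. The site interface (local_train, num_samples) is hypothetical, and real deployments typically add safeguards such as secure aggregation.

    # Schematic federated averaging: parameters travel, raw data does not.
    # `site.local_train` and `site.num_samples` are hypothetical stand-ins
    # for each participant's local training step and data volume.
    import numpy as np

    def federated_round(global_weights, sites):
        """One round: broadcast weights, train locally, average updates."""
        updates, sizes = [], []
        for site in sites:
            updates.append(site.local_train(global_weights.copy()))
            sizes.append(site.num_samples)
        sizes = np.asarray(sizes, dtype=float)
        # Weighted average: sites with more data influence the model more.
        return sum(w * (n / sizes.sum()) for w, n in zip(updates, sizes))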

The AI solution detects anomalies in industrial workflows purely by analyzing energy data, which is typically collected without additional sensors. The experts from Fraunhofer IPA demonstrate how this works with a specific production process: arc welding. Operator errors can be detected early on and prevented using just a few measurements of current, voltage, and wire feed. The research team also continuously evaluates how to strike a good balance between privacy and detection rate by adjusting the federated learning parameters.
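
As a toy illustration of the detection idea, the sketch below learns what normal welds look like from a few energy measurements and flags runs that deviate strongly. The example values and the three-sigma rule are illustrative assumptions, not DigiWeld's actual model.

    # Toy anomaly detection on welding energy data: current [A],
    # voltage [V], wire feed [m/min]. Values are made up for illustration.
    import numpy as np

    def fit_reference(signals):
        """signals: shape (n_welds, 3) array of known-good measurements."""
        return signals.mean(axis=0), signals.std(axis=0)

    def is_anomalous(sample, mean, std, k=3.0):
        """Flag a weld whose profile deviates more than k sigma anywhere."""
        z = np.abs((sample - mean) / std)
        return bool((z > k).any())

    good = np.array([[220.0, 24.1, 8.2],
                     [218.0, 23.9, 8.1],
                     [221.0, 24.3, 8.3]])
    mean, std = fit_reference(good)
    print(is_anomalous(np.array([240.0, 22.0, 9.5]), mean, std))  # True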

Your partner for AI in production

In DigiWeld, Fraunhofer IPA has devised a solution that allows makers of machinery and systems to tap into the benefits of artificial intelligence without having to collect sensitive production data from their customers.


DisCo: Identifying and Fighting Fake News

The spread of disinformation and fake news in relation to events such as the coronavirus pandemic and the wars in the Middle East and Ukraine is a serious challenge. Forwarded and shared without checking, fake news can spread swiftly on social media platforms. This causes uncertainty and loss of trust across society.

Researchers at the National Research Center for Applied Cybersecurity | Fraunhofer SIT have tackled this challenge from the perspective of forensic text analysis and forensic multimedia analysis as part of the DisCo (Disinformation and Corona) project. Their first step was an in-depth analysis of the current fake news landscape in text and images. As part of this analysis, they concentrated on studying the typical methods and techniques used to produce fake news in the context of current crises.

Your partner in the fight against disinformation

One of the team’s key areas of focus was developing a demonstrator that shows how AI technologies can help stakeholders such as journalists by automatically identifying and highlighting passages of text that should be verified. To this end, they studied machine learning methods that predict which sentences should be prioritized for fact checking. Tools such as inverse image searches provided by various search engines were also tested with an eye to effectiveness and performance. These tools make it possible to identify images that have been taken out of their original context, a common feature of fake news.
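
To make the approach concrete, here is a hedged sketch of check-worthiness ranking: a classifier is trained on sentences labeled as worth checking or not, and new sentences are then ranked by the predicted probability. The tiny training set and the TF-IDF features are illustrative assumptions, not the models actually used in DisCo.

    # Rank sentences by predicted check-worthiness so fact checkers can
    # prioritize. The training data here is a made-up toy set.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_sentences = [
        "The vaccine was approved after two clinical trials.",  # claim
        "Officials report 40 percent fewer cases this week.",   # claim
        "I think people should simply stay calm.",              # opinion
        "What a week it has been.",                             # filler
    ]
    labels = [1, 1, 0, 0]  # 1 = worth fact checking

    ranker = make_pipeline(TfidfVectorizer(), LogisticRegression())
    ranker.fit(train_sentences, labels)

    new_text = ["The photo shows last year's protest.",
                "Stay safe out there, everyone."]
    scores = ranker.predict_proba(new_text)[:, 1]
    for sentence, score in sorted(zip(new_text, scores), key=lambda p: -p[1]):
        print(f"{score:.2f}  {sentence}")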

The human element in the fight against fake news

Even with cutting-edge technological solutions, human judgment is always needed to separate fact from fake news. With this in mind, the researchers on the DisCo project worked closely with fact-checking sites to ensure that the results of their technological analysis were evaluated by qualified specialists. Ultimately, deciding what is true and what is false remains the responsibility of journalists and fact checkers.


Human AI – Human-Centered AI Research

The more important the role of artificial intelligence (AI) in our world becomes, the more urgent the need to take human-centered and ethical issues into account when developing and using AI systems. These days, companies need to do more than just recognize the economic benefit of AI: they also have to ensure that the solutions they develop and use put humans and their requirements and needs at the forefront. This human-centered view of AI is the focus of the research on human AI conducted at the ADA Lovelace Center by Fraunhofer IIS.

Acceptance of AI in healthcare

While others concentrate primarily on the economic or technological possibilities opened up by AI, this area of the ADA Lovelace Center focuses on research into human-centered AI. Pursuing what they call “human AI,” the researchers explore the individual and societal roles of AI and the ethical principles that should apply during its development and use. Drawing on the social and behavioral sciences, they specifically aim to explain why people do or do not accept AI and what consequences the use of AI has for individuals.

AI has vast untapped potential for use in the healthcare sector in particular, from analyzing people’s vital signs in preventative care to imaging and big data analyses for diagnosis and treatment through to robots that deliver care. At the same time, this industry has a greater need than almost any other for human connection: getting people involved, addressing their concerns, and handling their data sensitively.

User-centered recommendations

Research in the field of human AI takes place across various projects, including “KI-BA: Künstliche Intelligenz in der Versorgung — Bedingungen der Akzeptanz von Versicherten, Ärzten und Ärztinnen” (Artificial Intelligence in Healthcare — Conditions of Acceptance by Patients and Doctors), which is receiving funding from the Innovation Committee at the Federal Joint Committee. The project is working to identify individual and contextual factors affecting the acceptance and use of AI applications across various areas of healthcare. These factors include education, income, and gender, but also technological affinity, personality, healthcare and living situations, and personal networks.

The goal of the acceptance study, which is being conducted with 500 doctors and 1,500 patients, is to understand the factors influencing acceptance among these groups and to use this information as a basis for practical, user-centered recommendations for the use of AI in healthcare. The ultimate aim is to make these recommendations available to patients, family members, physicians, medical centers, health insurance funds, insurers, and government agencies. By working with various stakeholders, the researchers plan to craft concrete recommendations for countering the acceptance risks that have been identified and for ensuring the ethical and human-centered use of AI in healthcare.

Your partner for human-centered AI

With its research in the field of human AI, the ADA Lovelace Center for Analytics, Data and Applications at the Fraunhofer Institute for Integrated Circuits IIS is driving the future of AI toward human-centered design. The goal is to use AI not just effectively but also responsibly and in ways that inspire trust.
