Software

BSI: White paper on the explainability of artificial intelligence published

January 6, 2025. The German Federal Office for Information Security (BSI) has published a white paper dealing with the explainability of artificial intelligence (AI) in an adversarial context. The document focuses on the limitations of Explainable Artificial Intelligence (XAI) and comments on the current state of the art, particularly with regard to its use in assessment processes and in the technical support of digital consumer protection.


Transparency for black box models through post-hoc methods

XAI aims to make the decision-making processes of AI systems comprehensible. Many AI models, especially those based on deep learning, act as a “black box” whose internal processes are difficult to understand. The BSI white paper focuses on post-hoc methods that provide subsequent explanations for these black box models and analyze the influence of individual features on decisions.
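The idea behind such feature-influence analyses can be sketched with permutation importance, one common post-hoc, model-agnostic method. The example below is illustrative only and not taken from the BSI paper; the model and data are made up:

```python
import random

# Hypothetical black-box model: the explainer only calls predict(),
# it never inspects the weights (feature 0 dominates by construction).
def predict(x):
    return 2.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_repeats=30, seed=0):
    """Post-hoc attribution: the importance of feature j is the
    average rise in error when column j is randomly shuffled,
    breaking its relationship to the target."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        rise = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [list(row) for row in X]
            for row, v in zip(X_perm, col):
                row[j] = v
            rise += mse(model, X_perm, y) - baseline
        importances.append(rise / n_repeats)
    return importances

# Toy data, labeled by the model itself so the baseline error is zero.
data_rng = random.Random(42)
X = [[data_rng.uniform(-1, 1), data_rng.uniform(-1, 1)] for _ in range(50)]
y = [predict(x) for x in X]
imp = permutation_importance(predict, X, y)
# imp[0] far exceeds imp[1]: shuffling the dominant feature hurts much more
```

Because the method only queries the model's predictions, it applies to any black box, which is exactly what makes such subsequent ("post-hoc") explanations attractive for assessment purposes.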

Challenges and opportunities of XAI

Although XAI offers opportunities for gaining knowledge and optimizing models, there are also challenges, such as the disagreement problem, in which different explanation methods yield conflicting results for the same prediction, and the susceptibility of explanations to manipulation. The explainability of AI is crucial for trust in these technologies and helps developers and users to better understand how they work. Nevertheless, developing standardized methods that consistently ensure explainability remains a key challenge.
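The disagreement problem can be made concrete with a toy case, again illustrative and not taken from the white paper: two standard post-hoc attribution methods, gradient-times-input and occlusion, rank the features of the same prediction differently once a model unit saturates:

```python
def model(x):
    # Saturating unit min(x0, 1) plus a small linear term in x1.
    return min(x[0], 1.0) + 0.2 * x[1]

def grad_times_input(f, x, eps=1e-6):
    # Finite-difference gradient, scaled by the input value.
    attrs = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps
        g = (f(xp) - f(x)) / eps
        attrs.append(g * x[j])
    return attrs

def occlusion(f, x, baseline=0.0):
    # Attribution = prediction drop when feature j is set to a baseline.
    attrs = []
    for j in range(len(x)):
        xo = list(x)
        xo[j] = baseline
        attrs.append(f(x) - f(xo))
    return attrs

x = [5.0, 1.0]                     # feature 0 is deep in saturation
gi = grad_times_input(model, x)    # ranks feature 1 highest (gradient of min(.) is 0)
oc = occlusion(model, x)           # ranks feature 0 highest (removing it drops f by 1.0)
```

Both explanations are "correct" by their own definitions, yet they point a user to different features, which is why standardizing and validating explanation methods is a central open problem.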

– – – – – –

Further links

👉 www.bsi.bund.de 
👉 Explainability of AI in an Adversarial Context (PDF)
👉 Explainable Artificial Intelligence in an Adversarial Context (PDF)

Photo: pixabay
