Generative AI, and large language models in particular, lowers the barriers to entry for cyberattacks and increases the scope, speed and impact of malicious activity in the digital space. Beyond general productivity gains for malicious actors, the BSI is currently observing malicious use above all in social engineering and in the generation of malicious code.
AI enables attackers with little knowledge of a foreign language to create high-quality phishing messages. Conventional heuristics for spotting fraudulent messages, such as checking for spelling mistakes or unusual phrasing, are therefore no longer sufficient to detect phishing attacks.
A step beyond supporting human-led cyberattacks is the creation of malware by AI itself: large language models are already capable of writing simple malicious code, and initial proofs of concept show that AI can be used to automatically generate and mutate malware. Malicious AI agents that can compromise IT infrastructures entirely on their own, that is, artificial intelligence enabling fully automated attacks, are not currently available and are highly unlikely to appear in the near future. AI is, however, already capable of automating parts of a cyberattack today.
BSI President Claudia Plattner: “In our current assessment of the impact of AI on the cyber threat landscape, we assume that there will be no significant breakthroughs in the development of AI, especially large language models, in the near future. As the federal cybersecurity agency, we are keeping, and will continue to keep, our finger on the pulse of research, and we advise companies and organizations in particular to make cybersecurity a top priority. It will be important for all of us to keep pace with the attackers, that is, to increase the speed and scope of our defensive measures: by patching faster, hardening our IT systems and detecting imminent attacks even earlier than before. AI is already helping us do this today. For open source projects in particular, it will be crucial to use AI tools proactively, before malicious actors do. Furthermore, in view of the shortage of skilled workers, it is crucial that business, science and politics pool their expertise, across national and international borders.”
Cyber defenders also benefit from general productivity gains through the use of AI, for example in code generation, analyzing source code for vulnerabilities, detecting malware and creating situational awareness. An update to the BSI investigation will address in detail how artificial intelligence can support cyber defense. In a further study, the BSI will provide information on the opportunities and risks of generative AI language models for industry and public authorities.
– – – – – –
Further links
👉 www.bsi.bund.de
Photo: pixabay