Generative artificial intelligence is a hot topic. But how is it impacting cybersecurity? Are cyberdefence teams now inundated with a wave of AI-powered attacks, or is it all just hype? We assess the situation.
Text: Andreas Heer, Image: Swisscom, Date: 29 April 2024
The huge amount of interest in generative artificial intelligence (generative AI) is matched by the hype surrounding the possibilities. Naturally, organised cybercriminals are also keen to exploit the opportunities afforded by large language models (LLMs) such as GPT, Llama, Mistral and Claude. Are companies overrun with a flood of AI-driven cyberattacks right now, or is it all just hype?
‘We have indeed seen specific attacks already,’ says Florian Leibenzeder, Head of Swisscom’s internal Security Operations Center. As an example, Leibenzeder cites how LLMs can be used to analyse an existing e-mail thread and continue it with targeted phishing e-mails, a tactic known as business e-mail compromise (BEC). ‘If an e-mail sounds like it comes from the actual discussion partner and fits the context of the e-mail thread to date,’ says Leibenzeder, ‘recipients are naturally more likely to click on any link provided.’
Of course, there are also scenarios in which generative AI might conceivably be used directly for attacks. It is quite feasible that AI agents, trained on the relevant information, could independently attack and compromise websites – though so far only under research conditions, as researchers from the University of Illinois demonstrated in a publication.
‘Apart from phishing, we’ve not yet encountered any specific attack clearly involving AI at Swisscom,’ says Leibenzeder. ‘However, I can well imagine AI being increasingly used to prepare attacks, for example to analyse vulnerability scanner logs or examine software source code for deficiencies.’ Leibenzeder can conceive of a scenario in which AI is applied to analyse smart contracts in the blockchain environment in order to identify vulnerabilities that could then be exploited to steal cryptocurrencies.
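To illustrate the kind of grunt work such AI-assisted attack preparation would automate, here is a minimal Python sketch that parses a vulnerability scanner log and groups findings by severity. The log format, field names and sample findings are invented for illustration and do not correspond to any real scanner’s output.

```python
# Minimal sketch: triaging vulnerability scanner output by severity.
# The key=value log format below is an assumption for illustration only.
from collections import defaultdict

SAMPLE_LOG = """\
host=10.0.0.5 port=443 severity=high finding=outdated TLS library
host=10.0.0.5 port=22 severity=low finding=SSH banner disclosure
host=10.0.0.7 port=80 severity=high finding=SQL injection in /login
"""

def parse_findings(log_text):
    """Parse key=value scanner lines into dicts (format is assumed)."""
    findings = []
    for line in log_text.strip().splitlines():
        entry = {}
        # 'finding=' holds free text, so split only the first three fields
        for field in line.split(" ", 3):
            key, _, value = field.partition("=")
            entry[key] = value
        findings.append(entry)
    return findings

def group_by_severity(findings):
    """Group findings so the highest-risk items surface first."""
    groups = defaultdict(list)
    for finding in findings:
        groups[finding["severity"]].append(finding)
    return dict(groups)

groups = group_by_severity(parse_findings(SAMPLE_LOG))
print(len(groups["high"]))  # → 2
```

Whether an attacker feeds such pre-sorted findings to an LLM for further analysis or processes them by hand, the point stands: the tedious sifting step is easy to automate.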
If it takes less effort to prepare attacks due to automation, will the number of cyberattacks increase? Leibenzeder is sceptical of this: ‘Cybercriminals are so professionalised that services for the individual attack steps can already be purchased relatively cheaply today – think initial access brokers and Ransomware as a Service.’ In other words, the hurdles to be cleared by those with malintent are already low today and will not be significantly lowered by AI.
Nevertheless, the World Economic Forum predicts that artificial intelligence will significantly influence cyberattacks. With more and better information about targets at their disposal, attackers will be able to tailor attacks using phishing or deepfakes, for example. While the use of AI may not result in a greater number of attacks, ‘quality’ will increase thanks to improved personalisation.
Machine learning and generative AI are also playing a more important role in preventing and detecting cyberattacks. Leibenzeder mentions obfuscated code found in malware as an example: ‘While attackers can use AI to disguise the code and make it difficult to detect, defenders can apply the same methods to analyse the code and understand how it works.’
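The symmetry Leibenzeder describes can be shown with a toy obfuscation scheme: the same transformation that hides a payload is trivially reversed once a defender works out how it operates. The XOR key and payload below are invented for illustration; real malware obfuscation is considerably more elaborate.

```python
# Toy obfuscation scheme: XOR with a single-byte key, then base64-encode.
# Both key and payload are invented for illustration.
import base64

KEY = 0x5A  # arbitrary single-byte key

def obfuscate(payload: str) -> str:
    """XOR each byte with KEY, then base64-encode the result."""
    xored = bytes(b ^ KEY for b in payload.encode())
    return base64.b64encode(xored).decode()

def deobfuscate(blob: str) -> str:
    """The exact inverse: base64-decode, then XOR with the same key."""
    xored = base64.b64decode(blob)
    return bytes(b ^ KEY for b in xored).decode()

blob = obfuscate("connect('c2.example', 4444)")
print(deobfuscate(blob))  # recovers the original string
```

Attacker and defender are applying the very same operations, just in opposite directions, which is why tooling improvements on one side tend to benefit the other.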
The pattern-recognition capabilities of LLMs are also taking some of the grunt work away from cyberdefence teams, such as identifying the different phases of a cyberattack during a major incident. ‘AI can help to plot the alerts and events from different log files on a timeline and describe what has happened,’ says Leibenzeder. This transparency, in turn, means that cybersecurity professionals can react to incidents in a targeted and effective manner. At the same time, AI tools can act as ‘security assistants’ and help to write management summaries of incidents.
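At its core, the timeline reconstruction Leibenzeder mentions boils down to merging alerts from several sources and sorting them chronologically. A minimal Python sketch, with alert records invented for illustration:

```python
# Minimal sketch: merge alerts from different log sources into one timeline.
# Timestamps and events are invented for illustration.
from datetime import datetime

FIREWALL_ALERTS = [
    ("2024-04-29T10:02:11", "firewall", "blocked outbound connection to known C2 IP"),
    ("2024-04-29T09:55:03", "firewall", "port scan detected from 203.0.113.9"),
]
ENDPOINT_ALERTS = [
    ("2024-04-29T09:58:40", "endpoint", "suspicious PowerShell child process"),
]

def build_timeline(*sources):
    """Merge alerts from several sources, sorted chronologically."""
    merged = [alert for source in sources for alert in source]
    merged.sort(key=lambda alert: datetime.fromisoformat(alert[0]))
    return merged

for ts, source, event in build_timeline(FIREWALL_ALERTS, ENDPOINT_ALERTS):
    print(f"{ts}  [{source}]  {event}")
```

The sorting itself is trivial; the value an AI assistant adds lies in extracting such structured records from heterogeneous raw logs and narrating what the resulting sequence means.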
AI attack scenarios and threats have already been incorporated into common cybersecurity frameworks such as MITRE ATT&CK and OWASP.
While unlikely to completely upend cybersecurity for the time being, artificial intelligence heralds the next stage in the constant battle between attackers and cybersecurity professionals. ‘It’s important that defenders engage with the new technology just as openly as the attackers do,’ says Leibenzeder. ‘Cyberdefence teams must learn to understand how the new attack techniques are used and how they can be defended against.’
Professionals should therefore stay on top of developments and build up the relevant expertise. The WEF advises companies to give greater consideration to supply chains in their cybersecurity strategy in general – and to raise awareness among employees. With the advent of artificial intelligence, human intelligence is gaining in importance too: security awareness matters more than ever.