How AI is exacerbating cybercrime
Adam Peckman
THE recent news of a multinational firm being deceived by a generative artificial intelligence (AI) “deepfake” into paying US$25 million to a threat actor has signalled the need for businesses to increase vigilance against a new wave of AI-powered cybercrime.
Unfortunately, even before generative AI began to be widely leveraged as a tool for crime, cyberattacks were already trending upwards heading into 2024. According to Aon's analysis of Risk Based Security data, ransomware attacks in the fourth quarter of 2023 were up 214 per cent year on year, and had surged by more than 1,280 per cent when indexed against pre-pandemic ransomware frequency (Q1 2019).
Compounding a worsening risk landscape, recent advances in AI, particularly generative AI, are enabling more novel and scalable cybercrimes to proliferate. As with the above fraud example, open-source and off-the-shelf generative AI tools are increasingly being used by threat actors to facilitate social engineering attacks. This may involve combining large language models (LLMs) with other commercial software-as-a-service (SaaS) tools, such as call centre automation software, to create a range of synthetic content to target victims – for instance, realistic video or audio cloning of senior executives.
These “deepfakes” are proving highly effective in deceiving employees, vendors, and executives into unwittingly disseminating sensitive information, sharing login credentials, or facilitating fund transfers.
Beyond deception and social engineering, open-source generative AI tools are also being leveraged by threat actors to bring “nation-state” levels of computing scale and automation to their hacking campaigns.
Recently, researchers at the University of Illinois Urbana-Champaign reported that OpenAI's GPT-4 was able to exploit 87 per cent of the "one-day" vulnerabilities they tested – flaws that have been publicly disclosed but for which patches have not yet been widely applied. Such cost-efficient advances in readily available generative AI tools are having the unintended consequence of significantly lowering the barrier to entry, enabling many more threat actors to exploit newly disclosed weaknesses at scale. They can now move to exploit these vulnerabilities faster than target companies can remediate them.
Additionally, the rapid deployment of unsecured AI tools – or "shadow AI" – by companies is itself becoming a target of cybercrime. According to Bergur Thormundsson at Statista, global spending on AI technology reached US$154 billion in 2023. And Asia-Pacific data from a recent Aon study of intangible and tangible risks found that 79 per cent of companies are deploying AI products or plan to do so in the next 12 months, suggesting a rush to bring products to market and to leverage these tools to remain competitive.
However, with the push to experiment and release new AI-enabled technologies at pace, security vulnerabilities have emerged. Attacks on AI tools range from poisoning the data used to train models, as a tactic to degrade their performance, to attempts to extract the underlying training data in order to expose trade secrets or sensitive information.
Impact on businesses
By some industry estimates, cybercrime is already costing the global economy more than US$12 trillion in annual losses – a trend that will only accelerate as AI-enabled attacks proliferate. These costs represent an unbudgeted tax on future investments in innovation and productivity, in areas such as AI itself. One immediate economic impact on companies from AI-enabled cybercrime will be reputational harm, as deepfakes target brands to sow mistrust and, ultimately, erode market value. Aon's analysis of cyber crises found that the average loss of shareholder value from a cyber event amounted to US$3 billion.
Compounding the direct economic hit from AI-fuelled cybercrime, the growing weaponisation of AI by state actors in the region will further exacerbate the problem. These campaigns are geared towards geopolitical objectives, such as creating strategic leverage over rivals through AI-enabled disinformation, or building competitive advantages for local industries, predominantly via intellectual property theft.
Managing risks
Findings from Aon's 2023 Global Risk Management Survey show that only 23.6 per cent of companies assess their cyber risks, and just 14 per cent formally quantify the financial exposure arising from those risks.
It is critical that companies design their risk mitigation and transfer strategies for AI-triggered losses through the lens of the potential balance-sheet exposure from a cyber event. Insight into the financial volatility that cyber risks can create helps ensure the total cost of risk is optimised.
Presenting cyber risks in the context of financial loss and shareholder value erosion helps C-suite leaders, who may lack deep technical knowledge, better understand the investment priorities for protecting client, colleague, and shareholder interests. While there may be obvious productivity gains from adopting generative AI, businesses should take great care and conduct regular risk assessments as they embrace this new world. Factors to consider include audits of AI models to ensure that algorithms and data sets do not introduce unwanted biases.
Businesses must work with technology experts, legal counsel and risk consultants to set policies and establish a governance framework that aligns with regulatory requirements and industry standards.
The writer is head of cyber solutions for Asia-Pacific and global head of cyber risk consulting at Aon.