The Role of Risk Management in the Age of Generative Artificial Intelligence

Aon Insights Series Asia


This insight is part 07 of 09 in this Collection.

January 7, 2025 | 7 min read


Artificial Intelligence (AI) has already begun to transform a range of industries. But with the potential to harness this new technology to improve efficiency, quality and business performance comes a new set of risks.

At our Aon Insights Series event in Singapore, three of Aon’s leading cyber solutions specialists in Asia Pacific explored how risk professionals can take steps to engage with their stakeholders, from supply chain to insurance partners, to manage AI risks.

We have already seen significant financial losses when Generative AI is used to commit fraud. At the same time, technology failures like the CrowdStrike outage have delivered widespread business disruption and economic damage, highlighting the impacts that can cascade through a company’s operations and supply chain when a single critical technology fails.


For most companies the exploration and experimentation with new technologies, like Generative AI, is occurring at the ‘digital frontier’ of their operations which can be three to five years ahead of ‘business as usual’.

Adam Peckman
Global Practice Leader of Cyber Risk Consulting, Head of Risk Consulting & Cyber Solutions, Asia Pacific

The Evolving Nature of AI Risk

The intersection of risks from sophisticated cyber threats and rapid adoption of AI can be grouped into three major categories:

1. Generative AI cyber-attacks

Threat actors are employing Generative AI to improve the efficacy and efficiency of hacking campaigns. Perhaps the most high-profile case to date is the use of Generative AI to impersonate a Chief Financial Officer and other executives on a video call to defraud a business out of USD 25 million. This is one example of how improper use of a likeness or voice can be used to commit a cybercrime, elevating the effectiveness of social engineering attacks. Threat actors are also employing Generative AI tools to bring speed and scale to the development of new malware. Tools such as WormGPT and FraudGPT are examples of how the advantages of Generative AI are being extended to organised crime.

2. Unsecure exploration of AI use cases

We expect a rise in cyber incidents that take full advantage of the rapid expansion of AI capabilities. This risk can be heightened by ‘shadow’ AI – AI use that has not gone through the approved development processes or legal, compliance, and security checks. As companies try to keep up with their competitors and customer expectations, they are under pressure to fast-track innovation, which can increase the risk of bypassing proper governance and risk mitigation, resulting in an enlarged and unsecured digital attack surface that threat actors can exploit.

Approved AI use cases may also be targeted. In data poisoning attacks, threat actors introduce malicious content into training data, such as profanity into a chatbot data set. In data extraction attacks, ‘adversarial’ prompts are used to get AI agents to disclose underlying data, unlocking valuable proprietary information and intellectual property as a result.

3. Unintended impacts of AI uptake

We are already seeing companies held liable for the actions of AI-powered chatbots and directors investigated for AI-related disclosures. This demonstrates that if organisations are harnessing AI, they need to be answerable for its performance. The presence of biases or inaccuracies can adversely impact a company’s reputation among stakeholders, from employees and clients to investors. Like the vulnerability of AI to malicious attack, this risk is also heightened when shadow AI is a widespread issue.


Risk leaders need to gain increased visibility across AI deployment within their organisation, working with their technology teams, suppliers and insurance partners. Armed with an awareness of the range of AI use cases and knowledge about the associated risks, they can then consider how existing coverage responds to those risks or where the company needs to find alternative solutions.

Andrew Mahony
Head of Cyber Solutions, Asia, Aon

Three Ways for Risk Professionals to Engage on AI Risk

1. Engage with technology owners

Determining how insurance policies can tackle AI is on the agenda for risk leaders as they engage with stakeholders in the business. Risk leaders need to have a seat at the table with technology teams and other business units on current and planned uses of AI to map these to potential risks. Once they have this visibility from a risk perspective, there are many more stakeholders to inform and manage.

2. Engage with suppliers, vendors and insurance partners

For identified risks outside of shadow AI, there are a number of options for risk transfer, with suppliers and insurers being the chief stakeholders to work with. “For the time being, many companies will have what we call ‘silent’ AI cover, where conditions across a range of policies neither affirm nor exclude cover for their AI use cases,” says Andrew Mahony, Head of Cyber Solutions for Aon in Asia. “We recommend stress testing existing policies against your specific AI use case to determine whether risks and losses are transferable.”

The CrowdStrike incident demonstrated the importance of risk transfer across a company’s whole supply chain and technology ecosystem. “In some cases, CrowdStrike triggered cover under a direct business interruption loss,” Andrew says. “Others, who depended on businesses using CrowdStrike, looked to both their own contingent business interruption cover and their contract terms with that supplier to determine liability for costs.”

For AI, companies should investigate how to transfer risk through contracts with certain service providers as well as organising their own risk transfer arrangements for losses that occur outside of their business boundary but within the business ecosystem.


A company’s visibility of their AI use cases and their frameworks around AI risk management will become increasingly relevant in the renewal discussions they are having across a range of insurance policy lines.

Sara Kobes
Director, Cyber Solutions, Asia, Aon
3. Engage with leadership

Risk leaders also need to engage with senior leaders to manage potential risks arising from the messaging delivered on progress with AI. “Just as we’ve seen with greenwashing, listed companies see commercial upsides in referencing AI in their public declarations, but this will come under scrutiny,” says Andrew. “We are now seeing SEC enforcement actions and class actions in the US, which is often where this exposure starts to emerge. Where businesses have been talking up AI capabilities with benefits for their share price, and those announcements are out of line with their actual deployment of the technology, they are now being held accountable.”

Visibility of AI use cases and their associated risks is critical for managing upwards. That work begins internally, but risk leaders should lean on trusted advisors to provide insights into peer experience and the application of existing and emerging risk transfer solutions.

General Disclaimer

This document is not intended to address any specific situation or to provide legal, regulatory, financial, or other advice. While care has been taken in the production of this document, Aon does not warrant, represent or guarantee the accuracy, adequacy, completeness or fitness for any purpose of the document or any part of it and can accept no liability for any loss incurred in any way by any person who may rely on it. Any recipient shall be responsible for the use to which it puts this document. This document has been compiled using information available to us up to its date of publication and is subject to any qualifications made in the document.

Terms of Use

The contents herein may not be reproduced, reused, reprinted or redistributed without the expressed written consent of Aon, unless otherwise authorized by Aon. To use information contained herein, please write to our team.
