Using AI in risk management may be slightly scary because there are so many unknowns. The possible efficiencies and insights are certainly tantalizing – but are the risks significant enough to be a dealbreaker?

The truth is generative AI can greatly enhance your risk management efforts if implemented wisely. Around 90% of the organizations surveyed by Risk.net – mostly financial services firms – said AI is an opportunity for the sector, the firm, and their role. But interestingly, a separate survey by Riskonnect revealed that more than nine in 10 companies across a variety of industries anticipate significant threats from AI.

Generative AI is a tool. And like any tool, its value depends on how you use it. What are you trying to achieve with the technology? Are you mostly looking to speed up processes? Or are you looking to uncover new insights?

One thing that operational risk expert and frequent Risk@Work guest Dr. Ariane Chapelle cautions against is creating a separate framework for managing artificial intelligence risk. “Integrate AI into your ERM framework. Try very hard to have the same framework for all your risks – risk under one roof,” she says. “You have to be very firm on your objectives to see the uncertainties around those objectives.”
Impact on Risks

Through automation, AI will reduce some process and accuracy risks, such as human error in manual data entry. It can also analyze massive amounts of data and text to help you understand risk at a deeper level – insight you can use to guide strategy.

However, it may introduce other risks that must be kept in check. The technology can occasionally “hallucinate” – that is, answer questions with made-up information. Human verification and common sense can catch these responses before they are used to inform decisions.

Bias is another risk to watch out for. Generative AI builds new content from historical information, so biases baked into that data can resurface in its output. The problem is that what was acceptable a decade ago may not be acceptable today.

Data privacy is also a significant risk to consider. Know what the model is trained on and how your data will be used. Some large language models, including public versions of ChatGPT, may retain what you enter in a prompt and use it to train future versions of the model. Be aware of what information you are sharing outside your organization. You wouldn’t want your proprietary knowledge to end up in your competitors’ hands.
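One practical safeguard is to scrub sensitive details from prompts before they leave the organization. Below is a minimal sketch of that idea; the patterns and the `ACCT-` internal ID format are purely illustrative assumptions, and a real deployment would rely on a vetted PII/DLP tool and policy review rather than a few regexes.

```python
import re

# Illustrative patterns only -- not a complete inventory of sensitive data.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account": re.compile(r"\bACCT-\d{6,}\b"),  # hypothetical internal ID format
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com about ACCT-0012345."))
# -> Contact [EMAIL REDACTED] about [ACCOUNT REDACTED].
```

The placeholders keep the prompt readable for the model while ensuring the raw identifiers never leave your environment.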

It’s important not to overestimate or underestimate the power of AI. You can’t feed it a stack of data, for instance, and expect it to predict next year’s biggest operational risk like some kind of technological fortune teller. Nor should you assume that it will only generate a lot of useless or even false information, so it isn’t worth your while.
Impact on People

Will AI replace you? That depends. Like any new technological advancement, AI will mean the end of some jobs, but it will create others.

AI can perform many administrative tasks faster and more consistently than humans. It can also analyze data, find patterns, and devise solutions faster than any human.

But as dazzling as AI can be, you can’t automatically accept its output as the final word. Humans will continue to be needed to review, validate, and refine the answers that AI generates. And humans are the ones to interpret and apply the insights discovered by AI.

Consider the output provided by AI as a starting point, a base on which to build more knowledge or think differently about an issue. It’s up to the people working directly with AI to bring their best human thinking qualities to refine the substance, tone, and voice to fit the organization.

Human oversight is essential. It is so essential, in fact, that the EU codified it in the AI Act. This groundbreaking legislation is the first globally to establish rules for responsible use of AI, including appropriate human oversight and robust cybersecurity.

In most cases, AI will likely lead to a redistribution and retraining of talent rather than outright job losses. Those who were doing jobs now better performed by AI may welcome the opportunity to apply their knowledge and thinking power to a greater use.

Where to Use AI in Risk Management

Generative AI excels at scanning a large amount of information to instantly answer a question posed by a human. It can find insights, commonalities, trends, and connections at lightning speed. It can simplify complex data and translate technical information into everyday language. And it does so far more quickly than people can.

In risk management, consider using generative AI to:

  • Automate control testing.
  • Sort, classify, and organize data.
  • Analyze unstructured data.
  • Detect potential outliers.
  • Uncover correlations, trends, and emerging risks.
  • Model scenarios and simulations to understand risk impacts.
  • Collect and analyze third-party risk data and potential threats.
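To make one of these uses concrete, here is a minimal sketch of flagging potential outliers in loss data using a simple standard-deviation rule. The loss figures and the two-sigma threshold are illustrative assumptions; production anomaly detection would use more robust methods and real incident data.

```python
from statistics import mean, stdev

# Hypothetical monthly operational-loss amounts; real inputs would come
# from an incident-management system.
losses = [12_000, 9_500, 11_200, 10_800, 13_100, 98_000, 10_400, 12_700]

def flag_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

print(flag_outliers(losses))
# -> [98000]
```

Flagged values like the 98,000 loss above are exactly the starting points the article describes: candidates for a human analyst to investigate, not conclusions in themselves.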

Start Now

With such game-changing technology moving at breakneck speed, it can be hard to decide on the right time to jump in. To be sure, there are many uncertainties, and it’s understandable if you are inclined to wait until you know more before committing to the technology.

Yet you run an even larger risk by failing to integrate AI into your risk management processes now. The competitive edge will go to those that tap into the power of AI the fastest.

Used properly – with the right people, processes, and governance – AI in risk management can be an unparalleled business asset. You’ll run more efficiently by having it handle routine tasks. You’ll operate more effectively, freeing up resources to better analyze, monitor, and respond to risks. And you’ll be more agile, ready to address new challenges and take advantage of new business opportunities.

For a more detailed look at AI in risk management, download the ebook, Governance, Risk, and Compliance: The Definitive Guide, and check out Riskonnect’s Artificial Intelligence solutions.