Regulatory Compliance: Can AI Help?
AI technologies are already helping financial organizations meet their regulatory obligations, and they have the potential to do more. We look at the opportunities and risks of AI-assisted compliance.
While posing regulatory problems of its own, the latest generation of AI models offers a potential remedy for a sector where regulatory compliance is increasingly an issue. Financial institutions and rating agencies continue to suffer heavy penalties for failing to comply with a growing body of regulation: a total of $10.5 billion in fines was levied against financial services companies worldwide in 2023 for infractions ranging from conflict-of-interest offenses to money laundering. Here, at least, AI may be part of the solution.
The technology has promise, but it also has limitations.
AI can handle the routine
Moody’s study “Navigating the AI landscape: Insights from compliance and risk management leaders” found that “almost 90% of respondents are interested in the integration of AI tools by providers of risk and compliance solutions.” Still, in such a regulation-heavy sector, those in charge are circumspect about its use. “The findings show that the places early adopters or those trialing AI feel the most positive impact are in replacing manual processes (17%); augmenting staff performance (27%); or a combination of both (56%).”
We often hear about AI being excellent at automating time-consuming, repetitive tasks, and Moody’s research shows that’s where the financial industry is looking to implement AI. This is no surprise. AI-driven solutions can monitor vast amounts of data, ensuring adherence to regulations in real time.
For instance, Z Brain’s Regulatory Compliance Monitoring Agent “continuously monitors government regulation pages and official publications to identify new or revised regulations. Leveraging generative AI, it extracts key information and organizes it into a structured knowledge base that stakeholders can query through an integrated chatbot interface. This ensures that teams have real-time access to updated regulations without the need for manual searches or reviews.”
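Z Brain has not published its implementation, but the pipeline the description implies, watching official pages, detecting changes, extracting key points, and storing them for querying, can be sketched in a few lines of Python. Everything below, from the watched URL to the `extract_key_points` stub, is a hypothetical illustration, not Z Brain's actual code:

```python
import hashlib
import requests

# Hypothetical regulator pages to watch; a real agent would track
# many official sources, not a single placeholder URL.
WATCHED_PAGES = [
    "https://www.example-regulator.gov/rules/latest",
]

# In-memory fingerprints of previously seen pages. A production agent
# would persist these, and the extracted summaries, in a queryable
# knowledge base behind a chatbot interface.
seen_hashes: dict[str, str] = {}

def extract_key_points(text: str) -> str:
    """Placeholder for the generative-AI step that would pull out
    effective dates, affected entities, and obligations. Swap in a
    call to your LLM provider of choice."""
    return text[:500]  # naive stand-in: keep the first 500 characters

def poll_once() -> None:
    for url in WATCHED_PAGES:
        page = requests.get(url, timeout=30).text
        fingerprint = hashlib.sha256(page.encode()).hexdigest()
        if seen_hashes.get(url) != fingerprint:  # page changed since last poll
            seen_hashes[url] = fingerprint
            summary = extract_key_points(page)
            print(f"Regulation update detected at {url}:\n{summary}")

if __name__ == "__main__":
    poll_once()  # a production agent would run this on a schedule
```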
A more strategic role?
Still, AI’s potential goes well beyond reading and summarizing documents. It could identify patterns that may indicate non-compliance, raise red flags about emerging issues, and let compliance managers look more closely at the problems it surfaces.
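As a rough illustration of what such pattern detection might look like, the sketch below runs an off-the-shelf anomaly detector over synthetic transaction features. The features and thresholds are invented for the example; real compliance monitoring would use far richer, firm-specific data and models:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic transaction features: amount, hour of day, transactions
# in the past 24 hours. Entirely made-up data for illustration.
rng = np.random.default_rng(seed=42)
normal = rng.normal(loc=[100, 13, 3], scale=[30, 3, 1], size=(1000, 3))
suspicious = np.array([[9500, 3, 40], [12000, 2, 55]])  # injected outliers
transactions = np.vstack([normal, suspicious])

# Unsupervised model: flags transactions that look unlike the bulk of
# activity so a compliance officer can review them manually.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

flagged = transactions[labels == -1]
print(f"{len(flagged)} transactions flagged for review")
```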
Additionally, generative AI can draft compliance reports, an often tedious and dry task that may be better left to the machines. McKinsey reports, “From modeling analytics to automating manual tasks to synthesizing unstructured content, the technology is already changing how banking functions operate, including how financial institutions manage risks and stay compliant with regulations.”
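To make the report-drafting idea concrete, here is a minimal sketch using an LLM API. The model name, prompt wording, and `findings` list are assumptions for illustration; this example happens to use the OpenAI Python client, but any provider would work:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical findings, e.g. output from the monitoring and
# anomaly-detection steps sketched above.
findings = [
    "2 transactions flagged by the anomaly model for manual review",
    "1 new rule detected on a watched regulator page",
]

prompt = (
    "Draft a brief weekly compliance summary for management. "
    "Findings:\n- " + "\n- ".join(findings)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not a recommendation
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```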
AI may even have the ability to predict new regulations. As LeewayHertz reports, “Machine learning models can also predict future regulatory trends based on historical data, helping companies stay ahead of potential changes. By automating tasks such as data validation, risk assessment, and audit trails, AI allows compliance teams to focus on more strategic activities, ultimately reducing costs and improving overall compliance management.”
So, why is AI not more widespread throughout the financial services sector?
The need for caution
Moody’s research found that a third of organizations are actively using or trialing AI in compliance and risk management, “with 9% being active users and 21% in the trial or pilot phase. Just under half of firms are considering its use, while 21% are not.” As these companies move ahead, they must be aware of potential risks.
Forbes reports that the use of AI has the potential to create compliance issues of its own: “The reliance on machine learning algorithms and neural networks introduces the risk of biased decision-making, potentially leading to regulatory non-compliance.”
While embracing any new technology in the high-stakes work of regulatory compliance requires due diligence, Forbes has some thoughts on how to make the transition to the AI-driven future smoother. “Foster collaboration with regulatory bodies to align AI practices with industry standards and expectations,” suggests the Forbes article. “Proactively engage in discussions to address emerging challenges and ensure compliance frameworks evolve in tandem with technological advancements.”
As always, algorithms are only as good as the data that feeds them, which is why Moody’s reports, “The fact that two thirds of respondents rate their firm’s data quality in the two lowest categories helps explain why the majority are yet to start using AI for risk and compliance.” Essentially, there is a maturity gap: larger companies with more mature data practices tend to be further ahead in investing in AI-driven efficiencies.
The ‘black box’ issue
Among the limitations of AI use in financial services is an issue that also restricts the use of GenAI in other critical sectors like law and medicine: the ‘black box’ issue. The hidden mechanisms of language models make it difficult to monitor the processes behind their operations. As a recent IBM white paper puts it: “The complexity of LLMs makes it challenging to interpret their decision-making processes. This lack of transparency can hinder efforts to justify AI-driven decisions to regulators and stakeholders.”
The paper goes on to say: “Financial institutions must invest in research and development to enhance the interpretability of LLMs, ensuring that their decisions are transparent and accountable.”
Moderate expectations
Whatever hurdles remain, those already testing AI are bullish. Moody’s found that 91% of respondents “feel AI has had significant or moderate impact on risk and compliance,” a strikingly positive result. That compares with the “9% who claim it has had minimal or no impact,” so even the less enthusiastic do not report any major problems thus far.
As companies move forward, McKinsey suggests chief risk officers base their assessments of specific applications on the “qualitative and quantitative dimensions of impact, risk, and feasibility.” In other words, are the risks and efforts to enable a technology worth the potential benefits?
It’s a question that, for the moment, can only be answered on a case-by-case basis.