Banks should refine AI safeguards amid rising complexity and exposure: McKinsey
Having just one committee to oversee all gen AI applications is not a good fit.
Financial institutions are being called on to update their AI governance frameworks to account for the increased complexity and greater points of exposure that come with generative AI, according to McKinsey & Co.
Most current arrangements involve a single group or committee overseeing all gen AI applications. However, this is not a good fit for gen AI systems, McKinsey said.
“They often comprise a blend of different models and software-like components, each of which may need specialized oversight,” McKinsey analysts wrote in the March 2025 report, “How financial institutions can improve their governance of gen AI.”
To start, risk leaders must adopt new models to manage gen AI risk across their companies.
“With new multitasking gen AI models, banks can do more. However, because the models are trained on both public and private data, they can produce information or responses that are incorrect, misleading, or fabricated,” McKinsey wrote.
Gen AI tools can also introduce liabilities involving inbound and outbound IP, as well as data oversharing. For instance, a gen AI coding assistant might suggest that a bank use code that has licensing issues, or it might inadvertently expose the bank’s proprietary algorithms.
To combat this, McKinsey said that financial institutions should develop systems to track where data originates, how it’s used, and whether it adheres to privacy regulations.
“Not linking credit decisions to their source data could result in regulatory fines, lawsuits, and even the loss of license for noncompliance. Companies need to keep records for AI-generated content, which can change based on what’s entered,” the report said.
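The report stops short of prescribing an implementation, but the record-keeping it describes (tracking where data originates, how it is used, and whether it falls under privacy rules) can be sketched in a few lines. The field names and the `lineage_record` helper below are illustrative assumptions, not anything McKinsey specifies.

```python
# A minimal sketch of data-lineage record keeping: each piece of data
# consumed by a gen AI model is logged with its origin, the model that
# used it, and a flag for privacy-regulation scope. Field names are
# hypothetical; a real system would persist these to an audit store.
import json
from datetime import datetime, timezone

def lineage_record(data_id, source, used_by, contains_pii):
    return {
        "data_id": data_id,
        "source": source,              # where the data originated
        "used_by": used_by,            # model or application that consumed it
        "contains_pii": contains_pii,  # whether privacy regulations apply
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

log = []
log.append(lineage_record("loan-app-123", "core banking system",
                          "credit-decision-model", contains_pii=True))
# Persisting records like this lets a credit decision be traced back to
# its source data, the gap the report warns about.
print(json.dumps(log[0], indent=2))
```

The point is not the format but the linkage: every AI-generated output should be traceable back through records like these to the data that produced it.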
Financial institutions must also establish safeguards to manage risks related to legal and ethical factors.
“These models blur the lines between newly generated content and existing content protected by IP laws,” McKinsey warned. “Additionally, when gen AI models are trained on sensitive data, such as customer information, more attention is required for privacy and compliance.”
One suggestion McKinsey had is for financial institutions to use a risk scorecard to determine which elements of their gen AI governance require updates, and how urgent the need is.
The scale reflects the degree of customer exposure and the level of human expert oversight in the inner workings of the gen AI application. It also reflects the expected financial impact, stage of gen-AI-application development, and more, McKinsey said.
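A scorecard along those lines can be sketched as a simple weighted rating. The factor scales, weights, and urgency thresholds below are illustrative assumptions; McKinsey's report does not publish a formula.

```python
# Hypothetical gen AI governance risk scorecard, using the factors the
# report names: customer exposure, degree of human oversight, expected
# financial impact, and development stage. Scales and cutoffs are
# illustrative, not McKinsey's.
from dataclasses import dataclass

@dataclass
class GenAIApp:
    name: str
    customer_exposure: int  # 1 (internal only) .. 5 (customer-facing)
    human_oversight: int    # 1 (human reviews every output) .. 5 (fully automated)
    financial_impact: int   # 1 (negligible) .. 5 (material)
    dev_stage: int          # 1 (prototype) .. 5 (in production)

def governance_urgency(app: GenAIApp) -> str:
    """Average the factor scores and bucket into an urgency tier."""
    score = (app.customer_exposure + app.human_oversight
             + app.financial_impact + app.dev_stage) / 4
    if score >= 4:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"

chatbot = GenAIApp("customer chatbot", customer_exposure=5,
                   human_oversight=4, financial_impact=4, dev_stage=5)
print(governance_urgency(chatbot))  # -> high
```

A customer-facing production app with little human review scores at the top of the scale, flagging its governance arrangements as the most urgent to update.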
Ultimately, human oversight is essential to ensure the ethical use of gen AI.
“For example, reviewers need to redact sensitive data before models process it. When it comes to the quality of gen-AI-generated responses, financial institutions should create ‘golden lists’ of questions for testing the models,” McKinsey suggested.
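The golden-list idea amounts to regression testing for model quality: a fixed set of questions with expected key facts, rerun after every model or prompt change. The sketch below assumes a placeholder `ask_model` call standing in for the institution's actual model endpoint.

```python
# A sketch of "golden list" testing: fixed questions paired with facts
# the answer must contain. `ask_model` is a stand-in for a real gen AI
# call; here it returns canned answers so the example is runnable.
def ask_model(question: str) -> str:
    canned = {"What is the overdraft fee?": "The overdraft fee is $35 per item."}
    return canned.get(question, "")

GOLDEN_LIST = [
    # (question, facts that must appear in the answer)
    ("What is the overdraft fee?", ["$35"]),
]

def run_golden_list():
    """Return (question, missing facts) pairs for any failing answers."""
    failures = []
    for question, expected_facts in GOLDEN_LIST:
        answer = ask_model(question)
        missing = [f for f in expected_facts if f not in answer]
        if missing:
            failures.append((question, missing))
    return failures

print(run_golden_list())  # -> [] means every golden answer checked out
```

An empty failure list becomes a release gate; any regression on the golden questions blocks the model change from shipping.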
Financial institutions should also solicit extensive feedback from customers and employees, it added.