Why AI is a Tech and Talent Challenge for FinServ Companies

To get value from AI, financial services companies have to mitigate the regulatory and compliance risks that come with the technology — and its human users.

While companies in sectors from retail to real estate are often eager to implement the latest and greatest technologies, those in regulated industries are notoriously late adopters. This tends to be especially true for financial services (finserv) companies, banks, and insurers — and often for good reason. In the U.S., for example, companies in these sectors are bound by strict regulations that govern how and where they store user information, how they communicate with those users, and what kind of messaging they can use.

Despite these obstacles, there’s no shortage of enthusiasm for or investment in AI among finserv organizations. In fact, Eliassen’s 2025 Technology Leadership Pulse Survey found that more than 22% of financial services leaders said they plan to invest in AI in the next 12 months — more than any other technology. This enthusiasm is also partly why financial services as an industry is expected to account for almost 20% of enterprise AI spending by 2028.

But with so many restrictions and so much potential for regulatory and compliance issues, how can finserv organizations reap the rewards AI offers while mitigating the risks that come with it? And how can they prepare and augment their workforces with the training and additional talent needed to make and keep AI safe across the enterprise? 
AI and the Race to Achieve ROI 

Caution around AI may be warranted for finserv organizations, but many companies in the sector have already implemented or developed AI solutions. 

JPMorgan Chase first made headlines back in 2017 when it announced that its AI-powered Contract Intelligence (or COIN) system was able to analyze and extract data from commercial loan agreements and other legal documents, a process that previously required about 360,000 hours of work per year. Following that success, the banking giant later developed internal products like IndexGPT for thematic investing and LOXM for executing trades at maximum speed and optimal prices. Bank of America now reports that its AI assistant, Erica®, is regularly used by 20 million customers. The internal version has been adopted by over 90% of employees, leading to a 50% reduction in helpdesk calls. Meanwhile, an Italian insurer used AI to increase the rate of fraud detection by 30% and improve the performance of its real-time reporting by 40%. 

“There are real AI use cases at work in some major finserv companies, and many of those are already delivering meaningful results,” said Kolby Kappes, leader for Eliassen Group's AI and Data Services Practice. “The organizations that got on board early, got buy-in from the top, and made AI a central priority have had a great deal of success. But their ongoing success — and their ability to avoid costly fines or even lawsuits — will depend on their ability to manage the risks that come with AI, both in terms of the technology and the humans who are using it.” 
Risky Business: Mitigating AI Compliance Challenges    

To avoid delays and get ahead of compliance and regulatory risks, finserv organizations may want to consider the following before beginning an AI project in earnest: 

Establish Clear Ownership and Governance 

With nearly $2 billion invested in AI, JPMorgan Chase has made governance a centerpiece of its overall AI strategy. The banking giant not only separated the AI function from the broader technology organization, it also had that function report directly to its CEO and president. 

While not every finserv org may need to bring that much firepower to its AI initiatives, JPMorgan Chase’s example underscores the value of having involvement from and governance by executive leadership.  

“Rather than having multiple owners from multiple functions, the leaders in AI today are centralizing control and accountability at the top,” Kappes said. “Not only that, they’re bringing in entire teams of engineers, data scientists, and even ethicists to oversee these projects. Plus, they’re often establishing steering committees that include the CRO, CTO, CHRO, and more.” 

Trust the “Three Lines of Defense” Model 

Financial services organizations will be familiar with the “Three Lines of Defense” (or 3LOD) model often used in risk management. With some slight tweaks — like the JPMorgan Chase example above — 3LOD can be an effective bulwark against risk for AI projects, too: 

  • First Line: Management and Process Owners – The primary responsibility for identifying and managing operational risks falls to functional leaders.  
  • Second Line: Risk Management – Risk, compliance, and regulatory professionals provide oversight, identify emerging risks, and offer guidance in the form of processes, tools, policies, and more.  
  • Third Line: Executive Leadership – Leaders in the C-suite provide a final line of defense, validating decisions, reporting to the board and external auditors, and ensuring the first two lines of defense are operating effectively.  

Build in Ongoing Monitoring and Operational Controls  

Governance and ownership frameworks may be vital to AI success, but they only work if those accountable remain informed. By leveraging real-time dashboards and reporting on model performance, bias metrics, and achievement of business outcomes, finserv organizations can create transparency, enable smarter decision-making, and mitigate risk.   

McKinsey, for example, recommends a scorecard model for assessing AI risk across the organization that includes metrics for customer exposure, ethical risk, model and data complexity, financial risk, and more.  

Armed with these insights, leadership at each line of defense should be able to not only make better, faster decisions, but to address risks as they arise.  
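A scorecard like the one McKinsey recommends can be sketched as a simple weighted rubric that rolls dimension-level ratings up into a single score and an escalation tier. The dimensions below mirror those named above, but the weights, rating scale, and tier thresholds are illustrative assumptions, not McKinsey's actual methodology:

```python
# Illustrative AI risk scorecard. Dimension weights, the 1-5 rating
# scale, and tier thresholds are hypothetical examples, not a
# published rubric.
DIMENSIONS = {
    "customer_exposure": 0.30,
    "ethical_risk": 0.25,
    "model_data_complexity": 0.20,
    "financial_risk": 0.25,
}

def risk_score(ratings: dict) -> float:
    """Combine 1-5 ratings per dimension into a weighted score."""
    if set(ratings) != set(DIMENSIONS):
        raise ValueError("rate every dimension exactly once")
    for dim, rating in ratings.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"{dim}: rating must be between 1 and 5")
    return sum(DIMENSIONS[d] * ratings[d] for d in DIMENSIONS)

def escalation_tier(score: float) -> str:
    """Map a weighted score to a line of defense (thresholds illustrative)."""
    if score >= 4.0:
        return "high - escalate to executive leadership (third line)"
    if score >= 2.5:
        return "medium - second-line risk/compliance review"
    return "low - first-line monitoring by process owners"
```

For example, a customer-facing model with high ethical and financial risk (`ratings = {"customer_exposure": 5, "ethical_risk": 4, "model_data_complexity": 3, "financial_risk": 4}`) scores 4.1 and lands in the high tier, routing it up the three lines of defense described above.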

Solving the Human Equation  

As the examples above demonstrate, finserv organizations can successfully implement AI without running afoul of regulatory and compliance issues. But, as Kappes noted, the technology itself is only half the equation.  

“The human half of the AI equation is harder to get right,” he said. “There are issues around training, talent, and more that organizations — especially regulated ones — have to address early and often.”  

Make AI Training a Priority from Day One  

“One thing we see fairly often is that, in the race to implement AI, training becomes an afterthought,” he said. “But in regulated industries, having staff using AI without extensive training and clear guidelines creates a host of risks, from potentially sharing sensitive information to accessing data they might not have permissions for, just to name a few.” 

Making AI training a must for employees generally, and technical workers in particular, is already paying dividends for many organizations. In fact, McKinsey recently reported that 68% of the organizations it calls “Gen AI high performers” have made AI risk awareness and mitigation required skills for technical talent. 

Furthermore, employees actually want more training on AI: 48% of workers say they’d benefit from additional AI training, and finserv organizations like KPMG, Ally Financial, PwC, and more have already instituted a variety of training programs for employees. 

“Ensuring that staff knows when, where, and how to use AI can not only help them feel secure and empowered, it can also go a long way to mitigating cybersecurity and compliance risks,” Kappes said. “In other words, it’s a small price to pay for a more productive workforce — and for avoiding potentially costly fines.”  

Clear Communication is Key  

It’s no surprise that workers across the board are concerned about how AI will impact their roles and careers. But for companies in regulated industries, employees who become disgruntled or disengaged as a result of AI can pose serious risks. 

“You might have an unhappy employee who shares or accesses sensitive information,” Kappes said, “which is why it’s critical to make sure all employees understand the role AI will play in the organization and how it will impact their jobs going forward.”  

He offered the example of Morgan Stanley’s “AI @ Morgan Stanley” suite, which includes tools like Debrief. This collection of tools is designed to augment the company’s financial advisor teams by streamlining tasks like document access, notetaking, and more. 

“Morgan Stanley leadership has communicated frequently that their AI investments are there not to replace people, but to enable their people to focus more on the ‘human’ aspects of their roles,” he said. “Not only that, they solicit feedback from employees about their experiences with these AI tools. Their willingness to listen and their continued focus on workers’ experiences is something organizations can learn from.”  

Takeaways for Tech Leaders

When it comes to AI in financial services, both the technology itself and the people using it represent very real regulatory compliance risks. To realize the gains offered by AI while mitigating the risks that come with it, organizations should: 

  • Ensure that risk management is baked into the AI implementation process from the start 
  • Establish clear guardrails and ownership, not just at the team level, but across the enterprise 
  • Bake in ongoing controls and continuous monitoring to measure bias drift, model performance, business goal attainment, and more  
  • Communicate clearly about the role of AI and how it can augment — not replace — human team members  
  • Make training on AI, as well as the risk and compliance issues that come with it, mandatory and frequent  

Lastly, Kappes said that tech leaders in financial services should remember that AI is changing and becoming more sophisticated with unprecedented speed, so adaptability will be key.  

“What works today may not work tomorrow,” he said. “But organizations that keep the foundational pieces — controls, governance, accountability, training, and communication — top-of-mind as AI evolves should be set up for success, no matter what comes next.”  

To get more expert insights like these on AI, cybersecurity, operational efficiency, technology talent, and more, visit our resources page today.