4 Strategies to Consider before Launching Your Next Enterprise AI Project

Written by Eliassen Group | Aug 13, 2025 6:34:11 PM

AI capabilities continue to expand at an unprecedented rate, promising groundbreaking gains in productivity, efficiency, speed of innovation, customer service, and more. C-suite leaders in virtually every industry are, unsurprisingly, eager to capitalize. Eliassen Group surveyed 1,000 U.S. technology professionals in Q1 of 2025. We learned that today’s tech leaders are determined to keep building, eager to get the most out of AI, cautious about tomorrow’s threats, and looking for innovative ways to do it all with fewer resources and smaller teams. 

More than a fifth of technology decision-makers in our 2025 Technology Leadership Pulse Survey said they plan to roll out some form of AI in the next 12 months.  

And yet, implementing AI at an enterprise level remains surprisingly difficult for many organizations. We’ve covered the common reasons why AI projects fail, and what technology leaders can do to avoid them, but what about the problems that often occur before the project begins?   

“Within many organizations today, there’s a real fear of missing out when it comes to AI,” said Kolby Kappes, leader for Eliassen Group's AI and Data Services Practice. “They want to move fast, and they don’t want to be left behind — but they don’t always know where to start.”  

If your organization is feeling the same FOMO as so many others, you're not alone. To avoid the common pitfalls and achieve real ROI, keep these strategies in mind before launching your next AI initiative. 

 

1. Start with Internal Alignment — and One Big Use Case 

With so many AI products to choose from and so many potential gains to be made, it’s easy to see why organizations often allow multiple teams to work on parallel AI implementations. But according to Kappes, this decentralized approach may not always set teams up for success. 

“Because AI is still very new, there isn’t a proven road map to follow,” he said. “As a result, many companies are taking small, measured swings at implementing it.” 

Instead, Kappes advises organizations to align on a single, impactful use case and then dedicate the time and resources necessary to bring it to fruition.  

“Before even evaluating potential AI solutions — whether built in-house or implemented off the shelf — designate a company-wide ‘owner’ who’s informed and accountable. Have defined goals and milestones, and measure performance against those milestones. And be realistic about your timelines, because no one’s really done this yet.”  

That last point, Kappes stressed, may be the single most important piece of advice he has to offer. 

“Once you get an AI project across the finish line, you can apply those learnings to the next one and the one after that. But focus on getting one done, and done right. That will show you what your teams are capable of, help you identify skills gaps, and give you a stronger foundation for the future.”  

 

2. Don’t Be Held Back by Data (Im)Maturity  

The quality, quantity, and type of data your organization has on hand play a massive role in determining what an AI solution can deliver. This is likely why respondents to our Technology Leadership Pulse Survey identified data quality and availability as their chief concerns when it comes to AI implementations. But, according to Kappes, the problem of data quality is largely overstated.  

“Eighteen months ago, if you wanted to implement AI, your data had to be pristine,” he said. “Since then, the tooling in most major enterprise AI products has advanced to the point where that’s no longer the case. Data quality and accuracy are still very important, but tools like Microsoft Fabric and Google’s BigQuery and Vertex have reached the point where they can work with imperfect data.”  

For organizations where data quality is a concern, a range of solutions, like Talend and Apache NiFi, can help with the Extract, Transform, Load (ETL) process, ensuring data is clean, consistent, and usable by AI platforms. Between these and what the major platforms are already capable of, Kappes said, there’s rarely a reason to be held back by data quality issues. 

“The major platforms can likely work with your data in its present condition,” he stressed. “And if they can’t, just give them a few months.”    
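To make the ETL step concrete, here is a minimal, tool-agnostic sketch of the extract, transform, and load stages described above. It uses only Python's standard library; the sample data and column names are hypothetical, and a real pipeline in Talend or Apache NiFi would implement the same stages at far greater scale:

```python
import csv
import io

# Hypothetical raw export: inconsistent casing, stray whitespace, a duplicate ID.
RAW = """customer_id,email,region
 101 , Alice@Example.com ,Northeast
102,bob@example.com, southwest
101,alice@example.com,northeast
"""

def extract(source: str) -> list[dict]:
    """Extract: read rows from a CSV export."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: trim whitespace, normalize casing, drop duplicate customer IDs."""
    seen, clean = set(), []
    for row in rows:
        cid = row["customer_id"].strip()
        if cid in seen:
            continue  # keep only the first occurrence of each customer
        seen.add(cid)
        clean.append({
            "customer_id": cid,
            "email": row["email"].strip().lower(),
            "region": row["region"].strip().title(),
        })
    return clean

def load(rows: list[dict]) -> str:
    """Load: serialize the cleaned rows back to CSV for downstream tooling."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["customer_id", "email", "region"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

cleaned = transform(extract(RAW))
print(load(cleaned))
```

The point of the exercise is the structure, not the specific rules: each stage is a separate function, so cleanup logic can evolve without touching how data is read or written.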

 

3. Put Plug-and-Play First 

While some organizations have had success in developing custom AI solutions in-house, implementing commercial AI platforms often comes with a much lower price tag — and a much lower risk of failure.  

“A year ago, many companies we spoke with believed that a custom-built solution was the only way AI could add value,” Kappes said. “Today, however, we’re seeing more organizations warming to the idea of commercial AI products. Their capabilities have expanded greatly, and making them work across the enterprise has gotten much, much easier.”  

That’s likely one reason why our 2025 Technology Leadership Pulse Survey found that just 18% of tech leaders said they plan to build custom AI solutions, compared to almost 40% who plan to implement off-the-shelf options.  

But before signing any contracts, Kappes said that organizations need to have the right vendor-agnostic infrastructure in place. Using a mix of APIs, microservices, and standardized data pipelines that are interoperable with leading AI providers can not only make implementation faster and less risky but also make it easier to switch AI models down the line.  

“These solutions are going to keep evolving, and evolving quickly. The right solution today may be outclassed by something else in six months or a year,” he stressed. “By keeping your data infrastructure vendor agnostic, you’re saving yourself time and trouble when that day comes.” 
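One common way to keep infrastructure vendor agnostic is to put a thin interface between business logic and any provider's SDK. The sketch below, in Python, illustrates the idea; the provider names and the `summarize_ticket` use case are hypothetical, and a real adapter would call the vendor's actual SDK where the stubs are:

```python
from typing import Protocol

class CompletionProvider(Protocol):
    """Minimal vendor-agnostic contract that every provider adapter implements."""
    def complete(self, prompt: str) -> str: ...

class VendorAAdapter:
    """Hypothetical adapter; a real one would wrap Vendor A's SDK here."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] response to: {prompt}"

class VendorBAdapter:
    """Hypothetical adapter for a second provider with the same contract."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] response to: {prompt}"

def summarize_ticket(provider: CompletionProvider, ticket_text: str) -> str:
    """Business logic depends only on the interface, never on a vendor SDK."""
    return provider.complete(f"Summarize this support ticket: {ticket_text}")

# Swapping vendors is a one-line change at the call site:
print(summarize_ticket(VendorAAdapter(), "Login fails after password reset"))
print(summarize_ticket(VendorBAdapter(), "Login fails after password reset"))
```

Because the application code never imports a vendor SDK directly, replacing a model that has been "outclassed by something else in six months" means writing one new adapter, not rewriting the pipeline.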

 

4. Work within Your Regulatory Framework 

For regulated industries, like financial services or healthcare, AI implementations often come with unique risks. While companies in unregulated industries can train models on customer data or implement AI solutions to provide customer support with relative freedom, those in regulated industries have many more guardrails to operate within. 

Customer data, like financial information or health records, must be siloed and protected, or else organizations may face severe financial and legal penalties. Meanwhile, the kind of potentially sensitive interactions that are common in “last mile” CX, like sharing a diagnosis with a patient or addressing a missed mortgage payment, may not be ideal for AI agents to handle.  

For these reasons, among others, Kappes noted that many companies in regulated industries have been hesitant to experiment with AI in meaningful ways.  

“The risk is heightened within these industries, and so is their level of caution,” he said. “But the risk of not implementing AI may eventually be just as great. Waiting isn’t an option, so find areas where AI can add value without incurring unnecessary risk.” 

One pharmaceutical giant, for example, is using a machine learning tool that can mine for predictive insights within patient data. A major bank rolled out a voice assistant to help Spanish-speaking users navigate its app. These solutions, he noted, don’t need access to every piece of sensitive data an organization owns in order to provide value. They do, however, require strong risk and compliance governance policies.  

That’s why Kappes suggested forming an AI governance team or committee involving the chief compliance officer, corporate counsel, chief information security officer, and other senior stakeholders to develop policies and ensure adherence.  

“What matters most is that these organizations proceed with caution — but proceed nonetheless.”  

 

Takeaways for Tech Leaders 

If your organization hasn’t already started a major AI implementation, it’s not too late. Far from it, in fact: The sky-high failure rate of AI projects to date has provided some valuable insights that organizations can use to avoid the pitfalls that early adopters encountered: 

  • Gain alignment on a single AI project with a clear end goal. Set realistic, achievable milestones, and measure progress against them to ensure accountability is maintained.  
  • Don’t be held back by the relative maturity — or immaturity — of your data. Unlike a short time ago, today’s leading solutions eliminate much of the need for extensive data cleaning and can operate with imperfect data sets.  
  • Focus on interoperable, vendor-agnostic data infrastructure to enable easy switching as AI solutions advance.   
  • For companies in regulated industries, look for areas where AI can add value without incurring regulatory and compliance risks. Also, leverage the expertise of legal, tech, and compliance leaders to establish an AI governance structure with clear guardrails.  

To get more insights like these on AI, technology talent, cybersecurity, and more, visit our resource library.