Blog

The Wild West of AI: Why Companies Need to Standardize AI Procurement

Written by Eliassen Group | Mar 25, 2026 6:08:39 PM

In many organizations, AI procurement resembles the Wild West: every team is using some kind of AI solution, but the rules governing procurement and use are few and far between.

The marketing team may be using an unsanctioned Claude license to build email lists. The sales team may be using ChatGPT to craft cold outreach messaging. The new software product may have been built using Codex. The productivity gains may be great, but the risks to compliance, legal, data privacy, and more may be as untenable as they are unseen.

Multiple teams using multiple AI solutions is a challenge nearly every organization now faces, and few seem prepared to handle it. According to our 2026 Technology Leadership Pulse Survey:

  • 66% of technology leaders said that other non-technology departments had procured their own AI solutions
  • 82% said they were “very involved” in the procurement process, while 18% said they were “somewhat involved” (no respondent said they weren’t involved)
  • When asked how involved their departments should be in other non-tech departments’ AI procurement process, 79% said “very involved,” while 21% were content to be “somewhat involved”

What risks do organizations face from so many teams procuring their own unsanctioned AI solutions, and what steps can they take to get ahead of such a pervasive problem without impacting AI’s clear benefits to innovation, productivity, and more?


Overcoming the “Shadow AI” Problem

Just a few years ago, implementing a new technology required executive sponsors, review and approval by procurement and legal, and a lengthy review and risk assessment process. Today, almost any team — or any employee — can deploy AI using a credit card. This results in what IBM refers to as shadow AI, or the “unsanctioned use of any artificial intelligence (AI) tool or application by employees or end users without the formal approval or oversight of the information technology (IT) department.”

Gartner identified this unsupervised, unsanctioned shadow AI use as a critical enterprise risk for the near future, and it’s easy to see why: One of their studies revealed that 69% of organizations either know or suspect that employees are using prohibited AI tools, and Gartner further predicts that by 2030, 40% of enterprises will experience “security or compliance incidents linked to unauthorized shadow AI.”

But what’s behind the meteoric rise in unsanctioned AI use?

One culprit is undoubtedly the sheer speed of AI advancement. Within just four weeks at the end of 2025, for example, four major AI companies released new frontier models, including Grok 4.1, Gemini 3, and Claude Opus 4.5. Just a few months later, those models have already received significant updates, and that’s only the tip of the iceberg: the AI tracker LLM Stats currently shows 274 LLM releases and updates across 26 companies in the last 12 months alone.

As a result, teams waiting on IT and procurement to approve an AI deployment will likely see their chosen tool outpaced or outperformed well before the review and approval process is complete.

So what can technology leaders do about it?


Establish Clear Accountability and Governance

One reason so many teams and even individual employees are procuring their own “shadow” AI may be simple: they don’t know they aren’t allowed to. After all, without clear policies and governance in place, what’s to stop a team leader from simply using a credit card to give their team access to a potentially game-changing new tool?

Curbing this unsanctioned AI use is likely why 76% of technology leaders told Eliassen that their organizations are implementing new policies and processes to support the adoption of AI by non-technology teams. A further 23% said they hadn’t implemented new policies but are actively considering it, while just 1% said their organizations were unlikely to implement any new AI procurement policies.

To make these policies effective, start by establishing a leadership team with clear ownership over the AI procurement process. An AI governance team should include at least the following roles and responsibilities:

  • Procurement: Examining vendor risks and performing contract review (where applicable)
  • Legal and Compliance: Evaluating regulatory exposure and privacy impacts
  • Cybersecurity: Identifying external and internal threats and data security risks
  • Data Analysis/Engineering: Considering model feasibility and interoperability

The goal should be a leaner, more efficient — but no less diligent — version of your existing procurement processes. AI solutions rarely share the lifecycle of a traditional long-term technology implementation, and the rate at which they’re advancing often means a tool will be obsolete long before a traditional procurement cycle concludes. Aim to move quickly, but carefully.
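To make the division of responsibilities concrete, the four governance roles above could be tracked in a lightweight intake record. The following is an illustrative Python sketch, not anything from the survey or a real product: the class, role descriptions, and approval flow are all hypothetical, meant only to show how sign-offs from each role might be tracked without a heavyweight procurement system.

```python
# Hypothetical sketch of a lightweight AI procurement intake record.
# Role names mirror the governance team described above; everything
# else (class names, fields) is illustrative, not a real system.
from dataclasses import dataclass, field

REVIEWERS = {
    "Procurement": "vendor risk and contract review",
    "Legal and Compliance": "regulatory exposure and privacy impacts",
    "Cybersecurity": "threats and data security risks",
    "Data Analysis/Engineering": "model feasibility and interoperability",
}

@dataclass
class AIToolRequest:
    tool: str
    requesting_team: str
    approvals: dict = field(default_factory=dict)  # role -> approved?

    def open_reviews(self) -> list:
        """Return the governance roles that still need to sign off."""
        return [role for role in REVIEWERS if not self.approvals.get(role)]

    def approved(self) -> bool:
        return not self.open_reviews()

# Example: a marketing request with one of four sign-offs complete.
request = AIToolRequest(tool="Claude", requesting_team="Marketing")
request.approvals["Procurement"] = True
print(request.open_reviews())  # the remaining sign-offs
```

Because each role is an explicit gate rather than an email thread, a request can move through all four reviews in parallel — one way to keep the process lean without skipping diligence.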


Map Implementations to Business Cases

The past two years have seen organizations experimenting with AI on a massive scale, but now, well into 2026, the case for casual experimentation is becoming harder to make. AI’s capabilities are now clear, as are its risks, and there’s little reason for organizations to let their teams deploy AI solutions “just to see.”

While the speed of AI advancement may not allow for a full traditional procurement cycle, there’s still value in requiring departments to provide clear business cases for AI adoption. Every business case for AI deployment should at least be able to address the following questions:

  • Is this solution mission critical? Is it a must-have for a department (or the organization) to achieve its goals? Would not having this solution put the team at a competitive disadvantage?
  • What impact does it have on data, regulations, and privacy? How does this solution touch sensitive data, whether internal, external, or both? What controls does it come with, and what controls would have to be implemented in order for it to be deployed safely? What regulatory or compliance risks may be created by implementing it?
  • What provisions does the vendor include for liability or indemnity? Does the solution vendor provide indemnity for potential harm, IP infringement, or legal or regulatory breaches?
  • Who owns the input, output, training, and risk? Which team and leader have ownership over training data, maintenance, quality of output, and the risks that may arise from the use of this AI solution? What guardrails will they establish to mitigate harm, and how will they evaluate risk going forward?

This list is by no means exhaustive, but it does provide a general framework that any team should be able to answer when requesting AI approval. (For a more complete list of potential legal and regulatory-focused requirements when considering an AI solution, see the excellent list provided by the Association of Corporate Counsel.)
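One lightweight way to enforce these questions is to treat them as required fields in an intake form. Below is an illustrative Python sketch — the field names are hypothetical shorthand for the four questions above, not part of any real framework — that flags which questions a submitted business case leaves unanswered.

```python
# Hypothetical intake checklist based on the four business-case
# questions above. Field names are illustrative shorthand only.
BUSINESS_CASE_QUESTIONS = [
    "mission_critical",         # is the solution a must-have?
    "data_and_privacy_impact",  # data, regulatory, and privacy risks
    "vendor_liability",         # indemnity for harm, IP, or breaches
    "ownership_and_risk",       # who owns input, output, training, risk
]

def missing_answers(business_case: dict) -> list:
    """Return the checklist questions a business case leaves unanswered."""
    return [q for q in BUSINESS_CASE_QUESTIONS
            if not business_case.get(q, "").strip()]

# Example: a draft request that answers only two of the four questions.
draft = {
    "mission_critical": "Needed to match competitors' outreach speed.",
    "vendor_liability": "Vendor indemnifies IP infringement claims.",
}
print(missing_answers(draft))
# -> ['data_and_privacy_impact', 'ownership_and_risk']
```

A gate this simple can reject incomplete requests automatically, reserving the governance team’s time for cases that are already fully argued.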


Accept That Change Is Now a Constant (Again)

Tech leaders who lived through the era of cloud migrations are undoubtedly experiencing a familiar feeling when it comes to AI. Once again, they’re racing to adapt to a new standard and navigating risky, unfamiliar territory, all while pushing, pulling, or dragging their organizations into the future.

Those same stresses are back again, and they’re unlikely to go away any time soon. The rate of AI adoption is still climbing, with almost 48% of organizations having implemented AI in one form or another, and the problems and risks that come with it are still becoming clear. At the same time, AI costs continue to mount, meaning the budget-friendly, almost disposable AI solutions many rely on today may soon be a thing of the past.

To stay competitive — and, ideally, sane — tech leaders should consider what they do with their organizations’ approaches to AI today to be foundational for AI-powered success tomorrow. In addition to establishing smart governance and more adaptable procurement policies, they can empower their organizations to be more resilient in the face of AI-based change by:

  • Creating formal AI training offerings for all employees: In our recent survey, 20% of tech leaders identified AI and machine learning as their organizations’ biggest skills gaps. Yet a 2026 Ipsos/Google study found that only 27% of US workers surveyed said their organizations provided AI tools, and just 37% said their organizations provided guidance on using AI. Giving workers a solid foundation for AI use today will help build adaptability for whatever tomorrow brings.
  • Avoiding the “vendor lock-in” trap: Moving quickly on AI may be critical, but moving too quickly can lead to “vendor lock-in,” in which moving to another vendor becomes too costly or disruptive. 45% of enterprises today say vendor lock-in is already damaging their ability to adopt better tools, and this problem may only compound as AI advances and tools become even more expensive.
  • Maintaining and elevating data quality across the board: As Deloitte plainly puts it, “legacy data and infrastructure architectures cannot power real-time, autonomous AI.” To avoid the trap of legacy systems, Deloitte advises creating “modular, cloud-native platforms that securely connect, govern, and integrate all data types” to anticipate the demands of emerging AI capabilities.

Takeaways for Tech Leaders

AI adoption is accelerating, and traditional procurement processes are incapable of keeping pace. To avoid the “shadow AI” problem, build more resilient and secure organizations, and get ahead of whatever may be over the horizon, today’s tech leaders should focus on building:

  • Smart, agile AI governance
  • Leaner, faster procurement processes
  • Strong foundations for AI success built on training and data quality

With these foundational aspects in place, organizations will be better able to weather whatever developments and challenges emerge as AI capabilities advance.

To get more expert insights like these on AI, cybersecurity, tech talent, and more, visit our resources page today.