Auditing and Governing Shadow AI

A Practical Governance Perspective

Author: Kathryn Fortino, Managing Director, AI Governance & Risk, MIT Certification in AI Strategy

Artificial intelligence is no longer a centralized IT initiative. It has become embedded in daily workflows across functions, often without formal approval, oversight, or documentation. Employees are using AI tools to summarize documents, draft communications, analyze datasets, automate processes, and generate code. In most cases, this behavior is not reckless; it reflects initiative and a desire to improve productivity.

In one organization, for example, a marketing analyst began using a public generative AI tool to accelerate campaign copy and summarize customer sentiment data. The intent was efficiency. Over time, however, customer data exports were routinely uploaded into the tool to “improve context.” No one paused to ask where that data was stored, how long it was retained, or whether it could be incorporated into broader model training. What began as initiative quietly became exposure.

Productivity gains achieved outside formal governance structures introduce material risk. These risks extend beyond operational concerns into regulatory exposure, data protection, intellectual property, bias, model reliability, and reputational trust.

The goal of governing shadow AI is not to suppress innovation. It is to ensure that AI usage aligns with organizational risk tolerance, regulatory obligations, and core values. When governance is intentional and proportionate, organizations can encourage innovation while maintaining accountability.

 

What Shadow AI Actually Looks Like

Shadow AI typically emerges in three primary forms:

  1. Unapproved external tools: Employees may use publicly available AI platforms to process business data, sometimes including sensitive or proprietary information, without understanding how that data is stored, retained, or used by the provider.
  2. Unregistered internal solutions: Teams may develop AI-enabled workflows, scripts, or models without formal review, documentation, validation, or monitoring processes. In one case, an operations team quietly deployed a small orchestration layer that connected multiple AI agents to automate vendor onboarding decisions. The solution worked efficiently, but no one had assessed error tolerance, override controls, or escalation paths.
  3. Embedded vendor AI: Third-party platforms increasingly incorporate AI functionality. In many cases, organizations adopt these tools without full transparency into model behavior, training data sources, or downstream impacts on decision-making.

In each case above, the risk was driven not primarily by the use of AI itself, but by the absence of visibility and structured oversight.

 

Why Shadow AI Requires Audit Attention

Shadow AI introduces several categories of risk that warrant structured evaluation:

  • Data protection and privacy exposure: Sensitive data may be transmitted to systems outside organizational control.
  • Regulatory exposure: AI-assisted decisions may fall under employment law, financial regulation, healthcare regulation, consumer protection requirements, or emerging AI-specific legislation.
  • Bias and fairness concerns: Models may produce discriminatory or inequitable outcomes if not assessed for bias.
  • Reliability and decision integrity: Generative and predictive models can produce inconsistent or non-deterministic outputs.
  • Reputational risk: Stakeholder trust may erode if AI-driven processes lack transparency or governance.

An audit does not exist to penalize innovation; it exists to provide independent assurance regarding where AI is used, how it is governed, and whether appropriate safeguards are in place.

 

A Structured Approach to Auditing Shadow AI

 

Phase 1: Establish Visibility

Governance begins with awareness. Organizations cannot manage what they cannot see.

Building visibility may involve:

  • Interviews and structured surveys across departments
  • Review of procurement and subscription records
  • Assessment of vendor contracts and embedded AI capabilities
  • Review of cloud usage and automation platforms
  • Review of web logs and outbound traffic patterns to identify access to external AI platforms or high-volume data transfers
  • Targeted inquiry in high-innovation functions (marketing, product, engineering, customer operations)

The outcome of this phase should be a living inventory of AI use cases, tools, models, and workflows across the enterprise.
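
As an illustration of how log review can feed that inventory, the sketch below aggregates outbound proxy traffic against a watchlist of known AI platforms. It is a minimal example under stated assumptions: the CSV column names and the domain list are placeholders, and real proxy log schemas will differ by environment.

```python
# Minimal sketch: seed an AI-use inventory from outbound proxy logs.
# Assumes a CSV export with "user", "destination_host", and "bytes_out"
# columns plus a hand-maintained watchlist of AI domains -- both are
# illustrative assumptions, not a prescribed schema.
import csv
from collections import defaultdict

AI_DOMAIN_WATCHLIST = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def scan_proxy_log(path: str) -> dict:
    """Aggregate per-user traffic to watchlisted AI platforms."""
    hits = defaultdict(lambda: {"requests": 0, "bytes_out": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in AI_DOMAIN_WATCHLIST:
                entry = hits[(row["user"], host)]
                entry["requests"] += 1
                entry["bytes_out"] += int(row["bytes_out"])
    return hits

if __name__ == "__main__":
    for (user, host), stats in scan_proxy_log("proxy_log.csv").items():
        print(f"{user} -> {host}: {stats['requests']} requests, "
              f"{stats['bytes_out']} bytes out")
```

High request counts or large outbound volumes are signals for a follow-up conversation, not conclusions in themselves; the purpose is to surface candidates for the inventory, not to sanction individuals.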

 

Phase 2: Risk Categorization

Not all AI use cases carry the same level of exposure. Risk-based prioritization is essential.

Evaluation criteria may include:

  • Sensitivity of the data processed
  • Degree of automation in decision-making
  • Potential impact of erroneous outputs
  • External customer or public exposure
  • Applicable regulatory jurisdiction
  • Type and complexity of the AI application (e.g., simple prompt-based assistance, single-model deployment, or multi-agent orchestration with autonomous decision loops)

Complex, orchestrated agent environments generally present greater operational and governance risk than single-model implementations, which in turn present greater exposure than limited prompt-based support tools. Understanding this gradient allows audit teams to allocate scrutiny proportionately.

Use cases that materially influence hiring, pricing, lending, healthcare, or customer eligibility decisions require enhanced scrutiny and formal oversight mechanisms.
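
To make this triage operational, the criteria above can be collapsed into a simple scoring heuristic. The sketch below is one hypothetical calibration; the flags, weights, and thresholds are assumptions that each organization would tune to its own risk appetite.

```python
# Hypothetical risk-tiering heuristic for triaging AI use cases.
# Criteria, weights, and thresholds are illustrative, not a standard.
CRITERIA_WEIGHTS = {
    "sensitive_data": 3,      # regulated or confidential data is processed
    "automated_decision": 3,  # outputs act without human review
    "external_exposure": 2,   # customers or the public see the results
    "regulated_domain": 3,    # hiring, pricing, lending, healthcare, etc.
    "multi_agent": 2,         # orchestrated agents vs. single prompt use
}

def risk_tier(use_case: dict) -> str:
    """Map boolean criteria to a review tier via a weighted score."""
    score = sum(w for k, w in CRITERIA_WEIGHTS.items() if use_case.get(k))
    if score >= 8:
        return "enhanced oversight"  # formal review, ongoing monitoring
    if score >= 4:
        return "standard review"     # documented approval, periodic checks
    return "lightweight register"    # inventory entry and usage guidance

# Example: a multi-agent pipeline making automated decisions on
# sensitive data lands in the highest tier.
print(risk_tier({"sensitive_data": True, "automated_decision": True,
                 "multi_agent": True}))  # -> "enhanced oversight"
```

A coarse, transparent model like this has a governance advantage of its own: stakeholders can debate a specific weight rather than an opaque judgment call.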

 

Phase 3: Control Evaluation

Once visibility and risk classification are established, control effectiveness must be assessed.

Areas of evaluation typically include:

  • Data handling and access controls
  • Model validation and testing practices
  • Bias assessment processes
  • Monitoring and performance tracking
  • Incident management protocols
  • Documentation and change management
  • Role clarity and accountability structures

 

The objective is to determine whether controls are:

  1. Designed appropriately
  2. Implemented consistently
  3. Operating effectively
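
Recording the three conclusions separately keeps findings precise, because a control can be well designed yet still fail in operation. The record structure below is a hypothetical sketch, not a prescribed workpaper format; the field names are illustrative.

```python
# Hypothetical workpaper record for the three-part control test.
# Field names are illustrative; adapt to existing audit tooling.
from dataclasses import dataclass, field

@dataclass
class ControlAssessment:
    control: str       # e.g., "Bias assessment process"
    designed: bool     # design addresses the stated risk
    implemented: bool  # control exists where it is supposed to
    operating: bool    # sampled evidence shows it works over time
    evidence: list[str] = field(default_factory=list)

    def conclusion(self) -> str:
        if self.designed and self.implemented and self.operating:
            return "effective"
        if self.designed and self.implemented:
            return "operating deficiency"
        if self.designed:
            return "implementation gap"
        return "design gap"

# A well-designed, implemented control that was not followed in practice:
assessment = ControlAssessment(
    "Model validation", designed=True, implemented=True, operating=False,
    evidence=["Q3 validation runs skipped"],
)
print(assessment.conclusion())  # -> "operating deficiency"
```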

 

Moving from Audit to Governance

The output of a shadow AI audit should not be a static report. It should lead to:

  • Defined ownership for AI oversight
  • Clear escalation pathways
  • Risk-based approval standards (see the sketch after this list)
  • Ongoing monitoring processes
  • Proportionate guardrails that do not stifle productivity
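
Of these, approval standards translate most directly into something explicit and checkable: a mapping from risk tier to required sign-offs and review cadence. The sketch below reuses the hypothetical tier labels from Phase 2; the approver roles and cadences are placeholders, not recommendations.

```python
# Hypothetical mapping from risk tier to approval requirements.
# Tier names match the earlier triage sketch; roles are placeholders.
APPROVAL_STANDARDS = {
    "lightweight register": {
        "approvers": [], "review_cycle_months": 12,
    },
    "standard review": {
        "approvers": ["data_owner"], "review_cycle_months": 6,
    },
    "enhanced oversight": {
        "approvers": ["data_owner", "ai_risk_committee"],
        "review_cycle_months": 3,
    },
}

def required_approvals(tier: str) -> list[str]:
    """Return the sign-offs a use case needs before go-live."""
    return APPROVAL_STANDARDS[tier]["approvers"]

print(required_approvals("enhanced oversight"))
# -> ['data_owner', 'ai_risk_committee']
```

Writing the standard down this plainly also makes proportionality visible: low-risk use is registered rather than blocked, which is exactly what keeps AI activity from moving back underground.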

When governance is overly restrictive, it drives AI further underground. When it is risk-based and transparent, it enables responsible adoption. Shadow AI is not a failure of control; it is a signal that employees see value in AI tools. The responsibility of governance is to channel that energy constructively.

Organizations that succeed will not be those that ban AI. They will be those that build disciplined oversight structures while allowing innovation to thrive within clearly defined boundaries.

 

Author


 

Kathryn Fortino

Managing Director, SOX Compliance and Internal Audit Solutions

KFortino@eliassen.com 

https://www.linkedin.com/in/kathryn-f-389a8235/