Author: Kathryn Fortino, Managing Director, AI Governance & Risk; MIT Certification in AI Strategy
Artificial intelligence is no longer a centralized IT initiative. It has become embedded in daily workflows across functions, often without formal approval, oversight, or documentation. Employees are using AI tools to summarize documents, draft communications, analyze datasets, automate processes, and generate code. In most cases, this behavior is not reckless; it reflects initiative and a desire to improve productivity.
In one organization, for example, a marketing analyst began using a public generative AI tool to draft campaign copy and summarize customer sentiment data. The intent was efficiency. Over time, however, customer data exports were routinely uploaded into the tool to “improve context.” No one paused to ask where that data was stored, how long it was retained, or whether it could be incorporated into broader model training. What began as initiative quietly became exposure.
Productivity gains achieved outside formal governance structures introduce material risk. These risks extend beyond operational concerns into regulatory exposure, data protection, intellectual property, bias, model reliability, and reputational trust.
The goal of governing shadow AI is not to suppress innovation. It is to ensure that AI usage aligns with organizational risk tolerance, regulatory obligations, and core values. When governance is intentional and proportionate, organizations can encourage innovation while maintaining accountability.
Shadow AI typically emerges in a handful of recurring forms. In each case, risk is driven less by the use of AI itself than by the absence of visibility and structured oversight.
Shadow AI introduces several categories of risk, spanning the regulatory, data-protection, intellectual property, and reliability concerns outlined above; each warrants structured evaluation.
An audit does not exist to penalize innovation; it exists to provide independent assurance regarding where AI is used, how it is governed, and whether appropriate safeguards are in place.
Governance begins with awareness. Organizations cannot manage what they cannot see.
Building visibility may involve employee self-disclosure surveys, reviews of network and expense data for signs of AI tool usage, and structured interviews with business functions.
The outcome of this phase should be a living inventory of AI use cases, tools, models, and workflows across the enterprise.
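Where network telemetry is available, it can help seed that inventory. Below is a minimal sketch that flags traffic to known public AI services, assuming a CSV proxy log with user and host columns and a hand-maintained domain watchlist; the log schema, file path, and domain list are all illustrative assumptions rather than a prescribed toolset.

```python
"""Seed a shadow AI inventory from proxy logs -- a minimal sketch.

The log format, file path, and domain list are assumptions for
illustration; substitute your organization's actual telemetry.
"""
import csv
from collections import defaultdict

# Hypothetical sample of public AI service domains to watch for.
AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def seed_inventory(log_path: str) -> dict[str, set[str]]:
    """Map each user to the AI services they were observed contacting.

    Expects a CSV with at least 'user' and 'host' columns
    (an assumed schema for this sketch).
    """
    usage: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in AI_DOMAINS:
                usage[row["user"]].add(host)
    return usage

if __name__ == "__main__":
    for user, services in seed_inventory("proxy_log.csv").items():
        print(f"{user}: {', '.join(sorted(services))}")
```

A hit list like this is only a starting point; it surfaces candidates for follow-up interviews, not conclusions about misuse.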
Not all AI use cases carry the same level of exposure. Risk-based prioritization is essential.
Evaluation criteria may include the sensitivity of the data involved, the degree of autonomy and orchestration in the implementation, and the materiality of the decisions the use case influences.
Complex, orchestrated agent environments generally present greater operational and governance risk than single-model implementations, which in turn present greater exposure than limited prompt-based support tools. Understanding this gradient allows audit teams to allocate scrutiny proportionately.
Use cases that materially influence hiring, pricing, lending, healthcare, or customer eligibility decisions require enhanced scrutiny and formal oversight mechanisms.
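One way to operationalize this gradient is a simple scoring heuristic that combines autonomy, data sensitivity, and decision impact. The factors, weights, and thresholds in the sketch below are assumptions for illustration, not a standard; calibrate them to your organization's risk appetite.

```python
"""Tier AI use cases by exposure -- an illustrative heuristic.

The factors, weights, and thresholds are assumptions for this sketch;
calibrate them against your own risk appetite.
"""
from dataclasses import dataclass

# Higher numbers mean more exposure along each axis.
AUTONOMY = {"prompt_tool": 1, "single_model": 2, "agentic": 3}
DATA = {"public": 0, "internal": 1, "personal": 2, "regulated": 3}

@dataclass
class UseCase:
    name: str
    autonomy: str              # key into AUTONOMY
    data_sensitivity: str      # key into DATA
    affects_eligibility: bool  # hiring, pricing, lending, healthcare

def tier(uc: UseCase) -> str:
    score = AUTONOMY[uc.autonomy] + DATA[uc.data_sensitivity]
    if uc.affects_eligibility:
        score += 3  # material decisions always draw enhanced scrutiny
    if score >= 6:
        return "high"
    return "medium" if score >= 3 else "low"

print(tier(UseCase("resume screening", "single_model", "personal", True)))   # high
print(tier(UseCase("meeting summaries", "prompt_tool", "internal", False)))  # low
```

The point of the heuristic is consistency, not precision: two auditors scoring the same use case should land in the same tier.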
Once visibility and risk classification are established, control effectiveness must be assessed.
Areas of evaluation typically include data handling and retention, access management, human review of outputs, model monitoring, and documentation of approvals.
The objective is to determine whether controls are appropriately designed, operating effectively, and proportionate to the risk of the use case.
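During fieldwork, these determinations can be captured as a structured record per control, separating design effectiveness from operating effectiveness in the usual audit sense. The field names and example below are an assumed layout, not a prescribed workpaper format.

```python
"""Record control-effectiveness conclusions -- an assumed structure.

Field names and rating values are illustrative, not a formal standard.
"""
from dataclasses import dataclass, field

@dataclass
class ControlAssessment:
    control: str                  # e.g., "DLP blocks uploads to public AI tools"
    designed_effectively: bool    # does the control address the risk on paper?
    operating_effectively: bool   # did testing show it works in practice?
    evidence: list[str] = field(default_factory=list)

    @property
    def conclusion(self) -> str:
        if self.designed_effectively and self.operating_effectively:
            return "effective"
        if self.designed_effectively:
            return "design adequate; operating deficiency"
        return "design deficiency"

a = ControlAssessment(
    "DLP blocks uploads to public AI tools",
    designed_effectively=True,
    operating_effectively=False,
    evidence=["sample of 25 uploads: 3 not blocked"],
)
print(a.conclusion)  # design adequate; operating deficiency
```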
The output of a shadow AI audit should not be a static report. It should lead to concrete follow-through: remediation of high-risk gaps, clear approval pathways for legitimate use, and ongoing monitoring of the inventory, as sketched below.
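To keep findings living rather than static, each confirmed gap can be carried forward as a tracked remediation item with an accountable owner and a due date. The shape below is a minimal assumption, consistent with the hypothetical assessment record sketched earlier.

```python
"""Turn audit findings into tracked remediation items -- an assumed shape."""
from dataclasses import dataclass
from datetime import date

@dataclass
class RemediationItem:
    finding: str         # e.g., a failed ControlAssessment conclusion
    owner: str           # an accountable individual, not a team
    due: date
    status: str = "open" # open -> in_progress -> closed

    def overdue(self, today: date) -> bool:
        return self.status != "closed" and today > self.due

item = RemediationItem(
    finding="DLP missed 3 of 25 sampled uploads to public AI tools",
    owner="Head of Security Engineering",
    due=date(2025, 9, 30),
)
print(item.overdue(date(2025, 10, 15)))  # True
```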
When governance is overly restrictive, it drives AI further underground. When it is risk-based and transparent, it enables responsible adoption. Shadow AI is not a failure of control; it is a signal that employees see value in AI tools. The responsibility of governance is to channel that energy constructively.
Organizations that succeed will not be those that ban AI. They will be those that build disciplined oversight structures while allowing innovation to thrive within clearly defined boundaries.
Kathryn Fortino
Managing Director, SOX Compliance and Internal Audit Solutions
https://www.linkedin.com/in/kathryn-f-389a8235/