I was working with a financial services company last year. They told me they had two AI systems: a chatbot and a recommendation engine. Both approved, both documented. I spent a week interviewing their teams. By day three, we'd found twelve more. A CRM that scored leads by likelihood to convert. A fraud detection system embedded in their transaction processor. An email optimization tool that determined when to send messages. A recruiting platform that screened CVs. A demand forecasting tool. An internal tool that recommended which customers should be offered credit increases. A vendor platform they paid for that made decisions about data access. And more.
None of these were secret. They weren't rogue projects or shadow IT in the traditional sense. They just weren't called AI. The finance team didn't think their fraud detection was AI. It was a tool. The recruiter didn't think their CV screening was AI. It was software. The CRM vendor's marketing materials never said "AI-powered." They said "intelligent lead scoring."
This is shadow AI, and it's the biggest compliance risk most organizations face under the EU AI Act.
Why Shadow AI Happens
Shadow AI emerges because we've all been trained to think of AI as something special. Something new. Something deliberate. But the reality is that the line between a rule-based decision system and an AI system is blurry and increasingly irrelevant. If a system uses statistical models, machine learning, neural networks, or any other method that learns from data to make predictions or decisions, it's AI under the regulation. Whether anyone markets it as such is beside the point.
Shadow AI also happens because accountability is diffuse. The CRM vendor's platform has one owner. The implementation has another. The department using it has a third. When a vendor updates their tool and adds an ML-based feature, often nobody in procurement notices, and often nobody in IT notices either. It just gets used one day, and nobody registers that anything changed.
Shadow AI thrives in decentralized organizations. When different business units buy their own software, when product teams build their own features, when each department is responsible for their own tooling, you get fragmentation. You get shadow AI. You also get shadow AI in organizations that are overly centralized. When there's no transparency about what different departments are doing, shadow systems accumulate in silos.
Why Shadow AI Matters Under the EU AI Act
Under Article 26, deployers of high-risk AI systems carry their own compliance obligations. The deployer is the natural or legal person, public authority, agency, or other body using an AI system under its authority in a professional context. If you're running a high-risk system and you don't know it exists, you can't meet those obligations. You can't implement human oversight if you don't know someone's using the system to make decisions about hiring. You can't maintain documentation if you don't know the system exists. You can't audit for bias if you aren't monitoring the system. You can't explain decisions if you can't explain the system.
The regulatory risk is real, but it's not the biggest risk. The bigger risk is that shadow AI often handles the highest-stakes decisions. Fraud detection. Credit decisions. Hiring. CV screening. These are the systems that touch people's lives. These are the systems that need oversight, documentation, bias monitoring, and audit trails. These are exactly the systems that tend to disappear into shadow IT because nobody realized they were AI.
The reputational risk is also real. If your organization makes a decision that harms someone, and that decision was made by a system nobody knew existed, the story isn't "we made a mistake." The story is "they didn't even know their own systems were running."
How to Find Shadow AI
Shadow AI discovery requires structure and honesty. Here's the process that actually works.
First, do a department sweep. Talk to every function that makes decisions: HR, Finance, Operations, Sales, Marketing, Customer Service, Risk, Fraud, IT, Product. Ask them: what tools do you use to make decisions or predictions? What software decides whether something happens? Ask specifically about tools that learn. The conversation matters more than the list. People often don't realize what they're describing is AI until you ask the right question.
Second, review contracts. Look at software contracts from the past three years. Look at SaaS agreements. Look at vendor statements of work. Search for the words "machine learning," "AI," "statistical," "predictive," "algorithm," "learning." You'll find systems that were implemented and forgotten. You'll find features added to existing contracts that nobody noticed. This is where many high-risk systems hide.
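To make that sweep concrete, here's a minimal sketch of a keyword scan in Python. It assumes your contracts have already been exported as plain-text files under a contracts/ directory; that path and the keyword list are illustrative assumptions, not a standard, and PDF contracts would need a text-extraction step first.

```python
from pathlib import Path

# Keywords that tend to signal ML-based functionality in vendor paperwork.
# This list is a starting point, not an authoritative taxonomy.
KEYWORDS = [
    "machine learning", "artificial intelligence", "ai-powered",
    "statistical", "predictive", "algorithm", "learning",
    "scoring", "intelligent",
]

def scan_contracts(contracts_dir: str = "contracts") -> dict[str, list[str]]:
    """Return a map of contract file -> keywords found in it."""
    hits: dict[str, list[str]] = {}
    for path in Path(contracts_dir).glob("**/*.txt"):
        text = path.read_text(errors="ignore").lower()
        found = [kw for kw in KEYWORDS if kw in text]
        if found:
            hits[str(path)] = found
    return hits

if __name__ == "__main__":
    for contract, keywords in scan_contracts().items():
        print(f"{contract}: review for AI features ({', '.join(keywords)})")
```

A keyword hit isn't a verdict; it's a flag that a human should read the contract. The point is to surface the documents worth a second look.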
Third, do a data flow analysis. Which systems have automated access to sensitive data? Which systems push data into critical business processes? CRM systems. Hiring systems. Finance systems. Payment systems. These are containers for shadow AI. A system might be described as "contact management software," but if it's doing lead scoring or opportunity prioritization, that's AI.
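The same triage can be expressed as a filter over a data-flow inventory. The sketch below is a toy example: the system names, data categories, and the "automated decisions" flag are all my illustrative assumptions, and a real inventory would come from your architecture documentation or asset register rather than a hardcoded dictionary.

```python
# Hypothetical inventory: which systems touch which data, and whether they
# make automated predictions or decisions. All entries are illustrative.
SYSTEMS = {
    "crm": {"data": {"customer_profiles", "sales_history"}, "automated_decisions": True},
    "hiring_platform": {"data": {"cv_data"}, "automated_decisions": True},
    "payroll": {"data": {"salary_data"}, "automated_decisions": False},
    "email_tool": {"data": {"customer_profiles"}, "automated_decisions": True},
}

SENSITIVE = {"customer_profiles", "cv_data", "salary_data"}

def shadow_ai_candidates(systems: dict) -> list[str]:
    """Systems that both touch sensitive data and decide things automatically."""
    return [
        name for name, info in systems.items()
        if info["automated_decisions"] and info["data"] & SENSITIVE
    ]

print(shadow_ai_candidates(SYSTEMS))  # ['crm', 'hiring_platform', 'email_tool']
```

Anything this filter flags is a candidate, not a conclusion: the next step is the conversation with the team that runs it.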
Fourth, build a register and maintain it. Don't do this as a one-time exercise. Do it as an ongoing discovery process. Build a register of all AI systems. Mark what you've verified and what you've simply been told about. Mark what's high-risk and what's minimal risk. Mark what has an owner and what doesn't. Then maintain it. Every quarter, add new systems. Every year, review for changes.
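For illustration, a register entry might look something like this in Python. The field names, risk labels, and the 90-day review cadence are assumptions drawn from the process above, not a prescribed schema; a well-maintained spreadsheet serves the same purpose.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the AI system register. Fields are illustrative."""
    name: str
    owner: str | None   # None keeps a missing owner visible, not hidden
    risk_level: str     # e.g. "high", "limited", "minimal" per your triage
    verified: bool      # True if inspected, False if only reported to you
    last_reviewed: date

def overdue_for_review(register: list[AISystemRecord],
                       max_age_days: int = 90) -> list[AISystemRecord]:
    """Entries that have slipped past the quarterly review cadence."""
    today = date.today()
    return [r for r in register if (today - r.last_reviewed).days > max_age_days]

# Example register with made-up entries.
register = [
    AISystemRecord("fraud-detection", owner="risk-team", risk_level="high",
                   verified=True, last_reviewed=date(2024, 1, 15)),
    AISystemRecord("cv-screening", owner=None, risk_level="high",
                   verified=False, last_reviewed=date(2023, 6, 1)),
]

for record in overdue_for_review(register):
    print(f"{record.name}: review overdue (owner: {record.owner or 'UNASSIGNED'})")
```

The useful property of a structure like this is that gaps are explicit: an unverified entry or an unassigned owner shows up as a field value you can query, not as an absence nobody notices.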
The Real Cost of Shadow AI
I worked with an organization that discovered bias in their credit decisioning system by accident. A journalist asked about a potential pattern in their lending decisions. They didn't even know the system was biased because they weren't monitoring it. They weren't monitoring it because they didn't know it was AI. By the time they discovered the problem, the damage was done.
Shadow AI isn't just a compliance problem. It's a governance problem. It's a risk management problem. It's an accountability problem.
The first step in building real AI governance is knowing what you have. That's harder than most organizations want to admit, but it's also the most important step. Until you know what AI systems are running, you can't govern them. Until you govern them, you can't be sure they're working as intended. Until you're sure they're working as intended, you're taking risks you don't understand.
The AI Governance Starter Kit begins with inventory. For good reason. You can't protect what you don't see.