I've watched organizations approach AI governance three ways, and all three fail predictably. The first group waits. They wait for the EU AI Act to be fully clarified. They wait for their industry body to release guidelines. They wait for their parent company to mandate a program. The second group hires. They bring in a Chief AI Officer or an AI governance lead, give them a mandate to "build governance," and somehow expect structure to emerge. The third group writes. They commission a policy document from legal, distribute it to the company, and declare victory.

None of these work because they skip what actually matters: understanding what you have, who owns it, and what you're trying to protect. Policies without inventory are fiction. Leadership without ownership is theater. Governance without a plan is just meetings.

The Four-Step Starter Kit

Here's what actually needs to happen before you hire someone or wait for guidance. These four steps take 4-6 weeks and require no external consultants. They're the foundation that makes everything else possible.

Step One: AI System Inventory

You don't know what you have. Most organizations think they're running 3-5 AI systems. In practice, they're running 15-50. The rest are invisible. They live in CRM lead scoring, email optimization, recruitment screening, fraud detection, chatbots, recommendation engines, and demand forecasting tools. They're embedded in enterprise software. They're built by individual teams without central knowledge.

The inventory isn't a spreadsheet. It's a structured workshop where representatives from every department (HR, Finance, Operations, Product, Sales, Marketing, IT, Legal) answer three questions for each system they know about. What does it do? Who uses it? What data does it use? From these conversations, you discover shadow AI. You learn about critical systems nobody realized were critical. You uncover contractual risks where a vendor's product is doing something with AI they never told you about.

This step produces three outputs: a complete register of known systems, a list of shadow AI candidates to investigate further, and agreement on who owns each system. That agreement is worth more than the inventory itself. It's the first time the organization has acknowledged that AI systems have owners, and those owners are accountable.
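
If it helps to make the register concrete, here's a rough sketch of what one entry might hold, mirroring the three workshop questions plus the owner. The structure and field names are mine, not a standard; a shared document works just as well, as long as every system ends up with the same fields filled in.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """One entry in the AI system register (field names are illustrative)."""
    name: str                 # e.g. "CRM lead scoring"
    purpose: str              # What does it do?
    users: list[str]          # Who uses it?
    data_sources: list[str]   # What data does it use?
    owner: str                # The accountable person agreed in the workshop
    shadow_ai_candidate: bool = False   # Flag for further investigation


# One record captured during the workshop (hypothetical example)
register = [
    AISystemRecord(
        name="Recruitment screening",
        purpose="Ranks incoming applications against job requirements",
        users=["HR", "Hiring managers"],
        data_sources=["Applicant CVs", "Historical hiring decisions"],
        owner="Head of Talent Acquisition",
    ),
]
```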

Step Two: Risk Classification

Once you know what you have, you classify it. The EU AI Act provides the structure for this: unacceptable risk, meaning prohibited practices (social scoring, emotion recognition in the workplace); high-risk (hiring, lending, critical infrastructure); limited risk (chatbots that must disclose they're AI); and minimal risk (spam filters). This isn't academic. It determines what you have to do next.

The classification requires the same group from step one. You work through each system against the criteria. Does this system make autonomous decisions that significantly affect human rights? Does it influence credit, employment, education, or essential services? Is it used to detect or prevent crime? The criteria are objective. The conversations are not. Teams will debate whether their system is truly high-risk, and those debates are valuable. They surface assumptions and force clarity about actual impact.

This step produces two outputs: a risk assessment for every system, and clarity on which systems require mandatory compliance actions. For prohibited systems, the answer is straightforward: stop using them. For high-risk systems, you now know that you need human oversight, documentation, risk management, bias monitoring, and audit trails. For limited risk, you need transparency. For minimal risk, you can monitor and adjust.
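
If you want the tiers and their consequences written down where nobody can argue with them, a minimal sketch looks like this. The tier names and the actions are just the ones described above; nothing beyond that is assumed.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g. social scoring
    HIGH = "high"               # e.g. hiring, lending, critical infrastructure
    LIMITED = "limited"         # e.g. chatbots that must disclose they're AI
    MINIMAL = "minimal"         # e.g. spam filters


# What each tier obliges you to do, per the discussion above
REQUIRED_ACTIONS = {
    RiskTier.PROHIBITED: ["Stop using the system"],
    RiskTier.HIGH: ["Human oversight", "Documentation", "Risk management",
                    "Bias monitoring", "Audit trails"],
    RiskTier.LIMITED: ["Disclose that users are interacting with AI"],
    RiskTier.MINIMAL: ["Monitor and adjust"],
}
```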

Step Three: Assign Ownership

This is the step where most governance efforts break down. Every AI system needs a single owner. Not a committee. Not a task force. One person with authority and accountability. That person owns the day-to-day operation, escalates problems, and answers for compliance.

The owner doesn't have to be technical. They need to understand the system, understand the risks, and understand the business impact. In most organizations, this is the product manager, the operations leader, or the department head who uses the system most. You're not reassigning people. You're making explicit what's already implicit, and you're giving those people a channel to the governance structure.

For high-risk systems, you also assign an overseer. Someone from a different function who acts as an independent check. For a hiring system, the owner might be the HR director and the overseer someone from Legal. For a credit system, the owner might sit in Risk Management with Compliance overseeing. The overseer doesn't run the system. They audit it. They have independent authority to escalate concerns and recommend changes.
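
The owner/overseer split is easy to state and easy to erode quietly, so it's worth checking mechanically once the assignments exist. A small illustrative check, assuming you record which function each person belongs to:

```python
def check_ownership(system: str, risk_tier: str,
                    owner_function: str, overseer_function: str | None) -> list[str]:
    """Flag gaps in the owner/overseer model described above (illustrative)."""
    issues = []
    if risk_tier == "high":
        if overseer_function is None:
            issues.append(f"{system}: high-risk system has no independent overseer")
        elif overseer_function == owner_function:
            issues.append(f"{system}: overseer must come from a different function")
    return issues


# A hiring system owned by HR and overseen by Legal passes the check
print(check_ownership("Recruitment screening", "high", "HR", "Legal"))   # -> []
```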

Step Four: 90-Day Plan

With inventory, classification, and ownership in place, you now have clarity about what needs to happen. The 90-day plan is simple: for each high-risk system, document what compliance means, who's responsible, and when it will be done. For prohibited systems, set a decommissioning date. For shadow AI, set a deadline for final assessment or removal.
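
In practice the plan can be as plain as a list of dated commitments, one per system. A sketch of what that might look like; the systems, names, and dates here are hypothetical.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class PlanItem:
    """One dated commitment in the 90-day plan (illustrative structure)."""
    system: str
    action: str        # what "compliant" means for this system
    responsible: str   # the owner assigned in step three
    due: date          # falls inside the 90-day window


# Hypothetical entries; systems, names, and dates are placeholders
plan = [
    PlanItem("Recruitment screening", "Add human review of automated rejections",
             "Head of Talent Acquisition", date(2025, 9, 30)),
    PlanItem("Social scoring pilot", "Decommission", "IT Director", date(2025, 8, 15)),
    PlanItem("Marketing copy generator", "Assess as shadow AI or remove",
             "Head of Marketing", date(2025, 8, 31)),
]
```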

This is not a detailed project plan. It's a commitment. You're saying: we know what we have, we know what we need to do, and we're going to do it in the next three months. You're also saying: if something changes, we have a framework to assess it.

Why This Order Matters

This sequence is important. Inventory first, because you can't classify what you don't know. Classification second, because you can't assign ownership without understanding risk. Ownership third, because a plan without owners is a wish list. Plan last, because a plan without inventory and ownership is just guesswork.

Most governance efforts start at the wrong end. They start with a plan ("we need a framework") or with ownership ("hire a Chief AI Officer") or with policy ("let's get legal to write something"). All of them fail because they're trying to build without a foundation.

This starter kit is your foundation. It takes 4-6 weeks. It costs almost nothing. It doesn't require external expertise or regulatory guidance. And when you're finished, you'll have the clarity and accountability that makes everything else possible. You'll know what you have. You'll know what it does. You'll know who owns it. And you'll have a realistic plan to make it compliant.

The program doesn't start when you hire someone. It starts when you understand what you've already built.