The Dutch government spent five years defending SyRI, a risk-profiling algorithm that flagged welfare recipients in low-income neighborhoods for fraud investigation. It took a court ruling to shut it down. The Australian government spent nearly a billion dollars on Robodebt, an automated debt-recovery scheme that unlawfully raised debts against hundreds of thousands of welfare recipients. A Royal Commission was required to investigate. Both countries had governance frameworks. Both had compliance processes. Both failed to stop systems that caused demonstrable harm.

Denmark will repeat this pattern. Not because Danish governance is weaker, but because the structure of Danish public institutions makes AI governance fundamentally impossible at scale. The problem isn't policy. The problem is power.

Power, Not Process

I've spent eighteen years managing programs in regulated environments: pharma, financial services, critical infrastructure. I learned early that regulation does not create safety. Power clarity does. When you have clear authority, you can stop a process. When all you have is process, you have negotiation. When you have negotiation, you have delay, committee deliberation, and systems that stay live because nobody has the authority to kill them.

The EU AI Act is comprehensive. It defines high-risk systems, technical documentation requirements, human oversight obligations, and audit trails. Article 11 requires extensive documentation. Article 9 requires continuous risk management. Article 14 mandates human oversight. These are concrete requirements. But they all assume something the public sector does not have: a single person with authority to enforce them.

In a private company, the Chief Information Security Officer or a Chief AI Officer can say "no." They can halt a deployment. They can require additional testing. They have budget authority and they answer to a board with commercial consequences for failure. In the public sector, there is no equivalent authority. Every decision is distributed across strategy, legal, IT, operations, and the department that uses the system. Nobody owns consequences. Therefore, nobody owns decisions.

The Competence-Authority Gap

Here's what I observe in Danish municipalities and government bodies: the people who understand the systems lack decision authority, and the people with decision authority lack technical competence.

A municipality's sagsbehandling (case-handling) system that uses AI to route social benefits applications is typically understood by the IT team that manages it, the operations manager who oversees caseworkers, and whoever bought the system five years ago. The IT team knows it's high-risk under Annex III. They understand that the Act's transparency obligations mean citizens must be told that AI is involved in decisions about them. They know the system logs decisions inadequately. But they have no authority to take the system offline or require fixes. That requires budget approval, which requires departmental consensus, which requires legal review, which requires political approval.

Meanwhile, the people with actual authority are generalists: the mayor, the chief administrative officer, the compliance officer. They understand governance in abstract terms. They do not understand the system. They do not know that it falls short of the Act's transparency obligations. They cannot evaluate whether the automated decision-making architecture poses unacceptable risk. So they defer to "the experts," meaning legal and compliance, only to find that those experts lack technical depth and cannot make the decision either.

This is the competence-authority gap. Those who understand the systems lack decision authority. Those who possess authority cannot understand the systems. The result is indecision disguised as governance, and indecision never stops a running system.

The Pilot-to-Production Failure Point

This gap becomes catastrophic at one specific moment: the transition from pilot to production. This is the point where governance fails, deployment after deployment.

In a pilot phase, an AI system runs under controlled conditions. Usage is limited. The organization is monitoring it carefully. When problems emerge—bias in hiring screening, demographic disparities in credit decisions, false positives in fraud detection—they can be corrected or the pilot can be halted. The decision is contained within a pilot team.

But pilots must eventually transition to production. The system must go live at scale. This requires a different kind of decision: the organization must commit to operating this system, with all its imperfections and residual risks, at scale. This decision cannot be made by the pilot team. It requires infrastructure support, operations readiness, legal clearance, compliance certification, and political approval.

At this transition point, responsibility fragments. Strategy says "this is our digital transformation priority." IT says "we can operate it." Legal says "document the risks and we'll review them." Operations says "we'll follow procedures." And the caseworkers who will interact with it daily say nothing, because they have no authority to say anything. The system goes live because nobody had authority to stop it.

When SyRI caused harm, it was not because the Dutch government lacked governance. It was because the system had been piloted, reviewed, and approved. Then it went live. Then it profiled thousands of people in the neighborhoods it was aimed at. And by that time, stopping it required legal action by citizens and civil-society groups. The governance framework had been satisfied. The harm happened anyway.

Cost Pressure as Driver

The urgency that pushes systems from pilot to production is rarely technical. It's financial. Danish municipalities are under budget pressure. A sagsbehandling AI promises to reduce caseworker time by 20 percent. Savings on that scale translate directly into staffing decisions. So the system is adopted not because it's proven safe, but because it's financially necessary.

When cost pressure is the driver, safety becomes a constraint on the business case, not a prerequisite. The system goes live "as soon as it's defensible," not "when we're confident it's safe." This is not unique to Denmark. It's universal in public sector IT. And the system that was hard to question during the pilot becomes impossible to stop in production, because stopping it means admitting the budget projections were wrong.

This is the context in which AI governance must operate: systems adopted for cost reasons, deployed across jurisdictions that lack unified authority, operated by staff without power to halt them, and overseen by governance structures that lack technical depth to understand failure modes.

Three Required Changes

The Danish public sector will not solve this without structural change. Here's what would actually be required:

First, name a single accountable authority for operational AI. Not a committee, not a task force. One person in each public body who owns every AI system used in that organization. That person has authority to halt deployment. That person is accountable for violations of Article 11 and Article 14. That person cannot delegate this authority. When something goes wrong, that person faces consequences. In a private company, this is a standard structure. In the public sector, it would be revolutionary.

Second, define explicit stop-criteria before deployment. Before a system goes live, the organization must document what outcomes would justify halting it. What level of bias is unacceptable? What error rate triggers review? What demographic disparities require investigation? These criteria must be measurable and pre-agreed, not debated after harm occurs. This is standard in pharmaceutical development. It's standard in clinical trials. It's standard in financial audits. It's nearly non-existent in public sector AI. (A sketch of what such criteria could look like in practice follows the third change below.)

Third, separate political ambition from operational veto power. A mayor can want an AI system for strategic reasons. A chief administrative officer can commit to digitization. But they must not have authority to override the operational AI owner's decision to halt a system. This is the critical check. Otherwise, cost pressure and political momentum override safety decisions at the moment of maximum harm.
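
To make the second change concrete, here is a minimal sketch, in Python, of what pre-agreed, machine-checkable stop-criteria could look like. The metric names, thresholds, and the monitoring run that feeds them are illustrative assumptions, not figures from the AI Act or from any Danish authority; the point is only that the numbers exist in writing, owned by the accountable AI authority, before the system goes live.

    # Illustrative stop-criteria for a case-routing AI system.
    # All metric names and thresholds below are assumptions for this sketch,
    # not requirements from the EU AI Act or any Danish public body.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StopCriterion:
        name: str          # metric reported by the monitoring job
        threshold: float   # breach level agreed before go-live
        description: str

    STOP_CRITERIA = [
        StopCriterion("appeal_overturn_rate", 0.05,
                      "Share of routed decisions later overturned on appeal"),
        StopCriterion("demographic_parity_gap", 0.10,
                      "Largest gap in outcomes between citizen groups"),
        StopCriterion("unlogged_decision_rate", 0.01,
                      "Share of decisions with no usable audit-trail entry"),
    ]

    def breached_criteria(metrics: dict) -> list:
        """Return every pre-agreed criterion the latest monitoring run breaches."""
        # A missing metric counts as a breach: fail safe, not silent.
        return [c for c in STOP_CRITERIA
                if metrics.get(c.name, float("inf")) > c.threshold]

    # Example: figures from a hypothetical monthly monitoring run.
    latest = {"appeal_overturn_rate": 0.08,
              "demographic_parity_gap": 0.04,
              "unlogged_decision_rate": 0.02}

    for criterion in breached_criteria(latest):
        # A breach triggers the accountable owner's halt review automatically;
        # it is not an invitation to renegotiate the threshold.
        print(f"HALT REVIEW: {criterion.name} = {latest[criterion.name]:.2f} "
              f"exceeds agreed limit {criterion.threshold:.2f}")

The code is not the point. The design choice is that a breached criterion triggers the accountable owner's halt authority automatically, instead of opening a negotiation about whether the threshold still applies.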

What Will Actually Happen

Without these changes, Danish public AI will follow the pattern established in the Netherlands and Australia. Some systems will be deployed without adequate documentation. Some will cause demonstrable harm. Journalists will investigate. Citizens will sue. Courts will eventually halt the systems. The organization will apologize and commission a retrospective review. The governance framework will be found to have been "followed technically" but insufficient in practice. And the organization will then deploy the next system with slightly different wording in the governance checklist.

The EU AI Act is good regulation. It's specific. It's technically sound. But it cannot create authority where none exists. It cannot turn scattered governance committees into a clear line of decision-making power. It can mandate human oversight, but the mandate is hollow when no human has the authority to act on what that oversight reveals.

Regulation does not create safety. Power clarity does. Denmark has good regulation. It lacks power clarity. Until that changes, AI governance in the public sector will remain theater with high-stakes consequences.