AI is becoming infrastructure. Choose your dependencies

AI sovereignty is not a binary or a product. It is a question of control, dependency, and what organizations are willing to give away by default.

El Lissitzky, Beat the Whites with the Red Wedge, 1919. Wikimedia Commons, public domain.

AI infrastructure is being decided now.

Most organizations are not making those decisions deliberately. They are accepting defaults: default providers, default APIs, default deployment models, default trade-offs around data, continuity, and control.

That is what makes the current moment important. AI is becoming infrastructure, and infrastructure choices rarely stay technical for long. They become operational, legal, economic, and eventually political.

“Sovereign AI” is often framed as a choice between cloud and on-premises. I think that framing is too shallow. The more useful question is what control you retain, under what contract, and whether you can inspect, modify, and move the systems you depend on.

AI is becoming infrastructure

We have seen this pattern before.

In the early days of the web, most organizations did not think much about who controlled their email servers, their domain registrars, or the routing of their traffic. It felt like a technical detail. Then it became a political one. Governments discovered that critical communications ran through foreign infrastructure. Hospitals found patient records subject to foreign law. Journalists learned that the platforms they depended on could be throttled, shut down, or repurposed without their knowledge.

AI is now following the same arc, only faster.

The models that summarize legal briefs, triage patient intake forms, draft policy memos, and power customer support are not neutral tools sitting on a shelf. They are services: hosted somewhere, owned by someone, governed by terms that can change. Every inference request is a handshake with someone else’s infrastructure. Every prompt is a data point leaving your control.

The question of who controls AI infrastructure is not just technical. It is a governance question. And most organizations are not answering it consciously.

Sovereignty is about control

“Sovereign AI” means different things depending on who you ask.

For governments and public institutions, it is about jurisdiction and capability. Can critical systems run on infrastructure that is physically and legally under their control?

For organizations, it is mainly about compliance, continuity, and accountability. Can they audit outputs, meet regulatory requirements, and keep operating when providers change terms, pricing, or product direction?

For individuals and communities, it is more personal. It is the right to understand, modify, and not be unnecessarily surveilled by the tools they use.

These are related concerns, but they are not identical. A solution designed for enterprise compliance does not automatically restore individual freedom. A system that is developer-friendly does not automatically satisfy public-sector or national-security constraints. Any serious conversation about AI sovereignty has to hold these different levels together.

The hidden cost of dependency

When you call a model, you are not just using a tool. You are entering a dependency.

When inference is outsourced to a third-party API, the cost is not just measured in dollars per token. What you give away is harder to price, and harder to recover.

Data. Every prompt sent to a hosted model is data leaving your control. Even with contractual protections, you are trusting that the provider’s interests align with yours. In regulated industries, that trust has a compliance cost. In sensitive contexts, it may simply not be acceptable.

Control. Model behavior can change between versions, sometimes silently. Rate limits, context windows, safety behavior, and pricing can all shift without warning. A system prompt that worked yesterday may not work tomorrow. You are building on someone else’s roadmap.
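One practical mitigation is to pin the model version you expect and fail loudly when the provider reports something else. A minimal sketch, assuming the provider echoes back a version string in each response (the field and version names here are illustrative, not any specific provider's API):

```python
# Sketch: detect silent model drift by pinning the version you expect
# and checking what the provider actually reports on each response.
# The dict below stands in for a real API response object.

PINNED_MODEL = "example-model-2024-06-01"  # hypothetical version string

def check_model_version(response: dict) -> None:
    served = response.get("model", "<missing>")
    if served != PINNED_MODEL:
        # Fail loudly instead of silently absorbing behavior changes.
        raise RuntimeError(
            f"Model drift: pinned {PINNED_MODEL!r}, provider served {served!r}"
        )

# A response matching the pin passes silently; anything else raises.
check_model_version({"model": "example-model-2024-06-01"})
```

This does not prevent a provider from changing behavior within a pinned version, but it turns one class of silent change into a visible failure you can act on.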

Continuity. Cloud services get deprecated. APIs get versioned out of existence. Providers get acquired. If a core operational capability depends on a commercial relationship, you are one contract negotiation away from disruption.

Accountability. When an AI system produces a harmful output, responsibility becomes blurry if the underlying model is a black box hosted by a third party. You cannot inspect the weights, audit the training data, or fully trace what changed.

Hosted AI services have their place. The problem is not using them. The problem is using them by default, without being clear about the trade-offs.

A useful example is federated learning: instead of centralizing sensitive data first, you move training or updates across local environments and aggregate what is learned. Frameworks like Flower have made that approach more accessible across different ML stacks and use cases, especially where privacy and data locality matter from the start. It does not solve sovereignty on its own, but it shows that “ship the data to the model provider” is not the only architecture available.
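The aggregation step at the heart of that approach can be sketched in a few lines. This is a toy version of federated averaging, not Flower's actual API; the sites, sizes, and weight vectors are illustrative:

```python
# Minimal federated-averaging sketch: each site trains locally and
# shares only its model weights; a server averages them, weighted by
# how much data each site holds. No raw data ever leaves a site.

def fed_avg(site_weights, site_sizes):
    """Weighted average of per-site weight vectors."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    aggregated = [0.0] * dim
    for weights, size in zip(site_weights, site_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += w * (size / total)
    return aggregated

# Three sites with different dataset sizes contribute local updates.
sites = [[0.2, 0.4], [0.6, 0.8], [0.4, 0.0]]
sizes = [100, 300, 100]
global_weights = fed_avg(sites, sizes)  # → [0.48, 0.56]
```

The site with the most data pulls the average hardest, but the only thing it transmits is a weight vector, not the records it trained on.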

We explored a small version of this question in earlier work on phishing detection, asking whether useful models could be trained without centralizing browsing data first.

What real sovereignty requires

Sovereignty is not binary. It is not “cloud versus on-premises.”

The meaningful question is not where the compute runs. It is what control you retain, and under what contract.

You can run a closed proprietary model on your own hardware and still have very little sovereignty if you cannot audit it, modify it, or move away from it. Conversely, you can run open-weight models on external infrastructure and still retain meaningful control if the model is auditable, the data stays in the right jurisdiction, and the workload can be moved.

What matters is not purity. It is optionality and control.

In practice, real sovereignty requires four things.

On-premises capability. Not every workload has to run this way, but critical workloads should be able to run fully within your own infrastructure when needed.

No critical runtime dependency. Your core inference capability should not require a live connection to a third party. Real systems will still use external tools and data sources, but the base layer should not need to call home.

Auditability. You need to be able to answer basic questions: what model is running, with what configuration, under what policy, producing what outputs? Without that, compliance and accountability remain weak.

Interoperability. Portability matters, but interoperability matters more. The goal is not just to move a model from one place to another. It is to have a stack where components can be swapped without rebuilding everything else.
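The last two requirements can be made concrete as a thin interface: every backend answers the same audit questions, and callers never touch a specific provider directly, so components can be swapped without rebuilding everything else. A minimal sketch, with all names and the placeholder "inference" invented for illustration:

```python
# Sketch of a swappable inference layer with built-in auditability.
# Every call goes through one interface and leaves an audit record
# answering: what model ran, with what configuration, on what input,
# producing what output.

from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class AuditRecord:
    model: str
    config: dict
    prompt: str
    output: str

class InferenceBackend(Protocol):
    def generate(self, prompt: str) -> AuditRecord: ...

@dataclass
class LocalBackend:
    """Stand-in for an on-premises model; a real one would load weights."""
    model: str = "local-open-model"  # illustrative name
    config: dict = field(default_factory=lambda: {"temperature": 0.0})

    def generate(self, prompt: str) -> AuditRecord:
        output = f"[local answer to: {prompt}]"  # placeholder inference
        return AuditRecord(self.model, self.config, prompt, output)

audit_log: list[AuditRecord] = []

def run(backend: InferenceBackend, prompt: str) -> str:
    record = backend.generate(prompt)
    audit_log.append(record)  # every call is traceable after the fact
    return record.output

answer = run(LocalBackend(), "summarize this brief")
```

Swapping a hosted backend for a local one is then a one-line change at the call site, and the audit log has the same shape either way.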

You can run open models on external infrastructure and still retain control, if the contract preserves control. Some providers, such as Nebius, now emphasize region choice and data residency to make that model more viable for regulated or sensitive workloads. Cloud platforms like Azure are also bundling governance, observability, and responsible-AI controls into their AI platforms because enterprise trust now demands it.

The question is not where you run. It is what you give up to run there.

Open source as a prerequisite

Open source is not an ideology. It is a control mechanism.

If you cannot inspect, modify, and run a system independently, you do not control it. You are relying on a promise.

A lot of organizations talk about transparency while deploying systems they cannot meaningfully inspect or operate independently. “Responsible AI” commitments from vendors are treated as a substitute for actual control. They are not.

Open weights and open code make control possible. They let you run the system yourself, inspect behavior, and adapt it when needed. Without that, transparency remains a claim rather than a property of the system.

The same logic applies to inference runtimes. Sovereignty is not only about model weights. It is also about how models are packaged, distributed, and run. That is why tools like llamafile matter. They make it easier to run open-source LLMs locally from a single executable, with more control over where inference happens and less dependence on a live provider connection. That does not, by itself, solve sovereignty, but it moves a critical layer of the stack away from default cloud dependency.
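Concretely, a llamafile run in server mode exposes an OpenAI-compatible HTTP endpoint on localhost (port 8080 by default), so existing client code can point at local inference instead of a hosted API. A sketch, assuming such a server is already running; the port, path, and payload fields reflect common defaults but should be checked against your own setup:

```python
# Sketch: query a locally running llamafile server instead of a
# hosted API. Inference never leaves the machine; there is no API
# key and no third-party dependency at runtime.

import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Payload in the OpenAI-compatible chat-completion format."""
    return {
        "model": "local",  # placeholder; the local server runs one model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local(prompt: str) -> str:
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Note that ask_local only works against a llamafile (or similar) server already running locally; the point is that the client side is ordinary, portable code with no provider lock-in baked into it.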

What still needs to be built

The sovereign AI stack does not fully exist yet. What exists today are fragments.

Fleet management for on-premises models is still immature. Access control and auditability are inconsistent. Observability is often shallow. Governance remains too manual.

The direction is clear. The systems are not.

That does not make sovereignty unrealistic. It just means that, today, it is still something to be built rather than something to be bought off the shelf.

An open question

Sovereign AI is not something you buy. It is something you build through a series of choices.

Right now, most of those choices are still being made by default.

Convenience is winning. Control is being traded away.

AI infrastructure is being decided now. The question is not which provider you use.

It is what you are willing to depend on.