Tony Bain (Zeaware CEO)
February 20, 2026

From Interoperability to Agency: A Zeaware View on Practical AI Sovereignty

This article responds to and builds on "Why AI Sovereignty Depends on Interoperability Standards" by Eileen Donahoe and Konstantinos Komaitis, published by Tech Policy Press on 17 February 2026.

AI sovereignty is no longer a fringe concern reserved for national security agencies or technology ministries. As artificial intelligence becomes embedded in public services, regulatory decision-making, and critical infrastructure, sovereignty has become a practical question: who ultimately controls how these systems behave, how they evolve, and how they can be changed?

Donahoe and Komaitis argue persuasively that sovereignty in the AI era is exercised less through ownership of models and more through interoperable standards, particularly at the interfaces, orchestration layers, and governance mechanisms that shape how systems operate.

Where this discussion can be extended is in how interoperability becomes operational rather than aspirational.

Interoperability is not about language

A subtle assumption often appears in AI discussions: because large language models operate in shared natural languages, particularly English, interoperability is largely implicit.

In practice, this is not the case.

Models differ in ways that matter deeply for governance and control, including tool invocation semantics, refusal behaviour, confidence expression, logging formats, memory handling, and agent execution patterns.

Without a platform layer that absorbs these differences, they become embedded dependencies. A system may technically be able to switch models, but doing so can alter behaviour, auditability, and risk posture in ways that are difficult to predict or control.

English is a shared interface for users, not a control surface for systems.
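To make this concrete, here is a minimal sketch of what an absorbing platform layer can look like: a provider-neutral result envelope behind a common adapter interface. The names below are illustrative assumptions for this article, not Zeaware Avalon APIs.

```python
# Illustrative sketch only: a provider-neutral envelope and an adapter
# interface that absorb per-model differences. Hypothetical names, not
# Zeaware Avalon APIs.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class ModelResult:
    text: str                                       # the answer, normalised
    refused: bool = False                           # explicit, comparable refusal signal
    tool_calls: list = field(default_factory=list)  # normalised tool-call records
    raw: dict = field(default_factory=dict)         # provider payload, retained for audit


class ModelAdapter(Protocol):
    """Every provider is wrapped to emit the same envelope."""
    def complete(self, prompt: str) -> ModelResult: ...


class VendorAAdapter:
    """One adapter per provider translates its quirks into the envelope."""
    def complete(self, prompt: str) -> ModelResult:
        # Placeholder for the real vendor call; the translation is the point.
        response = {"output": "placeholder answer", "declined": False}
        return ModelResult(text=response["output"],
                           refused=response["declined"],
                           raw=response)
```

Because refusal signals and tool-call records are normalised at this boundary, swapping a model means writing a new adapter, not rewriting the system above it.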

Sovereignty is the ability to change models without changing the system

True AI sovereignty does not require exclusive ownership of models. It requires the ability to replace them without losing control.

This is the design principle behind Zeaware Avalon.

Zeaware Avalon abstracts model-specific behaviour behind a consistent orchestration and governance layer so that, from a user and operator perspective, prompts remain stable, agent logic remains stable, policy constraints remain stable, and audit and decision records remain comparable.

The model becomes a replaceable execution component, not the defining feature of the system.
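As a hedged sketch, assuming a hypothetical system definition: the model identifier is the only field a migration touches, while prompts and policy constraints stay fixed.

```python
# Hypothetical sketch: the model is one swappable field in the system
# definition; prompts, agent logic, and policy live outside it.
from dataclasses import dataclass


@dataclass(frozen=True)
class SystemDefinition:
    prompts: dict        # stable across model changes
    policies: tuple      # stable across model changes
    model_id: str        # the only field a migration touches


def migrate_model(system: SystemDefinition, new_model_id: str) -> SystemDefinition:
    """Swapping the model leaves every other element untouched."""
    return SystemDefinition(prompts=system.prompts,
                            policies=system.policies,
                            model_id=new_model_id)


before = SystemDefinition(prompts={"triage": "Classify the request."},
                          policies=("no-external-data-sources",),
                          model_id="vendor-a/model-1")
after = migrate_model(before, "vendor-b/model-2")
assert before.prompts == after.prompts and before.policies == after.policies
```

The assertion is the sovereignty test in miniature: a model change that alters anything else signals an embedded dependency.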

This matters for governments and regulated organisations because procurement strategies change, risk tolerances evolve, regulatory obligations shift, and geopolitical or supply-chain constraints emerge.

If changing a model requires rewriting prompts, re-authoring workflows, or rebuilding governance processes, sovereignty has already been compromised.

Governance must survive model changes

One of the risks highlighted implicitly in the interoperability debate is that governance is often embedded inside models rather than enforced above them.

This includes safety rules embedded in weights, opaque refusal logic, undocumented prioritisation of values, and uninspectable moderation thresholds.

Zeaware Avalon deliberately moves governance up the stack into explicit, enforceable system controls. These include policy-aligned constraints, deterministic validation rules, structured decision checkpoints, human-in-the-loop escalation, and replayable execution histories.

Because these controls sit outside the model, they persist regardless of which model is used.
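A minimal sketch of that idea, assuming the platform exposes a comparable confidence score for each output; the rule names and thresholds are illustrative, not Avalon's actual controls.

```python
# Governance enforced above the model: the same deterministic checks run
# regardless of which model produced the output. Illustrative names only.
from dataclasses import dataclass


@dataclass
class Decision:
    output: str
    approved: bool
    reason: str


def govern(output: str, banned_terms: set, confidence: float,
           review_threshold: float = 0.8) -> Decision:
    """Deterministic validation first, human escalation second."""
    for term in banned_terms:
        if term in output.lower():
            return Decision(output, False, f"blocked: contains '{term}'")
    if confidence < review_threshold:
        return Decision(output, False, "escalated: routed to human review")
    return Decision(output, True, "passed deterministic checks")


# The same rule set applies whether the output came from model A or model B.
print(govern("Approve the claim.", {"password"}, confidence=0.91))
```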

Interoperability without governance persistence is only surface-level sovereignty.

Agent orchestration is the real control plane

As AI systems become increasingly agentic, invoking tools, accessing data, and acting autonomously, the strategic question shifts.

Who controls the orchestration logic?

From a Zeaware Avalon perspective, orchestration is treated as a first-class, inspectable, governed layer.

Agents follow defined execution paths. Tool access is policy-gated. Decisions can pause, escalate, or be overridden. Execution histories can be audited, reviewed, and replayed.
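A hedged sketch of what policy-gated, replayable tool access can look like; the allow-list and log format here are assumptions for illustration.

```python
# Minimal sketch: the gate, not the model, decides whether a tool call
# proceeds, and every attempt lands in an append-only, replayable record.
import json
import time

AUDIT_LOG = []                        # append-only record, replayable later
ALLOWED_TOOLS = {"search_internal"}   # policy: an explicit allow-list


def invoke_tool(agent_id: str, tool: str, args: dict) -> dict:
    entry = {"ts": time.time(), "agent": agent_id, "tool": tool, "args": args}
    if tool not in ALLOWED_TOOLS:
        entry["outcome"] = "denied"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"tool '{tool}' is not policy-approved")
    entry["outcome"] = "allowed"
    AUDIT_LOG.append(entry)
    return {"status": "ok"}           # placeholder for the real tool call


invoke_tool("agent-7", "search_internal", {"query": "policy 12"})
print(json.dumps(AUDIT_LOG, indent=2))
```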

This ensures behavioural authority remains with the system owner, whether a government department or a regulated enterprise, rather than being delegated to a model provider by default.

Interoperability as experienced sovereignty

A useful test of sovereignty is not theoretical compliance, but lived operational experience.

In practice, interoperable sovereignty means models can be trialled and compared safely, regulatory changes do not require replatforming, better models can be adopted without re-authoring systems, and vendors can be exited without losing institutional knowledge.
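To illustrate the first of these points, a trial becomes a loop over adapters rather than a re-engineering effort; the harness below is a hypothetical sketch reusing the shared-interface idea from earlier.

```python
# Hypothetical harness: candidates sit behind one interface, so a trial is
# a loop over adapters, not a rebuild. Names are illustrative only.
class StubAdapter:
    """Stand-in for a wrapped provider; real adapters share this interface."""
    def __init__(self, label: str):
        self.label = label

    def complete(self, prompt: str) -> str:
        return f"{self.label}: {prompt}"


def trial(adapters: dict, prompts: list) -> dict:
    """Run identical prompts through every candidate and collect results."""
    return {name: [a.complete(p) for p in prompts]
            for name, a in adapters.items()}


results = trial({"vendor-a": StubAdapter("A"), "vendor-b": StubAdapter("B")},
                ["Summarise the incident report."])
```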

This is what sovereignty looks like when it is experienced, not just asserted.

A complementary conclusion

The Tech Policy Press article is right. Open standards, modularity, and coordination matter. But standards alone do not deliver sovereignty.

Sovereignty emerges when model differences are abstracted, governance is externalised from models, orchestration is controlled and inspectable, and exit is operationally real rather than merely contractual.

Platforms that make interoperability practical rather than theoretical are where AI sovereignty is actually realised.

Sovereignty is no longer something governments must wait for global consensus to provide. It is something they can design into systems today, deliberately, visibly, and on their own terms.

© 2026, Zeaware Pty Ltd or its affiliates. All rights reserved.