Here’s a number that should stop any martech leader mid-sentence. Frans Riemersma’s April analysis found that 90.3% of companies report using AI agents, but only 23.3% have them in production, and only 6.3% have fully integrated AI into their marketing strategy.
That is an 84-point gap between experimentation and governance. And the platform most teams trust to close it was never designed for the job.
Why is your AI agent making commitments that no one can keep?
Your customer data platform (CDP) works. A unified customer profile. Each touchpoint feeds a single record. The promise of a decade of martech investment, finally fulfilled.
So why is your AI agent offering a personalized service tier that requires legal approval and was never cleared for external communication?
The CDP had seen it all. The agent was authorized to access that data. What it lacked was permission to act in this specific way.
Data access and decision-making authority are two different things. The martech stack only solved one.
The reflex is to patch at the tool level. Add guardrails to the marketing automation platform. Add a review step to the CRM. Configure the chat agent to escalate certain topics.
Each patch addresses a single symptom in a single system. Three months later, another agent in another system makes another unauthorized commitment. The patchwork grows. Consistency never arrives.
There is a second reason why tool-level fixes fail. Even when a single system correctly governs a decision, the outcome crosses system boundaries and loses its authority. The receiving system rechecks, reinterprets, or reauthorizes the decision before acting. A governed output from your marketing platform does not arrive in your CRM as something the CRM can simply trust.
The hidden cost is not only in producing the governed decision. It’s in rebuilding trust before the next system can act.
What gap was the CDP never meant to fill?
A CDP governs access to data. It answers one question: who can see this record?
Decision-making governance answers another question: given this record, what is AI allowed to do with it?
This distinction becomes more important, not less.
The latest federal guidance on trustworthy AI goes beyond access and visibility to operational concerns: explainability, deterministic behavior where necessary, fail-safe operation, and measurable governance across the lifecycle. The emerging standard is not just clean data.
It is governable action.
Most of the AI governance market focuses on the Manage layer: monitoring drift, flagging anomalies, and generating reports after deployment. But NIST’s AI Risk Management Framework doesn’t start there. It starts with Govern and Map.
Before being able to manage AI risk, you need to define who owns the system, what it is allowed to do, and where the boundaries lie. Most organizations have invested heavily in managing the first problem and almost nothing in designing the second.
The practical model is simple. Permissions define what the agent can do autonomously. Obligations define what it must do whenever specific signals appear. Prohibitions define hard stops no agent can cross, regardless of optimization pressure.
The difference between vague and governable is the difference between “helping customers with refunds” and “approving refunds up to $250 for customers with 90+ days of tenure and no prior fraud reports.” The first relies on the AI’s judgment. The second is binary: either the conditions hold or they don’t. It can be audited. It can be enforced.
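To make the contrast concrete, here is a minimal sketch of that refund rule as a binary permission check. All names and thresholds are illustrative, taken from the example above; no model judgment is involved, so every decision is a loggable pass/fail.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    tenure_days: int
    fraud_reports: int

# Illustrative hard limits from the refund example; in practice these
# would live in a governed policy store, not in application code.
MAX_AUTONOMOUS_REFUND = 250.00
MIN_TENURE_DAYS = 90

def may_auto_approve_refund(customer: Customer, amount: float) -> bool:
    """Binary permission check: either the conditions hold or they don't.

    Because no AI judgment is involved, the result can be logged and
    audited as a simple pass/fail record.
    """
    return (
        amount <= MAX_AUTONOMOUS_REFUND
        and customer.tenure_days >= MIN_TENURE_DAYS
        and customer.fraud_reports == 0
    )

print(may_auto_approve_refund(Customer(tenure_days=120, fraud_reports=0), 200.0))  # True
print(may_auto_approve_refund(Customer(tenure_days=45, fraud_reports=0), 200.0))   # False
```

Nothing here is clever, and that is the point: “helping customers with refunds” cannot be expressed this way, while the governable version can.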

Why is decision governance the next infrastructure priority?
Industry analysis has already mapped the path from applications to infrastructure, and it points to decisioning as a potential standalone service: a consumer of context rather than a provider of it.
This framing is correct. When decision governance is a shared service rather than something built into each tool separately, every agent in the stack queries the same rules. An update propagates to every system at once. Legal approves a limit once, and every agent inherits the approval.
This is also how you solve the trust problem between systems. When every agent queries a shared authority layer, a decision retains its legitimacy at the boundary. The next system does not need to re-evaluate it. Authority is centralized and the record is portable.
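A sketch of what “centralized authority, portable record” could look like in practice: one shared service evaluates the rules and returns a signed decision record, and a receiving system verifies the signature instead of re-deciding. The policy names, key handling, and record shape are all hypothetical, shown only to make the architecture tangible.

```python
import hashlib
import hmac
import json

# Hypothetical shared policy store: every agent queries these same rules,
# so updating a limit here propagates to all systems at once.
POLICIES = {
    "refund.auto_approve": {"max_amount": 250.00, "min_tenure_days": 90},
}

SIGNING_KEY = b"demo-key"  # stand-in for a real, properly managed signing key

def authorize(action: str, context: dict) -> dict:
    """Evaluate an action against the shared rules and return a signed,
    portable decision record."""
    rule = POLICIES[action]
    allowed = (
        context["amount"] <= rule["max_amount"]
        and context["tenure_days"] >= rule["min_tenure_days"]
    )
    record = {"action": action, "context": context, "allowed": allowed}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """A receiving system (e.g. the CRM) checks the signature rather than
    re-interpreting the decision, so authority survives the boundary."""
    unsigned = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)
```

The design choice worth noticing: the CRM never loads the refund rules at all. It trusts the signed record, which is exactly what keeps a governed decision governed after it crosses a system boundary.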
CDPs have won the data unification war. That problem is largely solved. The next architectural problem is decision unification through a sovereign operational layer, which I call the AI Operating System for Brand Experience (BXAIOS). Until every agent queries the same rules about what it is allowed to do, you have unified data powering ungoverned decisions.
The second half of the problem has a name: decision architecture. It’s the layer that tells the application layer what to enforce, and that translates executives’ risk appetite into machine-speed behavior. Without it, each new AI deployment risks becoming another silent cost center instead of a sustainable source of leverage.
And these silent costs have been accumulating for longer than most teams realize.