Platform Agentic: The Compliance Trigger Is Autonomy, Not AI

1. Foreword

TLDR. The conversation about AI and compliance keeps asking the wrong question. The question isn't whether existing frameworks cover AI agents. They do. The question is whether the teams building and deploying agents understand where the obligations attach — and why.

I'm the LangChain ambassador for the Netherlands and the author of Platform Agentic — a book on compliance, governance, and accountability for teams building agentic AI systems.

I've spent 15+ years in software engineering, with 8+ of those as a solution architect in regulated industries: finance, banking, enterprise infrastructure. I've navigated GDPR audits, HIPAA risk assessments, SOC 2 Type II engagements, and PCI-DSS scoping exercises. In none of those engagements did the regulators or auditors ask whether the system was AI. They asked what it did, to whom, with what data, and whether there was a record.

That experience shapes how I read the compliance landscape for agentic systems. The obligations aren't new. The systems are.

This post is the foundational argument. Everything that follows in the Platform Agentic series builds on it.


2. The Load-Bearing Assumption in Every Framework

Every major compliance framework was designed with the same implicit assumption baked in: a human made the decision.

That assumption isn't incidental. It is load-bearing. It determines who is accountable, what records must be kept, what rights the people affected can exercise, and who bears the obligation when something goes wrong.

Look at how that assumption shows up in the text:

  • GDPR grants data subjects the right to meaningful information about automated decisions that affect them — because a human was expected to be accountable for each one.
  • HIPAA's minimum necessary standard assumes a clinician decided which records were relevant to access — because access was assumed to be a deliberate human choice.
  • SOC 2's processing integrity criterion assumes a system did what a human instructed it to do at the time of processing — because the instruction was assumed to come from a human, at a specific moment, with a specific intent.

These assumptions held for decades of enterprise software. Traditional systems wait to be asked. A human triggers an action, the system responds, the interaction ends. The request and response happen in a single moment. Nothing persists. Nothing continues without input.

Agents break this model entirely.


3. What an Agent Actually Is

An agent perceives its environment, reasons about what to do next, acts across one or more external systems, and observes the results — often without a human involved at any step. The components are consistent across implementations: a model that reasons, memory that persists state across steps, tools that interact with external systems, and a feedback loop that continues until the task is complete.

graph LR
    A[Perceive] -->|context| B[Reason]
    B -->|decision| C[Act]
    C -->|result| D[Observe]
    D -->|update| A
    M[(Memory)] --> B
    D --> M
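
To make the loop concrete, here is a minimal sketch in Python. Every name in it is illustrative (the reason function stands in for a model call, and the tools are stubs), not any specific framework's API.

from dataclasses import dataclass, field

@dataclass
class Memory:
    steps: list = field(default_factory=list)  # persists state across iterations

    def write(self, decision, result):
        self.steps.append((decision, result))

def reason(task, memory):
    # Stand-in for the model: decide the next action from the task and memory.
    if not memory.steps:
        return {"action": "fetch_contracts", "args": {}}
    if len(memory.steps) == 1:
        return {"action": "flag_risky_clauses", "args": {"docs": memory.steps[-1][1]}}
    return {"action": "finish", "args": {}}

def run_agent(task, tools, memory, max_steps=10):
    for _ in range(max_steps):
        decision = reason(task, memory)                          # Reason
        if decision["action"] == "finish":
            return [result for _, result in memory.steps]
        result = tools[decision["action"]](**decision["args"])   # Act
        memory.write(decision, result)                           # Observe, update memory
    raise RuntimeError("agent did not finish within the step budget")

tools = {
    "fetch_contracts": lambda: ["contract-001", "contract-002"],
    "flag_risky_clauses": lambda docs: {doc: "indemnity clause" for doc in docs},
}
print(run_agent("review contracts", tools, Memory()))

The structural point is the loop itself: once it starts, every iteration acts without a fresh human decision.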

Consider a document review agent deployed inside a legal operations team. It polls a contract repository on a schedule. It identifies clauses that match a risk pattern. It flags records in a tracking system. It emails a summary to a team inbox. It logs the run. All without a single click. The paralegal sees the output. They weren't involved in any of the steps that produced it.

Every one of those actions has compliance implications. The data access was not explicitly authorized by a human at the moment it happened. The email was sent by a process, not a person. The log entry was written by software acting on its own judgment about what constituted a completed run.

That is the compliance trigger. Not the AI. Not the language model. The autonomy.


4. The Frameworks Were Never Silent

The most common assumption in compliance circles is that AI agents represent a new category — something so novel that existing frameworks simply don't cover them. That assumption is wrong. Acting on it is expensive.

GDPR's automated decision-making provision was written specifically to address systems that act on people's data without human review at each step. It doesn't say "software." It doesn't say "AI." It addresses the condition: automated processing that produces decisions affecting people. An agent that screens job applications, scores loan requests, or flags accounts for review meets that condition whether it uses a language model or a rules engine.

HIPAA's minimum necessary standard applies to any system accessing protected health information. It doesn't require a human at the keyboard. A clinical agent that retrieves patient records to generate a discharge summary is accessing protected health information. The standard applies.

SOC 2's processing integrity criterion doesn't ask who processed the data. It asks whether the processing was complete, accurate, and authorized. An agent that processes customer data outside the scope it was authorized for fails that criterion — regardless of whether a human or a model made the call.

PCI-DSS governs any system that stores, processes, or transmits cardholder data. An agent that handles billing inquiries and routes card information to an external model provider has just placed that provider in PCI scope. The standard applies to the agent's architecture choices, not just its intent.
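
One concrete scoping control, sketched below: screen text for candidate card numbers before anything leaves for the external provider. This is illustrative only; the regex and function names are assumptions, a real control would also cover tool outputs, logs, and agent memory, and redaction alone doesn't settle the scoping question.

import re

# Candidate card numbers: 13-19 digits, optionally separated by spaces or hyphens.
PAN_CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def luhn_valid(digits: str) -> bool:
    # Luhn checksum, used to filter out number-like strings that aren't PANs.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def redact_pans(text: str) -> str:
    def mask(match):
        digits = re.sub(r"[ -]", "", match.group())
        return "[REDACTED-PAN]" if luhn_valid(digits) else match.group()
    return PAN_CANDIDATE.sub(mask, text)

prompt = "Customer says card 4111 1111 1111 1111 was double-charged."
print(redact_pans(prompt))
# -> Customer says card [REDACTED-PAN] was double-charged.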

The problem was never that the frameworks were silent. They were written around behaviors and conditions — not around technologies. When "AI agent" doesn't appear anywhere in the text, the natural read is that nothing applies. That natural read is wrong.

The gap isn't in the frameworks. It isn't in the teams. It's in the translation — and that's what the Platform Agentic series is for.


5. Where the Translation Breaks Down

Three specific properties of agentic systems break the translation most severely.

Autonomous action without per-step authorization

In traditional software, a human initiates each significant action. The access log entry has a user attached. The decision has an approver. The email has a sender. In an agentic system, the actions happen because the agent reasoned that they should — not because a human authorized each one.

This breaks the accountability model that every framework assumes. When a regulator asks "who authorized this?", the answer cannot be "the model decided." The deploying organization is accountable for every action the agent takes. The log must show that. The governance structure must enforce it.
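
What that looks like in a record is concrete. Below is an illustrative audit entry for a single agent action; the field names are assumptions, not any standard schema. The point is that accountable_party and authorization resolve to the deploying organization and a standing policy, never to the model.

import json, uuid
from datetime import datetime, timezone

def audit_record(agent_id, action, target, policy_id, run_id):
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"type": "agent", "id": agent_id, "run_id": run_id},
        "accountable_party": "legal-ops@acme.example",  # the deploying org, always
        "authorization": {"policy": policy_id},         # the standing grant invoked
        "action": action,
        "target": target,
    }

record = audit_record(
    agent_id="doc-review-agent-v3",
    action="repository.read",
    target="contracts/msa-2024-117.pdf",
    policy_id="POL-AGENT-DOCREVIEW-001",
    run_id="run-8842",
)
print(json.dumps(record, indent=2))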

Multi-step chains that cross multiple frameworks simultaneously

A single agent run can touch patient data, payment information, and EU personal data in the same thirty-second window. HIPAA, PCI-DSS, and GDPR don't apply sequentially — they apply simultaneously, to the same action, at the same moment. Most compliance programs are organized by framework. Agentic systems don't respect that organization.
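
A minimal sketch of what "simultaneously" means in practice: derive the frameworks that attach to a single action from the data categories it touches. The category names and the mapping are illustrative.

FRAMEWORKS_BY_CATEGORY = {
    "phi": {"HIPAA"},              # protected health information
    "cardholder_data": {"PCI-DSS"},
    "eu_personal_data": {"GDPR"},
}

def applicable_frameworks(categories_touched):
    frameworks = set()
    for category in categories_touched:
        frameworks |= FRAMEWORKS_BY_CATEGORY.get(category, set())
    return frameworks

# One tool call in one run can trigger all three at the same moment:
step = {"tool": "billing_lookup",
        "touches": ["phi", "cardholder_data", "eu_personal_data"]}
print(applicable_frameworks(step["touches"]))
# -> {'HIPAA', 'PCI-DSS', 'GDPR'} (set order varies)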

Classification drift over time

A document review agent classified as limited risk at deployment can cross into high-risk territory six months later — not because anyone decided to change its classification, but because someone added a tool that lets it initiate contract amendments. The classification doesn't update automatically. The obligations do.
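
One way to catch drift is mechanical: diff the agent's current tool manifest against the manifest its classification was based on. A sketch, with illustrative tool names and an assumed list of high-risk capabilities:

CLASSIFIED_WITH = {"read_contracts", "flag_clauses", "send_summary_email"}
HIGH_RISK_CAPABILITIES = {"initiate_amendment", "execute_payment"}

def check_drift(current_tools: set[str]) -> list[str]:
    findings = []
    added = current_tools - CLASSIFIED_WITH
    if added:
        findings.append(f"tools added since classification: {sorted(added)}")
    if added & HIGH_RISK_CAPABILITIES:
        findings.append("added tool crosses a high-risk threshold: re-classify")
    return findings

# Six months later someone wires in contract amendments:
current = CLASSIFIED_WITH | {"initiate_amendment"}
for finding in check_drift(current):
    print(finding)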


6. Six Frameworks, One Translation Problem

The Platform Agentic book maps the compliance landscape for agentic systems across six frameworks:

graph LR
    subgraph Data["Data & Privacy"]
        GDPR
    end
    subgraph AI["AI-Specific"]
        EUAI["EU AI Act"]
    end
    subgraph Security["Security & Trust"]
        SOC2["SOC 2"]
    end
    subgraph Sector["Sector-Specific"]
        HIPAA
        PCI["PCI-DSS"]
    end
    subgraph Governance["Risk Governance"]
        NIST["NIST AI RMF"]
    end

  • GDPR — the foundational data protection law. Its automated decision-making provision (Article 22) is the most directly relevant clause in any privacy regulation for agent deployments.
  • EU AI Act — the first binding AI-specific regulation in the world. Risk tiers and mandatory obligations for high-risk deployments. High-risk obligations become enforceable on 2 August 2026.
  • SOC 2 — the de facto security and trust standard for B2B software. Now being extended to cover AI vendors and AI-powered features.
  • HIPAA — mandatory for any organization where agents touch protected health information.
  • NIST AI RMF — the governance framework for AI risk management. Sets the vocabulary for how organizations map, measure, and manage AI risk.
  • PCI-DSS — mandatory where agents interact with payment flows.

Each framework applies to behaviors and conditions. Agents produce those behaviors and conditions continuously, autonomously, and often across multiple framework jurisdictions simultaneously.

That is the translation problem. The question is not "does the law apply?" It always did. The question is "where exactly does it attach, and what does it require when it does?"


7. What This Means in Practice

The practical consequence of the compliance trigger being autonomy — not AI — is that compliance obligations attach earlier in the system's lifecycle than most teams realize.

They attach at design time, when the agent's capabilities and tool permissions are defined. An agent given broad access that it will never be allowed to use is already a compliance gap — because the access exists, even if it hasn't been exercised.
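
A minimal expression of that design-time control is an explicit tool manifest that fails closed. The names below are illustrative:

ALLOWED_TOOLS = {
    "read_contracts": {"scope": "contracts/*", "write": False},
    "flag_clauses":   {"scope": "tracker/*",   "write": True},
}

def authorize_tool_call(tool_name: str) -> dict:
    grant = ALLOWED_TOOLS.get(tool_name)
    if grant is None:
        # Fail closed: access that isn't explicitly granted doesn't exist.
        raise PermissionError(f"tool '{tool_name}' is not in the agent's manifest")
    return grant

print(authorize_tool_call("read_contracts"))   # returns the grant
# authorize_tool_call("delete_repository")     # would raise PermissionError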

They attach at deployment time, when the system's risk classification is established. If that classification doesn't reflect what the system can actually do, the documentation is wrong, and the obligations that follow from the wrong classification are wrong with it.

They attach at runtime, on every tool call, every data retrieval, every action taken without explicit human authorization. Each of those events is a compliance event under one or more of the six frameworks above.
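
A sketch of what treating each tool call as a compliance event can look like: a wrapper that emits a structured event around every invocation. Here emit_event is a stand-in for whatever sink the deployment actually uses (audit log, SIEM, event bus), and all names are assumptions.

import functools, time

def emit_event(payload: dict):
    print(payload)  # placeholder sink

def compliance_event(tool_name: str, data_categories: list[str]):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            started = time.time()
            emit_event({"tool": tool_name, "categories": data_categories,
                        "phase": "start", "ts": started})
            result = fn(*args, **kwargs)
            emit_event({"tool": tool_name, "phase": "end",
                        "duration_s": round(time.time() - started, 3)})
            return result
        return wrapper
    return decorator

@compliance_event("retrieve_patient_record", ["phi"])
def retrieve_patient_record(patient_id: str) -> dict:
    return {"patient_id": patient_id, "summary": "..."}

retrieve_patient_record("p-123")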

And they attach over time, as the system changes. A model upgrade, a new tool integration, an expanded deployment context — any of these can shift the system's risk profile without anyone formally deciding to change it.

The compliance obligation doesn't begin when the regulator asks. It begins when the agent is deployed.


8. Closing

The goal of this post — and the Platform Agentic series — is not to make compliance harder. It's to make the translation visible, so that the teams building agentic systems can do the work once, correctly, and build systems that are defensible when the questions come.

The questions will come. Regulators are already asking them. Auditors are already asking them. Enterprise buyers are already asking them.

The answer is not to wait until they're directed at you. The answer is to understand where the obligations attach, build systems that satisfy them, and be able to show your work.

The autonomy is the trigger. The frameworks are already loaded. The translation is the work.

