Sri Rang

Platform Agentic: Five Things Every Compliance Framework Agrees On

@srirangan  ·  srirangan.net  ·  LangChain Ambassador NL

The Cross-Framework Pattern

Six frameworks. Different words. Same five instructions.

GDPR. EU AI Act. HIPAA. SOC 2. PCI-DSS. NIST AI RMF.

Written by different authors, for different industries, in different jurisdictions, at different moments.

And yet — five things they all agree on.

Why this matters.

The five principles give teams a shared vocabulary.

Business and legal teams use them as governance requirements.

Developer teams use them as architectural constraints.

Without a shared vocabulary, the two sides talk past each other — and compliance gaps open in the space between.

The five principles.

mindmap
  root((5 Principles))
    Transparency
      GDPR
      EU AI Act
      SOC 2
    Data Minimization
      GDPR
      HIPAA
      PCI-DSS
    Human Oversight
      GDPR
      EU AI Act
      HIPAA
    Audit Trails
      HIPAA
      SOC 2
      PCI-DSS
      NIST AI RMF
    Accountability
      GDPR
      HIPAA
      EU AI Act

1. Transparency

Every consequential decision must be explainable.

To the person it affects.

To the regulator who asks.

To the auditor who reviews it.

Where it comes from.

GDPR Article 22 — the right to meaningful information about automated decisions. Specific to the individual case. Actionable enough to support a challenge.

EU AI Act — interpretability for high-risk systems is a design requirement, not a documentation requirement. A log read after the fact does not satisfy this.

SOC 2 — processing integrity requires demonstrating the full chain: inputs, tool calls, reasoning steps, output. Not just the final decision.

Why agents make this harder.

Traditional software: deterministic, traceable. Same input → same output. Trace the execution → see every step.

LLM-based agents: neither by default.

The same input may produce a different output on a different run.

The intermediate reasoning exists nowhere unless it was deliberately captured.

The design obligation.

The transparency layer must be built before the agent goes live — not retrofitted after the first subject access request arrives.

An agent making consequential decisions cannot be a black box. Opacity is a design choice. Transparency requires choosing differently.
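What deliberate capture can look like, as a minimal Python sketch: a trace object that records every step of a run and exports the full chain for a reviewer. The class and field names here are illustrative, not from any specific framework.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentStep:
    """One captured step in an agent run: what ran, with what, and what came back."""
    step_type: str   # "llm_call", "tool_call", "decision"
    name: str        # model or tool name
    inputs: dict
    output: str

@dataclass
class DecisionTrace:
    """The full chain behind one consequential decision."""
    run_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    started_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    steps: list = field(default_factory=list)

    def record(self, step: AgentStep) -> None:
        self.steps.append(step)

    def export(self) -> str:
        """Serialize the chain for a subject access request or an auditor."""
        return json.dumps(asdict(self), indent=2)

trace = DecisionTrace()
trace.record(AgentStep("tool_call", "credit_lookup", {"customer_id": "c-123"}, "score=612"))
trace.record(AgentStep("llm_call", "decision-model", {"prompt_version": "v3"}, "decline: score below threshold"))
print(trace.export())
```

The point is the shape, not the library: every intermediate step exists somewhere other than the model's context window, keyed to one run.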

2. Data Minimization

GDPR calls it data minimization.

HIPAA calls it the minimum necessary standard.

PCI-DSS encodes it as scope restriction.

The instruction is the same.

The default is too much.

An agent with broad tool access and a large context window will pull in far more data than any individual task requires.

Not because it was designed to be reckless.

Because doing so is easier than scoping access precisely.

The predictable failure mode.

  1. Developer grants broad access during testing to avoid friction
  2. Access is never narrowed before deployment
  3. Agent goes live with read permissions across the entire CRM, patient record system, or payment stack
  4. No individual request is obviously wrong
  5. The aggregate exposure is substantial

The engineering and compliance consequence is the same.

flowchart LR
    A["Load everything\n(maximum context)"]
    B["Task-scoped retrieval\n(minimum necessary)"]
    A --> C["Unpredictable outputs\nHard to eval\nSlow to debug"]
    B --> D["Predictable outputs\nEasy to test\nFast root cause"]

Minimization is not just a privacy requirement. It is a reliability requirement.
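Task-scoped retrieval can be sketched in a few lines, assuming a per-task allow-list of fields. All names and records here are hypothetical:

```python
FULL_RECORD = {
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "ssn": "***-**-1234",
    "purchase_history": ["order-1", "order-2"],
    "support_notes": "prefers email",
}

# Allow-list per task: the agent only ever sees the fields this task needs.
TASK_SCOPES = {
    "send_shipping_update": {"name", "email"},
    "summarize_purchases": {"name", "purchase_history"},
}

def scoped_lookup(record: dict, task: str) -> dict:
    """Return only the fields the current task is authorized to read."""
    allowed = TASK_SCOPES.get(task)
    if allowed is None:
        raise PermissionError(f"No data scope defined for task {task!r}")
    return {k: v for k, v in record.items() if k in allowed}

# The SSN and support notes never enter the agent's context.
print(scoped_lookup(FULL_RECORD, "send_shipping_update"))
# → {'name': 'Ada Lovelace', 'email': 'ada@example.com'}
```

A task with no declared scope fails loudly instead of defaulting to everything, which is exactly the inversion of the failure mode above.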

3. Human Oversight

The agent cannot be the last word on decisions that affect people's lives, livelihoods, or rights.

Three frameworks. Same requirement.

GDPR — right to obtain human intervention and contest automated decisions that significantly affect you.

EU AI Act — high-risk systems must be designed to allow effective human oversight. Not theatrical — measurable.

HIPAA — clinical judgment remains in the loop. The agent is a tool. Not the clinician.

What "effective oversight" actually requires.

The human must be able to understand what the loop produced — not just observe its outputs.

A review step that exists only in the system prompt does not count.

A human who rubber-stamps every agent output without capacity to evaluate it does not count.

Three patterns. Each is both a compliance requirement and a failure prevention tool.

Pattern           | Compliance function     | Engineering function
Confirmation gate | EU AI Act Art. 14       | Stops unauthorized actions
Review queue      | GDPR Art. 22            | Catches hallucinated outputs
Escalation path   | HIPAA clinical judgment | Routes uncertain decisions

A human checkpoint is a compliance requirement. Not a design preference.
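The confirmation gate is the simplest of the three patterns to sketch: the agent proposes an action, and nothing executes until a reviewer returns an explicit verdict. The `reviewer` policy below is a made-up example, not a recommended threshold:

```python
def confirmation_gate(action: str, payload: dict, approve) -> dict:
    """Hold a consequential action until a human explicitly approves it.

    `approve` is whatever surfaces the request to a reviewer
    (a queue, a UI, a chat message) and returns their verdict.
    """
    verdict = approve(action, payload)
    if verdict is not True:
        return {"status": "rejected", "action": action}
    return {"status": "executed", "action": action}

# An illustrative reviewer policy: block large refunds, allow the rest.
def reviewer(action: str, payload: dict) -> bool:
    if action == "issue_refund" and payload["amount"] > 500:
        return False
    return True

print(confirmation_gate("issue_refund", {"amount": 900}, reviewer))
# → {'status': 'rejected', 'action': 'issue_refund'}
```

The structural point: the gate sits in code, between the agent's proposal and the side effect, not in the system prompt.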

4. Audit Trails

If it is not logged, it did not happen.

As far as any auditor, regulator, or court is concerned.

Four frameworks. One artifact.

HIPAA — record and examine activity in systems containing PHI. Both verbs matter: the log must exist, and it must be reviewed.

SOC 2 — evidence that access was restricted and data was processed only as authorized.

PCI-DSS — all access to cardholder data logged, tamper-protected, retained for 12 months.

NIST AI RMF — you cannot quantify what the agent got wrong if you did not record what it did.

What must be in every record.

  • Precise UTC timestamp
  • Agent identity and version
  • Invoking user identity
  • Session / run ID — ties all steps in a multi-step run together
  • Action type and tool called
  • Reference to data accessed
  • Decision or output produced
  • Any human approval — who, when
  • Outcome
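The field list above maps directly onto a record type. A minimal Python sketch, with illustrative names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AuditRecord:
    """One immutable entry in the agent audit trail."""
    timestamp_utc: str          # precise UTC timestamp
    agent_id: str               # agent identity
    agent_version: str
    user_id: str                # invoking user identity
    run_id: str                 # ties all steps of a multi-step run together
    action_type: str
    tool_called: Optional[str]
    data_ref: Optional[str]     # reference to data accessed, never the data itself
    output: str                 # decision or output produced
    approved_by: Optional[str]  # who approved, if a human gate fired
    approved_at: Optional[str]
    outcome: str

record = AuditRecord(
    timestamp_utc="2024-06-01T12:00:00Z",
    agent_id="claims-triage-agent",
    agent_version="1.4.2",
    user_id="u-9",
    run_id="run-42",
    action_type="tool_call",
    tool_called="crm_lookup",
    data_ref="crm:contact/123",
    output="escalated to review queue",
    approved_by=None,
    approved_at=None,
    outcome="success",
)
```

Grouping records by `run_id` is what turns a pile of entries back into a reconstructible run.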

The run ID deserves emphasis.

Without it: dozens of tool calls are unconnected events.

With it: they form a coherent record of a single agent run — reconstructible end to end.

The audit trail that HIPAA, SOC 2, and PCI-DSS require is the same artifact you need to reproduce and fix a production incident. One requirement. Two beneficiaries.

5. Accountability

Someone must own the agent's actions.

Not the model provider.

Not the framework vendor.

The organization that deployed it.

Three frameworks. Same direction.

GDPR — the controller determines the purpose and means of processing. Delegating processing does not delegate accountability.

HIPAA — covered entity obligations follow the entity, not the technology it uses.

EU AI Act — deployer responsibilities attach to the organization that puts a high-risk system into operation.

"We use a third-party model" is not a defense.

The deploying organization is in control —

because legally, it is.

Accountability is structural. Two requirements make it concrete.

Named owners. Not a team, not a shared inbox — a person with a role that includes agent oversight and the authority to act on it. Identified before the agent is deployed.

An audit trail that reflects ownership. A log that records "model returned decision X" without capturing what instructions it was given and what controls were in place is not an accountability record.
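One way to make both requirements concrete, as a sketch: pair each logged decision with the deployment configuration that produced it. Every name and field below is illustrative.

```python
# A deployment manifest: a named owner, the exact instructions in force,
# and the controls that were active. Written before the agent goes live.
deployment = {
    "agent": "claims-triage-agent",
    "owner": "j.doe@example.com",   # a named person, not a team alias
    "prompt_version": "2024-06-v3",
    "controls": ["confirmation_gate", "review_queue", "pii_filter"],
}

def accountability_record(deployment: dict, decision: str) -> dict:
    """Bind a decision to who owned the agent and how it was configured."""
    return {
        "decision": decision,
        "owner": deployment["owner"],
        "prompt_version": deployment["prompt_version"],
        "controls_active": deployment["controls"],
    }

print(accountability_record(deployment, "claim routed to manual review"))
```

With this in place, "model returned decision X" always arrives alongside who was responsible and what constraints applied.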

Accountability is not a cultural value. It is a structural property of the system.

Closing

Five principles. Six frameworks. One shared vocabulary.

graph LR
    L["Part 1 — For the Business\nFrameworks & Obligations"]
    M["5 Principles\n―――――――――――――\nTransparency\nData Minimization\nHuman Oversight\nAudit Trails\nAccountability"]
    R["Part 2 — For the Developer\nArchitecture & Implementation"]
    L --> M --> R

The regulations are complicated.

The principles are not.

Start with the five.

The rest is translation.

Sri Rang  ·  srirangan.net  ·  @srirangan  ·  platformagentic.com