@srirangan · srirangan.net · LangChain Ambassador NL
GDPR. EU AI Act. HIPAA. SOC 2. PCI-DSS. NIST AI RMF.
Written by different authors, for different industries, in different jurisdictions, at different moments.
And yet — five things they all agree on.
The five principles give teams a shared vocabulary.
Business and legal teams use them as governance requirements.
Developer teams use them as architectural constraints.
Without a shared vocabulary, the two sides talk past each other — and compliance gaps open in the space between.
Every consequential decision the agent makes must be explainable.
To the person it affects.
To the regulator who asks.
To the auditor who reviews it.
GDPR Article 22 — the right to meaningful information about automated decisions. Specific to the individual case. Actionable enough to support a challenge.
EU AI Act — interpretability for high-risk systems is a design requirement, not a documentation requirement. A log read after the fact does not satisfy this.
SOC 2 — processing integrity requires demonstrating the full chain: inputs, tool calls, reasoning steps, output. Not just the final decision.
Traditional software: deterministic, traceable. Same input → same output. Trace the execution → see every step.
LLM-based agents: neither by default.
The same input may produce a different output on a different run.
The intermediate reasoning exists nowhere unless it was deliberately captured.
The transparency layer must be built before the agent goes live — not retrofitted after the first subject access request arrives.
An agent making consequential decisions cannot be a black box. Opacity is a design choice. Transparency requires choosing differently.
Touch only the data the task requires. The instruction is the same in every one of these frameworks.
An agent with broad tool access and a large context window will pull in far more data than any individual task requires.
Not because it was designed to be reckless.
Because doing so is easier than scoping access precisely.
Minimization is not just a privacy requirement. It is a reliability requirement.
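Scoping access precisely can be cheap if the mapping lives in one place. A minimal sketch, assuming tools are plain callables; the tool names and task types are illustrative:

```python
# Full toolbox the platform offers. An agent never receives this directly.
FULL_TOOLBOX = {
    "read_invoice": lambda ref: f"invoice {ref}",
    "read_medical_history": lambda ref: f"history {ref}",
    "send_email": lambda to: f"sent to {to}",
}

# Each task type maps to the smallest tool set it needs.
TASK_SCOPES = {
    "invoice_review": {"read_invoice"},
}

def tools_for(task: str) -> dict:
    """Return only the tools scoped to this task; unknown tasks get nothing."""
    allowed = TASK_SCOPES.get(task, set())
    return {name: fn for name, fn in FULL_TOOLBOX.items() if name in allowed}

agent_tools = tools_for("invoice_review")
assert "read_medical_history" not in agent_tools  # out of scope by construction
```

The agent cannot pull in out-of-scope data because it was never handed the tool, which is both the privacy property and the reliability property.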
GDPR — right to obtain human intervention and contest automated decisions that significantly affect you.
EU AI Act — high-risk systems must be designed to allow effective human oversight. Not theatrical — measurable.
HIPAA — clinical judgment remains in the loop. The agent is a tool. Not the clinician.
The human must be able to understand what the loop produced — not just observe its outputs.
A review step that exists only in the system prompt does not count.
A human who rubber-stamps every agent output without capacity to evaluate it does not count.
| Pattern | Compliance function | Engineering function |
|---|---|---|
| Confirmation gate | EU AI Act Art. 14 | Stops unauthorized actions |
| Review queue | GDPR Art. 22 | Catches hallucinated outputs |
| Escalation path | HIPAA clinical judgment | Routes uncertain decisions |
A human checkpoint is a compliance requirement. Not a design preference.
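The first pattern in the table, the confirmation gate, can be sketched in a few lines. This assumes a synchronous approval flow; the names `ProposedAction`, the reviewer id, and the refund example are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    execute: Callable[[], str]

def confirmation_gate(action: ProposedAction, reviewer: str, approved: bool) -> str:
    """The agent proposes; a named human approves before any side effect runs."""
    if not approved:
        return f"blocked: {action.description} (reviewer: {reviewer})"
    result = action.execute()
    return f"executed on approval by {reviewer}: {result}"

refund = ProposedAction("refund $480 to customer 1042", lambda: "refund issued")
print(confirmation_gate(refund, reviewer="jdoe", approved=False))
```

Without recorded approval, the side effect is structurally unreachable, which is what makes the gate auditable rather than theatrical.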
If it was not logged, it did not happen. As far as any auditor, regulator, or court is concerned.
HIPAA — record and examine activity in systems containing PHI. Both: the log must exist and be reviewed.
SOC 2 — evidence that access was restricted and data was processed only as authorized.
PCI-DSS — all access to cardholder data logged, tamper-protected, retained for 12 months.
NIST AI RMF — you cannot quantify what the agent got wrong if you did not record what it did.
Without it: dozens of tool calls are unconnected events.
With it: they form a coherent record of a single agent run — reconstructible end to end.
The audit trail that HIPAA, SOC 2, and PCI-DSS require is the same artifact you need to reproduce and fix a production incident. One requirement. Two beneficiaries.
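The connecting mechanism is a correlation id that every tool call carries. A minimal sketch with structured log lines; the field names and tools are illustrative, not a real logging API:

```python
import json
import uuid

def make_logger(log: list):
    """One run_id per agent run; every tool call is logged under it."""
    run_id = str(uuid.uuid4())

    def log_tool_call(tool: str, args: dict, result: str) -> None:
        log.append(json.dumps({
            "run_id": run_id, "tool": tool, "args": args, "result": result,
        }))

    return run_id, log_tool_call

log: list[str] = []
run_id, log_call = make_logger(log)
log_call("lookup_account", {"id": 7}, "found")
log_call("charge_card", {"amount": 12.0}, "ok")

# Reconstruction: filter the shared log by run_id to recover the whole run.
run = [json.loads(line) for line in log if json.loads(line)["run_id"] == run_id]
assert [e["tool"] for e in run] == ["lookup_account", "charge_card"]
```

The same filter that satisfies the auditor is the one the on-call engineer runs after an incident.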
When the agent's decision causes harm, who is accountable?
Not the model provider.
Not the framework vendor.
The organization that deployed it.
GDPR — the controller determines the purpose and means of processing. Delegating processing does not delegate accountability.
HIPAA — covered entity obligations follow the entity, not the technology it uses.
EU AI Act — deployer responsibilities attach to the organization that puts a high-risk system into operation.
The deploying organization is in control —
because legally, it is.
Named owners. Not a team, not a shared inbox — a person with a role that includes agent oversight and the authority to act on it. Identified before the agent is deployed.
An audit trail that reflects ownership. A log that records "model returned decision X" without capturing what instructions it was given and what controls were in place is not an accountability record.
Accountability is not a cultural value. It is a structural property of the system.
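Structural means the record itself names the owner and the controls. A minimal sketch of an accountability record; the prompt text, owner, and control names are illustrative placeholders:

```python
import hashlib
import json

SYSTEM_PROMPT = "You are a claims-triage assistant. Never approve above $1000."

def accountability_record(decision: str, owner: str, controls: list[str]) -> dict:
    """Pair the decision with the context the deployer controlled."""
    return {
        "decision": decision,
        "owner": owner,  # a named person with oversight authority, not a team
        "prompt_sha256": hashlib.sha256(SYSTEM_PROMPT.encode()).hexdigest(),
        "controls": controls,  # what was in place when the decision was made
    }

rec = accountability_record(
    "claim C-1042 approved",
    owner="j.doe (agent oversight)",
    controls=["confirmation_gate", "spend_limit_$1000"],
)
print(json.dumps(rec, indent=2))
```

Hashing the instructions rather than omitting them means the record can prove which version of the prompt governed the decision.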
The regulations are many. The principles are not.
Start with the five.
The rest is translation.
Sri Rang · srirangan.net · @srirangan · platformagentic.com