@srirangan · srirangan.net · LangChain Ambassador NL
An agent decides to act.
That single difference changes everything about how compliance applies.
Every major framework was designed with one implicit assumption baked in:
A human made the decision.
That assumption determines who is accountable, what records must be kept, and what rights the people affected can exercise.
Agents break all three simultaneously.
Nobody pressed a button.
The trigger is the autonomy.
Not the language model. Not machine learning. Autonomous action without a human in the loop at the moment of decision.
"AI agents are so novel that existing frameworks don't cover them."
That assumption is wrong.
Acting on it is expensive.
GDPR Article 22 covers decisions based solely on automated processing that produce legal or similarly significant effects on people.
It doesn't say "software." It doesn't say "AI."
An agent that screens job applications meets that condition — whether it uses a language model or a rules engine.
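The implementation-indifference of that test can be made concrete. A hypothetical sketch (not anyone's real screening system): two interchangeable decision functions, one a rules engine, one a stand-in for a model score, both producing the same regulated event.

```python
# Hypothetical sketch: GDPR Article 22 attaches to the automated decision
# itself, not to the technology that produced it. Both callables below are
# "solely automated processing" with significant effect on a person.

def rules_engine(applicant: dict) -> bool:
    """Deterministic rules engine: accept at three or more years of experience."""
    return applicant["years_experience"] >= 3

def model_stub(applicant: dict) -> bool:
    """Stand-in for a model score; any scoring function lands in the same place."""
    return len(applicant["name"]) % 2 == 0

applicant = {"name": "Jo", "years_experience": 1}
rules_decision = rules_engine(applicant)  # a regulated automated decision
model_decision = model_stub(applicant)    # the same regulated event
```

Swapping the rules engine for a language model changes nothing about whether Article 22 attaches.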
HIPAA doesn't require a human at the keyboard.
A clinical agent that retrieves patient records to generate a discharge summary is accessing protected health information.
The standard applies.
It doesn't ask who processed the data.
It asks whether the processing was complete, accurate, and authorized.
An agent that processes customer data outside its authorized scope fails that criterion — regardless of whether a human or a model made the call.
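One way to make "authorized scope" operational is a deny-by-default check before any processing step. A minimal sketch, assuming a simple category-based scope model (the names `AgentScope` and `check_scope` are illustrative, not from any framework):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """The data categories this agent deployment was granted. Illustrative model."""
    allowed_categories: frozenset

def check_scope(scope: AgentScope, requested_category: str) -> bool:
    """Deny by default: the agent may only touch categories it was granted."""
    return requested_category in scope.allowed_categories

scope = AgentScope(allowed_categories=frozenset({"order_history"}))
check_scope(scope, "order_history")  # authorized
check_scope(scope, "payment_card")   # outside scope: fails the criterion
```

The check doesn't care whether a human or a model requested the category, which is exactly the point.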
The gap isn't in the teams.
It's in the translation.
In traditional software: a human initiates each significant action. The access log has a user attached.
In an agentic system: actions happen because the agent reasoned they should.
"The model decided" is not an answer. The deploying organization owns the deployment context — and everything that follows from it.
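That ownership can be written into the audit trail itself. A sketch under assumed field names (this is not a standard schema): every autonomous action gets a record naming the accountable organization and the agent's reasoning, with the absent human initiator stated explicitly.

```python
import json
import time

def attribution_record(org: str, agent_id: str, run_id: str,
                       action: str, reason: str) -> str:
    """Illustrative audit entry for an autonomous action. Field names are assumptions."""
    return json.dumps({
        "timestamp": time.time(),
        "accountable_org": org,     # accountability stays with the deployer
        "agent_id": agent_id,
        "run_id": run_id,
        "action": action,
        "agent_reasoning": reason,  # why the agent decided to act
        "human_initiator": None,    # explicit: no user pressed a button
    })

entry = attribution_record("acme-health", "discharge-agent", "run-0042",
                           "read:patient_record", "needed for discharge summary")
```

The record answers the question the access log used to answer with a username.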
A single agent run can touch patient data, payment information, and EU personal data in the same 30-second window.
HIPAA, PCI-DSS, and GDPR don't apply sequentially.
They apply simultaneously, to the same action, at the same moment.
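The simultaneity can be modeled directly: tag the data categories an action touches, then collect every regime that attaches. The category-to-regime mapping below is an assumed simplification for illustration.

```python
# Assumed, simplified mapping from data category to regulatory regime.
REGIMES_BY_CATEGORY = {
    "patient_data": {"HIPAA"},
    "payment_card": {"PCI-DSS"},
    "eu_personal_data": {"GDPR"},
}

def applicable_regimes(categories_touched: set) -> set:
    """Regimes attach as a set, not a sequence: union over everything touched."""
    regimes = set()
    for category in categories_touched:
        regimes |= REGIMES_BY_CATEGORY.get(category, set())
    return regimes

# One 30-second agent run touching all three categories:
applicable_regimes({"patient_data", "payment_card", "eu_personal_data"})
# all three regimes apply to the same action, at the same moment
```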
A document review agent classified as limited risk at deployment can cross into high-risk territory six months later.
Not because anyone decided to change its classification.
Because someone added a tool that lets it initiate contract amendments.
The classification doesn't update automatically. The obligations do.
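One mitigation is to re-derive the classification from the current tool set on every change, rather than storing it once at deployment. A hedged sketch; the trigger tools are invented examples, and real criteria come from the applicable framework (for instance the EU AI Act's high-risk categories):

```python
# Assumed examples of capability-granting tools that change the risk picture.
HIGH_RISK_TOOLS = {"initiate_contract_amendment", "approve_payment"}

def classify(tools: set) -> str:
    """Re-derive classification from capabilities instead of caching it."""
    return "high-risk" if tools & HIGH_RISK_TOOLS else "limited-risk"

tools = {"summarize_document", "search_contracts"}
classify(tools)  # "limited-risk" at deployment

tools.add("initiate_contract_amendment")  # someone adds a tool six months later
classify(tools)  # "high-risk": the obligations moved, and now the label does too
```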
Agent systems produce exactly these conditions.
The translation problem is not whether the law applies. It always did. The question is where exactly it attaches, and what it requires:
At design time — when the agent's capabilities and tool permissions are defined.
At deployment time — when the system's risk classification is established.
At runtime — on every tool call, every data retrieval, every autonomous action.
Over time — as the system changes.
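The runtime attachment point is the one that lends itself to code. A minimal sketch: a wrapper that gates and records every tool call, with the policy and audit functions as placeholders for real infrastructure.

```python
def governed_tool_call(tool_name, args, is_authorized, audit_log):
    """Gate and record every autonomous tool call: the runtime attachment point."""
    allowed = is_authorized(tool_name, args)
    audit_log.append({"tool": tool_name, "args": args, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{tool_name} denied by policy")
    return f"executed {tool_name}"  # stand-in for the real tool invocation

log = []
governed_tool_call("retrieve_record", {"id": 42},
                   lambda tool, args: tool == "retrieve_record", log)
# the denied path raises, but the attempt is still on the record
```

Denied calls are logged before the exception is raised, so the audit trail covers attempts as well as actions.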
These are agent systems. They carry compliance implications. They belong in the register.
The compliance work begins when the agent is deployed.
It's in the translation —
and the translation is the work.
Srirangan · srirangan.net · @srirangan · platformagentic.com