@srirangan · srirangan.net · LangChain Ambassador NL
Compliance frameworks assume the system you are evaluating is a single, identifiable unit: fixed capabilities, a static boundary, one audit trail. That assumption makes risk classification tractable. Multi-agent orchestration breaks it on all three fronts:
Distributed capabilities — the orchestrator's capability profile includes every sub-agent it can invoke.
Dynamic boundaries — the effective boundary at runtime is determined by the orchestrator's reasoning, not its static configuration.
Fragmented audit trails — orchestrator logs and sub-agent logs are separate records by default.
This is the compliance gap.
An orchestrator routes a request to a summarization sub-agent.
The sub-agent reads a patient record to generate a clinical note.
That single step touches PHI.
That one step classifies the entire system — orchestrator included — as high-risk under the EU AI Act and brings it within HIPAA's scope.
1. Apply the five classification questions to the full reachable graph — not just the orchestrator.
2. Document the full reachable graph. A classification that describes only the orchestrator's direct capabilities is not a compliance artifact.
3. Re-classify when a new sub-agent is added. Adding a sub-agent that touches PHI, financial records, or biometric data is a classification event.
If any step in the agent's chain would be high-risk in isolation, classify the whole system as high-risk.
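That rule can be checked mechanically. A minimal sketch, assuming a hypothetical registry where each agent node declares the data categories its tools touch — `AgentNode`, `HIGH_RISK_CATEGORIES`, and the category names are illustrative, not from any specific framework:

```python
# Illustrative data model: each agent declares its data categories
# and the sub-agents it can invoke.
HIGH_RISK_CATEGORIES = {"phi", "financial_records", "biometric_data"}

class AgentNode:
    def __init__(self, name, data_categories=(), sub_agents=()):
        self.name = name
        self.data_categories = set(data_categories)
        self.sub_agents = list(sub_agents)

def reachable_graph(root):
    """Every agent the orchestrator can invoke, directly or transitively."""
    seen, stack = [], [root]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.append(node)
            stack.extend(node.sub_agents)
    return seen

def classify(root):
    """High-risk if ANY reachable step would be high-risk in isolation."""
    for node in reachable_graph(root):
        if node.data_categories & HIGH_RISK_CATEGORIES:
            return "high-risk"
    return "standard"

summarizer = AgentNode("summarizer", data_categories={"phi"})
orchestrator = AgentNode("orchestrator", sub_agents=[summarizer])
print(classify(orchestrator))  # the PHI-touching sub-agent classifies the whole graph
```

Adding a sub-agent changes the reachable graph, so re-running this check is exactly the re-classification event described in step 3.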
Without explicit correlation, orchestrator logs and sub-agent logs are two separate records describing two separate systems.
The compliance chain doesn't exist.
HIPAA — audit controls require recording and examining activity in systems containing PHI. "The orchestrator didn't access PHI — only the sub-agent did" is not a valid answer.
SOC 2 — the full processing chain must be traceable.
PCI-DSS Requirement 10 — agent tool calls in payment environments must appear in the same audit infrastructure as every other system.
The `@traceable` decorator achieves this automatically:

```python
from langsmith import traceable

@traceable(name="orchestrator-run")
def orchestrator(task: str) -> str:
    return subagent(task)

@traceable(name="subagent-run")
def subagent(task: str) -> str:
    return process(task)
```
Every sub-agent call becomes a child span. The root trace ID is the compliance record for the entire execution.
An orchestrator that cannot account for everything its sub-agents did has not logged a session. It has logged a gap.
When a human approves an orchestrator action, they approve that specific decision.
They do not approve every downstream action the orchestrator subsequently triggers.
GDPR Article 22 — the right to human intervention attaches to each consequential decision. Not to an approval propagated through the chain.
EU AI Act human oversight requirements — each high-risk component requires its own oversight mechanism. An orchestrator approval does not transfer to sub-agents spawned at runtime.
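One way to make that concrete — a sketch where every consequential action carries its own entry in an approval log, so an orchestrator-level approval never silently covers downstream actions. The `require_approval` gate and the action names are illustrative, not a specific framework's API:

```python
class ApprovalRequired(Exception):
    """Raised when a consequential action lacks its own human sign-off."""

def require_approval(action: str, approvals: set) -> None:
    # The orchestrator's approval does not propagate: each consequential
    # action must appear in the approval log under its own name.
    if action not in approvals:
        raise ApprovalRequired(f"no human approval recorded for: {action}")

approvals = {"orchestrator:route-task"}  # the human approved routing, nothing else

require_approval("orchestrator:route-task", approvals)  # passes
try:
    require_approval("subagent:send-clinical-note", approvals)
except ApprovalRequired as e:
    print(e)  # the sub-agent's action needs its own approval path
```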
Delegated authority is not the same as granted authority.
An orchestrator has read access to ten document categories.
It delegates a summarization task to a sub-agent.
The question is not what the orchestrator can access.
It is what the sub-agent needs to do this specific task.
```python
summarizer = Agent(
    tools=[read_doc_tool],  # subset only
    system_prompt="Summarize the provided document.",
)

# Never: tools=orchestrator.tools
```
A prompt injection attack against a sub-agent carrying the orchestrator's full tool list can do everything the orchestrator could do.
The blast radius is the orchestrator's full capability set — not the sub-agent's assigned task scope.
An orchestrator that passes its own tool list to a sub-agent unchanged has not scoped the handoff. It has cloned itself.
1. Declare the full reachable graph at design time. Changes to that graph are classification events.
2. Trace across agent boundaries. Use a framework that creates parent-child spans automatically. The root run ID is the compliance record for the entire execution.
3. Narrow permissions at every handoff. Each sub-agent gets its own explicit tool list — a strict subset.
4. Route consequential sub-agent actions through their own human approval paths. Don't assume an orchestrator-level approval covers downstream decisions.
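The strict-subset rule in step 3 can be enforced at every handoff rather than left to convention. A sketch — `scoped_handoff` is a hypothetical helper, and comparing tools by name is a simplifying assumption:

```python
def scoped_handoff(orchestrator_tools, subagent_tools):
    """Reject a handoff unless the sub-agent's tools are a strict subset."""
    parent, child = set(orchestrator_tools), set(subagent_tools)
    if not child < parent:  # strict subset: strictly narrower, never equal
        raise ValueError(
            "handoff not scoped: sub-agent tools must be a strict subset"
        )
    return child

orch = {"read_doc", "write_doc", "send_email", "query_db"}
scoped_handoff(orch, {"read_doc"})   # fine: narrowed to the task
# scoped_handoff(orch, orch)         # raises: the orchestrator cloned itself
```

Because `child < parent` is a strict comparison, passing the orchestrator's own tool list through unchanged fails the check by construction.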
The multi-agent compliance problem is not a new discipline.
It is Transparency, Data Minimization, Human Oversight, Audit Trails, and Accountability — applied to a system with distributed capabilities, dynamic boundaries, and a fragmented default audit trail.
The audit chain must span everything the orchestrator can direct.
The approval given to the orchestrator does not authorize the actions of the things it invokes.