Sri Rang

Platform Agentic: When Agents Orchestrate Agents

The Compliance Gap Nobody's Talking About

@srirangan  ·  srirangan.net  ·  LangChain Ambassador NL

The Single-System Assumption

Every compliance framework assumes one thing.

The system you are evaluating is a single, identifiable unit with:

  • A defined capability set
  • A defined data access profile
  • A defined set of actions it can take

That assumption makes risk classification tractable.

Multi-agent systems break it in three ways.

Distributed capabilities — the orchestrator's capability profile includes every sub-agent it can invoke.

Dynamic boundaries — the effective boundary at runtime is determined by the orchestrator's reasoning, not its static configuration.

Fragmented audit trails — orchestrator logs and sub-agent logs are separate records by default.

Most teams assess only the orchestrator.

graph TD
    O["Orchestrator\nAssessed ✓"]
    A["Sub-agent A\nNot assessed"]
    B["Sub-agent B — PHI access\nNot assessed"]
    O --> A
    O --> B
    B --> C["High-risk obligation\nnever triggered"]

This is the compliance gap.

The Multi-Agent Classification Rule

The chain inherits the classification of its highest-risk component.

One sub-agent changes everything.

An orchestrator routes a request to a summarization sub-agent.

The sub-agent reads a patient record to generate a clinical note.

That single step touches PHI.

It classifies the entire system — orchestrator included — as high-risk under the EU AI Act and HIPAA.

The classification rule visualized.

flowchart LR
    O[Orchestrator] --> A["Subagent A\nLimited risk"]
    O --> B["Subagent B\nHigh risk — PHI"]
    A & B --> C["System classification:\nHigh risk"]

Three practical consequences.

1. Apply the five classification questions to the full reachable graph — not just the orchestrator.

2. Document the full reachable graph. A classification that describes only the orchestrator's direct capabilities is not a compliance artifact.

3. Re-classify when a new sub-agent is added. Adding a sub-agent that touches PHI, financial records, or biometric data is a classification event.

If any step in the agent's chain would be high-risk in isolation, classify the whole system as high-risk.
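The inheritance rule is simple enough to state in code. A minimal sketch, assuming an ordered risk scale (the `Risk` enum and function name here are illustrative, not any framework's API):

```python
from enum import IntEnum

class Risk(IntEnum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2

def classify_system(component_risks: list[Risk]) -> Risk:
    # The chain inherits the classification of its highest-risk
    # reachable component, not the orchestrator's own profile.
    return max(component_risks, default=Risk.MINIMAL)

# Orchestrator and one sub-agent are limited-risk; a second
# sub-agent touches PHI. The whole system is high-risk.
system_risk = classify_system([Risk.LIMITED, Risk.LIMITED, Risk.HIGH])
```

The `max` over the reachable graph is the whole rule: there is no averaging and no weighting by how often the high-risk path is taken.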

The Audit Chain

The orchestrator logs its decisions. The sub-agent logs its actions.

Without explicit correlation, those are two separate records describing two separate systems.

The compliance chain doesn't exist.

What every framework requires.

HIPAA — audit controls require recording and examining activity in systems containing PHI. "The orchestrator didn't access PHI — only the sub-agent did" is not a valid answer.

SOC 2 — the full processing chain must be traceable.

PCI-DSS Requirement 10 — agent tool calls in payment environments must appear in the same audit infrastructure as every other system.

The audit chain must span the entire execution.

flowchart TD
    O["Orchestrator run\nrun_id: root-abc123"]
    A["Sub-agent A\nparent: root-abc123"]
    B["Sub-agent B\nparent: root-abc123"]
    T1["Tool: fetch_record\nparent: sub-agent-A"]
    T2["Tool: write_summary\nparent: sub-agent-A"]
    O --> A
    O --> B
    A --> T1
    A --> T2

LangSmith's @traceable achieves this automatically.

from langsmith import traceable

@traceable(name="orchestrator-run")
def orchestrator(task: str) -> str:
    # Called inside a traced run, subagent() becomes a child span.
    return subagent(task)

@traceable(name="subagent-run")
def subagent(task: str) -> str:
    return process(task)  # process() stands in for the sub-agent's real work

Every sub-agent call becomes a child span. The root trace ID is the compliance record for the entire execution.

An orchestrator that cannot account for everything its sub-agents did has not logged a session. It has logged a gap.
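For stacks without automatic tracing, the same chain can be stitched by hand: generate one root run ID and make every sub-agent record carry it. A minimal sketch; the log shape and function names are illustrative:

```python
import json
import logging
import uuid

audit = logging.getLogger("audit")

def orchestrator(task: str) -> str:
    run_id = f"root-{uuid.uuid4().hex[:8]}"  # root of the audit chain
    audit.info(json.dumps({"run_id": run_id, "event": "orchestrator.start"}))
    return subagent(task, parent_run_id=run_id)

def subagent(task: str, parent_run_id: str) -> str:
    # The parent run ID joins this record to the orchestrator's log,
    # so the two logs describe one execution instead of two systems.
    audit.info(json.dumps({"parent": parent_run_id, "event": "subagent.start"}))
    return task.upper()  # placeholder for the sub-agent's real work
```

The correlation field, not the logging library, is what creates the compliance chain.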

Non-Transitive Approval

Human approvals don't cascade.

When a human approves an orchestrator action, they approve that specific decision.

They do not approve every downstream action the orchestrator subsequently triggers.

Two frameworks make this explicit.

GDPR Article 22 — the right to human intervention attaches to each consequential decision. Not to an approval propagated through the chain.

EU AI Act human oversight requirements — each high-risk component requires its own oversight mechanism. An orchestrator approval does not transfer to sub-agents spawned at runtime.

Cascaded instructions ≠ approved instructions.

flowchart TD
    H["Human approves\norchestrator action"]
    O["Orchestrator"]
    SA["Sub-agent:\nconsequential action"]
    H -->|"approves"| O
    O -->|"delegates"| SA
    SA -->|"needs its own"| HA["Human authorization\n(non-transitive)"]

Delegated authority is not the same as granted authority.
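One way to make non-transitivity concrete in code: gate each consequential action behind its own approval token, so an orchestrator-level approval cannot stand in for it. A sketch under my own naming; the decorator and exception are not a framework API:

```python
class ApprovalRequired(Exception):
    """Raised when a consequential action runs without its own approval."""

def consequential(fn):
    # The wrapped action needs an explicit, action-specific approval
    # token. Approval of the orchestrator's decision does not cascade.
    def wrapper(*args, approval_token=None, **kwargs):
        if not approval_token:
            raise ApprovalRequired(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@consequential
def update_patient_record(record_id: str) -> str:
    return f"updated {record_id}"
```

An orchestrator calling `update_patient_record("p-1")` without a fresh token fails, regardless of what a human approved upstream.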

Permission Scope at Every Handoff

Scope must narrow. Never expand.

An orchestrator has read access to ten document categories.

It delegates a summarization task to a sub-agent.

The question is not what the orchestrator can access.

It is what the sub-agent needs to do this specific task.

The answer is always: less.

flowchart LR
    O["Orchestrator\n10 tools"] -->|"narrow scope"| A["Subagent A\nread_doc only"]
    O -->|"narrow scope"| B["Subagent B\nwrite_summary only"]

In code: explicit, never inherited.

# Illustrative constructor; the pattern matters, not a specific framework API.
summarizer = Agent(
    tools=[read_doc_tool],      # strict subset of the orchestrator's tools
    system_prompt=(
        "Summarize the provided document."
    ),
)
# Never: tools=orchestrator.tools

The security consequence is the same as the compliance consequence.

A prompt injection attack against a sub-agent with the orchestrator's full tool list has access to everything the orchestrator could do.

The blast radius is the orchestrator's full capability set — not the sub-agent's assigned task scope.

An orchestrator that passes its own tool list to a sub-agent unchanged has not scoped the handoff. It has cloned itself.
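The handoff rule can be enforced mechanically: reject any delegation whose tool set is not a strict subset of the delegator's. A sketch with illustrative tool names:

```python
def scoped_handoff(parent_tools: set[str], requested: set[str]) -> set[str]:
    # Strict subset: the sub-agent's scope must be narrower than the
    # orchestrator's. Passing the full list is cloning, not scoping.
    if not requested < parent_tools:
        raise PermissionError(f"handoff not narrowed: {sorted(requested)}")
    return requested

orchestrator_tools = {"read_doc", "write_summary", "send_email", "fetch_record"}
summarizer_tools = scoped_handoff(orchestrator_tools, {"read_doc"})
```

Using the strict-subset operator (`<`, not `<=`) means even an identical tool list is rejected, which is exactly the cloning failure described above.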

What This Means in Practice

Four engineering requirements.

1. Declare the full reachable graph at design time. Changes to that graph are classification events.

2. Trace across agent boundaries. Use a framework that creates parent-child spans automatically. The root run ID is the compliance record for the entire execution.

3. Narrow permissions at every handoff. Each sub-agent gets its own explicit tool list — a strict subset.

4. Route consequential sub-agent actions through their own human approval paths. Don't assume an orchestrator-level approval covers downstream decisions.
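Requirement 1 can be checked at runtime as well as declared at design time: refuse to invoke any sub-agent outside the declared graph. A minimal sketch; the graph shape and names are illustrative:

```python
# Declared at design time; any change here is a classification event.
REACHABLE_GRAPH = {
    "orchestrator": {"summarizer", "retriever"},
    "summarizer": set(),
    "retriever": set(),
}

def invoke(parent: str, child: str) -> None:
    # A sub-agent outside the declared graph must not be invoked
    # silently: adding it means re-classifying the system first.
    if child not in REACHABLE_GRAPH.get(parent, set()):
        raise RuntimeError(f"{child} not in declared graph for {parent}")
```

A guard like this turns "we forgot to re-classify" from a silent drift into a hard failure.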

The same five principles. Applied to a distributed system.

The multi-agent compliance problem is not a new discipline.

It is Transparency, Data Minimization, Human Oversight, Audit Trails, and Accountability — applied to a system with distributed capabilities, dynamic boundaries, and a fragmented default audit trail.

The orchestrator inherits the classification.

The audit chain must span everything it can direct.

The approval given to the orchestrator does not authorize the actions of the things it invokes.

Sri Rang  ·  srirangan.net  ·  @srirangan  ·  platformagentic.com