Sri Rang

Building EU AI Act-Ready Agents with LangChain

@srirangan  ·  srirangan.net  ·  LangChain Ambassador NL

The €15M Question

EU AI Act high-risk obligations become enforceable on 2 August 2026.

Non-compliance: up to €15,000,000 or 3% of worldwide annual turnover, whichever is higher.

  • First comprehensive AI regulation anywhere in the world
  • Less than 3 months away as of writing
  • Many enterprise agents land in the "high-risk" tier — that's where the obligations bite

What this covers.

  • The 7 articles that drive high-risk operational requirements
  • GPAI obligations and what they mean for downstream builders
  • Deployment topology and EU data residency
  • A 90-day plan to get audit-ready

No production code — conceptual diagrams, crosswalks, and illustrative sketches only.

Risk Classification

Four tiers + a GPAI overlay.

```mermaid
flowchart TB
    Prohibited["PROHIBITED\ne.g. social scoring, manipulative AI, untargeted facial scraping"]
    HighRisk["HIGH-RISK\ncredit, HR, healthcare, biometric,\ncritical infrastructure, law enforcement"]
    Limited["LIMITED RISK\nchatbots, deepfakes\n(transparency obligations)"]
    Minimal["MINIMAL RISK\nspam filters, video games\n(no obligations)"]
    GPAI["GPAI Models\nseparate obligations\ncross-cutting"]
    Prohibited --> HighRisk
    HighRisk --> Limited
    Limited --> Minimal
    GPAI -.cross-cuts.-> HighRisk
    GPAI -.cross-cuts.-> Limited
    classDef prohibited fill:#fcc,stroke:#a00,color:#000
    classDef high fill:#fde2e2,stroke:#a02020,color:#000
    classDef limited fill:#fff4cc,stroke:#cc8800,color:#000
    classDef minimal fill:#e6f5d0,stroke:#5a8a3a,color:#000
    classDef gpai fill:#cfe6ff,stroke:#1a4d8c,color:#000
    class Prohibited prohibited
    class HighRisk high
    class Limited limited
    class Minimal minimal
    class GPAI gpai
```

Are you high-risk?

You're high-risk if your agent makes or materially supports decisions in:

  • Financial services — credit scoring, insurance pricing
  • Employment — recruitment, screening, performance evaluation
  • Education — admissions, exam scoring
  • Healthcare — medical devices, triage, diagnostic support
  • Biometric identification — including emotion recognition
  • Critical infrastructure, law enforcement, migration, justice

If you're unsure, assume high-risk and downgrade with legal counsel.

Quick reality check.

| Agent | Risk tier |
| --- | --- |
| Finance copilot that approves loans | High-risk |
| HR agent that screens CVs | High-risk |
| Clinical triage assistant | High-risk |
| Customer support chatbot | Limited risk — transparency only |
| Internal documentation search | Minimal risk |
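The triage logic above can be sketched as a first-pass helper. This is illustrative only — the domain list is a simplified paraphrase of Annex III categories, not a legal test, and `triage_risk_tier` is a hypothetical name:

```python
# Illustrative first-pass triage, not a legal determination.
# The domain set is a simplified paraphrase of Annex III categories.
HIGH_RISK_DOMAINS = {
    "credit_scoring", "insurance_pricing", "recruitment", "screening",
    "admissions", "exam_scoring", "medical_triage", "biometric_id",
    "critical_infrastructure", "law_enforcement", "migration", "justice",
}

def triage_risk_tier(domain: str, interacts_with_humans: bool) -> str:
    """Return a provisional EU AI Act tier for an agent's primary domain."""
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if interacts_with_humans:
        return "limited"   # transparency obligations, e.g. chatbots
    return "minimal"
```

When unsure, default the output to high-risk and downgrade with counsel — a helper like this only flags the obvious cases.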

The Seven Articles

Seven articles. One crosswalk.

```mermaid
flowchart LR
    subgraph Articles["EU AI Act Articles"]
        direction TB
        A9["Art. 9\nRisk Management"]
        A10["Art. 10\nData Governance"]
        A12["Art. 12\nEvent Logging"]
        A13["Art. 13\nTransparency"]
        A14["Art. 14\nHuman Oversight"]
        A15["Art. 15\nAccuracy & Resilience"]
        A72["Art. 72\nPost-Market Monitoring"]
    end
    subgraph LC["LangChain v1 Capabilities"]
        direction TB
        Eval["Online Evaluators\n+ Custom Thresholds"]
        PII["Bias Evaluators\n+ PII Middleware"]
        Trace["LangSmith Tracing\n+ Retention Tiers"]
        Studio["LangSmith Studio\nVisual Execution Graphs"]
        Interrupt["LangGraph interrupt\n+ Annotation Queues"]
        AdvEval["Correctness +\nAdversarial Evaluators"]
        Drift["Drift Detection\n+ Dashboards"]
    end
    A9 --> Eval
    A10 --> PII
    A12 --> Trace
    A13 --> Studio
    A14 --> Interrupt
    A15 --> AdvEval
    A72 --> Drift
```

Articles 9, 10, 12

Foundation: Risk, Data, Logging

Art. 9 — Risk Management

A living risk management system across the development lifecycle — not a one-time document.

Requires: identify, analyze, evaluate, and mitigate risks — continuously updated.

LangChain v1:

  • Online evaluators scoring production traffic against custom thresholds
  • Custom evaluators for domain-specific risks — financial accuracy, clinical safety
  • Webhook → PagerDuty alerts when thresholds breach
  • Risk register kept in sync with evaluator outputs
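The control loop behind those bullets can be sketched in a few lines — scores in, threshold comparison, alert callback out. This assumes nothing about LangSmith's actual evaluator API; `check_thresholds` and the metric names are hypothetical:

```python
# Sketch of the Art. 9 control loop: evaluator scores in, alerts out.
# Threshold values and the alert callback are illustrative assumptions.
from typing import Callable

def check_thresholds(
    scores: dict,
    thresholds: dict,
    alert: Callable[[str, float], None],
) -> list:
    """Compare evaluator scores to minimums; fire an alert per breach."""
    breached = []
    for metric, minimum in thresholds.items():
        score = scores.get(metric)
        if score is not None and score < minimum:
            breached.append(metric)
            alert(metric, score)   # e.g. webhook -> PagerDuty in production
    return breached

breaches = check_thresholds(
    {"clinical_safety": 0.91, "financial_accuracy": 0.78},
    {"clinical_safety": 0.95, "financial_accuracy": 0.75},
    alert=lambda m, s: print(f"ALERT {m}: {s:.2f}"),
)
```

Here `clinical_safety` breaches its minimum while `financial_accuracy` clears it — the breach list is what keeps the risk register in sync.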

Art. 10 — Data Governance & Bias

Data quality, representativeness, and explicit bias examination across protected characteristics.

Requires: documented data provenance, bias examination across race, gender, age, religion, nationality, disability, sexual orientation — and documented mitigations.

LangChain v1:

  • Bias and fairness evaluators — LangSmith ships templates per protected characteristic
  • PII Middleware — prevents leakage of protected attributes in inputs and outputs
  • Trace dataset documentation for evaluation provenance
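As a minimal stand-in for the redaction idea — two regexes, purely illustrative; a real deployment should use LangChain's PII middleware or another vetted detector, not patterns like these:

```python
import re

# Minimal stand-in for a PII-redaction middleware; real deployments
# need a vetted detector, not two regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders before logging."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running redaction on both inputs and outputs before anything hits the trace store is what keeps protected attributes out of your Art. 12 logs.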

Art. 12 — Automatic Event Logging

Logs spanning the full system lifecycle — sufficient for deployer oversight and regulatory inspection.

Requires: inputs, outputs, timestamps, agent context, sufficient detail for audit.

| Tier | Retention |
| --- | --- |
| Base traces | 14 days |
| Extended traces | 400 days |
| Bulk export | Long-term archival |
  • EU residency — LangSmith EU SaaS, BYOC, or self-hosted
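The logging requirement, sketched as a plain JSON record — the field names here are assumptions, not the LangSmith trace schema; the point is that inputs, outputs, timestamps, and agent context all land in one inspectable record:

```python
import json
from datetime import datetime, timezone

# Illustrative Art. 12 audit record; field names are assumptions,
# not the LangSmith trace schema.
def audit_record(agent_id: str, inputs: dict, outputs: dict,
                 tools_called: list) -> str:
    """Serialize one agent step with everything an auditor would ask for."""
    record = {
        "agent_id": agent_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,          # redact PII before this point
        "outputs": outputs,
        "tools_called": tools_called,
    }
    return json.dumps(record, sort_keys=True)
```

In practice LangSmith tracing captures this automatically — the sketch just makes the required coverage explicit.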

Articles 13, 14, 15, 72

Transparency, Oversight, Accuracy, Monitoring

Art. 13 — Transparency to Deployers

Outputs must be interpretable enough that deployers can use the system appropriately.

Requires: clear instructions, documented capabilities and limitations, interpretable outputs.

LangChain v1:

  • LangSmith Studio — visual execution graph showing state transitions, tool calls, decisions
  • Full reasoning traces — every step inspectable
  • Documented agent specs — inputs, outputs, tool registry, system prompt

Art. 14 — Human Oversight

Humans must understand, intervene on, override, and interrupt the system. Not theatrical — measurable.

Requires: oversight designed into the architecture, humans able to intervene at decision points, auditable trail.

```mermaid
flowchart LR
    Agent["Agent reaches\nstate-change tool"] --> Int["LangGraph interrupt"]
    Int --> Q["Annotation Queue"]
    Q --> Reviewer["Human reviewer\nstructured feedback"]
    Reviewer -->|approve| Resume["Resume from checkpoint"]
    Reviewer -->|reject| Halt["Halt + log decision"]
    classDef ok fill:#cfe6ff,stroke:#1a4d8c,color:#000
    class Int,Q,Reviewer,Resume,Halt ok
```
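The same flow in pure Python, with the interrupt replaced by a blocking reviewer callback so the control flow is visible — `gated_tool_call` is a hypothetical helper, not a LangGraph API:

```python
from typing import Callable, Optional

# Pure-Python stand-in for interrupt -> annotation queue -> resume/halt.
# In LangGraph this is the `interrupt` primitive plus a checkpointer;
# here the reviewer is just a callback so the control flow is visible.
def gated_tool_call(tool: Callable[[dict], dict], args: dict,
                    reviewer: Callable[[dict], bool],
                    audit_log: list) -> Optional[dict]:
    """Run a state-changing tool only after explicit human approval."""
    approved = reviewer(args)                 # blocks on the human decision
    audit_log.append({"args": args, "approved": approved})
    if not approved:
        return None                           # halt; the decision stays logged
    return tool(args)                         # resume past the gate
```

Both branches append to the audit log — that is what makes the oversight measurable rather than theatrical.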

Art. 15 — Accuracy & Adversarial Resilience

Declared accuracy levels and demonstrable protection against common attack surfaces.

Requires: stated accuracy metrics, adversarial resilience, consistency over system lifetime.

LangChain v1:

  • Correctness, exact match, plan adherence, task completion evaluators
  • Prompt injection and jailbreaking evaluators — LangSmith templates
  • API leakage, code injection evaluators for tool-calling agents
  • Adversarial evaluation suites — run before every release
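A toy version of one suite entry — substring heuristics like these are a floor, not a defence, and LangSmith's evaluator templates are the production path; the marker list and function names are illustrative:

```python
# Toy prompt-injection screen for a pre-release adversarial suite.
# Substring matching is a floor, not a defence.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now",
)

def flags_injection(prompt: str) -> bool:
    """True when the prompt contains a known injection marker."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def run_suite(cases: list) -> float:
    """Pass rate of the detector over (prompt, expected_flag) cases."""
    passed = sum(flags_injection(p) == expected for p, expected in cases)
    return passed / len(cases)
```

The declared pass rate from a suite like this is exactly the kind of "stated accuracy metric" Art. 15 asks you to put on the record.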

Art. 72 — Post-Market Monitoring

Continuous monitoring of production behavior with incident reporting to authorities.

Requires: continuous monitoring, drift detection, incident reporting to national supervisory authorities.

LangChain v1:

  • Online evaluators with custom thresholds
  • Drift detection dashboards
  • Webhooks → incident response system
  • Audit dashboards for compliance and regulator-facing reporting
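At its core, drift detection reduces to comparing score windows over time. A minimal sketch, assuming evaluator scores in [0, 1] and an illustrative tolerance:

```python
from statistics import mean

# Minimal drift check: flag when the mean of a recent window of
# evaluator scores degrades past a tolerance versus a baseline window.
def drifted(baseline: list, recent: list, tolerance: float = 0.05) -> bool:
    """True when mean recent score drops more than `tolerance` below baseline."""
    return mean(baseline) - mean(recent) > tolerance
```

A dashboard is this comparison run continuously per metric; a webhook firing on `True` is the start of your incident-reporting trail.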

GPAI & Data Residency

GPAI — Provider or downstream developer?

```mermaid
flowchart LR
    Foundation["Foundation Model\n(GPT, Claude, Gemini, etc.)"] -->|API call| YourAgent["Your LangChain Agent"]
    YourAgent --> EndUser["End User"]
    Foundation -.GPAI obligations.- ProviderRole["Model Provider\n(Anthropic, OpenAI, Google, etc.)"]
    YourAgent -.AI Act high-risk obligations.- DownstreamRole["You — Downstream Developer"]
    classDef provider fill:#ffe5b4,stroke:#996600,color:#000
    classDef downstream fill:#cfe6ff,stroke:#1a4d8c,color:#000
    class ProviderRole provider
    class DownstreamRole downstream
```

Most LangChain users are downstream developers — not GPAI providers. Fine-tuning and redistributing can shift you into the provider role. Get legal advice if you're close to that line.

Deployment topology.

| Option | Best for |
| --- | --- |
| Managed Cloud (US) | General use, non-EU workloads |
| LangSmith EU SaaS | High-risk EU systems — most common choice |
| BYOC | Regulated industries: finance, healthcare |
| Self-hosted | Maximum control: government, defense |

If you're in scope and your customers are EU-based, default to LangSmith EU SaaS or BYOC. The audit story is dramatically simpler when traces never leave the jurisdiction.

90-Day Compliance Plan

Days 1–30: Map.

  • Identify which agents are high-risk under Article 6 + Annex III
  • Document use case, deployer, sector, decision impact for each
  • Determine your role: GPAI provider or downstream developer
  • Stand up the risk register
  • Pick deployment topology: EU SaaS / BYOC / self-hosted
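The risk register from the mapping steps above could start as small as this — a hypothetical minimal row; real registers carry owners, review cadence, and evidence links on top of these fields:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal risk-register row for the mapping phase.
@dataclass
class RiskRegisterEntry:
    agent: str
    use_case: str
    deployer: str
    sector: str
    decision_impact: str
    risk_tier: str = "high-risk"   # default conservatively; downgrade with counsel
    logged: date = field(default_factory=date.today)

entry = RiskRegisterEntry(
    agent="finance-copilot",
    use_case="loan approval support",
    deployer="RetailBank NV",
    sector="financial services",
    decision_impact="materially supports credit decisions",
)
```

Defaulting `risk_tier` to high-risk mirrors the guidance earlier: assume high-risk and downgrade with legal counsel, never the reverse.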

Days 31–60: Wire up.

  • LangSmith tracing on every high-risk agent path
  • PIIMiddleware on inputs and outputs
  • Bias evaluators per relevant protected characteristic
  • LangGraph interrupt on every state-changing tool call
  • Online evaluators for prompt injection and adversarial inputs
  • Webhooks → incident response system

Days 61–90: Document.

  • Technical documentation per Annex IV
  • Risk management documentation per Article 9
  • Logging and retention policies per Article 12
  • Human oversight procedures per Article 14
  • Post-market monitoring plan per Article 72
  • Internal audit-readiness review before the August deadline

Documentation burden is real — budget engineering time, not just legal time.

The audit trail you build for the regulator

is the same audit trail that helps your team ship faster.

  • Map your agents, trace everything, keep humans in the loop
  • Build once for the AI Act — cover most other regimes with the same primitives
  • Assume the regulator will eventually ask.

Sri Rang  ·  srirangan.net  ·  @srirangan