EU LLM Providers: Data Processing Agreements (DPAs)

Published on September 28, 2025 by Claudio Teixeira

A practical guide to the Data Processing Agreements (DPAs) of major LLM providers like Anthropic, including key compliance features for GDPR and the EU AI Act.

This guide summarizes and links to the Data Processing Agreements (DPAs) of major Large Language Model (LLM) providers. A DPA is a legally binding document that governs the processing of personal data; under the GDPR, one is mandatory whenever you use a third-party service (such as an LLM API) to process personal data on your behalf.

Most enterprise-grade LLM providers incorporate Standard Contractual Clauses (SCCs) for international data transfers and reference certifications and attestations such as ISO 27001 and SOC 2 in their DPAs.


Anthropic

Anthropic provides a comprehensive Data Processing Addendum (DPA) that is automatically incorporated into its Commercial Terms of Service for European customers. It includes Standard Contractual Clauses (SCCs) to ensure GDPR compliance.

Key Compliance Features

  • EU Entity: For customers in the EEA, Switzerland, or the UK, the contracting entity is Anthropic Ireland, Limited.
  • Governing Law: The laws of Ireland apply, with dispute resolution in Irish courts.
  • Standard Contractual Clauses: Incorporates Module Two (controller to processor) and Module Three (processor to processor) of the EU SCCs.
  • Data Processing: Anthropic acts as the data processor and only processes data based on customer instructions. It commits not to "sell" or "share" customer personal data.
  • Security: Technical measures include AES-256 encryption for data at rest, TLS 1.2+ for data in transit, and annual third-party audits (SOC 2, ISO 27001); a client-side sketch that enforces the same TLS floor follows this list.
  • Breach Notification: Commits to notifying customers within 48 hours of a security breach.
  • Data Deletion: Customer data is deleted or returned within 30 days of the agreement's termination.
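
The transport-security commitment above describes Anthropic's side of the connection; as a deployer you can also enforce a TLS 1.2+ floor in your own client. Here is a minimal sketch in Python, assuming the official anthropic SDK and httpx are installed; the http_client parameter and the ssl.SSLContext passed through httpx's verify argument are the assumptions here.

```python
import ssl

import anthropic
import httpx

# Refuse anything older than TLS 1.2, mirroring the "TLS 1.2+ in transit"
# commitment on the client side as well.
ssl_context = ssl.create_default_context()
ssl_context.minimum_version = ssl.TLSVersion.TLSv1_2

# The anthropic SDK accepts a custom httpx client, and httpx accepts an
# ssl.SSLContext through its verify argument.
client = anthropic.Anthropic(http_client=httpx.Client(verify=ssl_context))
```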

Suitability for Sensitive Data (e.g., HealthTech)

Yes, a medtech company can use Anthropic’s APIs with personal data, including personally identifiable information and protected health information (PII/PHI), provided it operates within clear GDPR-aligned restrictions. Anthropic is contractually barred from using submitted PII to train models or for any purpose beyond providing the service.
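
Contractually, nothing special happens in the code: the DPA protections attach to your commercial account and the data you submit, so an API call looks like any other Messages API request. Below is a minimal sketch with the official Python SDK; the model name is a placeholder assumption, and the API key is read from the ANTHROPIC_API_KEY environment variable.

```python
import anthropic

# The SDK reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Inputs submitted through the commercial API are processed under the DPA:
# Anthropic acts as processor and does not train on this content. Ideally
# the note below is pseudonymized first (see the best-practice section).
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name (assumption)
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize the key findings in this discharge note: ..."},
    ],
)
print(response.content[0].text)
```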

Anthropic's Data Use Commitments
  • No Training on Customer Content: Anthropic contractually agrees that all “Inputs” (including PII/PHI) submitted via their API will not be used to train their models.
  • Controller/Processor Relationship: Your company is the data controller, and Anthropic is the data processor. Anthropic is only allowed to process personal data per your documented instructions.
  • No Sale or Sharing: Anthropic cannot “sell,” “share,” or combine your PII/PHI with data from other customers.
  • GDPR, HIPAA & SCCs: The DPA is structured to meet EU (GDPR) and UK/Swiss requirements, using Standard Contractual Clauses for international transfers. For US healthcare, they also support HIPAA compliance and will sign Business Associate Agreements (BAAs).

Best Practice: Anonymization and Pseudonymization

While Anthropic provides strong contractual protections, you are far better protected, both legally and in terms of risk, if you anonymize or pseudonymize patient data before sending it. This is a recognized best practice under GDPR and the EU AI Act; a minimal pseudonymization sketch follows the list below.

  • Lower Compliance Burden: Truly anonymized data falls outside the scope of GDPR and the EU AI Act’s high-risk provisions. Pseudonymized data, while still personal data, is viewed favorably by regulators as a risk mitigation measure.
  • Data Minimization: Sending only de-identified data helps you comply with the core GDPR principle of processing only what is strictly necessary.
  • Reduced Breach Impact: In the event of a data breach, the harm is significantly lower if no direct personal identifiers are exposed.
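
As a concrete illustration, here is a minimal pseudonymization sketch in Python. It assumes direct identifiers can be caught with simple patterns and replaced by keyed tokens that only you can link back to the patient; the regexes, the example identifiers, and the HMAC-based scheme are illustrative assumptions, and a production pipeline would rely on a vetted de-identification tool instead.

```python
import hashlib
import hmac
import re

# Pseudonymization key kept on your side only (assumption: loaded from a
# secrets manager in production and never sent to the LLM provider).
PSEUDONYM_KEY = b"replace-with-a-secret-from-your-vault"

def pseudonym(value: str) -> str:
    """Derive a stable keyed token that only the key holder can re-link."""
    digest = hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256)
    return f"PSEUDO_{digest.hexdigest()[:12]}"

# Illustrative patterns for direct identifiers; real clinical de-identification
# needs a reviewed NER pipeline or dedicated tooling, not ad-hoc regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
NHS_NUMBER_RE = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b")

def pseudonymize(text: str) -> str:
    """Replace direct identifiers with keyed pseudonyms before any API call."""
    text = EMAIL_RE.sub(lambda m: pseudonym(m.group()), text)
    text = NHS_NUMBER_RE.sub(lambda m: pseudonym(m.group()), text)
    return text

note = "Patient jane.doe@example.com (NHS 943 476 5919) reports chest pain."
print(pseudonymize(note))  # identifiers replaced with PSEUDO_... tokens
```

Keep the key and any re-identification table inside your own environment; the provider only ever sees the tokens.
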
Your Responsibilities Under the EU AI Act

Using Anthropic for a medtech application does not automatically make you compliant. Because AI systems used in healthcare are typically classified as high-risk under the EU AI Act, you have additional obligations as the deployer (and, if you place the system on the market under your own name, as its provider):

  • Risk Management: You must establish a robust risk management system for your AI application.
  • Human Oversight: Critical decisions affecting patients must have a human in the loop to review and override the AI's output.
  • Conformity Assessment: High-risk AI systems require a conformity assessment before being placed on the market, often under MDR/IVDR regulations.
  • Data Governance & DPIA: You must conduct a Data Protection Impact Assessment (DPIA) for processing sensitive health data.
  • Transparency and Record-Keeping: You must maintain detailed logs of the AI's operation and be transparent with users about how their data is being processed; a minimal logging and review-gate sketch follows this list.
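
To make the oversight and record-keeping points concrete, here is a minimal sketch that writes one audit record per model call and gates any downstream use on an explicit human decision. The JSON-lines file, the field names, and the review function are illustrative assumptions, not anything mandated by the Act.

```python
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")  # assumption: an append-only store in production

def log_interaction(pseudonymized_input: str, model_output: str, model: str) -> str:
    """Append one audit record per model call to support record-keeping duties."""
    record_id = str(uuid.uuid4())
    record = {
        "id": record_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model,
        "input": pseudonymized_input,  # already de-identified upstream
        "output": model_output,
        "human_reviewed": False,       # flipped once a clinician signs off
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record_id

def release_output(record_id: str, reviewer: str, approved: bool) -> bool:
    """Gate: nothing reaches the patient record without an explicit human decision."""
    # In a real system this would update the audit store and trigger the
    # downstream write only on approval; here it simply reports the decision.
    print(f"record {record_id}: reviewed by {reviewer}, approved={approved}")
    return approved
```

Keeping the log append-only and reviewable supports both the record-keeping expectations above and your own incident response.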

Official Links & Documents

For your records, you can keep a local copy of the DPA.

Download a Summary of Anthropic's DPA (HTML)