Privacy Policy

Last updated: 9 March 2026

1. Background & Scope

1.1 This Privacy Policy describes how Ergodic Limited ("we", "us", or "our") handles personal data in the context of our agent platform, Cardamon (the "Platform").

1.2 The Agent Context. Unlike conventional static software, our Platform uses autonomous and semi-autonomous AI agents ("Agents"). This policy specifically addresses the data flows unique to generative AI, including inputs (prompts), outputs (Agent actions and responses), and model optimisation.

2. Data Controller vs. Processor

2.1 Ergodic as Controller. We are the controller of personal data for website visitors, direct marketing recipients, and account administrative data (e.g., billing contact information).

2.2 Ergodic as Processor. For most "Enterprise Use" cases, where you (the Customer) provide data or prompts to be processed by an Agent, you are the Controller and we are the Processor. Our processing in this capacity is governed by our Data Processing Addendum (DPA).

3. The Information We Collect (AI-Specific)

In addition to standard Identity and Contact Data, the agentic nature of our Platform requires us to collect:

  • Prompt/Input Data: Any personal data contained within the text instructions you provide to an Agent.
  • Agent Interaction Logs: A granular audit trail of the "thoughts," tool-calls, and actions taken by the Agent to fulfill a request.
  • Training Metadata: De-identified telemetry on how Agents perform (e.g., success rates, latency, and "hallucination" flags) to improve our underlying architecture.
  • Integration Data: If you connect an Agent to third-party tools (e.g., Slack, GitHub, Jira), we collect the tokens and permissions necessary for the Agent to act on your behalf.

4. How We Use Your Information (The Lawful Basis)

We rely on Legitimate Interests (Art. 6(1)(f) UK GDPR) for the following AI-specific purposes:

  • Platform Safety: To detect and prevent "prompt injection," "jailbreaking," or the use of Agents for illicit purposes.
  • Agent Debugging: To analyze "failed" Agent trajectories to improve the reliability of the service.
  • Model Optimisation: We may use anonymised and aggregated data to fine-tune our proprietary orchestration layers.

Note: We do not use Customer "Input Data" to train foundational models for other customers unless you explicitly opt in or use a public tier of our service.

5. Automated Decision-Making (ADM)

5.1 Our Platform allows for the creation of Agents that can perform tasks autonomously.

5.2 Customer Responsibility. If you deploy an Agent to make decisions that have legal or similarly significant effects on individuals (e.g., automated recruitment or credit scoring), you acknowledge that you are responsible for ensuring compliance with Article 22 of the UK GDPR, including providing a right to human intervention.

6. Who We Share Your Information With (The AI Ecosystem)

To provide the Platform, we share data with:

  • LLM Providers: We use "Sub-processors" such as Anthropic, OpenAI, or Google (via Vertex AI) to process prompts. We prioritise providers who offer "Zero Data Retention" (ZDR) or enterprise-grade privacy tiers.
  • Infrastructure Partners: Secure cloud hosting (e.g., AWS/Azure) located in the UK/EEA.
  • Tool Integrations: If you instruct an Agent to "post to Slack," that data is shared with Slack per your instruction.

7. Data Location & Transfers

7.1 While we aim to keep personal data in the UK, the distributed nature of high-compute AI infrastructure means some processing may occur in the US or in other countries.

7.2 Where transfers occur, we utilize the UK International Data Transfer Agreement (IDTA) or the UK Addendum to the EU SCCs to ensure "essential equivalence" of protection.

8. Data Retention

8.1 Prompt/Output Data. Retained for the duration of your subscription plus a "buffer" period of 90 days for disaster recovery, unless a shorter period is requested via API/Settings.

8.2 Security Logs. Retained for 12 months to satisfy audit requirements and safety forensics.

9. Your Rights & the "Right to Erasure" in AI

9.1 You have the rights granted under the UK GDPR, including access, rectification (correction), erasure (deletion), restriction of processing, objection, and data portability.

9.2 Correction in AI. Please note that due to the probabilistic nature of LLMs, we cannot "correct" a specific hallucination within a generated output, but we can delete the record containing the inaccuracy.

9.3 Opt-Out. You have the right to object to your data being used for "Product Improvement" purposes. You can exercise this via the Privacy Settings in your dashboard.

Contact

If you have questions about this Privacy Policy, please contact us at privacy@cardamon.ai.