
AI Statement

Last updated: February 2026

Our Position on AI

Senden takes a deliberate and conservative approach to artificial intelligence. We believe that using AI-generated code in production systems introduces unpredictable security risks: subtle logic errors, unvetted dependencies, and hard-to-audit patterns that can create vulnerabilities which are difficult to detect or trace.

To protect our users, AI is never used to write production code at Senden. Every line of code that runs on our servers and clients is written and reviewed by humans.

Why We Avoid AI in Production Code

  • AI-generated code can introduce subtle vulnerabilities that pass surface-level review but fail under adversarial conditions
  • AI models are often trained on insecure or outdated codebases, so the patterns they produce can propagate known weaknesses
  • Human-authored code is easier to reason about, audit, and hold accountable to a security standard
  • We cannot accept security risks we cannot fully audit, and AI-generated code is inherently harder to audit
  • For more background, see this blog post: https://epilogue.team/blog/vibecoding-and-the-future-of-code-security

Exceptions

There are two narrowly scoped contexts in which AI is used at Senden. In both cases, no data is sent to any third party and no external AI service or cloud API is involved.

CSAM Detection

AI is used to detect child sexual abuse material (CSAM) in images shared through the Service. This is a legal and ethical obligation we take seriously. The detection runs entirely on our own infrastructure — no image data, hashes, or signals ever leave our servers.

Translations

AI is used to provide in-app translation of messages. Like CSAM detection, this runs entirely on our own infrastructure. No message content is sent to any external translation service or third-party model provider.

© 2026 Senden