Senden takes a deliberate and conservative approach to artificial intelligence. We believe that using AI-generated code in production systems introduces unpredictable security risks: subtle logic errors, unvetted dependencies, and hard-to-audit patterns that can create vulnerabilities that are difficult to detect or trace.
To protect our users, AI is never used to write production code at Senden. Every line of code that runs on our servers and clients is written and reviewed by humans.
AI is used at Senden in only two narrowly scoped contexts. In both cases, no data is sent to any third party, and no external AI service or cloud API is involved.
CSAM Detection
AI is used to detect child sexual abuse material (CSAM) in images shared through the Service. This is a legal and ethical obligation we take seriously. The detection runs entirely on our own infrastructure — no image data, hashes, or signals ever leave our servers.
Translations
AI is used to provide in-app translation of messages. Like CSAM detection, this runs entirely on our own infrastructure. No message content is sent to any external translation service or third-party model provider.