Meet Redakt: Practical GDPR Compliance for AI Teams
TL;DR
Telling employees "don't enter personal data into AI tools" doesn't work without giving them a way to comply. Redakt is an open-source PII anonymizer built on Microsoft Presidio that sits between your employees and their AI tools. Paste text in, get an anonymized version with placeholders. Paste the AI's response back, get the original values restored. The server never stores PII. It runs on your infrastructure, inside your network. No additional data processing agreements needed. The tool is free, the code is open.
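The placeholder round-trip described above is the heart of the idea. Redakt's real detection runs on Microsoft Presidio's recognizers; as a purely illustrative sketch (none of these function names are Redakt's actual API), here is a self-contained version that detects only email addresses with a toy regex, swaps them for numbered placeholders, and restores them afterward:

```python
import re

# Toy stand-in for Presidio's ML-based PII detection: emails only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text):
    """Replace each detected value with a numbered placeholder.

    Returns the anonymized text plus the mapping needed to restore it.
    The mapping never leaves your side of the boundary."""
    mapping = {}

    def _sub(match):
        placeholder = f"<EMAIL_{len(mapping) + 1}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    return EMAIL_RE.sub(_sub, text), mapping

def restore(text, mapping):
    """Swap placeholders in the AI's response back to the originals."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

msg = "Follow up with anna@example.com about order 4711."
safe, mapping = anonymize(msg)
print(safe)                    # Follow up with <EMAIL_1> about order 4711.
print(restore(safe, mapping))  # Follow up with anna@example.com about order 4711.
```

The design point is that the mapping dict is the only place the real values live, which is why the server in the middle never has to store PII.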
Earlier this month I wrote about shadow AI and the compliance gap: how employees using unapproved AI tools with personal data are creating quiet GDPR liability across Europe, and how the gap between what the law requires and what companies actually do grows wider every day.
The response made one thing clear: people know they have a problem, and the current measures feel like playing whack-a-mole.
The advice that post ended with was honest but incomplete. Department-specific guidelines, approved tool lists, clearer communication — all necessary, all insufficient. Because even with perfect policies, you still have the same fundamental problem: an employee sitting in front of ChatGPT with a paragraph of text containing a customer's name, email, and order history, and no practical way to strip it out before hitting enter.
So I built something.
A Tool, Not Another Policy
A sales rep pastes a customer's name, email, and order history into ChatGPT's free tier, using a personal account, to draft a follow-up email. Thirty seconds later they have a polished result. They send it and never think about it again. That thirty seconds just created a potential data breach under GDPR Article 4(12): personal data transmitted to a third party without a DPA, without a legal basis, and without the data subject's knowledge. The 72-hour notification clock starts ticking once the company finds out.
The sales rep isn't reckless. They're doing what every productivity blog tells them to do. I can't blame them. The pressure to adopt AI is real. The problem isn't motivation. It's that "just anonymize it first" is advice without a mechanism.
What does "anonymize it" even mean to someone who isn't a data protection specialist? Manually find every name and replace it with "Person A"? You can't solve a behavioral problem with a policy document. You solve it with a tool.