When anonymizing data for LLMs, is replacing a name with XXXXX enough?
A great post by Franklin Cardenoso Fernandez argues that we can do better. While simple masking hides data, it often destroys the context that models need to perform well.
A more robust method is contextual anonymization, where PII is replaced with meaningful labels like [NAME] or [ADDRESS]. This protects privacy while preserving the data's structural integrity.
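As a rough sketch of the difference (the regex patterns and label names below are hypothetical illustrations, not the Ai4Privacy schema — real systems use trained NER models rather than hand-written patterns):

```python
import re

# Hypothetical PII patterns for illustration only.
PATTERNS = {
    "[NAME]": re.compile(r"\bJohn Smith\b"),
    "[ADDRESS]": re.compile(r"\b42 Baker Street\b"),
}

def mask_simple(text: str) -> str:
    """Naive masking: hides the PII but destroys the context."""
    for pattern in PATTERNS.values():
        text = pattern.sub("XXXXX", text)
    return text

def anonymize_contextual(text: str) -> str:
    """Contextual anonymization: replaces PII with a meaningful label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

msg = "John Smith lives at 42 Baker Street."
print(mask_simple(msg))           # XXXXX lives at XXXXX.
print(anonymize_contextual(msg))  # [NAME] lives at [ADDRESS].
```

The second output still tells a downstream model *what kind* of entity was removed, which is exactly the structural signal that plain `XXXXX` masking throws away.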
We were pleased to see our Ai4Privacy pii-masking-200k dataset featured in the article as a prime example of this best practice. Our dataset is designed to help developers implement this superior form of anonymization by providing tens of thousands of clear, labeled examples.
By enabling models to be trained on data that is both private and context-rich, we can build AI that is both smarter and safer. This is a core part of our mission.
What's your team's preferred method for data anonymization? Let's discuss best practices.
🛡️ At Ai4Privacy, our goal is to empower researchers to build a safer AI ecosystem. Today, we're highlighting crucial research that does just that by exposing a new vulnerability.
The paper "Forget to Flourish" details a new model poisoning technique. It's a reminder that as we fine-tune LLMs, our anonymization and privacy strategies must evolve to counter increasingly sophisticated threats.
We're proud that the Ai4Privacy dataset was instrumental in this study. It served two key purposes:
Provided a Realistic Testbed: It gave the researchers access to a diverse set of synthetic and realistic PII samples in a safe, controlled environment.
Enabled Impactful Benchmarking: It allowed them to measure the actual effectiveness of their data extraction attack, proving it could compromise specific, high-value information.
This work reinforces our belief that progress in AI security is a community effort. By providing robust tools for benchmarking, we can collectively identify weaknesses and build stronger, more resilient systems. A huge congratulations to the authors on this important contribution.
just submitted my plugin idea to the G-Assist Plugin Hackathon by @nvidia. Check it out, it's a great way to use a local SLM on a Windows machine to easily and locally get things done! https://github.com/NVIDIA/G-Assist

In data privacy, 92% accuracy is not an A-grade. Privacy AI needs to be better.
That's the stark takeaway from a recent benchmark by Diego Mouriño (Making Science), who put today's top PII detection methods to the test on call center transcripts using the Ai4Privacy dataset.
They pitted cutting-edge LLMs (like GPT-4 & Gemini) against traditional systems (like Cloud DLPs). The results show that our trust in these tools might be misplaced.
📊 The Hard Numbers:
Even top-tier LLMs peaked at a reported 92% accuracy, leaving a potentially dangerous 8% gap where your customers' data can leak. They particularly struggled with basics like last names and street addresses.
The old guard? Traditional rule-based systems reportedly achieved a shocking 50% accuracy. A coin toss with your customers' privacy.
This tells us that for privacy tasks, off-the-shelf accuracy is a vanity metric. The real metric is the cost of a single failure—one leaked name, one exposed address.
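A toy calculation (with made-up counts, not Diego's actual numbers) shows how an aggregate score can hide exactly this kind of entity-level failure:

```python
# Illustrative counts only: overall accuracy looks respectable while
# one entity type (last names) leaks 40% of the time.
detected = {"email": 95, "phone": 93, "last_name": 60, "street_address": 68}
total = {"email": 100, "phone": 100, "last_name": 100, "street_address": 100}

overall = sum(detected.values()) / sum(total.values())   # 316 / 400 = 0.79
per_entity = {k: detected[k] / total[k] for k in total}  # last_name -> 0.60

print(f"overall: {overall:.2f}, last_name: {per_entity['last_name']:.2f}")
```

This is why per-entity breakdowns matter more than a single headline accuracy figure.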
While no tool is perfect, some are better than others. Diego’s full analysis breaks down which models offer the best cost-to-accuracy balance in this flawed landscape. It's a must-read for anyone serious about building trustworthy AI.
So at every bio/med/chem meeting i go to, i always get the same questions: "why are you sharing a gdrive link with me for this?" and "do you have any plans to publish your model weights and datasets on huggingface?" and finally i got a good answer today which explains everything:
basically there is some kind of government censorship on this (usa, but i'm sure others too) and they are told they are not allowed, as it is considered a "data leak", which is illegal!!!!
this is terrible! but the good news is that we can do something about it!
PII-Masking-1M Final Day (7/7)! 🚀 Today, we unveil 5 NEW Enterprise PII (E-PII) Dataset PREVIEWS!
Standard PII tools often miss sensitive *business* data. That's why we built E-PII previews for the data that powers your operations and compliance needs.
Get a first look (representing 100,000 samples each!) into datasets designed for real-world enterprise security across these categories:
🏥 **PHI Preview**: For Healthcare Data
💳 **PFI Preview**: For Financial Data
🏢 **PWI Preview**: For Workplace Data
💻 **PDI Preview**: For Digital Activity Data
📍 **PLI Preview**: For Location Data
That wraps up our #PIIMasking1M 7-day announcement series! HUGE thanks for following along and for your engagement. Explore ALL our releases, including these E-PII previews, in the Ai4Privacy Hugging Face Collection & show some love ❤️ if you find them useful! 🔗 Visit the Collection: https://huggingface.co/ai4privacy
its based on orpheus - but really the model is irrelevant as i focus mostly on data augmentation / prep / pipelining - its just the way to show progress
should be able to express fine even in an sfw context
probably the last release for a few weeks as i go back to the data pipeline and improve things there ..
in the meantime, please do test and report problems or enjoyable generations you find - we have a growing discord community and i love to see what you get out of this early release!
(a small colab is provided on the model page if you dont have the gpu to run this yourself)
dataset is a copy of an existing one, i just added emotional tags across 1200 samples - should be good enough to test whether emotional tags stick in your finetune
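for anyone curious what that tagging step might look like, here's a minimal sketch (the `<emotion>` prefix format is my assumption for illustration, not necessarily the dataset's actual syntax):

```python
# Hypothetical inline emotion tag, prepended to each transcript so a
# finetune can learn to condition speech style on the tag.
def add_emotion_tag(text: str, emotion: str) -> str:
    """Prefix a transcript with an inline emotion tag like <happy>."""
    return f"<{emotion}> {text}"

sample = "i cant believe it actually worked"
print(add_emotion_tag(sample, "excited"))  # <excited> i cant believe it actually worked
```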