One of the hardest challenges in AI safety is finding the right balance: how do we protect people from harm without undermining their agency? This tension is especially visible in conversational systems, where safeguards can sometimes feel more paternalistic than supportive.
In my latest piece for Hugging Face, I argue that open source and community-driven approaches offer a promising (though not exclusive) way forward.
✨ Transparency can make safety mechanisms into learning opportunities.
✨ Collaboration with diverse communities makes safeguards more relevant across contexts.
✨ Iteration in the open lets protections evolve rather than freeze into rigid, one-size-fits-all rules.
Of course, this isn’t a silver bullet. Top-down safety measures will still be necessary in some cases. But if we only rely on corporate control, we risk building systems that are safe at the expense of trust and autonomy.
IBM just released a small Swiss Army knife for document models: granite-docling-258M on Hugging Face 🔥
> not only a document converter, it can also do document question answering and understands multiple languages 🤯
> best part: released under the Apache 2.0 license 👏 use it in your commercial projects!
> it supports transformers, vLLM and MLX from the get-go! 🤗
> built on SigLIP2 & granite-165M
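Since it ships with transformers support, trying it out takes only a few lines. Here's a minimal sketch assuming the usual vision-to-sequence API; the prompt string and preprocessing details are assumptions on my part, so check the model card for the canonical snippet:

```python
# Minimal sketch: document conversion with granite-docling-258M via transformers.
# The "Convert this page to docling." prompt is an assumption; see the model card.
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

model_id = "ibm-granite/granite-docling-258M"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("page.png")  # a scanned or rendered document page
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this page to docling."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=1024)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```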
🤖 As AI-generated content spreads through movies, TV, and across the web, there's one simple, low-hanging fruit 🍇 to help people know what's real: visible watermarks. With the Gradio team, I've made it trivially easy to add this disclosure to images, video, and chatbot text. See how: https://huggingface.co/blog/watermarking-with-gradio Thanks in particular to @abidlabs and Yuvraj Sharma for the code collaboration.
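The blog post covers Gradio's built-in support; purely to illustrate the idea, here's a generic sketch that stamps a visible disclosure label onto an image with PIL before a Gradio app returns it. The label text, placement, and the stand-in generator are arbitrary choices, not the blog's exact API:

```python
# Generic sketch of a visible watermark: stamp a disclosure label onto an
# image before a Gradio app returns it. Text, placement, and font are
# illustrative choices, not the blog's exact API.
import gradio as gr
from PIL import Image, ImageDraw

def generate(prompt: str) -> Image.Image:
    img = Image.new("RGB", (512, 512), "navy")  # stand-in for a real image generator
    draw = ImageDraw.Draw(img)
    # Stamp the disclosure in the bottom-left corner with the default font.
    draw.text((10, img.height - 20), "AI-generated", fill="white")
    return img

gr.Interface(fn=generate, inputs="text", outputs="image").launch()
```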
Tremendous quality-of-life upgrade on the Hugging Face Hub - we now have emoji auto-complete 🤗 🥳 👏 🙌 🎉
Get ready for lots more very serious analysis on a whole range of topics from yours truly now that we have unlocked this full range of expression 😄 🤔 🗣 🙊
Did you know you can use AI-Toolkit by Ostris (https://github.com/ostris/ai-toolkit) to train AI image and video models directly inside a Hugging Face Space?
The benefit: you get a nice UI, so you don't have to deal with JSON files and CLI shenanigans!
I have created a ready-to-use template you can deploy to your own HF Space to train generative models in a few clicks.
All you have to do is duplicate my Space into your own private Space by going here: jbilcke-hf/ai-toolkit
The Space requires a good GPU and, most importantly, persistent storage, since everything is stored in /data.
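If you'd rather do the duplication from code, huggingface_hub can request the GPU and the persistent storage in one call. A minimal sketch; the hardware and storage tiers below are illustrative, so pick whatever your training run actually needs:

```python
# Sketch: duplicate the Space programmatically with huggingface_hub,
# requesting GPU hardware and persistent storage (mounted at /data) up front.
# The "a10g-large" / "large" tiers are illustrative choices.
from huggingface_hub import duplicate_space

repo = duplicate_space(
    "jbilcke-hf/ai-toolkit",  # source Space to copy
    private=True,             # keep your training Space private
    hardware="a10g-large",    # GPU tier; adjust to your needs and budget
    storage="large",          # persistent storage backing /data
)
print(f"Your Space: {repo.url}")
```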