
Journalists on Hugging Face
community
AI & ML interests
Democratizing access to useful AI tools and resources for journalists
Recent Activity
JournalistsonHF's activity
aiscraper: #4 opened about 1 month ago by cyberconnectbe

Post
4187
Ever notice how some AI assistants feel like tools while others feel like companions? Turns out, it's not always about fancy tech upgrades; sometimes it's just clever design.
Our latest blog post at Hugging Face dives into how minimal design choices can completely transform how users experience AI. We've seen our community turn the same base models into everything from swimming coaches to interview prep specialists with surprisingly small tweaks.
The most fascinating part? When we tested identical models with different "personalities" in our Inference Playground, the results were mind-blowing.
Want to experiment yourself? Our Inference Playground lets anyone (yes, even non-coders!) test these differences in real-time. You can:
- Compare multiple models side-by-side
- Customize system prompts
- Adjust parameters like temperature
- Test multi-turn conversations
It's fascinating how a few lines of instruction text can transform the same AI from strictly professional to seemingly caring and personal, without changing a single line of code in the model itself.
Read more here: https://huggingface.co/blog/giadap/ai-personas
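The experiment the post describes, running the same model with different system prompts, can be sketched in a few lines. This is a hypothetical sketch: the persona prompts, the question, and the commented-out model id are illustrative placeholders, not taken from the blog post.

```python
# Same model, different "personality": only the system prompt changes.

def build_persona_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Pair one user question with a given system prompt."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

personas = {
    "professional": "You are a terse, strictly formal assistant.",
    "companion": "You are a warm, encouraging swimming coach.",
}
question = "How do I improve my freestyle breathing?"

# One request per persona, all aimed at the same underlying model.
requests = {name: build_persona_messages(prompt, question)
            for name, prompt in personas.items()}

# Each message list could then be sent to one model, e.g. via
# huggingface_hub.InferenceClient().chat_completion(
#     messages=..., model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder id
#     temperature=0.7)
# The only difference between the two runs is the system prompt.
```

This mirrors what the Inference Playground does in the UI: hold the model and parameters fixed, vary the instruction text, and compare the replies side by side.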

evijit authored a paper 27 days ago
Post
1777
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~
Both of these themes have been updated to fix some of the long-standing inconsistencies dating back to the transition to Gradio v5. Textboxes are no longer bright green, and in-line code is readable now! Both themes are now visually identical across versions.
If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
Post
2521
New launch: See the energy use of chatbot conversations, in real time. =)
jdelavande/chat-ui-energy
Great work from @JulienDelavande !

evijit authored 2 papers about 1 month ago
Post
1667
🤗 Just published: "Consent by Design" - exploring how we're building better consent mechanisms across the HF ecosystem!
Our research shows open AI development enables:
- Community-driven ethical standards
- Transparent accountability
- Context-specific implementations
- Privacy as core infrastructure
Check out our Space Privacy Analyzer tool that automatically generates privacy summaries of applications!
Effective consent isn't about perfect policies; it's about architectures that empower users while enabling innovation. 🚀
Read more: https://huggingface.co/blog/giadap/consent-by-design
Post
3291
Today in Privacy & AI Tooling - introducing a nifty new tool to examine where data goes in open-source apps on 🤗
HF Spaces have tons (100Ks!) of cool demos leveraging or examining AI systems - and because most of them are OSS we can see exactly how they handle user data 📚🔍
That requires actually reading the code though, which isn't always easy or quick! Good news: code LMs have gotten pretty good at automatic review, so we can offload some of the work - here I'm using Qwen/Qwen2.5-Coder-32B-Instruct to generate reports and it works pretty OK 🙌
The app works in four stages:
1. Download all code files
2. Use the Code LM to generate a detailed report pointing to code where data is transferred/(AI-)processed (screen 1)
3. Summarize the app's main functionality and data journeys (screen 2)
4. Build a Privacy TLDR with those inputs
It comes with a bunch of pre-reviewed apps/Spaces, great to see how many process data locally or through (private) HF endpoints 🤗
Note that this is a POC, lots of exciting work to do to make it more robust, so:
- try it: yjernite/space-privacy
- reach out to collab: yjernite/space-privacy
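The four stages above can be sketched as a small pipeline skeleton. This is an illustrative sketch only: the helper names, the extension list, and the prompt wording are assumptions, not the Space's actual code.

```python
# Hypothetical skeleton of the privacy-review pipeline described above.

CODE_EXTENSIONS = (".py", ".js", ".ts", ".json", ".yaml", ".toml")

def select_code_files(repo_files: list[str]) -> list[str]:
    """Stage 1: keep only the files worth sending to the code LM."""
    return [f for f in repo_files if f.endswith(CODE_EXTENSIONS)]

def build_report_prompt(filename: str, source: str) -> str:
    """Stage 2: ask the code LM where user data is transferred or processed."""
    return (
        f"Review the file {filename} and list every place where user data "
        f"is collected, transferred, or processed by an AI model:\n\n{source}"
    )

# Stages 3 and 4 would feed the per-file reports back into the same model
# (e.g. Qwen/Qwen2.5-Coder-32B-Instruct) to summarize the app's data
# journeys and condense them into the final Privacy TLDR.

files = ["app.py", "README.md", "requirements.txt", "utils/helpers.js"]
print(select_code_files(files))  # ['app.py', 'utils/helpers.js']
```

The filtering step matters in practice: sending only code files (not READMEs or lockfiles) keeps the review focused and the LM context small.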

BrigitteTousi posted an update about 2 months ago
Post
3130
AI agents are transforming how we interact with technology, but how sustainable are they? 🌍
Design choices — like model size and structure — can massively impact energy use and cost. ⚡💰 The key takeaway: smaller, task-specific models can be far more efficient than large, general-purpose ones.
🔑 Open-source models offer greater transparency, allowing us to track energy consumption and make more informed decisions on deployment. 🌱 Open-source = more efficient, eco-friendly, and accountable AI.
Read our latest, led by @sasha with assists from myself + @yjernite 🤗
https://huggingface.co/blog/sasha/ai-agent-sustainability

fdaudens updated a Space about 2 months ago

giadap authored a paper about 2 months ago

fdaudens published a Space 2 months ago
Post
2353
We've all become experts at clicking "I agree" without a second thought. In my latest blog post, I explore why these traditional consent models are increasingly problematic in the age of generative AI.
I found three fundamental challenges:
- Scope problem: how can you know what you're agreeing to when AI could use your data in different ways?
- Temporality problem: once an AI system learns from your data, good luck trying to make it "unlearn" it.
- Autonomy trap: the data you share today could create systems that pigeonhole you tomorrow.
Individual users shouldn't bear all the responsibility, while big tech holds all the cards. We need better approaches to level the playing field, from collective advocacy and stronger technological safeguards to establishing "data fiduciaries" with a legal duty to protect our digital interests.
Available here: https://huggingface.co/blog/giadap/beyond-consent

evijit authored a paper 2 months ago

meg authored a paper 2 months ago

fdaudens updated a Space 2 months ago

louisbrulenaudet posted an update 2 months ago
Post
1072
I’ve just released logfire-callback on PyPI, designed to facilitate monitoring of Hugging Face Transformer training loops using Pydantic Logfire 🤗
The callback will automatically log training start with configuration parameters, periodic metrics and training completion ⏱️
Install the package using pip:
pip install logfire-callback
First, ensure you have a Logfire API token and set it as an environment variable:
export LOGFIRE_TOKEN=your_logfire_token
Then use the callback in your training code:
from transformers import Trainer, TrainingArguments
from logfire_callback import LogfireCallback

# Initialize your model, dataset, etc.
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    # ... other training arguments
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    callbacks=[LogfireCallback()]  # Add the Logfire callback here
)

trainer.train()
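The behavior the post describes (logging training start with its configuration, periodic metrics, and completion) can be sketched with plain Python. This is a dependency-free illustration of the callback pattern, not the actual logfire-callback implementation, which subclasses transformers.TrainerCallback and sends events to Pydantic Logfire.

```python
# Illustrative sketch of a Logfire-style training callback.
# Hook names mirror the Trainer callback lifecycle; log_fn stands in
# for the real package's call into logfire.

class SketchLoggingCallback:
    def __init__(self, log_fn=print):
        self.log = log_fn  # the real callback would log via logfire instead

    def on_train_begin(self, args):
        # Log the run configuration once, at startup.
        self.log(f"training started: epochs={args['num_train_epochs']}")

    def on_log(self, metrics):
        # Log periodic metrics (loss, step, etc.) as training progresses.
        self.log(f"metrics: {metrics}")

    def on_train_end(self):
        self.log("training finished")

events = []
cb = SketchLoggingCallback(log_fn=events.append)
cb.on_train_begin({"num_train_epochs": 3})
cb.on_log({"loss": 0.42, "step": 100})
cb.on_train_end()
```

The Trainer invokes these hooks automatically at the matching points in the training loop, which is why adding the callback requires no other code changes.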
If you have any feedback, please reach out at @louisbrulenaudet

fdaudens
updated
a
collection
2 months ago