AI & ML interests

Breaking the opacity of language models for legal professionals 📖 Join us by smashing the button at top right 🤗

HFforLegal's activity

fdaudens 
posted an update about 9 hours ago
🎵 Dream come true for content creators! TIGER AI can extract voice, effects & music from ANY audio file 🤯
This lightweight model uses frequency band-split technology to separate speech like magic. Kudos to @fffiloni for the amazing demo! fffiloni/TIGER-audio-extraction
AdinaY 
posted an update about 12 hours ago
🔥 New benchmark & dataset for Subject-to-Video generation

OpenS2V-Nexus by Peking University

✨ Fine-grained evaluation for subject consistency
BestWishYsh/OpenS2V-Eval
✨ 5M-scale dataset:
BestWishYsh/OpenS2V-5M
✨ New metrics – automatic scores for identity, realism, and text match
AdinaY 
posted an update about 13 hours ago
HunyuanVideo-Avatar 🔥 another image-to-video model by Tencent Hunyuan

tencent/HunyuanVideo-Avatar

✨ Emotion-controlled, high-dynamic avatar videos
✨ Multi-character support with separate audio control
✨ Works with any style (cartoon, 3D, real face) while keeping identity consistent
AdinaY 
posted an update 1 day ago
fdaudens 
posted an update 2 days ago
Just completed the AI Agents course and wow, that capstone project really makes you understand how to build agents that can handle real-world complexity!

The final project uses the GAIA dataset - your agent has to solve tasks like analyzing Excel files, processing audio recordings, answering questions about YouTube videos, and diving into research papers. These aren't toy examples; this is the messy, multimodal stuff agents need to handle in practice.

Whether you’re just getting started with agents or want to go deeper with tools like LangChain, LlamaIndex, and SmolAgents, this course has tons of useful stuff. A few key insights:
- Code agents are incredibly versatile once you get the architecture right
- The sweet spot is finding the right balance of guidance vs autonomy for each use case
- Once the logic clicks, the possibilities really are endless - it's like letting LLMs break free from the chatbox

The course is free and the certification deadline is July 1st, 2025.

The Hugging Face team built something special here. If you're tired of AI that impresses in demos but fails in practice, this is your path to building agents that actually deliver. https://huggingface.co/learn/agents-course/unit0/introduction

Best part? There's the MCP course next!
AdinaY 
posted an update 2 days ago
Orsta 🔥 vision language models trained with V-Triune, a unified reinforcement learning system by MiniMax AI

One-RL-to-See-Them-All/one-rl-to-see-them-all-6833d27abce23898b2f9815a

✨ 7B & 32B with MIT license
✨ Masters 8 visual tasks: math, science QA, charts, puzzles, object detection, grounding, OCR, and counting
✨ Uses Dynamic IoU rewards for better visual understanding
✨ Strong performance in visual reasoning and perception
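
The post doesn't spell out how the Dynamic IoU reward works, so here is a minimal, self-contained Python sketch of the general idea: an IoU-based reward whose acceptance threshold tightens as training progresses. The function names and the linear threshold schedule are illustrative assumptions, not the actual V-Triune implementation.

```python
# Sketch of an IoU-based reward with a threshold that tightens over training
# ("dynamic IoU"). Illustrative only; not the V-Triune implementation.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def dynamic_iou_reward(pred_box, gt_box, step, total_steps,
                       start_thresh=0.5, end_thresh=0.95):
    """Reward 1.0 when IoU clears a threshold that rises as training progresses."""
    progress = min(step / total_steps, 1.0)
    thresh = start_thresh + (end_thresh - start_thresh) * progress
    return 1.0 if iou(pred_box, gt_box) >= thresh else 0.0

# Early in training a loose match earns the reward; later it does not.
print(dynamic_iou_reward((0, 0, 10, 10), (1, 1, 11, 11), step=100, total_steps=10_000))   # 1.0
print(dynamic_iou_reward((0, 0, 10, 10), (1, 1, 11, 11), step=9_900, total_steps=10_000)) # 0.0
```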
AdinaY 
posted an update 2 days ago
Tonic 
posted an update 3 days ago
🙋🏻‍♂️ Hey there folks,

Yesterday the world's first "Learn to Vibe Code" application was released.

Now that vibe coding is the mainstream paradigm, the first educational app is here to support it.

You can try it out already:

https://vibe.takara.ai

and of course it's entirely open source, so I already made my issue and feature branch :-) 🚀
clem 
posted an update 3 days ago
It's just become easier to share your apps on the biggest AI app store (aka HF Spaces) for unlimited storage, more visibility, and community interactions.

Just pick a React, Svelte, or Vue template when you create your Space, or add app_build_command: npm run build and app_file: build/index.html to your README's YAML block.
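
Concretely, the README front matter for such a Space might look something like the sketch below. The two build fields are the ones named above; the surrounding metadata (title, emoji, sdk) is typical Space front matter shown for context and may differ for your app.

```yaml
---
title: My React App                 # typical Space metadata, shown for context
emoji: 🚀
sdk: static                         # static Space serving prebuilt files (assumed setup)
app_build_command: npm run build    # field named in the post
app_file: build/index.html          # field named in the post
---
```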

Or follow this link: https://huggingface.co/new-space?sdk=static

Let's build!
fdaudens 
posted an update 4 days ago
Two lines in your terminal and you have an AI agent running whatever model and tools you want 🤯

Just tried the new Tiny Agents in Python. Asked it which team won the Italian Serie A soccer league and to export the final table to CSV. Coolest thing is you can interact with the agent, guide it, and correct its mistakes.

The agent connected to web browsing tools, searched for Serie A standings, identified the champion, and generated a CSV export.

The setup:
pip install "huggingface_hub[mcp]>=0.32.0"
tiny-agents run


That's it. The MCP protocol handles all the tool integrations automatically - no custom APIs to write, no complex setups. Want file system access? It's already there. Need web browsing? Built in.

You can swap models, change inference providers, run local models, or add new tools just by editing a simple JSON config. You can also use Gradio Spaces as MCP servers! The entire agent is ~70 lines of Python - essentially a while loop that streams responses and executes tools. Everything is open-source. ❤️ Hugging Face
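
To make the "while loop that streams responses and executes tools" concrete, here is a heavily simplified, self-contained sketch of that pattern. query_model and TOOLS are stand-ins for the real inference client and MCP tool calls; this is not the actual Tiny Agents code.

```python
# Simplified sketch of the agent pattern described above: loop until the model
# stops asking for tools. The model call and the tool are stubs, not the real
# Tiny Agents / MCP machinery.
import json

def get_weather(city: str) -> str:          # stand-in tool
    return json.dumps({"city": city, "forecast": "sunny"})

TOOLS = {"get_weather": get_weather}

def query_model(messages):                  # stand-in for the inference client
    # A real implementation would stream a chat completion here.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather", "arguments": {"city": "Rome"}}}
    return {"content": "It looks sunny in Rome."}

def run_agent(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = query_model(messages)
        if "tool_call" in reply:            # model wants to use a tool
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})
            continue
        return reply["content"]             # final answer, loop ends

print(run_agent("What's the weather in Rome?"))
```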

Blog post: https://huggingface.co/blog/python-tiny-agents
fdaudens 
posted an update 5 days ago
Here’s what happens when a national institution builds its own digital intelligence: France’s Ministry of Culture just released conversations from 17K+ real users testing 30+ chatbots in French. Raw, diverse, and a goldmine for studying LLMs in the wild.

ministere-culture/comparia-conversations
clem 
posted an update 7 days ago
Playing with Veo3 this morning. Share your prompt if you want me to create videos for you (bonus points if they funnily reference HF/open-source). These videos are "a cat on the moon rapping 'I love Hugging Face'"!
AdinaY 
posted an update 8 days ago
ByteDance is absolutely cooking lately 🔥

BAGEL 🥯 a 7B-active-parameter open multimodal foundation model by the ByteDance Seed team.

ByteDance-Seed/BAGEL-7B-MoT

✨ Apache 2.0
✨ Outperforms top VLMs (Qwen2.5-VL & InternVL-2.5)
✨ Mixture-of-Transformer-Experts + dual encoders
✨ Trained on trillions of interleaved tokens
AdinaY 
posted an update 9 days ago
Dolphin 🔥 a multimodal document image parsing model from ByteDance, built on an analyze-then-parse paradigm.

ByteDance/Dolphin

✨ MIT licensed
✨ Handles text, tables, figures & formulas via:
- Reading-order layout analysis
- Parallel parsing with smart prompts

AdinaY 
posted an update 9 days ago
Index-AniSora 🎬 an open anime video model released by Bilibili

👉 https://huggingface.co/IndexTeam/Index-anisora

✨ Apache 2.0
✨ Supports many 2D styles: anime, manga, VTubers, and more
✨ Fine control over characters and actions with smart masking
AdinaY 
posted an update 10 days ago
Data quality is the new frontier for LLM performance.

Ultra-FineWeb 📊 a high-quality bilingual dataset released by OpenBMB

openbmb/Ultra-FineWeb

✨ MIT License
✨ 1T English + 120B Chinese tokens
✨ Efficient model-driven filtering
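
For anyone who wants to peek at the data, a corpus this size is easiest to sample in streaming mode. A minimal sketch with the datasets library is below; the "en" config name is an assumption for illustration, so check the dataset card for the actual configs and fields.

```python
# Stream a few samples from Ultra-FineWeb without downloading the full corpus.
# The "en" config name is an assumption; see the dataset card for real configs.
from datasets import load_dataset

ds = load_dataset("openbmb/Ultra-FineWeb", "en", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example)  # inspect the available fields
    if i >= 2:
        break
```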
AdinaY 
posted an update 12 days ago
fdaudens 
posted an update 14 days ago
Tried something new: an AI-generated podcast that breaks down the top research paper each day. Fully automated, now live on Spotify.

I built this prototype to help keep up with the rapid pace of AI developments and, hopefully, make cutting-edge research more accessible. I don’t know about you, but just listening to a conversation about a paper really helps the content sink in for me.

This build taught me a lot about full automation. If you’re into the technical weeds: Qwen3 runs on Inference to handle the script, Kokoro does the voice, and the whole thing gets published automatically thanks to the Hugging Face Jobs API and Gradio deployment.
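
For the curious, the automation described above boils down to a three-stage pipeline: write a script about the top paper, synthesize audio, publish on a schedule. The sketch below shows that shape with stub functions; the function names and wiring are placeholders, not the actual podcast-jobs code, which uses Qwen3 on Inference, Kokoro, the HF Jobs API, and a Gradio deployment.

```python
# Rough shape of the daily pipeline described above. All three steps are stubs;
# the real project wires them to Qwen3, Kokoro, and the HF Jobs API.

def write_script(paper_title: str, paper_abstract: str) -> str:
    # Stand-in for prompting an LLM to turn the paper into a dialogue script.
    return f"Host A: Today we discuss '{paper_title}'.\nHost B: {paper_abstract[:80]}..."

def synthesize_audio(script: str) -> bytes:
    # Stand-in for the text-to-speech step.
    return script.encode("utf-8")

def publish_episode(audio: bytes, title: str) -> str:
    # Stand-in for the scheduled publishing step.
    path = f"{title.replace(' ', '_')}.mp3"
    with open(path, "wb") as f:
        f.write(audio)
    return path

if __name__ == "__main__":
    script = write_script("Example Paper", "A short abstract used for illustration.")
    audio = synthesize_audio(script)
    print("Published:", publish_episode(audio, "Example Paper"))
```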

It’s not perfect yet — I’ll be monitoring for hallucinations and incoherence. The voice model still needs polish, but it’s a promising start. Would love to build this with the community — submit a PR or send feedback. It’s just a beta of an experimental idea!

Big kudos to @m-ric, whose Open NotebookLM this is based on, and to @nielsr for his terrific work making research papers more accessible.

- Podcast on Spotify: https://open.spotify.com/show/3PTucIW1w1GIkqTYm32ka7?si=c7a851f83e6d4331 (Apple Podcasts soon)
- Code: fdaudens/podcast-jobs
- Open NotebookLM: m-ric/open-notebooklm
- Also super helpful, @qgallouedec's tutorial on the HF Jobs API: qgallouedec/run-hello-world
AdinaY 
posted an update 14 days ago