AI & ML interests

Breaking the opacity of language models for legal professionals 📖 Join us by smashing the button at top right 🤗

Recent Activity

HFforLegal's activity

AdinaY 
posted an update about 2 hours ago
After yesterday's wave of reveals, here's what's going down today in the Chinese AI community 🔥

✨ Kuaishou unveiled Kling AI 2.0
https://klingai.com/global/

✨ MiniMax AI dropped their latest TTS model Speech-02
https://minimax.io/audio

✨ Tencent Hunyuan teased the upcoming open model - Hunyuan Portrait
HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation (2503.18860)

✨ ModelScope launched an MCP Square, with 1,500 MCPs already online
https://modelscope.cn/mcp

And it's only Tuesday🌞
AdinaY 
posted an update 1 day ago
🔥 Big day for the Chinese open source AI community: zh-ai-community

> Skywork AI:
Released 7B/32B reasoning models that excel in math & coding
Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

> Moonshot AI & Numina:
Dropped powerful 1.5B/7B formal math reasoning models
AI-MO/kimina-prover-preview-67fb536b883d60e7ca25d7f9

> Zhipu AI:
Launched 9B/32B reasoning models powering their first general AI agent - AutoGLM ✨
THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e

> DeepSeek :
Announced it will open-source its internal inference engine, the DeepSeek Inference Engine
https://github.com/deepseek-ai/open-infra-index/blob/main/OpenSourcing_DeepSeek_Inference_Engine/README.md

Can't wait for more exciting releases to come 🥳


AdinaY 
posted an update 1 day ago
🔥 New reasoning models from the Chinese community, by Skywork 天工-昆仑万维

Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

✨Skywork-OR1-Math-7B > Optimized for math reasoning
✨Skywork-OR1-7B-preview > Excels in math & coding
✨Skywork-OR1-32B-preview > Matches DeepSeek-R1 on math (AIME24/25) and coding (LiveCodeBench)

Released under the Apache 2.0 license 🥳
Final version coming in 2 weeks!
fdaudens 
posted an update 4 days ago
Want AI that truly understands your country's culture? Public institutions are sitting on the next AI revolution - and here's the practical guide to unlock it.

I've had fascinating conversations recently about sovereign AI, with people trying to solve this recurring question: "How do we build AI that truly understands our culture?"

This guide by @evijit and @yjernite brings lots of insights about this question. It's not just about throwing data at models. It's about partnering cultural expertise with tech infrastructure in ways we're just starting to figure out.

An example? The National Library of Norway already has 150+ AI models on Hugging Face. They're not just digitizing books - they're building AI that thinks in Norwegian, understands Norwegian values, and serves Norwegian citizens.

This is sovereign AI in practice: technology that understands your culture, values, and languages.

Especially loved the practical examples on how to do this:
- Real examples from museums, libraries, and government agencies
- How to convert complex documents (PDFs, PowerPoints) into ML-ready formats
- Code templates for processing public data
- Technical recipes for sharing datasets on open platforms

The stakes? Citizens' ability to leverage their collective digital intelligence.

The technology is ready. The infrastructure exists. The guide shows exactly how to use it. What's needed is your cultural expertise to shape these tools.

Check it out: https://huggingface.co/blog/evijit/public-org-data-ai

P.s.: Building cool projects in a public institution? Share them in the comments for others to learn from!
AdinaY 
posted an update 4 days ago
Shanghai AI Lab's OpenGVLab team just released InternVL3 🔥

OpenGVLab/internvl3-67f7f690be79c2fe9d74fe9d

✨ 1/2/8/9/14/38/78B with MIT license
✨ Stronger perception & reasoning vs InternVL 2.5
✨ Native Multimodal Pre-Training for even better language performance
fdaudens 
posted an update 5 days ago
Do chatbots lie about Céline Dion? We now have answers, not speculation.

Ai2 just released OLMoTrace and it's a game-changer for transparency. You can literally see where an AI's responses come from in its training data - in real time.

The demo shows results about Céline. So I tried it out myself! Watch what happens in the video.

For journalists, researchers studying hallucinations and anyone who needs to trust their AI, this is like getting X-ray vision into AI systems. When the model made claims, I could instantly verify them against original sources. When it hallucinated, I could see why.

You can finally 1) understand how LLMs actually work and 2) verify if what they're saying is true. No more blind trust.
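
Conceptually, the tool takes long verbatim spans from the model's response and looks them up in the training data (Ai2 does this at scale with an infini-gram index; the toy sketch below is just the core idea, not their implementation):

```python
def trace_spans(response: str, corpus: list[str], min_words: int = 5) -> list[tuple[str, int]]:
    """Return the longest word spans from `response` that appear verbatim
    in a training document, with the matching document's index."""
    words = response.split()
    for n in range(len(words), min_words - 1, -1):  # try longest spans first
        hits = [
            (" ".join(words[i : i + n]), doc_id)
            for i in range(len(words) - n + 1)
            for doc_id, doc in enumerate(corpus)
            if " ".join(words[i : i + n]) in doc
        ]
        if hits:
            return hits  # keep only the longest verbatim matches
    return []

corpus = ["Céline Dion was born in 1968 in Charlemagne, Quebec."]  # toy "training data"
response = "Céline Dion was born in 1968 in Charlemagne, Quebec and rose to fame in the 1990s."
print(trace_spans(response, corpus))
```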

This pushes the open data movement to the next level.

👉 Blog post: https://allenai.org/blog/olmotrace
👉 Paper: https://www.datocms-assets.com/64837/1743890415-olmotrace.pdf

P.S.: A word of caution: never use a chatbot as a knowledge base. It's not Google. Better to use it with an internet connection.
fdaudens 
posted an update 6 days ago
🎨 Designers, meet OmniSVG! This new model helps you create professional vector graphics from text/images, generate editable SVGs from icons to detailed characters, convert rasters to vectors, maintain style consistency with references, and integrate into your workflow.

@OmniSVG
AdinaY 
posted an update 6 days ago
Moonshot AI 月之暗面 🌛 @Kimi_Moonshot just dropped an MoE VLM and an MoE reasoning VLM on the Hub!!

Model: https://huggingface.co/collections/moonshotai/kimi-vl-a3b-67f67b6ac91d3b03d382dd85

✨3B with MIT license
✨Long context windows up to 128K
✨Strong multimodal reasoning (36.8% on MathVision, on par with 10x larger models) and agent skills (34.5% on ScreenSpot-Pro)
AdinaY 
posted an update 7 days ago
IndexTTS 📢 a TTS built on XTTS + Tortoise, released by Bilibili, a Chinese video-sharing platform/community.
Model: IndexTeam/Index-TTS
Demo: IndexTeam/IndexTTS

✨Chinese pronunciation correction via pinyin
✨Pause control via punctuation
✨Improved speaker conditioning & audio quality (BigVGAN2)
✨Trained on 10k+ hours


AdinaY 
posted an update 7 days ago
MAYE 🎈 a from-scratch RL framework for Vision Language Models, released by GAIR, an active research group in the Chinese community.

✨Minimal & transparent pipeline with standard tools
✨Standardized eval to track training & reflection
✨Open Code & Dataset

Code:
https://github.com/GAIR-NLP/MAYE?tab=readme-ov-file
Dataset:
ManTle/MAYE
Paper:
Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme (2504.02587)
fdaudens 
posted an update 8 days ago
I read the 456-page AI Index report so you don't have to (kidding). The wild part? While AI gets ridiculously more accessible, the power gap is actually widening:

1️⃣ The democratization of AI capabilities is accelerating rapidly:
- The gap between open and closed models has basically closed: the difference on benchmarks like MMLU and HumanEval shrank to just 1.7% in 2024
- The cost to run GPT-3.5-level performance dropped 280x in 2 years
- Model size is shrinking while maintaining performance - Phi-3-mini hitting 60%+ MMLU at a fraction of the parameters of early models like PaLM

2️⃣ But we're seeing concerning divides deepening:
- Geographic: US private investment ($109B) dwarfs everyone else - 12x China's $9.3B
- Research concentration: US and China dominate highly-cited papers (50 and 34 respectively in 2023), while the next closest is only 7
- Gender: Major gaps in AI skill penetration rates - US shows 2.39 vs 1.71 male/female ratio

The tech is getting more accessible but the benefits aren't being distributed evenly. Worth thinking about as these tools become more central to the economy.

Give it a read - fascinating portrait of where AI is heading! https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf
clem 
posted an update 10 days ago
Llama 4 is in transformers!

Fun example using the instruction-tuned Maverick model responding about two images, using tensor parallel for maximum speed.

From https://huggingface.co/blog/llama4-release
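
The snippet itself lives in the blog post; a rough sketch of the pattern (the image URLs below are placeholders and the repo is gated, so check the post for the canonical, runnable version):

```python
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    tp_plan="auto",  # tensor parallel across available GPUs (launch with torchrun)
)

messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/cat.jpg"},  # placeholder URL
        {"type": "image", "url": "https://example.com/dog.jpg"},  # placeholder URL
        {"type": "text", "text": "Describe how these two images are similar, and how they differ."},
    ]},
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0])
```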
fdaudens 
posted an update 10 days ago
See that purple banner on the Llama 4 models? It's Xet storage, and this is actually huge for anyone building with AI models. Let's geek out a little bit 🤓

Current problem: AI models are massive files using Git LFS. But with models getting bigger and downloads exploding, we needed something better.
Xet lets you version large files like code, with compression and deduplication, all Git-compatible. That means less bandwidth, faster sharing, and smoother collaboration.

Real numbers: ~25% deduplication on Llama 4 models, hitting ~40% for finetunes.

Scale matters here - the Hub served 2B model downloads in 30 days, Llama models alone at 60M. The upcoming Llama 4 Behemoth has 2T parameters! Xet's chunk-based system was built exactly for this.
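
The chunk-based system works roughly like content-defined chunking: split files at boundaries derived from the bytes themselves, hash each chunk, and store each hash only once. A toy Python sketch of the idea (not Xet's actual format or parameters):

```python
import hashlib
import random

def chunks_of(data: bytes, mask: int = 0x3FFF, max_size: int = 64 * 1024) -> list[bytes]:
    """Content-defined chunking with a toy 32-bit rolling hash.
    The hash only 'remembers' the last 32 bytes, so chunk boundaries
    re-synchronize after an edit instead of shifting forever."""
    out, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) ^ byte) & 0xFFFFFFFF
        if (rolling & mask) == mask or i - start + 1 >= max_size:
            out.append(data[start : i + 1])
            start, rolling = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out

def dedup_ratio(old: bytes, new: bytes) -> float:
    """Fraction of the new file's chunks already stored for the old file."""
    stored = {hashlib.sha256(c).digest() for c in chunks_of(old)}
    new_chunks = chunks_of(new)
    return sum(hashlib.sha256(c).digest() in stored for c in new_chunks) / len(new_chunks)

# A "finetune" that inserts a few bytes mid-file still shares almost all chunks:
base = random.Random(0).randbytes(1_000_000)
finetune = base[:500_000] + b"delta" + base[500_000:]
print(f"chunks shared with base: {dedup_ratio(base, finetune):.0%}")
```

Because boundaries depend on content rather than fixed offsets, a finetune that changes only part of a checkpoint still shares most chunks with its base, which is where dedup numbers like the ~40% above come from.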

This is the kind of engineering that makes the next wave of large models actually usable. Kudos to the team! 🧨

Check out the models collection: meta-llama/llama-4-67f0c30d9fe03840bc9d0164
fdaudens 
posted an update 11 days ago
"Am I going to be replaced by AI?" - Crucial question, but maybe we're asking the wrong one.

📈 There's a statistic from my reads this week that stays with me: Tomer Cohen, LinkedIn's CPO, tells Jeremy Kahn that 70% of skills used in most jobs will change by 2030. Not jobs disappearing, but transforming. And he calls out bad leadership: "If in one year's time, you are disappointed that your workforce is not 'AI native,' it is your fault."

🔄 Apparently, the Great Recalibration has begun. We're now heading into an era where AI is fundamentally redefining the nature of work itself, by forcing a complete reassessment of human value in the workplace, according to a piece in Fast Company. But it might be driven more by "the need for humans to change the way they work" than by AI.

⚡ The Washington Post draws a crucial parallel: We're facing an "AI shock" similar to manufacturing's "China shock" - but hitting knowledge workers. Especially entry-level, white-collar work could get automated. The key difference? "Winning the AI tech competition with other countries won't be enough. It's equally vital to win the battle to re-skill workers."

Digging into these big questions in this week’s AI in the News: https://fdaudens.substack.com/publish/posts/detail/160596301

Also, I'm curious: how are you keeping up with this pace of change? What strategies are working for you?
lianghsun 
posted an update 11 days ago
With the arrival of Twinkle April — Twinkle AI’s annual open-source celebration held every April — our community is excited to unveil its very first project:

📊 Twinkle Eval (https://github.com/ai-twinkle/Eval), a next-generation evaluation tool led by our contributor @tedslin .

Unlike traditional evaluation tools like iKala's ievals (https://github.com/ikala-ai/ievals), which can only evaluate language models (LMs) one sample at a time, Twinkle Eval is designed with Large Reasoning Models (LRMs) in mind. As reasoning time increases with more complex models, traditional tools become increasingly inefficient 😲 — for example, evaluating LRMs on the ikala/tmmluplus benchmark could take more than half a day without finishing.

One question we were especially curious about:
Does shuffling multiple-choice answer order impact model accuracy? 🤔
→ See: "Changing Answer Order Can Decrease MMLU Accuracy" – arXiv:2406.19470v1

To address these challenges, Twinkle Eval brings three key innovations to the table:

1️⃣ Parallelized evaluation of samples
2️⃣ Multi-round testing for stability
3️⃣ Randomized answer order to test robustness
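
A minimal plain-Python sketch of those three ideas (ask_model is a stand-in for a real LM API call; this is not Twinkle Eval's actual code):

```python
import random
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Tiny stand-in benchmark; a real run would load something like ikala/tmmluplus.
BENCH = [
    {"q": "2 + 2 = ?", "options": ["3", "4", "5", "6"], "answer": "4"},
    {"q": "Capital of France?", "options": ["Paris", "Rome", "Oslo", "Bern"], "answer": "Paris"},
]

def ask_model(question: str, options: list[str]) -> str:
    """Stand-in for a real LM API call. This dummy is position-biased:
    it always picks the first option shown."""
    return options[0]

def eval_item(item: dict, seed: int) -> bool:
    options = item["options"][:]
    random.Random(seed).shuffle(options)              # 3) randomized answer order
    return ask_model(item["q"], options) == item["answer"]

def run_eval(rounds: int = 5, workers: int = 8) -> float:
    round_scores = []
    for r in range(rounds):                           # 2) multi-round testing
        with ThreadPoolExecutor(max_workers=workers) as pool:  # 1) parallel samples
            hits = list(pool.map(
                lambda pair: eval_item(pair[1], seed=r * 10_000 + pair[0]),
                enumerate(BENCH),
            ))
        round_scores.append(mean(hits))
    return mean(round_scores)

print(f"mean accuracy over shuffled rounds: {run_eval():.0%}")
```

Shuffling exposes the dummy's position bias immediately; at benchmark scale, the same mechanism is one plausible reason scores dip under settings 2️⃣ and 3️⃣.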

After running experiments, we observed that Twinkle Eval can speed up evaluation by up to 15× 🚀🚀. Interestingly, most models scored slightly lower under test settings 2️⃣ and 3️⃣ than their claimed performance — suggesting further benchmarking is needed.

This framework also comes with additional tunable parameters and detailed logging of LM behavior per question — perfect for those who want to dive deeper. 😆

If you find Twinkle Eval useful, please ⭐ the project and help spread the word 🤗
clem 
posted an update 12 days ago
Llama models (arguably the most successful open AI models of all time) just represented 3% of total model downloads on Hugging Face in March.

People and media like stories of winner-takes-all & one model/company to rule them all, but the reality is much more nuanced than this!

Kudos to all the small AI builders out there!
fdaudens 
posted an update 13 days ago
Did we just drop personalized AI evaluation?! This tool auto-generates custom benchmarks on your docs to test which models are the best.

Most benchmarks test general capabilities, but what matters is how models handle your data and tasks. YourBench helps answer critical questions like:
- Do you really need a hundreds-of-billions-parameter model sledgehammer to crack a nut?
- Could a smaller, fine-tuned model work better?
- How well do different models understand your domain?

Some cool features:
📚 Generates custom benchmarks from your own documents (PDFs, Word, HTML)
🎯 Tests models on real tasks, not just general capabilities
🔄 Supports multiple models for different pipeline stages
🧠 Generates both single-hop and multi-hop questions
🔍 Evaluates top models and deploys leaderboards instantly
💰 Full cost analysis to optimize for your budget
🛠️ Fully configurable via a single YAML file
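
To make the single-hop vs. multi-hop distinction concrete, here is a toy sketch (not YourBench's actual code or config schema; my_policy_doc.txt is a hypothetical input file standing in for your own documents):

```python
import textwrap

def chunk(doc: str, size: int = 400) -> list[str]:
    """Naive fixed-size chunker; the real pipeline is configurable."""
    return textwrap.wrap(doc, size)

def single_hop_prompt(passage: str) -> str:
    return f"Write one question answerable from this passage alone:\n\n{passage}"

def multi_hop_prompt(a: str, b: str) -> str:
    return ("Write one question that can only be answered by combining both passages:\n\n"
            f"Passage A:\n{a}\n\nPassage B:\n{b}")

# 'my_policy_doc.txt' is a hypothetical stand-in for your own documents.
chunks = chunk(open("my_policy_doc.txt").read())
prompts = [single_hop_prompt(chunks[0])]
if len(chunks) > 1:
    prompts.append(multi_hop_prompt(chunks[0], chunks[1]))
# Each prompt would go to a generator model; the answers seed your custom benchmark.
```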

26 SOTA models tested for question generation. Interesting finding: Qwen2.5 32B leads in question diversity, while smaller Qwen models and Gemini 2.0 Flash offer great value for cost.

You can also run it locally with any models you want.

I'm impressed. Try it out: yourbench/demo