AI & ML interests

Evaluating open LLMs

Recent Activity

open-llm-leaderboard's activity

AdinaY 
posted an update about 9 hours ago
🔥 Big day for the Chinese open source AI community: zh-ai-community

> Skywork AI :
Released 7B/32B reasoning models that excel in math & coding
Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

> Moonshot AI & Numina:
Dropped 1.5B/7B POWERFUL formal math reasoning models
AI-MO/kimina-prover-preview-67fb536b883d60e7ca25d7f9

> Zhipu AI :
Launched 9B/32B reasoning models powering their first general AI agent - AutoGLM ✨
THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e

> DeepSeek :
Announced it will open-source its internal inference engine: DeepSeek Inference Engine
https://github.com/deepseek-ai/open-infra-index/blob/main/OpenSourcing_DeepSeek_Inference_Engine/README.md

Can't wait for more exciting releases to come 🥳


thomwolf 
posted an update about 13 hours ago
If you've followed the progress of robotics in the past 18 months, you've likely noticed how robotics is increasingly becoming the next frontier that AI will unlock.

At Hugging Face—in robotics and across all AI fields—we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!

You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen community and team here on the Hub at pollen-robotics

We're so excited to build and share more open-source robots with the world in the coming months!
AdinaY 
posted an update about 17 hours ago
🔥 New reasoning models from the Chinese community, by Skywork 天工-昆仑万维

Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

✨Skywork OR1-Math-7B > Optimized for math reasoning
✨Skywork-OR1-7B-preview > Excels in math & coding
✨Skywork-OR1-32B-preview > Matches DeepSeek-R1 on math (AIME24/25) and coding (LiveCodeBench)

Released under the Apache 2.0 license 🥳
Final version coming in 2 weeks!
AdinaY 
posted an update 4 days ago
Shanghai AI Lab's OpenGVLab team just released InternVL3 🔥

OpenGVLab/internvl3-67f7f690be79c2fe9d74fe9d

✨ 1/2/8/9/14/38/78B with MIT license
✨ Stronger perception & reasoning vs InternVL 2.5
✨ Native Multimodal Pre-Training for even better language performance
AdinaY 
posted an update 5 days ago
Moonshot AI 月之暗面 🌛 @Kimi_Moonshot just dropped an MoE VLM and an MoE reasoning VLM on the hub!!

Model: https://huggingface.co/collections/moonshotai/kimi-vl-a3b-67f67b6ac91d3b03d382dd85

✨3B with MIT license
✨Long context windows up to 128K
✨Strong multimodal reasoning (36.8% on MathVision, on par with 10x larger models) and agent skills (34.5% on ScreenSpot-Pro)
AdinaY 
posted an update 7 days ago
IndexTTS 📢 a TTS model built on XTTS + Tortoise, released by Bilibili, a Chinese video-sharing platform/community.
Model: IndexTeam/Index-TTS
Demo: IndexTeam/IndexTTS

✨Chinese pronunciation correction via pinyin
✨Pause control via punctuation
✨Improved speaker conditioning & audio quality (BigVGAN2)
✨Trained on 10k+ hours


AdinaY 
posted an update 7 days ago
MAYE 🎈 a from-scratch RL framework for Vision Language Models, released by GAIR, an active research group from the Chinese community.

✨Minimal & transparent pipeline with standard tools
✨Standardized eval to track training & reflection
✨Open Code & Dataset

Code:
https://github.com/GAIR-NLP/MAYE?tab=readme-ov-file
Dataset:
ManTle/MAYE
Paper:
Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme (2504.02587)
AdinaY 
posted an update 13 days ago
MegaTTS3 📢 an open TTS model released by ByteDance

✨ 0.45B with Apache 2.0
✨ Supports English & Chinese
✨ High-quality voice cloning
✨ Accent intensity control
ByteDance/MegaTTS3
AdinaY 
posted an update 15 days ago
AutoGLM 沉思 (Rumination) 💫 a FREE AI agent released by ZhipuAI

✨ Think & Act simultaneously
✨ Based on a fully self-developed stack: GLM-4 for general, GLM-Z1 for inference, and GLM-Z1-Rumination for rumination
✨ The models will be openly shared on April 14 🤯

Preview version👉 https://autoglm-research.zhipuai.cn/?channel=autoglm_android
AdinaY 
posted an update 15 days ago
AReal-Boba 🔥 a fully open RL framework released by Ant Group, an affiliate of Alibaba.
inclusionAI/areal-boba-67e9f3fa5aeb74b76dcf5f0a
✨ 7B/32B - Apache2.0
✨ Outperforms on math reasoning
✨ Replicates QwQ-32B with only 200 data points, for under $200
✨ All-in-one: weights, datasets, code & tech report
Wauplin 
posted an update 15 days ago
‼️ huggingface_hub's v0.30.0 is out with our biggest update of the past two years!

Full release notes: https://github.com/huggingface/huggingface_hub/releases/tag/v0.30.0

🚀 Ready. Xet. Go!

Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates files, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by [xet-core](https://github.com/huggingface/xet-core), a Rust-based package that handles all the low-level details.
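To build intuition for why chunk-level deduplication matters, here is a minimal toy sketch in Python. It is purely illustrative and not Xet's actual implementation: it uses fixed-size chunks, whereas Xet derives chunk boundaries from the content itself; the names (`ChunkStore`, `split_chunks`) are invented for this example.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # toy fixed size; real Xet computes content-defined boundaries

def split_chunks(data: bytes, size: int = CHUNK_SIZE) -> list[bytes]:
    """Split a blob into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

class ChunkStore:
    """Content-addressed store: each distinct chunk is kept exactly once."""

    def __init__(self) -> None:
        self.chunks: dict[str, bytes] = {}

    def put(self, data: bytes) -> list[str]:
        """Store a blob; return the ordered chunk hashes that reconstruct it."""
        keys = []
        for c in split_chunks(data):
            key = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(key, c)  # identical chunks are stored only once
            keys.append(key)
        return keys

    def get(self, keys: list[str]) -> bytes:
        """Reassemble a blob from its ordered chunk hashes."""
        return b"".join(self.chunks[k] for k in keys)
```

With this scheme, re-uploading a large file in which only one chunk changed adds a single new chunk to the store rather than a whole new copy of the file, which is the intuition behind Xet's storage and bandwidth savings over file-level deduplication.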

You can start using Xet today by installing the optional dependency:

pip install -U "huggingface_hub[hf_xet]"


With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.

Blog post: https://huggingface.co/blog/xet-on-the-hub
Docs: https://huggingface.co/docs/hub/en/storage-backends#xet


⚡ Inference Providers

- We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.

- Novita is now our 3rd provider to support the text-to-video task, after Fal.ai and Replicate.

- Centralized billing: manage your budget and set team-wide spending limits for Inference Providers! Available to all Enterprise Hub organizations.

from huggingface_hub import InferenceClient

# bill_to routes the charge to an organization instead of your personal account
client = InferenceClient(provider="fal-ai", bill_to="my-cool-company")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")


- No more timeouts when generating videos, thanks to async calls. Available right now for Fal.ai; we expect more providers to adopt the same structure very soon!
thomwolf 
posted an update 15 days ago
The new DeepSite space is really insane for vibe-coders
enzostvs/deepsite

With the wave of vibe-coding-optimized LLMs like the latest open-source DeepSeek model (version V3-0324), you can basically prompt out of the box and create any app or game in one shot.

It feels so powerful to me, no more complex framework or under-the-hood prompt engineering to have a working text-to-app tool.

AI is eating the world and *open-source* AI is eating AI itself!

PS: and even more meta is that the DeepSite app and DeepSeek model are both fully open source => time to start recursively improving?

PPS: you still need some inference hosting unless you're running the 600B param model at home, so check the very nice list of HF Inference Providers for this model: deepseek-ai/DeepSeek-V3-0324
AdinaY 
posted an update 17 days ago
Let's check out the latest releases from the Chinese community in March!

👉 https://huggingface.co/collections/zh-ai-community/march-2025-releases-from-the-chinese-community-67c6b479ebb87abbdf8e2e76


✨MLLM
> R1 Omni by Alibaba Tongyi - 0.5B
> Qwen2.5 Omni by Alibaba Qwen - 7B with Apache 2.0

🖼️Video
> CogView-4 by ZhipuAI - Apache 2.0
> HunyuanVideo-I2V by TencentHunyuan
> Open-Sora 2.0 - 11B with Apache 2.0
> Stepvideo TI2V by StepFun AI - 30B with MIT license

🎵Audio
> DiffRhythm - Apache 2.0
> Spark TTS by SparkAudio - 0.5B

⚡️Image/3D
> Hunyuan3D 2mv/2mini (0.6B) by @TencentHunyuan
> FlexWorld by ByteDance - MIT license
> Qwen2.5-VL-32B-Instruct by Alibaba Qwen - Apache 2.0
> Tripo SG (1.5B)/SF by VastAIResearch - MIT license
> InfiniteYou by ByteDance

> LHM by Alibaba AIGC team - Apache 2.0
> Spatial LM by ManyCore

🧠Reasoning
> QwQ-32B by Alibaba Qwen - Apache 2.0
> Skywork R1V - 38B with MIT license
> RWKV G1 by RWKV AI - 0.1B pure-RNN reasoning model with Apache 2.0
> Fin R1 by SUFE AIFLM Lab - financial reasoning

🔠LLM
> DeepSeek V3-0324 by DeepSeek - MIT license
> Babel by Alibaba DAMO - 9B/83B/25 languages
AdinaY 
posted an update 18 days ago
Exciting release from the 3D-focused startup VastAIResearch
They just dropped 2 open 3D models on the hub 🚀

✨TripoSG: 1.5B MoE Transformer 3D model
Model: VAST-AI/TripoSG
Paper: TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models (2502.06608)

✨ TripoSF: 3D shape modeling with SparseFlex, enabling high-resolution reconstruction (up to 1024³)
Model: VAST-AI/TripoSF
Paper: SparseFlex: High-Resolution and Arbitrary-Topology 3D Shape Modeling (2503.21732)