aust-t
3 followers · 38 following
AI & ML interests
None yet
Recent Activity
Reacted to merterbak's post with 🔥 (14 days ago)
Qwen 3 models released 🔥 The release includes 2 MoE and 6 dense models at the following parameter sizes: 0.6B, 1.7B, 4B, 8B, 14B, 30B (MoE), 32B, and 235B (MoE).
Models: https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f
Blog: https://qwenlm.github.io/blog/qwen3/
Demo: https://huggingface.co/spaces/Qwen/Qwen3-Demo
GitHub: https://github.com/QwenLM/Qwen3
✅ Pre-trained on 119 languages and dialects (36 trillion tokens), with strong translation and instruction-following abilities. (Qwen2.5 was pre-trained on 18 trillion tokens.)
✅ Qwen3 dense models match the performance of larger Qwen2.5 models. For example, Qwen3-1.7B/4B/8B/14B/32B perform like Qwen2.5-3B/7B/14B/32B/72B.
✅ Three-stage pretraining:
• Stage 1: General language learning and knowledge building.
• Stage 2: Reasoning boost with STEM, coding, and logic skills.
• Stage 3: Long-context training.
✅ Supports MCP in the model.
✅ Strong agent skills.
✅ Supports seamless switching between thinking mode (for hard tasks like math and coding) and non-thinking mode (for fast chatting) inside the chat template (see the sketch below).
✅ Better human alignment for creative writing, roleplay, multi-turn conversations, and following detailed instructions.
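A minimal sketch of the thinking/non-thinking toggle mentioned above, assuming the Hugging Face transformers library and the `enable_thinking` chat-template flag described in the Qwen3 model cards; the model name is taken from the liked models below, and exact parameters should be verified against the official Qwen3 documentation.

```python
# Minimal sketch: toggle Qwen3 thinking mode via the chat template.
# Assumes the `enable_thinking` flag documented for Qwen3; check the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Prove that the sum of two even numbers is even."}]

# Thinking mode on: the template adds reasoning content for hard tasks
# (math, coding); set enable_thinking=False for fast, non-thinking chat.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```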
Liked a model (14 days ago)
Qwen/Qwen3-30B-A3B
Liked a model (14 days ago)
Qwen/Qwen3-235B-A22B
Organizations
None yet
Spaces (1)
🐳 deepsite-blogcms (Running)
Models (0)
None public yet
Datasets (0)
None public yet