onekq posted an update 5 days ago
I tested Qwen3 235b and 32b and they are both worse than Qwen2.5 32b.
onekq-ai/WebApp1K-models-leaderboard

I used non-thinking mode because the thinking mode is too slow 🐒🐒🐒 to be usable in any way.

Sigh ...

do you mean the Qwen2.5-Coder 32B? If so, we need to wait for Qwen3-Coder
https://x.com/huybery/status/1909669114341417344


I've been using it just fine for the last couple of days. It does take a while to think, usually up to 60 seconds per prompt, but I tend to ask deep questions so I don't really mind waiting.

How would I activate the previous version so that I can test between the two?


You mean the non-thinking mode? If so, add /no_think to your prompt.
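A minimal sketch of that tip, assuming (as the thread suggests) that the /no_think soft switch is just text appended to the user message; the exact handling depends on your serving stack and chat template, so treat this as illustrative:

```python
# Sketch: toggle Qwen3's thinking per message by appending the /no_think
# soft switch to the user prompt (assumed behavior from this thread).

def build_prompt(user_message: str, thinking: bool = True) -> str:
    """Return the prompt, appending /no_think when thinking should be skipped."""
    if thinking:
        return user_message
    return f"{user_message} /no_think"

print(build_prompt("Write a function that reverses a string.", thinking=False))
# The served model would then skip its reasoning phase for this message.
```

Some serving stacks also expose this as a template flag (e.g. an `enable_thinking`-style option) rather than inline text, so check your framework's docs before relying on the inline switch.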

It's trained to think, probably with the idea that you use /no_think selectively for messages in a conversation where you don't want it to :) (/no_think is probably more a product feature than something meant to be used as the default.)


Noted. It thinks too long, which is the problem. R1 and QwQ also took longer, but were acceptable.

When I tested Qwen3, the difference between the two modes was an hour versus a day (maybe longer).

It didn't wow me either, but... a thinking model isn't gonna be as good if you disable the thinking, lol.