宋小猫
SongXiaoMao
AI & ML interests
None yet
Recent Activity
New activity about 18 hours ago in Qwen/QwQ-32B: missing opening <think>
New activity 7 days ago in Valdemardi/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-AWQ: Thank you very much for the quantization. Could you provide a different quantized model?
New activity 9 days ago in FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview: I really like this model
Organizations
None yet
SongXiaoMao's activity
missing opening <think>
17
#4 opened 4 days ago by getfit

Thank you very much for the quantization. Could you provide a different quantized model?
#1 opened 7 days ago by SongXiaoMao

I really like this model
#9 opened 9 days ago by SongXiaoMao

This model works really well
#7 opened 9 days ago by SongXiaoMao

I tested dynamic 1.58bit and 2.22bit; all thoughts are empty?
9
#24 opened 28 days ago by SongXiaoMao

No think tokens visible
6
#15 opened about 1 month ago by sudkamath
How to Pair with Larger Models
4
#7 opened 2 months ago by windkkk
Very easy to use
#2 opened 2 months ago by SongXiaoMao

multi GPU inferencing
2
#18 opened 3 months ago by cjj2003
Starting with the sample code reports an error
1
#45 opened 3 months ago by SongXiaoMao

vllm replies are garbled
3
#29 opened 3 months ago by SongXiaoMao

vllm has problems running this model
3
#46 opened 3 months ago by SongXiaoMao

Can you officially support VLLM?
1
#48 opened 3 months ago by SongXiaoMao

Does vllm support this model yet?
#63 opened 7 months ago by SongXiaoMao

The model can be started using vllm, but no dialogue is possible.
3
#2 opened 8 months ago by SongXiaoMao

How should vllm start it?
2
#24 opened 8 months ago by SongXiaoMao

Does this 4-bit model support being launched with vllm?
#11 opened 8 months ago by SongXiaoMao

This model works normally with oobabooga
#5 opened almost 2 years ago by SongXiaoMao

Running on a 3090 graphics card can actually cause flash memory?
#2 opened almost 2 years ago by SongXiaoMao
