Rotating
1 follower · 0 following
AI & ML interests
None yet
Recent Activity
New activity 10 days ago in unsloth/Llama-4-Scout-17B-16E-Instruct: DOA
New activity 16 days ago in unsloth/DeepSeek-V3-0324-GGUF: 671B params or 685B params?
New activity 22 days ago in unsloth/DeepSeek-V3-0324-GGUF: Would There be Dynamic Quantized Versions like 2.51bit
Organizations
None yet
Rotating's activity
New activity in unsloth/Llama-4-Scout-17B-16E-Instruct (10 days ago)
DOA · 1 · 15 · #1 opened 10 days ago by MrDevolver
New activity in unsloth/DeepSeek-V3-0324-GGUF (16 days ago)
671B params or 685B params? · 6 · #8 opened 17 days ago by createthis
New activity in unsloth/DeepSeek-V3-0324-GGUF (22 days ago)
Would There be Dynamic Quantized Versions like 2.51bit · 8 · #1 opened 22 days ago by MotorBottle
New activity in bartowski/Qwen_QwQ-32B-GGUF (about 1 month ago)
Something wrong · 12 · #3 opened about 1 month ago by wcde
New activity in unsloth/DeepSeek-R1-GGUF (about 2 months ago)
Q2_K_XL model is the best? IQ2_XXS is better than Q2_K_XL in mmlu-pro benchmark · 11 · #36 opened about 2 months ago by albertchow
is it uncensored? · 5 · #33 opened about 2 months ago by Morrigan-Ship
when using with ollama, does it support kv_cache_type=q4_0 and flash_attention=1? · 3 · #28 opened 2 months ago by leonzy04
New activity in unsloth/DeepSeek-R1-GGUF (2 months ago)
Accuracy of the dynamic quants compared to usual quants? · 19 · #21 opened 2 months ago by inputout
Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp · 3 · 9 · #13 opened 3 months ago by ubergarm
New activity in bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF (3 months ago)
R1 32b is much worse than QwQ ... · 22 · #6 opened 3 months ago by mirek190
New activity in bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF (3 months ago)
Prompt format · 2 · #1 opened 3 months ago by Rotating
New activity in bartowski/gemma-2-27b-it-GGUF (9 months ago)
More gemma 2 llama.cpp merges, do they require GGUF regen again? · 4 · #6 opened 10 months ago by IHadToMakeAccount
New activity in bartowski/gemma-2-27b-it-GGUF (10 months ago)
Prompt format <bos> not needed? · 16 · #3 opened 10 months ago by eamag
'LlamaCppModel' object has no attribute 'model' · 10 · #2 opened 10 months ago by DrNicefellow
New activity in mradermacher/Qwen2-72B-GGUF (10 months ago)
Is there any information on which prompt template to use? · 2 · #1 opened 10 months ago by Debich
New activity in miqudev/miqu-1-70b (about 1 year ago)
Please upload the full model first · 5 · 88 · #1 opened about 1 year ago by ChuckMcSneed
New activity in TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF (over 1 year ago)
Even this excellent high-end model doesn't follow my instructions · 5 · #8 opened over 1 year ago by alexcardo
Liked a model over 1 year ago
TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF · Updated Dec 14, 2023 · 28.4k · 615