momonga (mmnga) — PRO

AI & ML interests: None yet
Recent Activity
- updated a model 6 days ago: mmnga/Llama-4-Scout-17B-16E-Instruct-gguf
- published a model 6 days ago: mmnga/AXCXEPT-QwQ-32B-Distill-Qwen-1.5B-Alpha-gguf
- updated a model 6 days ago: mmnga/AXCXEPT-QwQ-32B-Distill-Qwen-1.5B-Alpha-gguf
mmnga's activity
- It seems completely broken (1) · #1 opened about 2 months ago by aguspiza
- Can we hire you for quantization and fine tuning? (1) · #1 opened 4 months ago by rafa9
- Fix (1) · #1 opened 3 months ago by STATIKwitak
- Would it be possible to have an 8bit gguf? (1 · 2) · #1 opened 9 months ago by PurityWolf
- Please use split ggufs instead of splitting files manually (1 · 1) · #1 opened 9 months ago by lmg-anon
- Usage in the model card seems to be ChatML format. (1 · 1) · #1 opened 9 months ago by yamikumods
- Error in LM Studio (3) · #1 opened 11 months ago by alfredplpl
- An idea (1 · 1) · #1 opened 12 months ago by Cran-May
- Please tell me how you converted this FAST model into a gguf file. (7) · #1 opened about 1 year ago by wattai
- Differences in output from the original model (2) · #1 opened over 1 year ago by nitky
- Librarian Bot: Add moe tag to model · #3 opened over 1 year ago by librarian-bot
- Librarian Bot: Add moe tag to model · #1 opened over 1 year ago by librarian-bot
- Librarian Bot: Add moe tag to model · #1 opened over 1 year ago by librarian-bot
- Maybe a slerp or some other merge method will preserve the component experts better? (1 · 3) · #2 opened over 1 year ago by BlueNipples
- Responses somewhat related to the prompt but still gibberish (2) · #1 opened over 1 year ago by JeroenAdam
- Migrating to Colab A100 because Triton support was dropped (2) · #2 opened over 1 year ago by alfredplpl
- Quantization with float16 instead of bfloat16 (2) · #1 opened over 1 year ago by alfredplpl
- Missing tokenizer.model (4) · #1 opened over 1 year ago by mmnga
- Is this related to GPT-Neo-2.7B-AID? (1) · #1 opened over 1 year ago by adriey