Raushan Turganbay (RaushanTurganbay)
AI & ML interests: Generation and Multimodality
Recent Activity
- updated a model 4 days ago: hf-internal-testing/tiny-random-Gemma3ForCausalLM
- updated a model 7 days ago: hf-internal-testing/Mistral-Small-3.1-24B-Instruct-2503-only-processor
- new activity 7 days ago in mistral-community/pixtral-12b: "How do I load the model quantized?"
RaushanTurganbay's activity
- How do I load the model quantized? · 4 · #33 opened 8 days ago by treehugg3
- Convert to HF format · 9 · #55 opened 24 days ago by cyrilvallez
- HF format chat template · 3 · #56 opened 23 days ago by RaushanTurganbay
- Cannot apply chat template from tokenizer · 1 · #31 opened about 1 month ago by DarkLight1337
- prefill assistant responses · 1 · #16 opened about 1 month ago by alexsafayan
- Can't reproduce given example (no meaningful output) · 1 · #8 opened about 1 month ago by pzarzycki
- Change chat template into the one for mistralai/Mistral-7B-Instruct-v0.2 · 1 · #3 opened about 2 months ago by ruiqiRichard
- Is this a working model? · 4 · #2 opened about 2 months ago by ruiqiRichard
- Support of flash attention 2? · 2 · #29 opened 2 months ago by LuciusLan
- Fix typo in chat template (assistant EOS token) · #43 opened 2 months ago by fabric
- <\s> token in the chat template instead of the </s> EOS token · 4 · #41 opened 4 months ago by fabric
- Incorrect Weights in Model Repositories · 1 · 3 · #2 opened 2 months ago by francescortu
- Getting shape mismatch while loading saved Pixtral model · 4 · #24 opened 2 months ago by ss007
- Update README with new chat template example · 1 · #18 opened 3 months ago by RaushanTurganbay
- Is the chat template correct? (issue for vLLM) · 7 · #22 opened 3 months ago by MichaelAI23
- TypeError: LlavaNextProcessor.__init__() got an unexpected keyword argument 'image_token' · 1 · #10 opened 3 months ago by JBod
- Store chat template in its own file · 1 · #13 opened 3 months ago by RaushanTurganbay