Uthman Bilal
Winnougan
AI & ML interests
None yet
Recent Activity
new activity about 23 hours ago
silveroxides/LTX-2.3-Quants: "Dev INT8 version please?"
new activity about 23 hours ago
silveroxides/LTX-2.3-Quants: "Does this work with your Comfyui nodes?"
liked a model 1 day ago
Bedovyy/Anima-INT8
Organizations
None yet
Dev INT8 version please?
🤗 1
#2 opened about 23 hours ago by Winnougan
Does this work with your Comfyui nodes?
👍 1
4
#1 opened 5 days ago by Winnougan
Have you tested this in Comfyui?
👍 1
2
#1 opened 6 days ago by Winnougan
What's the best way to run AWQ models?
1
#1 opened 5 days ago by Winnougan
Regarding the loading of the model
4
#2 opened 18 days ago by someshijun
Output is a garbled mess
🤯 1
9
#2 opened 17 days ago by Winnougan
Can we get the INT8 treatment for all the Wan2.2 models?
#5 opened 15 days ago by Winnougan
Can we get LTX-2 INT8?
#6 opened 16 days ago by Winnougan
Thanks for all the models. How to load the Small Mistral INT8 text encoder?
🔥 1
1
#10 opened about 1 month ago by Winnougan
Could you provide the UMT5-XXL text encoder in INT8?
#1 opened about 1 month ago by Winnougan
How to use the int8 model
1
#7 opened 2 months ago by xiilei99
When converting fp16/bf16 diffuser models, do you prefer int8 block-wise or tensor-wise?
🔥 👍 1
1
#1 opened about 1 month ago by Winnougan
Thanks man it works awesome
6
#4 opened about 1 month ago by Winnougan
Please GGUF Quantize
🔥 3
3
#30 opened about 1 month ago by MarioKartMTA
Can you do Wan2.2 or LTX-2?
3
#3 opened about 2 months ago by Winnougan
Any new updates for 2026? Your models rock
1
#2 opened about 2 months ago by Winnougan
nunchaku-1.1.0+torch2.8-cp312-cp312-win_amd64.whl
👍 2
2
#5 opened 2 months ago by makisekurisu-jp
Does this work in Comfyui?
👍 1
3
#1 opened 3 months ago by Winnougan
Thanks bro!
🔥 4
9
#2 opened 3 months ago by Winnougan
How did you manage to make quants of these transformers?
🤗 1
1
#1 opened 4 months ago by Winnougan