Please check whether the model file is damaged; it cannot chat normally.
Hi, we're going to reupload it just to make sure everything is in place.
Will update you when we reupload.
Please check unsloth/Qwen3-14B as well; the output is not normal after online 4-bit quantization. Please also update the ModelScope upload.
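For reference, "online 4-bit quantization" here presumably means loading the full-precision checkpoint with load_in_4bit=True so it is quantized at load time. A minimal repro sketch of that load path, assuming Unsloth's FastLanguageModel API; the prompt, max_seq_length, and generation settings are placeholders, not taken from the report:

```python
# Sketch: load the 16-bit checkpoint and quantize to 4-bit at load time ("online" 4-bit).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B",  # full-precision repo, quantized on load
    max_seq_length=2048,             # assumed value for the repro
    load_in_4bit=True,               # on-the-fly bitsandbytes 4-bit quantization
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's inference mode

messages = [{"role": "user", "content": "Hello, who are you?"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```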
It is possible that the Qwen3-14B quantization collapses (the quantized model's output degenerates).
So you're saying even the full 16-bit model has issues?
With FastLanguageModel, even a 16-bit load cannot produce normal output; with native transformers everything works fine, for both online 4-bit and 16-bit.
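For contrast, a minimal sketch of the plain-transformers path described as working, assuming bitsandbytes for the online 4-bit case; the model id comes from the report, while the quantization settings, prompt, and generation length are assumptions:

```python
# Sketch: plain transformers load with on-the-fly 4-bit quantization via bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "unsloth/Qwen3-14B"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                   # same online 4-bit quantization
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,      # drop this line for a plain 16-bit load
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello, who are you?"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```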
The same is true for the 8B model.