cs2764/DeepSeek-V3-0324-BF16-mlx-4Bit-gs32
Tags: Text Generation · Transformers · Safetensors · MLX · English · deepseek_v3 · deepseek · unsloth · mlx-my-repo · conversational · custom_code · text-generation-inference · 4-bit precision
License: MIT
main / DeepSeek-V3-0324-BF16-mlx-4Bit-gs32 / configuration_deepseek.py
Commit History
Upload MLX converted model with quantization settings · de0d100 (verified) · cs2764 committed on Aug 6