This model is intended for debugging. It was converted with llama.cpp b4453 (Windows CUDA 12 binary) and the convert_hf_to_gguf.py script released alongside it.
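
For reference, here is a minimal sketch of how a conversion and quantization of this kind can be run with the llama.cpp tooling. The model directory, output file names, and the Q4_K_M quantization type below are illustrative assumptions, not the exact commands used to produce this model.

```python
# Hypothetical sketch: convert a Hugging Face checkpoint to GGUF and quantize it
# with llama.cpp tools. Paths, file names, and the Q4_K_M type are assumptions.
import subprocess

HF_MODEL_DIR = "Phi-3.5-MoE-instruct"   # local HF checkpoint directory (assumed)
F16_GGUF = "phimoe-f16.gguf"            # 16-bit intermediate output (assumed name)
Q4_GGUF = "phimoe-q4_k_m.gguf"          # 4-bit quantized output (assumed name)

# Step 1: convert the HF model to a 16-bit GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# Step 2: quantize the 16-bit GGUF to 4-bit with the llama-quantize binary
# shipped in the llama.cpp release (llama-quantize.exe on Windows).
subprocess.run(
    ["llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```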

MIT License

GGUF
Model size: 41.9B params
Architecture: phimoe
Quantizations: 4-bit, 16-bit