Custom GGUF quants of fine-tunes of Meta’s Llama-3.2-Instruct, where the Output Tensors are quantized to Q8_0 or F32 and the Embeddings are kept at F32.
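For readers who want to reproduce this kind of quant, here is a minimal sketch, assuming an F32 or BF16 GGUF of the fine-tune has already been exported with llama.cpp's convert_hf_to_gguf.py and that the llama-quantize binary is on PATH. The input, output, and imatrix file names are placeholders, not files from these repos, and mainline llama.cpp's IQ4_XS is used as a stand-in base type, since the IQ4_K type seen in the repo names comes from the ik_llama.cpp fork.

```python
import subprocess
from pathlib import Path

# Placeholder file names (hypothetical): substitute your own GGUF export,
# quantized output path, and importance-matrix file.
src_gguf = Path("Llama-3.2-3B-Instruct-finetune-F32.gguf")
dst_gguf = Path("Llama-3.2-3B-Instruct-finetune-OQ8_0-EF32-IQ4_XS.gguf")
imatrix = Path("imatrix.dat")

# llama.cpp's llama-quantize accepts per-tensor overrides in addition to the
# base quantization type: --output-tensor-type controls the output tensor and
# --token-embedding-type controls the token embeddings.
cmd = [
    "llama-quantize",
    "--imatrix", str(imatrix),           # optional importance matrix
    "--output-tensor-type", "q8_0",      # output tensor kept at Q8_0
    "--token-embedding-type", "f32",     # embeddings kept at F32
    str(src_gguf),
    str(dst_gguf),
    "iq4_xs",                            # base type for the remaining tensors
]
subprocess.run(cmd, check=True)
```

The OQ8_0 and EF32 markers in the repo names appear to encode exactly these two overrides (output tensor type and embedding type), with the trailing type, e.g. IQ4_K or Q8_0, being the base quant applied to the remaining tensors.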
Joseph (Joseph717171)
AI & ML interests: none yet

Recent Activity
- New activity (1 day ago) on arcee-ai/Homunculus: "how did you do it?"
- New activity (3 days ago) on nvidia/Nemotron-Research-Reasoning-Qwen-1.5B: "Do this to arcee-ai/homunculus"
- Liked a model (3 days ago): nvidia/Nemotron-Research-Reasoning-Qwen-1.5B
Collections (3)

Custom GGUF quants of Llama-3.1-8B-Instruct fine-tunes, where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32. 🧠🔥🚀 (A sketch of fetching one of these files follows this list.)
- Joseph717171/DeepHermes-3-Llama-3.1-8B-Preview-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 296 • 1)
- Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 87 • 2)
- Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 232 • 2)
- Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_base_Embeds_Initialized_dtypeF32-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 57 • 1)
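As a usage sketch, the files in one of these repos can be listed and downloaded with the huggingface_hub client. The repo id below is taken from the collection above, but the individual GGUF file names are not shown on this page, so the script discovers them instead of hard-coding one.

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF"

# Discover the GGUF files in the repo rather than assuming a file name.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
print("Available GGUF files:", gguf_files)

# Download the first one into the local Hugging Face cache.
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Downloaded to:", local_path)
```

Note that quants built with the IQ4_K type may require the ik_llama.cpp fork to load; mainline llama.cpp ships IQ4_XS and IQ4_NL instead.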
Models (36)

- Joseph717171/Models (Updated • 151 • 4)
- Joseph717171/Imatrices (Updated • 5)
- Joseph717171/Llama-3.1-8B-Instruct-UD-OQ8_0-F32.EQ8_0-F32.IQ4_K-Q8_0-GGUF (Updated • 249 • 2)
- Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 232 • 2)
- Joseph717171/DeepHermes-3-Llama-3.1-8B-Preview-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 296 • 1)
- Joseph717171/DeepHermes-3-Mistral-24B-Preview-OQ8_0-F32.EQ8_0-F32.IQ4_K-Q8_0-GGUF (Updated • 97 • 1)
- Joseph717171/DeepHermes-3-Llama-3.2-3B-Preview-OQ8_0-F32.EQ8_0-F32.IQ4_K-Q8_0-GGUF (Updated • 6 • 1)
- Joseph717171/Llama-3.2-1B-Instruct-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 38 • 1)
- Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 87 • 2)
- Joseph717171/DeepSeek-R1-Distill-Llama-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 56 • 2)
Datasets (0)
None public yet.