Custom GGUF quants of finetunes of Meta's Llama-3.2-Instruct, where the Output Tensors are quantized to Q8_0 or F32 and the Embeddings are kept at F32.
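As a rough illustration of how such a quant can be produced, here is a minimal sketch that drives llama.cpp's llama-quantize tool from Python. It assumes a llama.cpp build whose llama-quantize exposes the --output-tensor-type and --token-embedding-type options; the binary path, file names, and the Q4_K_M base type are placeholders, not the author's exact recipe.

```python
import subprocess

# Placeholder paths; point these at your local llama.cpp build and GGUF files.
LLAMA_QUANTIZE = "./llama.cpp/llama-quantize"
INPUT_GGUF = "Llama-3.2-3B-Instruct-F32.gguf"
OUTPUT_GGUF = "Llama-3.2-3B-Instruct-Q4_K_M-Q8_0-output-F32-embed.gguf"

# Quantize the bulk of the weights (here to Q4_K_M as an example) while
# pinning the output tensor to Q8_0 and keeping the token embeddings at F32,
# assuming llama-quantize supports the two override flags used below.
subprocess.run(
    [
        LLAMA_QUANTIZE,
        "--output-tensor-type", "q8_0",
        "--token-embedding-type", "f32",
        INPUT_GGUF,
        OUTPUT_GGUF,
        "Q4_K_M",
    ],
    check=True,
)
```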
Joseph
Joseph717171
AI & ML interests
None yet
Recent Activity
liked a model (about 8 hours ago): LatitudeGames/Wayfarer-2-12B-GGUF
liked a model (about 18 hours ago): LatitudeGames/Wayfarer-2-12B
reacted to sergiopaniego's post with 🔥 (3 days ago):
You can now supercharge your TRL training pipelines with kernels.
kernels is a new library to load optimized compute kernels directly from the Hub.
Combined with TRL, it makes your developer experience smoother & faster.
Check out the new guide to learn more!
Learn ➡️ https://huggingface.co/docs/trl/main/en/kernels_hub
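For context, a minimal sketch of what loading a Hub-hosted kernel looks like with the kernels package. The kernels-community/activation repository and its gelu_fast entry point are taken from the library's published example rather than from this post, so treat them as assumptions and substitute your own kernel repo.

```python
import torch
from kernels import get_kernel  # pip install kernels

# Fetch a pre-built compute kernel directly from the Hugging Face Hub.
# "kernels-community/activation" and gelu_fast follow the library's example.
activation = get_kernel("kernels-community/activation")

x = torch.randn((10, 10), dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)  # writes the fast-GELU output into y
```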