A GGUF LoRA adapter for Llama 3.2 3B that equips it with ASCII cat generation capabilities.

Examples of ASCII cats generated by this adapter:

  ^—-^    
(_='.')  
//
||              |\_/|
 \\  .-"""--._,' e b 
  \\/         \   =A/
   \    \       /'
    \| _|___/\ |
     '-'-------'-
.       .         
\-"'"-'/
 } ^^ {     
=.  -  ,=   
  /^^^\  .
 /     \  )           
(   Y   ) |
=""'""...'Y 

For more, see the generation examples.

For inference you need to locally clone both this adapter and the Llama 3.2 3B base model (https://huggingface.co/pookie3000/Llama-3.2-3B-GGUF). Then invoke the combined model with an empty prompt. You can get a variety of cats by playing with temperature, top-p, and other sampling parameters.
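As a sketch, inference through the llama-cpp-python bindings might look like this; the file names below are assumptions, so point them at wherever you cloned the base model and adapter:

```python
from pathlib import Path

# Assumed local paths -- adjust to your clones of the base model and adapter.
BASE = Path("Llama-3.2-3B.F16.gguf")
LORA = Path("Llama-3.2-3B-ascii-cats-lora.gguf")

if BASE.exists() and LORA.exists():
    from llama_cpp import Llama

    # Load the base model with the LoRA adapter applied.
    llm = Llama(model_path=str(BASE), lora_path=str(LORA))

    # Empty prompt; vary temperature / top_p to get different cats.
    out = llm("", max_tokens=256, temperature=0.9, top_p=0.95)
    print(out["choices"][0]["text"])
else:
    print("Clone the base model and adapter locally first.")
```

Raising the temperature tends to produce more unusual cats, while lowering it makes generations more repeatable.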

More info can be found at https://github.com/vossenwout/ascii-cat-llm-finetuning

You can also check out my Python inference notebook at https://github.com/vossenwout/ascii-cat-llm-finetuning/blob/main/src/inference/notebooks/llama_cpp_inference.ipynb

Local inference example with llama.cpp:

./llama-cli -m Llama-3.2-3B.F16.gguf --lora Llama-3.2-3B-ascii-cats-lora.gguf --prompt ""
Format: GGUF. Model size: 24.3M params. Architecture: llama.
