---
base_model: unsloth/meta-llama-3.1-8b
language:
- en
license: llama3.1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- not-for-all-audiences
datasets:
- mpasila/Literotica-stories-short
library_name: peft
---
Dataset used: [mpasila/Literotica-stories-short](https://huggingface.co/datasets/mpasila/Literotica-stories-short), which contains only a subset of the stories from the full Literotica dataset, chunked to fit within 8192 tokens.

Prompt format: none (plain text, no template).

Merged model: [mpasila/Llama-3.1-Literotica-8B](https://huggingface.co/mpasila/Llama-3.1-Literotica-8B)

Trained with regular LoRA (not quantized/QLoRA), with LoRA rank 128 and alpha 32, for 1 epoch on an A40 for about 13 hours.

# Uploaded model

- **Developed by:** mpasila
- **License:** Llama 3.1 Community License Agreement
- **Finetuned from model:** unsloth/meta-llama-3.1-8b

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
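Since there is no prompt template, the merged model can be used for plain-text continuation with a standard `transformers` generation call. A minimal sketch, assuming the merged repo linked above; the prompt and sampling parameters are illustrative and not taken from the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Llama-3.1-Literotica-8B"  # merged model (the PEFT adapter repo could be loaded instead)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# No formatting/template: the model simply continues raw story text.
prompt = "The rain had finally stopped when"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```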