---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft
license: apache-2.0
language:
  - en
datasets:
  - Neetree/raw_enko_opus_CCM
---

# KoLama: Fine-Tuned Llama3.1-8B Model

## Overview

KoLama is a fine-tuned version of the unsloth/Meta-Llama-3.1-8B-bnb-4bit model, developed by Neetree. It was trained with the Unsloth library, which roughly doubles training speed, together with the supervised fine-tuning (SFT) trainer from Hugging Face's TRL (Transformer Reinforcement Learning) library. The model is optimized for text generation and is released under the Apache-2.0 license.

## Model Details

### Key Features

- **Efficient Training**: Fine-tuned roughly 2x faster with Unsloth's optimized training path.
- **Text Generation**: Optimized for text generation tasks, building on the Llama 3.1 architecture.
- **TRL Fine-Tuning**: Trained with Hugging Face's TRL library using its SFT trainer (TRL also offers reinforcement learning trainers, but this model's tags indicate supervised fine-tuning); a sketch of the likely setup follows this list.
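
The exact training script is not included in this repository, but a typical Unsloth + TRL SFT recipe looks like the sketch below. The LoRA configuration, hyperparameters, and the `text` dataset field are illustrative assumptions, not the values actually used for KoLama.

```python
# Hypothetical sketch of an Unsloth + TRL SFT run; hyperparameters,
# the LoRA setup, and the dataset text field are assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model that KoLama starts from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Neetree/raw_enko_opus_CCM", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=60,
        output_dir="outputs",
    ),
)
trainer.train()
```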

## Usage

To use KoLama for text generation, load the model with the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Neetree/KoLama"

# The base model is a bitsandbytes 4-bit checkpoint, so loading may
# require the bitsandbytes and accelerate packages and a CUDA device.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# Generate up to 50 new tokens beyond the prompt.
outputs = model.generate(**inputs, max_new_tokens=50)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_text)
```
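
Because the fine-tuning data is English-Korean parallel text, a translation-style prompt is a natural use case. The prompt format below is an assumption; the model card does not specify the template used during training. This snippet reuses `model` and `tokenizer` from the example above.

```python
# Hypothetical translation-style prompt; the actual prompt template
# used during fine-tuning is not documented, so adjust as needed.
prompt = "Translate the following English sentence into Korean:\nThe weather is nice today.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```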

## Training Details

- **Training Speed**: Roughly 2x faster training using Unsloth.
- **Fine-Tuning Method**: Supervised Fine-Tuning (SFT) with Hugging Face's TRL library.
- **Dataset**: Fine-tuned on the Neetree/raw_enko_opus_CCM dataset, which contains English-Korean parallel text; a quick way to inspect it is shown below.
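
To inspect the training data, the dataset can be loaded from the Hub with the `datasets` library. The split name and the column layout printed here depend on how the dataset is structured, which the card does not describe.

```python
from datasets import load_dataset

# Load the English-Korean parallel corpus used for fine-tuning.
# The split name "train" is an assumption.
dataset = load_dataset("Neetree/raw_enko_opus_CCM", split="train")

print(dataset)     # number of rows and column names
print(dataset[0])  # first example pair
```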

## License

This model is licensed under the Apache-2.0 license. For more details, please refer to the LICENSE file.

## Acknowledgments

- **Unsloth**: For the tools that accelerated the training process.
- **Hugging Face**: For the TRL library and the `transformers` framework.
- **Meta**: For the original Llama3.1-8B model.