# Introduction of the fine-tuned model

This repository hosts a custom Llama 3 8B model fine-tuned with LoRA on a dataset of over 7,000 original passages from Guy de Maupassant's works. Fine-tuning gives the model a distinctive capability: generating text continuations that mirror the narrative style and literary nuances of Maupassant's novels.
## Usage

### Installation

To get started, install the necessary libraries, then load the model with Hugging Face's Transformers:

```shell
pip install transformers torch
```
### Loading the Model

Below is an example of how to load and use the model in Python:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Lineimos/French-lora-finetuned-Llama3-8B-novel-maupassant"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Voici une partie d'un roman. Veuillez vous en inspirer pour rédiger la suite:\n"
    "Il parlait en gaillard tranquille qui connaît la vie, et il souriait en "
    "regardant passer la foule.\n"
    "Mais tout d’un coup il se mit à tousser, et s’arrêta pour laisser finir la "
    "quinte, puis, d’un ton découragé :\n"
    "– Est-ce pas assommant de ne pouvoir se débarrasser de cette bronchite ? "
    "Et nous sommes en plein été. Oh ! cet hiver, j’irai me guérir à Menton. "
    "Tant pis, ma foi, la santé avant tout."
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
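Note that `generate` returns the prompt tokens followed by the newly generated tokens, so decoding `outputs[0]` echoes the prompt before the continuation. A minimal sketch of isolating just the continuation (the helper name is illustrative; plain lists stand in for the tensors the tokenizer and `generate` return):

```python
# generate() output = prompt tokens + new tokens, so slicing at the prompt
# length keeps only the continuation.

def continuation_ids(output_ids, prompt_len):
    """Return only the tokens generated after the first `prompt_len` tokens."""
    return output_ids[prompt_len:]

# With the real model, the equivalent slice is:
# prompt_len = inputs["input_ids"].shape[1]
# print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))
print(continuation_ids([101, 7, 8, 9, 102], 2))  # → [8, 9, 102]
```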
## Loss evolution
## Some examples
## Dataset source

The novel texts come from the public files of "La Bibliothèque électronique du Québec". The instruction fine-tuning dataset was constructed with personal scripts.
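Those scripts are not included here; below is a minimal sketch of how such prompt/continuation pairs might be built from a novel's text. The chunk size, record fields, and `build_pairs` helper are illustrative assumptions, not the actual pipeline; only the instruction string mirrors the prompt shown above.

```python
# Sketch (assumption): cut a novel into consecutive passages and pair each
# passage with the one that follows it, yielding instruction-tuning records.

INSTRUCTION = ("Voici une partie d'un roman. "
               "Veuillez vous en inspirer pour rédiger la suite:")

def build_pairs(text, chunk_chars=600):
    """Pair each fixed-size chunk of `text` with its continuation chunk."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    return [
        {"instruction": INSTRUCTION, "input": a, "output": b}
        for a, b in zip(chunks, chunks[1:])
    ]

pairs = build_pairs("Il parlait en gaillard tranquille qui connaît la vie. " * 40)
print(len(pairs))
```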