# Fine-Tuned LoRA Adapters for LLaMA-3-8B CEFR Model

This repository contains the LoRA adapters for a fine-tuned version of unsloth/llama-3-8b-instruct-bnb-4bit for CEFR-level sentence generation. The base model is available at Mr-FineTuner/llama-3-8b-instruct-base.
- Base Model: Mr-FineTuner/llama-3-8b-instruct-base
- Fine-Tuning: LoRA on a SMOTE-balanced dataset
## Training Details

- Dataset: CEFR-level sentences, balanced with SMOTE oversampling and undersampling
- LoRA parameters: r=32, lora_alpha=32, lora_dropout=0.2
- Training arguments: learning_rate=1e-5, batch_size=8, epochs=3, cosine learning-rate scheduler
- Optimizer: adamw_8bit
- Early stopping: patience=2, threshold=0.01
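For reference, these settings map onto the peft/transformers APIs roughly as follows. This is a minimal sketch, not the original training script: `target_modules`, `output_dir`, and the evaluation/save strategies are illustrative assumptions the card does not specify.

```python
# Sketch of the configuration described above (assumed, not the original script).
from peft import LoraConfig
from transformers import TrainingArguments, EarlyStoppingCallback

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.2,
    task_type="CAUSAL_LM",
    # target_modules is an assumption; the card does not list them.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

training_args = TrainingArguments(
    output_dir="outputs",            # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",              # bitsandbytes 8-bit AdamW, as on the card
    eval_strategy="epoch",           # `evaluation_strategy` in older transformers
    save_strategy="epoch",
    load_best_model_at_end=True,     # required by EarlyStoppingCallback
)

# Early stopping with the patience/threshold given above.
early_stopping = EarlyStoppingCallback(
    early_stopping_patience=2,
    early_stopping_threshold=0.01,
)
```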
## Evaluation Metrics (Exact Match)

- CEFR classifier accuracy: 0.283
- Precision (macro): 0.440
- Recall (macro): 0.283
- F1-score (macro): 0.266

## Evaluation Metrics (Within ±1 Level)

- CEFR classifier accuracy: 0.617
- Precision (macro): 0.747
- Recall (macro): 0.617
- F1-score (macro): 0.593
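The "within ±1 level" figures relax the match criterion so that a prediction one CEFR level away from the gold label counts as correct. The sketch below shows one plausible way to score this with scikit-learn; the repository's evaluation script is not shown, so the convention of collapsing an off-by-one prediction onto the gold label is an assumption.

```python
# One way to score the relaxed "within ±1 level" criterion (assumed convention).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
IDX = {lvl: i for i, lvl in enumerate(LEVELS)}

def relax_within_one(y_true, y_pred):
    # Replace a prediction with the gold label when it is at most one level off.
    return [t if abs(IDX[p] - IDX[t]) <= 1 else p for t, p in zip(y_true, y_pred)]

y_true = ["A1", "B2", "C1", "A2"]   # illustrative gold labels
y_pred = ["A2", "B2", "B1", "C1"]   # illustrative classifier outputs

relaxed = relax_within_one(y_true, y_pred)
accuracy = accuracy_score(y_true, relaxed)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, relaxed, average="macro", zero_division=0
)
print(accuracy, precision, recall, f1)
```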
## Other Metrics

- Perplexity: 3.088
- Diversity (unique sentences): 1.000
- Inference time: 7171.863 ms
- Model size: 4.8 GB (base model + LoRA adapters)
- Robustness (F1): 0.252
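Perplexity and diversity follow their usual definitions: the exponential of the mean token-level cross-entropy, and the fraction of unique generations. A minimal sketch (helper names are illustrative; the card does not specify the evaluation corpus):

```python
import torch

@torch.no_grad()
def perplexity(model, tokenizer, text):
    # Perplexity = exp(mean token-level cross-entropy of the model on `text`).
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def diversity(sentences):
    # Fraction of unique sentences among the generations (1.0 = no duplicates).
    return len(set(sentences)) / len(sentences)
```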
## Confusion Matrices

- Exact match: confusion_matrix_exact.csv (CSV), confusion_matrix_exact.png (image)
- Within ±1 level: per-class counts below
### Per-Class Confusion Metrics (Exact Match)

| Class | TP | FP | FN | TN |
|-------|----|----|----|----|
| A1    |  3 |  2 |  7 | 48 |
| A2    |  7 | 24 |  3 | 26 |
| B1    |  3 |  8 |  7 | 42 |
| B2    |  1 |  6 |  9 | 44 |
| C1    |  2 |  3 |  8 | 47 |
| C2    |  1 |  0 |  9 | 50 |

### Per-Class Confusion Metrics (Within ±1 Level)

| Class | TP | FP | FN | TN |
|-------|----|----|----|----|
| A1    |  8 |  0 |  2 | 50 |
| A2    |  9 | 12 |  1 | 38 |
| B1    | 10 |  8 |  0 | 42 |
| B2    |  3 |  3 |  7 | 47 |
| C1    |  5 |  0 |  5 | 50 |
| C2    |  2 |  0 |  8 | 50 |
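These per-class TP/FP/FN/TN counts are the standard one-vs-rest decomposition of a multiclass confusion matrix. A minimal sketch with scikit-learn, assuming label lists `y_true`/`y_pred` as in the earlier snippet:

```python
from sklearn.metrics import multilabel_confusion_matrix

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

# One 2x2 matrix per class, laid out as [[TN, FP], [FN, TP]].
mcm = multilabel_confusion_matrix(y_true, y_pred, labels=LEVELS)
for level, ((tn, fp), (fn, tp)) in zip(LEVELS, mcm):
    print(f"{level}: TP={tp}, FP={fp}, FN={fn}, TN={tn}")
```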
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the 4-bit quantized base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Mr-FineTuner/llama-3-8b-instruct-base",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/llama-3-8b-instruct-base")

# Attach the fine-tuned LoRA adapters
model = PeftModel.from_pretrained(
    base_model, "Mr-FineTuner/noSynthetic-llama_3epoch_02dropout_lora"
)

# Example inference
prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Uploaded with huggingface_hub and saved in safetensors format for efficiency.