# Abdelatif-Abualya Fine-Tuned Model

This model is a LoRA fine-tuned adapter for Qwen2.5-Coder-14B-Instruct, trained on additional data to improve performance on specific coding-related tasks.
## Acknowledgements
This model was fine-tuned using multiple datasets and resources, including:
- AWS SageMaker Example Guides (Licensed under Apache 2.0)
- DeepSeek-SM-Deploy-Options Notebook from the "host-deepseek-distilled-models-on-aws" repository (Licensed under MIT No Attribution)
## ⚠️ Disclaimer

This model is provided "as is" without warranty of any kind. While efforts have been made to fine-tune and optimize it for coding-related tasks, results may vary depending on input prompts.
## 🛠️ How to Use
To use this fine-tuned LoRA adapter, first load the base model and then apply the adapter:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "Qwen/Qwen2.5-Coder-14B-Instruct"
adapter_model = "Abdelatif94/abdelatif-abualya-c7b863-lora"

# Load the base model (device_map="auto" requires the accelerate package)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype="auto", device_map="auto"
)
# Apply the LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(model, adapter_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Test inference
input_text = "Write a Python function to reverse a string."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
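Since the base model is instruction-tuned, you will generally get better results by wrapping prompts in the tokenizer's chat template rather than passing raw text. A minimal sketch, using the same example prompt as above:

```python
# Format the prompt with the model's chat template (recommended for instruct models)
messages = [
    {"role": "user", "content": "Write a Python function to reverse a string."},
]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts replying
    return_tensors="pt",
).to(model.device)

outputs = model.generate(chat_inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))
```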
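If you need a standalone checkpoint that can be served without `peft`, the adapter weights can be merged into the base model. A minimal sketch; the output directory name is just an example:

```python
# Merge the LoRA weights into the base model and drop the adapter wrapper
merged = model.merge_and_unload()

# Save a standalone model that no longer needs peft at load time
merged.save_pretrained("abdelatif-abualya-merged")  # example path
tokenizer.save_pretrained("abdelatif-abualya-merged")
```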
## Model Tree

- Qwen/Qwen2.5-14B (base model)
  - Qwen/Qwen2.5-Coder-14B (fine-tuned)
    - Qwen/Qwen2.5-Coder-14B-Instruct (fine-tuned)
      - Abdelatif94/abdelatif-abualya-c7b863-lora (this adapter)