---
language:
  - en
license: apache-2.0
base_model: Qwen/Qwen3-8B
library_name: transformers
pipeline_tag: text-generation
tags:
  - malayalam
  - text-generation
  - lora
  - merged
datasets:
  - NousResearch/Hermes-3-Dataset
  - QuixiAI/dolphin
---
# Delphermes-8B-cpt-epoch2

This is a merged LoRA model based on Qwen/Qwen3-8B, fine-tuned on synthetic data from the Hermes-3 and Dolphin datasets.
## Model Details
- Base Model: Qwen/Qwen3-8B
- Language: English (en)
- Type: Merged LoRA model
- Library: transformers, axolotl
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "justinj92/Delphermes-8B-cpt-epoch2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example usage: move inputs to the model's device before generating
text = "Who are you?"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
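
Because the base model is a chat model, prompts formatted with the tokenizer's chat template usually give better results than raw text. Below is a minimal sketch that reuses the `model` and `tokenizer` loaded above; `enable_thinking` is a Qwen3 template option and is illustrative here, as the merged model's chat template may differ after fine-tuning.

```python
# Chat-style prompting via the tokenizer's chat template (sketch).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# enable_thinking=False is a Qwen3 template option; drop it if this
# model's template does not accept it.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```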
## Training Details
This model was created by training a LoRA adapter for text understanding and generation on the datasets listed above, then merging the adapter back into the Qwen/Qwen3-8B base weights.
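
For reference, merging a LoRA adapter into its base model can be done with PEFT's `merge_and_unload`. The sketch below is illustrative only (training itself was done with axolotl); the adapter path and output directory are hypothetical placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

# "./lora-adapter" is a hypothetical path to the trained LoRA weights.
model = PeftModel.from_pretrained(base, "./lora-adapter")

# Fold the LoRA weights into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()

merged.save_pretrained("./Delphermes-8B-merged")
tokenizer.save_pretrained("./Delphermes-8B-merged")
```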
