Lughaat-1.0-8B-Instruct
Overview
Lughaat-1.0-8B-Instruct is an Urdu language model developed by Muhammad Noman and built on the Llama 3.1 8B architecture. It is trained on muhammadnoman76/lughaat-urdu-dataset-llm, the largest Urdu dataset compiled by Muhammad Noman, which enables it to outperform comparable models such as Qwen-2.5-7b, Mistral 7B, and Alif 8B on Urdu language tasks.
Model Details
- Model Name: Lughaat-1.0-8B-Instruct
- Architecture: Based on Llama 3.1 8B
- Developer: Muhammad Noman
- Language: Urdu
- Training Dataset: muhammadnoman76/lughaat-urdu-dataset-llm (see the loading sketch below)
- Contact:
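The training dataset listed above can be pulled and inspected with the Hugging Face datasets library; a minimal sketch (the "train" split name is an assumption about the dataset's layout):

from datasets import load_dataset

# Load the Urdu training corpus referenced in the model details
# (assumes the dataset is public and exposes a "train" split)
dataset = load_dataset("muhammadnoman76/lughaat-urdu-dataset-llm", split="train")
print(dataset[0])  # inspect a single example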
Installation & Usage
This model is available on Hugging Face and can be used in multiple ways:
Method 1: Using Unsloth for Optimized Inference
from unsloth import FastLanguageModel

# Loading configuration
max_seq_length = 2048   # maximum context length; adjust to your needs
dtype = None            # None lets Unsloth pick float16/bfloat16 based on the GPU
load_in_4bit = True     # 4-bit quantization to reduce VRAM usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "muhammadnoman76/Lughaat-1.0-8B-Instruct",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # switch the model into optimized inference mode

# Urdu instruction-prompt template (Alpaca-style: instruction / input / response)
lughaat_prompt = """نیچے ایک ہدایت ہے جو کسی کام کی تفصیل بیان کرتی ہے، جس کے ساتھ ایک ان پٹ دیا گیا ہے جو مزید سندات فراہم کرتا ہے۔ تھوڑا وقت لیکر ایک جواب لکھیں جو درست طریقے سے درخواست مکمل کریں
### Instruction:
{}
### Input:
{}
### Response:
{}"""

# Example usage: "Who is Quaid-e-Azam?"
inputs = tokenizer(
    [
        lughaat_prompt.format(
            "قائد اعظم کون ہے؟",  # instruction
            "",  # input (optional extra context)
            "",  # response (left empty so the model generates it)
        )
    ],
    return_tensors = "pt",
).to("cuda")

# Generate a response, streaming tokens to stdout as they are produced
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
outputs = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
Method 2: Using Hugging Face Pipeline
from transformers import pipeline

# Build a text-generation pipeline; the prompt reuses the Urdu instruction template shown above
pipe = pipeline("text-generation", model="muhammadnoman76/Lughaat-1.0-8B-Instruct")
result = pipe("نیچے ایک ہدایت ہے جو کسی کام کی تفصیل بیان کرتی ہے، جس کے ساتھ ایک ان پٹ دیا گیا ہے جو مزید سندات فراہم کرتا ہے۔ تھوڑا وقت لیکر ایک جواب لکھیں جو درست طریقے سے درخواست مکمل کریں\n### Instruction: قائد اعظم کون ہے؟\n### Input:\n### Response:")
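The pipeline returns a list with one dictionary per prompt; the full text (prompt plus completion) is available under the "generated_text" key:

print(result[0]["generated_text"])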
Method 3: Direct Loading with Transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("muhammadnoman76/Lughaat-1.0-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "muhammadnoman76/Lughaat-1.0-8B-Instruct",
    torch_dtype=torch.float16,  # half precision so the 8B model fits in GPU memory
    device_map="auto",          # place the weights on the available GPU(s)
)

# Build the prompt using the Urdu instruction template
prompt = """نیچے ایک ہدایت ہے جو کسی کام کی تفصیل بیان کرتی ہے، جس کے ساتھ ایک ان پٹ دیا گیا ہے جو مزید سندات فراہم کرتا ہے۔ تھوڑا وقت لیکر ایک جواب لکھیں جو درست طریقے سے درخواست مکمل کریں
### Instruction:
قائد اعظم کون ہے؟
### Input:
### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
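The decoded string contains the prompt followed by the model's answer; a small post-processing sketch that keeps only the generated portion by splitting on the template's "### Response:" marker:

# Keep only the text generated after the "### Response:" marker
answer = response.split("### Response:")[-1].strip()
print(answer)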
Prompt Format
For optimal results, use the following prompt format (the Urdu header below roughly translates to: "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request."):
نیچے ایک ہدایت ہے جو کسی کام کی تفصیل بیان کرتی ہے، جس کے ساتھ ایک ان پٹ دیا گیا ہے جو مزید سندات فراہم کرتا ہے۔ تھوڑا وقت لیکر ایک جواب لکھیں جو درست طریقے سے درخواست مکمل کریں
### Instruction:
[Your instruction in Urdu]
### Input:
[Additional context or input - can be empty]
### Response:
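In code, the same template can be assembled with a small helper; a minimal sketch (format_lughaat_prompt and URDU_HEADER are illustrative names, not part of the model's API):

URDU_HEADER = "نیچے ایک ہدایت ہے جو کسی کام کی تفصیل بیان کرتی ہے، جس کے ساتھ ایک ان پٹ دیا گیا ہے جو مزید سندات فراہم کرتا ہے۔ تھوڑا وقت لیکر ایک جواب لکھیں جو درست طریقے سے درخواست مکمل کریں"

def format_lughaat_prompt(instruction: str, context: str = "") -> str:
    """Fill the Urdu instruction template; the input/context field may be left empty."""
    return f"{URDU_HEADER}\n### Instruction:\n{instruction}\n### Input:\n{context}\n### Response:\n"

# Example: the question used in the earlier snippets, with no extra context
prompt = format_lughaat_prompt("قائد اعظم کون ہے؟")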
Model Capabilities
Lughaat-1.0-8B-Instruct is specifically designed for Urdu language processing tasks including:
- Question answering
- Text generation
- Summarization
- Translation
- Content creation
- Conversational AI in Urdu
Hardware Requirements
- For optimal performance, a CUDA-compatible GPU is recommended
- Minimum of 16 GB VRAM for unquantized (16-bit) inference
- Around 8 GB VRAM when using 4-bit quantization (see the sketch below)
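To stay within the roughly 8 GB budget, the model can be loaded in 4-bit through bitsandbytes; a minimal sketch (the NF4/bfloat16 settings are common defaults, not values published for this model):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with bfloat16 compute (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("muhammadnoman76/Lughaat-1.0-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "muhammadnoman76/Lughaat-1.0-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)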
Performance Benchmarks
Lughaat-1.0-8B-Instruct outperforms similar-sized competitors in Urdu language tasks, including:
- Qwen-2.5-7b
- Mistral 7B
- Alif 8B
LLM-as-Judge evaluation on a human-annotated Urdu dataset
Benchmark Results: Lughaat-1.0-8B-Instruct vs. Competitors
Category | Lughaat-1.0-8B-Instruct | Alif-1.0-8B-Instruct | Gemma-2-9b-it | Aya expanse 8B | Llama-3-8b-Instruct | Mistral-Nemo-Instruct-2407 | Qwen2.5-7B-Instruct |
---|---|---|---|---|---|---|---|
Generation | 93.5 | 90.0 | 84.0 | 73.0 | 65.0 | - | - |
Translation | 94.2 | 90.0 | 90.0 | - | 65.0 | 79.5 | - |
Ethics | 89.7 | 85.5 | 84.0 | 71.5 | 64.0 | - | - |
Reasoning | 88.3 | 83.5 | 85.0 | - | - | 79.5 | 72.0 |
Average Score | 91.4 | 87.3 | 85.8 | 72.3 | 64.7 | 79.5 | 72.0 |
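The Average Score column corresponds to the mean over the categories in which each model was evaluated (dashes excluded); a quick check for Lughaat-1.0-8B-Instruct:

# Lughaat-1.0-8B-Instruct's category scores from the table above
# (Generation, Translation, Ethics, Reasoning)
scores = [93.5, 94.2, 89.7, 88.3]
print(round(sum(scores) / len(scores), 1))  # 91.4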
Lughaat-1.0-8B-Instruct Performance Evaluation
Key Findings
Lughaat-1.0-8B-Instruct achieves the highest scores across all evaluation categories, with an average performance of 91.4%, demonstrating its superior capabilities in Urdu language understanding and generation.
The model shows particularly strong performance in Translation (94.2%) and Generation (93.5%), outperforming the previous best model (Alif) by 4.2 and 3.5 percentage points respectively.
In Ethics and Reasoning categories, Lughaat maintains a significant lead over competitors, showing its balanced performance across different language tasks.
Compared to larger models such as Gemma-2-9b-it, Lughaat-1.0-8B-Instruct delivers better results despite its smaller parameter count, demonstrating the effectiveness of the specialized training dataset and methodology.
The performance gap is most significant when compared to general-purpose models like Llama-3-8b-Instruct, highlighting the benefits of language-specific optimization.
License & Usage Restrictions
Please refer to the model card on Hugging Face for the most up-to-date license information.
Citation
If you use this model in your research or applications, please cite it as follows:
@misc{noman2025lughaat,
  author       = {Muhammad Noman},
  title        = {Lughaat-1.0-8B-Instruct: An Advanced Urdu Language Model},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/muhammadnoman76/Lughaat-1.0-8B-Instruct}}
}
Acknowledgements
Special thanks to Muhammad Noman for developing this model and compiling the extensive Urdu dataset that powers it.
Contact & Support
For questions, feedback, or collaboration opportunities: