# NyayaLM v0.5: Nepali Legal Assistant
## Model Description
NyayaLM v0.5 is a fine-tuned version of Google's Gemma 3n 4B model, designed to provide accurate legal information in Nepali. It runs entirely offline on personal computers, helping to bridge the justice gap in Nepal by making legal knowledge accessible to everyone.
**Key Features:**
- 🇳🇵 Nepali language support for legal queries
- 💻 Offline operation (no internet required)
- 🔒 Privacy-first (all processing happens locally)
- ⚡ Efficient performance on consumer hardware
- 📚 Trained on 61+ Nepali legal documents
## Model Details
- Base Model: `unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit`
- Fine-tuned by: Chhatramani
- Languages: Nepali (primary), English (secondary)
- Domain: Nepalese Law
- Context Length: 2048 tokens
- Quantization: 4-bit (during training)
- Parameter Count: 4B (base), 21M trainable (LoRA adapters)
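The snippet below is a minimal, illustrative sketch of how a roughly 21M-parameter LoRA budget on a 4-bit base model is typically expressed with `peft`; the rank, alpha, and target modules are assumptions, not the exact training recipe used for NyayaLM.

```python
from peft import LoraConfig

# Hypothetical QLoRA-style adapter config; only "4-bit base + ~21M trainable
# LoRA parameters" comes from the model card, the specific values are assumed.
lora_config = LoraConfig(
    r=16,                     # assumed LoRA rank
    lora_alpha=16,            # assumed scaling factor
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections (assumed)
        "gate_proj", "up_proj", "down_proj",      # MLP projections (assumed)
    ],
)
```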
## Intended Use
### Primary Use Cases
- Answering legal questions in Nepali
- Explaining legal concepts in simple language
- Providing information about Nepalese laws and rights
- Supporting legal research and education
- Assisting NGOs and legal aid organizations
### Target Users
- Rural communities with limited access to lawyers
- Students studying law in Nepal
- NGOs working on legal empowerment
- Government officials needing quick legal reference
- Citizens seeking to understand their legal rights
## How to Use
### Installation
Install the required libraries:

```bash
pip install transformers torch accelerate bitsandbytes
```
### Load Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_name = "chhatramani/NyayaLM_v0.5_gemma3n4B"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit quantization keeps memory usage low enough for consumer GPUs
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=quantization_config,
)
```
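Because the model is intended for fully offline use, it can be reloaded without a network connection once the weights are cached locally. A minimal sketch continuing from the snippet above; `local_files_only` is a standard `from_pretrained` option, and you can additionally export `HF_HUB_OFFLINE=1` before launching Python to block all Hub traffic.

```python
# Reload strictly from the local cache created by the first download;
# no internet connection is required at this point.
tokenizer = AutoTokenizer.from_pretrained(model_name, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=quantization_config,
    local_files_only=True,
)
```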
### Use with Chat Template

```python
# Requires Unsloth (`pip install unsloth`) for the chat-template helper.
from unsloth.chat_templates import get_chat_template

# Attach the Gemma-3 chat template to the tokenizer
tokenizer = get_chat_template(tokenizer, chat_template="gemma-3")

# Create the conversation
# (the query asks: "What is the main objective of the Children's Rights Act?")
messages = [
    {"role": "user", "content": "बालबालिका अधिकार ऐनको मुख्य उद्देश्य के हो?"}
]

# Apply the chat template
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate a response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
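For interactive use, responses can also be streamed token by token instead of waiting for the full generation. A small sketch using `transformers`' built-in `TextStreamer` (the generation settings are illustrative):

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=512, streamer=streamer)
```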
## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{nyayalm_v0.5,
  title={NyayaLM v0.5: A Nepali Legal Assistant Based on Gemma 3n},
  author={Chhatramani},
  year={2025},
  month={August},
  url={https://huggingface.co/chhatramani/NyayaLM_v0.5_gemma3n4B},
  note={Google Gemma 3n Impact Challenge Submission}
}
```
## Acknowledgments

- **Google**: For the Gemma 3n model and the Impact Challenge opportunity
- **Unsloth**: For the efficient training framework
- **Nepali Legal Community**: For domain expertise and validation
- **Open Source Community**: For the tools and libraries that made this project possible