BanglaByT5: Byte-Level Modelling for Bangla

BanglaByT5 is an encoder–decoder transformer model pretrained at the byte level specifically for Bangla language understanding and generation tasks. By operating on raw bytes rather than subword tokens, BanglaByT5 captures fine-grained morphological and orthographic patterns, making it highly effective at handling diverse Bangla text sources.
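
To make the byte-level interface concrete, here is a small sketch of how text maps to token IDs, assuming BanglaByT5 retains the tokenizer convention of its google/byt5-small base, where each token ID is a UTF-8 byte value offset by 3 to leave room for the pad, EOS, and UNK special tokens:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Vacaspati/BanglaByT5")

text = "বাংলা"  # five Bangla characters, each occupying 3 UTF-8 bytes
ids = tokenizer(text).input_ids

# Under the ByT5 convention: one token per UTF-8 byte (value + 3),
# followed by the EOS token (id 1), i.e. 15 byte tokens plus EOS
expected = [b + 3 for b in text.encode("utf-8")] + [1]
print(ids == expected)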

Usage Example

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Vacaspati/BanglaByT5")
model = AutoModelForSeq2SeqLM.from_pretrained("Vacaspati/BanglaByT5")

# Tokenize input
input_text = "আমার নাম প্রমিত।"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# Generate text
outputs = model.generate(input_ids, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
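
Note that one token corresponds to one byte, and a single Bangla character typically occupies 3 UTF-8 bytes, so byte-level sequences run roughly three times longer than the same text counted in characters; max_length should be budgeted accordingly. For downstream use, the model fine-tunes like any T5-style seq2seq model. The following is a minimal sketch with a hypothetical Bangla-to-English pair chosen purely for illustration, not the training setup from the paper:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Vacaspati/BanglaByT5")
model = AutoModelForSeq2SeqLM.from_pretrained("Vacaspati/BanglaByT5")
model.train()

# Hypothetical source/target pair, for illustration only
source = tokenizer("আমার নাম প্রমিত।", return_tensors="pt")
target = tokenizer("My name is Pramit.", return_tensors="pt")

# Passing labels makes the model compute the seq2seq cross-entropy loss
outputs = model(
    input_ids=source.input_ids,
    attention_mask=source.attention_mask,
    labels=target.input_ids,
)
outputs.loss.backward()  # gradients for a single optimizer step
print(outputs.loss.item())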

Citation

If you use this model, please cite:

@misc{bhattacharyya2025banglabyt5bytelevelmodellingbangla,
  title={BanglaByT5: Byte-Level Modelling for Bangla},
  author={Pramit Bhattacharyya and Arnab Bhattacharyya},
  year={2025},
  eprint={2505.17102},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.17102},
}
Model Details

Base model: google/byt5-small
Model size: 300M parameters
Tensor type: F32
Format: Safetensors