
Tokenizer Card for Ansh-256k!

The Ansh-256k tokenizer is trained on a dataset of Wikipedia articles in the 22 official Indic languages and English. We propose the name Ansh because this tokenizer is designed to meticulously identify every essential token (*Ansh* in Sanskrit) of our diverse Indic languages. This model is an advanced version of Ansh-160k, which was trained on 18 Indic languages and English.


Model Description

India is a vast country with a multilingual culture spanning 22 official languages and more than 1,700 languages and dialects. These languages often share words among themselves, sometimes even across language families. To capitalize on this observation, we trained our tokenization model with a vocabulary size of 256,000 (256k) by applying the Byte-Pair Encoding (BPE) algorithm to Wikipedia articles and the Sangraha dataset in 22 Indic languages and English. When compared on fertility scores against popular open-source tokenizers trained on multilingual Indic data, our model outperforms them in 20 Indic languages.
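Fertility here refers to the average number of tokens a tokenizer produces per word; lower values mean words are split less aggressively. The snippet below is a minimal sketch of how such a score can be computed, assuming the common tokens-per-whitespace-word definition (the exact corpus and preprocessing behind the reported numbers may differ):

```python
from transformers import AutoTokenizer

def fertility(tokenizer, texts):
    """Average number of tokens per whitespace-separated word (lower is better)."""
    n_tokens = sum(len(tokenizer.tokenize(t)) for t in texts)
    n_words = sum(len(t.split()) for t in texts)
    return n_tokens / n_words

tokenizer = AutoTokenizer.from_pretrained("LingoIITGN/Ansh-256k")
sample = ["भारत एक विशाल देश है।"]  # Hindi: "India is a vast country."
print(f"Fertility on sample: {fertility(tokenizer, sample):.3f}")
```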

How to Get Started with the Model 👨🏻‍💻

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer

try:
    tokenizer = AutoTokenizer.from_pretrained("LingoIITGN/Ansh-256k")
    print("Tokenizer loaded successfully!")
except Exception as e:
    print(f"Error loading tokenizer: {e}")
    print("Please ensure you have the correct model name and are connected to the internet.")
    exit()

input_text = "Hello, world! This is an example of how to use the tokenizer."
# input_text = 'मुझे यह presentation कल morning तक submit करना है।'  # code-mixed Hindi-English
# input_text = 'What is the capital city of India?'

# Encode the text into token IDs.
encoded_input = tokenizer.encode(input_text)
print("\nOriginal Text:", input_text)
print("Encoded (Token IDs):", encoded_input)

# Decode the token IDs back into text.
decoded_output = tokenizer.decode(encoded_input)
print("Decoded Text:", decoded_output)
```

Evaluation

[More Information Needed]

Results 🏆

Comparison of fertility scores (lower is better) between popular open-source tokenizers trained on multilingual Indic languages and the Ansh tokenizer, across Indic languages and English.
| Language | IndicBERTv2 | Sarvam-1 | MuRIL | Gemma-3 | Llama-3.1 | XLM-RoBERTa | NLLB | Ansh-160k |
|---|---|---|---|---|---|---|---|---|
| Tamil | 1.966 | 2.853 | 1.904 | 2.766 | 12.170 | 2.726 | 2.925 | 1.937 |
| Kannada | 2.035 | 2.651 | 1.992 | 3.498 | 15.302 | 2.835 | 2.955 | 1.876 |
| Malayalam | 2.202 | 3.246 | 2.199 | 3.571 | 15.215 | 2.999 | 3.329 | 2.073 |
| Maithili | 1.534 | 2.269 | 1.549 | 2.036 | 3.414 | 1.991 | 2.058 | 1.270 |
| Konkani | 2.145 | 2.954 | 2.469 | 2.830 | 4.180 | 2.746 | 2.765 | 1.741 |
| Telugu | 1.803 | 2.429 | 1.859 | 3.050 | 13.002 | 2.391 | 2.691 | 1.713 |
| Odia | 1.601 | 2.419 | 1.497 | 4.639 | 15.629 | 2.222 | 2.284 | 1.397 |
| Bengali | 1.610 | 2.083 | 1.555 | 1.890 | 8.389 | 2.374 | 2.396 | 1.515 |
| Nepali | 1.629 | 2.450 | 1.484 | 2.163 | 3.768 | 1.903 | 2.070 | 1.466 |
| Punjabi | 1.458 | 1.822 | 1.459 | 2.968 | 8.277 | 2.031 | 1.983 | 1.445 |
| Urdu | 1.565 | 9.004 | 1.402 | 1.984 | 3.153 | 1.582 | 1.807 | 1.383 |
| Hindi | 1.456 | 1.784 | 1.450 | 1.719 | 2.997 | 1.716 | 1.790 | 1.364 |
| Gujarati | 1.505 | 2.228 | 1.428 | 2.491 | 9.926 | 2.195 | 2.332 | 1.387 |
| Kashmiri | 2.722 | 9.237 | 2.220 | 3.204 | 4.119 | 3.155 | 2.966 | 1.528 |
| Marathi | 1.529 | 1.906 | 1.493 | 2.026 | 3.964 | 2.032 | 2.173 | 1.494 |
| Sindhi | 1.740 | 8.337 | 1.436 | 2.377 | 3.060 | 1.735 | 1.830 | 1.380 |
| Assamese | 1.677 | 4.474 | 1.655 | 2.815 | 8.506 | 3.006 | 2.303 | 1.562 |
| Sanskrit | 2.821 | 3.916 | 2.294 | 3.586 | 5.036 | 3.268 | 3.390 | 1.950 |
| English | 1.491 | 1.844 | 1.526 | 1.537 | 1.486 | 1.574 | 1.587 | 1.521 |
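As a rough sketch, a comparison like the one above could be approximated for a single language as follows. The checkpoint names below are our assumptions about the public counterparts of the listed tokenizers (substitute the exact ones if they differ), and the reported scores were computed over full corpora rather than one sentence:

```python
from transformers import AutoTokenizer

# Assumed public checkpoints; the table's numbers may come from different ones.
CHECKPOINTS = {
    "Ansh-256k": "LingoIITGN/Ansh-256k",
    "MuRIL": "google/muril-base-cased",
    "XLM-RoBERTa": "xlm-roberta-base",
}

hindi_sample = "मुझे यह presentation कल morning तक submit करना है।"

for name, ckpt in CHECKPOINTS.items():
    tok = AutoTokenizer.from_pretrained(ckpt)
    # Fertility: tokens produced per whitespace-separated word.
    score = len(tok.tokenize(hindi_sample)) / len(hindi_sample.split())
    print(f"{name:12s} fertility = {score:.3f}")
```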

Model Card Contact ✉️

Lingo Research Group at IIT Gandhinagar, India
Mail at: [email protected]
