IndicNLG Benchmark: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
Paper: arXiv:2203.05437
MultiIndicHeadlineGenerationSS is a multilingual sequence-to-sequence pre-trained model focused exclusively on Indic languages. It supports 11 Indian languages and is fine-tuned from the IndicBARTSS checkpoint. You can use MultiIndicHeadlineGenerationSS to build natural language generation applications in Indian languages for tasks like headline generation and other summarization-related tasks.
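The model expects inputs ending with `</s> <2xx>` and targets starting with `<2yy>`, where `xx`/`yy` are language codes (see the usage example below). A minimal sketch of helpers for building these strings; the function names are illustrative, not part of any library:

```python
# The 11 language codes supported by MultiIndicHeadlineGenerationSS.
SUPPORTED_LANGS = {"as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"}

def format_input(paragraph: str, lang: str) -> str:
    """Build a model input in the training format: "Paragraph </s> <2xx>"."""
    if lang not in SUPPORTED_LANGS:
        raise ValueError(f"Unsupported language code: {lang}")
    return f"{paragraph} </s> <2{lang}>"

def format_target(sentence: str, lang: str) -> str:
    """Build a model target in the training format: "<2yy> Sentence </s>"."""
    if lang not in SUPPORTED_LANGS:
        raise ValueError(f"Unsupported language code: {lang}")
    return f"<2{lang}> {sentence} </s>"

print(format_input("Some paragraph", "hi"))   # Some paragraph </s> <2hi>
print(format_target("Some headline", "hi"))   # <2hi> Some headline </s>
```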
```python
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or: tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")
# Or: model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")

# Map the special tokens to their ids.
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get a lang_id, use any of: '<2as>', '<2bn>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>'.

# First tokenize the input and output. This is the format MultiIndicHeadlineGenerationSS
# was trained on: the input is "Paragraph </s> <2xx>" and the output is
# "<2yy> Sentence </s>", where xx and yy are language codes.
inp = tokenizer("यूट्यूब या फेसबुक पर वीडियो देखते समय आप भी बफरिंग की वजह से परेशान होते हैं? इसका जवाब हां है तो जल्द ही आपकी सारी समस्या खत्म होने वाली है। दरअसल, टेलीकॉम मिनिस्टर अश्विनी वैष्णव ने पिछले सप्ताह कहा कि अगस्त के अंत तक हर-हाल में '5G' इंटरनेट लॉन्च हो जाएगा। उन्होंने यह भी कहा है कि स्पेक्ट्रम की बिक्री शुरू हो चुकी है और जून तक ये प्रोसेस खत्म होने की संभावना है।</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# tensor([[58232, 76, 14514, 53, 5344, 10605, 1052, 680, 83, 648, ..., 12126, 725, 19, 13635, 17, 7, 64001, 64007]])

out = tokenizer("<2hi> 5G इंटरनेट का इंतजार हुआ खत्म:अगस्त तक देश में शुरू हो सकती है 5G सर्विस </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# tensor([[64007, 329, 1906, 15429, ..., 17, 329, 1906, 27241, 64001]])

model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])

# Loss (plain cross-entropy; this is not label smoothed).
model_outputs.loss
# Logits.
model_outputs.logits

# Generation. Note the decoder_start_token_id: it should be the
# target-language tag, here "<2hi>" for a Hindi headline.
model.eval()  # Set dropouts to zero.
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=32, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))

# Decode the generated ids to an output string.
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # अगस्त के अंत तक '5G' इंटरनेट लॉन्च हो जाएगा : अश्विनी वैष्णव
# ("'5G' internet will launch by the end of August: Ashwini Vaishnaw")
```
Scores on the IndicNLG Benchmark headline generation test sets are as follows:
| Language | Rouge-1 / Rouge-2 / Rouge-L |
|---|---|
| as | 48.10 / 32.41 / 46.82 |
| bn | 35.71 / 18.93 / 33.49 |
| gu | 32.41 / 16.95 / 30.87 |
| hi | 38.48 / 18.44 / 33.60 |
| kn | 65.22 / 54.23 / 64.50 |
| ml | 58.52 / 47.02 / 57.60 |
| mr | 34.11 / 18.36 / 33.04 |
| or | 24.83 / 11.00 / 23.74 |
| pa | 45.15 / 27.71 / 42.12 |
| ta | 47.15 / 31.09 / 45.72 |
| te | 36.80 / 20.81 / 35.58 |
| average | 42.41 / 27.00 / 40.64 |
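As a quick sanity check, the `average` row appears to be the unweighted macro-average of the per-language rows. A sketch that recomputes it; the numbers are copied from the table above:

```python
# Per-language (Rouge-1, Rouge-2, Rouge-L) scores, copied from the table above.
rouge = {
    "as": (48.10, 32.41, 46.82),
    "bn": (35.71, 18.93, 33.49),
    "gu": (32.41, 16.95, 30.87),
    "hi": (38.48, 18.44, 33.60),
    "kn": (65.22, 54.23, 64.50),
    "ml": (58.52, 47.02, 57.60),
    "mr": (34.11, 18.36, 33.04),
    "or": (24.83, 11.00, 23.74),
    "pa": (45.15, 27.71, 42.12),
    "ta": (47.15, 31.09, 45.72),
    "te": (36.80, 20.81, 35.58),
}

# Unweighted macro-average per metric, rounded to two decimals.
averages = [round(sum(scores) / len(rouge), 2) for scores in zip(*rouge.values())]
print(averages)  # [42.41, 27.0, 40.64]
```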
If you use MultiIndicHeadlineGenerationSS, please cite the following paper:
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url={https://arxiv.org/abs/2203.05437}
}