Kannada-BERT-SQuAD

Kannada-BERT-SQuAD is a KannadaBERT model (l3cube-pune/kannada-bert) fine-tuned on the translated Kannada question-answering dataset L3Cube-IndicSQuAD: [dataset link](https://github.com/l3cube-pune/indic-nlp/tree/main/L3Cube-IndicSQUAD)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2505.03688).
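The model can be used for extractive question answering over Kannada text. Below is a minimal usage sketch with the Hugging Face `transformers` question-answering pipeline; the repository id `l3cube-pune/kannada-question-answering-squad-bert` and the Kannada question/context strings are illustrative assumptions, so substitute the actual model id from this page and your own inputs.

```python
# Minimal sketch: extractive QA with a SQuAD-style fine-tuned Kannada BERT model.
# NOTE: the model id below is an assumption for illustration; replace it with the
# actual repository id of this model on the Hugging Face Hub.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="l3cube-pune/kannada-question-answering-squad-bert",
)

# Illustrative Kannada example:
# question: "What is the capital of India?"
# context:  "New Delhi is the capital of India."
result = qa(
    question="ಭಾರತದ ರಾಜಧಾನಿ ಯಾವುದು?",
    context="ನವದೆಹಲಿ ಭಾರತದ ರಾಜಧಾನಿ.",
)

# The pipeline returns the extracted answer span, a confidence score,
# and the start/end character offsets within the context.
print(result["answer"], result["score"])
```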

Citing:

@article{endait2025indicsquad,
  title={IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages},
  author={Endait, Sharvi and Ghatage, Ruturaj and Kulkarni, Aditya and Patil, Rajlaxmi and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2505.03688},
  year={2025}
}
@article{ghatage2024mahasquad,
  title={MahaSQuAD: Bridging Linguistic Divides in Marathi Question-Answering},
  author={Ghatage, Ruturaj and Kulkarni, Aditya and Patil, Rajlaxmi and Endait, Sharvi and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2404.13364},
  year={2024}
}

Other IndicSQuAD models:
