PeKA (Persian Knowledge Assessment)

PeKA is a dataset introduced in the paper "Advancing Persian LLM Evaluation", accepted to the Findings of NAACL 2025. It was developed as part of a broader effort to evaluate and benchmark large language models (LLMs) across a range of Persian knowledge topics. For comprehensive details on the dataset's construction, scope, tasks, and intended use, please refer to the original paper.

The dataset is constructed so that answering its questions requires knowledge of the Persian community, particularly Iran, and its culture from a variety of perspectives. It contains 3,600 multiple-choice questions divided into 12 categories of 300 high-quality examples each: history, literature, religion, general knowledge, geography, nature, art, music, television shows, movies, food, and sports, covering a wide range of cultural and native topics for Persian speakers.
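
To get a quick feel for the data, the sketch below loads it with the Hugging Face `datasets` library. The repository ID, split name, and column names are assumptions made for illustration only; consult the dataset files or viewer for the actual schema. If the dataset is gated, you may need to authenticate first (e.g. `huggingface-cli login`).

```python
from collections import Counter
from datasets import load_dataset

# Hypothetical repository ID and split; replace with the actual values from the dataset page.
peka = load_dataset("<org>/PeKA", split="train")

# The paper describes 12 categories with 300 questions each; tally them here
# (assumes a "category" column exists).
print(Counter(peka["category"]))

# Inspect one multiple-choice example (field names are assumptions).
example = peka[0]
print(example["question"], example["choices"], example["answer"])
```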

Dataset Sources

  • Paper: Advancing Persian LLM Evaluation (https://aclanthology.org/2025.findings-naacl.147/)

Citation

BibTeX:

@inproceedings{hosseinbeigi-etal-2025-advancing,
    title = "Advancing {P}ersian {LLM} Evaluation",
    author = "Hosseinbeigi, Sara Bourbour  and
      Rohani, Behnam  and
      Masoudi, Mostafa  and
      Shamsfard, Mehrnoush  and
      Saaberi, Zahra  and
      Manesh, Mostafa Karimi  and
      Abbasi, Mohammad Amin",
    editor = "Chiruzzo, Luis  and
      Ritter, Alan  and
      Wang, Lu",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2025",
    month = apr,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.findings-naacl.147/",
    doi = "10.18653/v1/2025.findings-naacl.147",
    pages = "2711--2727",
    ISBN = "979-8-89176-195-7",
    abstract = "Evaluation of large language models (LLMs) in low-resource languages like Persian has received less attention than in high-resource languages like English. Existing evaluation approaches for Persian LLMs generally lack comprehensive frameworks, limiting their ability to assess models' performance over a wide range of tasks requiring considerable cultural and contextual knowledge, as well as a deeper understanding of Persian literature and style. This paper first aims to fill this gap by providing two new benchmarks, PeKA and PK-BETS, on topics such as history, literature, and cultural knowledge, as well as challenging the present state-of-the-art models' abilities in a variety of Persian language comprehension tasks. These datasets are meant to reduce data contamination while providing an accurate assessment of Persian LLMs. The second aim of this paper is the general evaluation of LLMs across the current Persian benchmarks to provide a comprehensive performance overview. By offering a structured evaluation methodology, we hope to promote the examination of LLMs in the Persian language."
}