---
dataset_info:
  features:
  - name: xml
    dtype: string
  - name: proceedings
    dtype: string
  - name: year
    dtype: string
  - name: url
    dtype: string
  - name: language documentation
    dtype: string
  - name: has non-English?
    dtype: string
  - name: topics
    dtype: string
  - name: language coverage
    dtype: string
  - name: title
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 452838
    num_examples: 310
  download_size: 231933
  dataset_size: 452838
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-classification
---
# The State of Multilingual LLM Safety Research: From Measuring the Language Gap to Mitigating It
We present a comprehensive analysis of the linguistic diversity of LLM safety research, highlighting the English-centric nature of the field. Through a systematic review of nearly 300 publications from 2020–2024 across major NLP conferences and workshops at *ACL, we identify a significant and growing language gap in LLM safety research, with even high-resource non-English languages receiving minimal attention.
## Dataset Description
The current version of the dataset consists of annotations for conference and workshop papers published at *ACL venues between 2020 and 2024, identified by searching for the keywords "safe" and "safety" in paper abstracts. The data source is https://github.com/acl-org/acl-anthology/tree/master/data, and the papers were curated by Zheng-Xin Yong, Beyza Ermis, Marzieh Fadaee, and Julia Kreutzer.
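For illustration, here is a minimal sketch of the kind of abstract keyword filter described above. It is not the authors' exact curation script, and it assumes abstracts have already been extracted from the Anthology XML:

```python
import re

# Illustrative filter (assumption, not the authors' exact script):
# keep a paper if its abstract mentions "safe" or "safety" as a whole word.
SAFETY_KEYWORDS = re.compile(r"\bsafe(?:ty)?\b", re.IGNORECASE)

def is_safety_paper(abstract: str) -> bool:
    """Return True if the abstract contains the keyword 'safe' or 'safety'."""
    return bool(abstract) and bool(SAFETY_KEYWORDS.search(abstract))

print(is_safety_paper("We study the safety of multilingual LLMs."))  # True
print(is_safety_paper("We propose a new machine translation metric."))  # False
```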
## Dataset Structure
- xml: xml string from ACL Anthology
- proceedings: the conference or workshop proceedings in which the paper is published
- year: year of publication
- url: paper url on ACL Anthology
- language documentation: whether the paper explicitly reports the languages studied in the work ("x" indicates that the languages are not reported)
- has non-English?: whether the work covers any non-English language (0: English-only, 1: at least one non-English language)
- topics: topic of the safety work ('jailbreaking attacks'; 'toxicity, bias'; 'hallucination, factuality'; 'privacy'; 'policy'; 'general safety, LLM alignment'; 'others')
- language coverage: languages covered in the work (null means English only)
- title: title of the paper
- abstract: abstract of the paper
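As a usage sketch, the annotations can be loaded with the `datasets` library and filtered on these fields. The repository id below is a placeholder, not the dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual Hub path.
ds = load_dataset("your-org/multilingual-llm-safety-annotations", split="train")

# All features are stored as strings (see the schema above).
print(ds.column_names)

# Count papers that study at least one non-English language.
non_english = ds.filter(lambda row: row["has non-English?"] == "1")
print(f"{len(non_english)} of {len(ds)} papers cover a non-English language")
```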
## Citation
@article{yong2025safetysurvey,
  title={The State of Multilingual LLM Safety Research: From Measuring the Language Gap to Mitigating It},
  author={Zheng-Xin Yong and Beyza Ermis and Marzieh Fadaee and Stephen H. Bach and Julia Kreutzer},
  journal={arXiv preprint arXiv:2505.24119},
  year={2025},
}