---
language:
- bg
size_categories:
- n<1K
task_categories:
- text-classification
license: apache-2.0
tags:
- not-for-all-audiences
- medical
---

### Warning: This dataset contains content that includes toxic, offensive, or otherwise inappropriate language.

The toxic-onto-bg dataset consists of 299 manually annotated Bulgarian words from the [Flores Toxicity 200 dataset](https://github.com/facebookresearch/flores/tree/main/toxicity), covering four categories: toxic language, medical terminology, non-toxic language, and terms related to minority communities, together with class formalisms and definitions. The ontology is aimed at language and media researchers and at developers of toxic-language filtering systems.

Ontology overview:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/640721cd5e6d06cc2cf35deb/quUedZe9iVeG6vMd0Sni9.png)

More information is available in [the paper](https://www.researchgate.net/publication/388842558_Detecting_Toxic_Language_Ontology_and_BERT-based_Approaches_for_Bulgarian_Text) and in the [conference presentation](https://docs.google.com/presentation/d/1CYQ5uU2lZgabUH5ap4br7-LCqoWZXlsXAG8Z5TfcIYc/edit?usp=sharing).

# Code and usage

The code for building the ontology is available in the [GitHub repository of the project](https://github.com/TsvetoslavVasev/toxic-language-classification). A minimal sketch of loading the dataset is shown at the end of this card.

# Reference

If you use this dataset in your academic project, please cite as:

```bibtex
@article{berbatova2025detecting,
  title={Detecting Toxic Language: Ontology and BERT-based Approaches for Bulgarian Text},
  author={Berbatova, Melania and Vasev, Tsvetoslav},
  year={2025}
}
```
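
# Loading the dataset

Below is a minimal sketch of loading the dataset with the `datasets` library and tallying entries per category. The Hub repo id and the column names (`word`, `category`) are assumptions for illustration, not confirmed by this card; adjust them to match the actual dataset files.

```python
# A minimal loading sketch. Both the repo id and the column names used below
# are assumptions; replace them with the actual values for toxic-onto-bg.
from collections import Counter

from datasets import load_dataset

# Hypothetical repo id -- substitute the real Hub path of this dataset.
ds = load_dataset("melaniab/toxic-onto-bg", split="train")

# Count how many of the 299 annotated words fall into each of the four categories
# (toxic language, medical terminology, non-toxic language, minority-related terms).
counts = Counter(example["category"] for example in ds)
for category, count in counts.most_common():
    print(f"{category}: {count}")
```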