
Dataset Card for VOICED Dataset

Offensive speech detection is a key component of content moderation. However, what is offensive can be highly subjective. This paper investigates how machine and human moderators disagree on what is offensive when it comes to real-world social web political discourse. We show that (1) there is extensive disagreement among the moderators (humans and machines); and (2) human and large-language-model classifiers are unable to predict how other human raters will respond, based on their political leanings. For (1), we conduct a noise audit at an unprecedented scale that combines both machine and human responses. For (2), we introduce a first-of-its-kind dataset of vicarious offense. Our noise audit reveals that moderation outcomes vary wildly across different machine moderators. Our experiments with human moderators suggest that political leanings combined with sensitive issues affect both first-person and vicarious offense.

Dataset Details

The train/dev/test splits are provided as individual Parquet files; a combined file, voiced_complete.parquet, is also included.
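For a quick look at the data, any Parquet reader works. Below is a minimal sketch using pandas; voiced_complete.parquet is the combined file named above, while the individual split file names are an assumption here, so check the repository file listing for the exact names.

import pandas as pd

# Combined file (name taken from this card).
complete = pd.read_parquet("voiced_complete.parquet")

# Individual splits -- the file name below is assumed, not confirmed by this card.
# train = pd.read_parquet("train.parquet")

print(complete.shape)
print(complete.columns.tolist())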

Dataset Description

This is an overview of the columns in this dataset.

  1. annotator_id = The unique ID of the annotator who annotated this item; annotators were recruited from Amazon Mechanical Turk. The ID is hashed so that annotator-level information is retained without revealing annotator identity.
  2. comment_id = The unique comment ID from YouTube, hashed in accordance with YouTube's data-sharing policies.
  3. batch_id = The batch the data item was part of during the human annotation experiments.
  4. dataset = The YouTube channel the comment was sourced from: one of CNN, MSNBC, or FOXNEWS.
  5. duration = The time the annotator took to annotate the entire batch (30 items).
  6. dataset_bin and dataset_kind = These two columns are interdependent and encode the topical sampling: general marks comments sampled at random from YouTube, gun marks comments related to gun laws, and abortion marks comments related to abortion laws. Half of the dataset is general, and one quarter each is gun- and abortion-related.
  7. comment_text = The text of the YouTube comment.
  8. PERSON_TOXIC_raw = The raw label from the annotator indicating whether the comment is personally offensive to them.
  9. PERSON_TOXIC = The binarized version of the raw label: if the annotator chose "not at all offensive", the value is 0; every other choice is 1.
  10. [political party]_TOXIC and [political party]_TOXIC_raw, where [political party] is DEM, REP, or IND = The vicarious labels. Interpret these together with annotator_political, the political leaning the annotator reported: vicarious labeling asks an annotator to label on behalf of the other two political groups. For example, if annotator_political is Republican, then REP_TOXIC is 1000 (the empty placeholder) and REP_TOXIC_raw is empty, while DEM_TOXIC and IND_TOXIC (and their raw columns) contain values (see the sketch after this list).
  11. online and social = Responses to questions asking whether the content is acceptable to be shown online and on social networks, respectively.
  12. 2016_election and 2020_election = Responses to questions asking whether those presidential elections were conducted in a fair and democratic manner.
  13. published_at = The timestamp at which the YouTube comment was published.
  14. The remaining columns contain demographic information about the annotators.
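
As a sketch of how the label columns described above fit together, the snippet below binarizes PERSON_TOXIC_raw in the way item 9 describes and keeps only the vicarious columns that are actually filled in for each annotator, treating 1000 as the empty placeholder from item 10. The exact wording of the "not at all offensive" choice and the string values of annotator_political are assumptions; check the raw values in the data before relying on this.

import pandas as pd

df = pd.read_parquet("voiced_complete.parquet")

# Item 9: 0 only for the "not at all offensive" choice, 1 otherwise.
# The exact raw label string is an assumption here.
df["person_toxic_binary"] = (
    df["PERSON_TOXIC_raw"].astype(str).str.strip().str.lower() != "not at all offensive"
).astype(int)

# Item 10: the column matching the annotator's own leaning holds the placeholder 1000;
# the other two parties carry the vicarious labels.
EMPTY = 1000
vicarious_cols = ["DEM_TOXIC", "REP_TOXIC", "IND_TOXIC"]
vicarious = df[["annotator_political"] + vicarious_cols].copy()
vicarious[vicarious_cols] = vicarious[vicarious_cols].mask(vicarious[vicarious_cols] == EMPTY)

# Example: vicarious judgments given by annotators who reported leaning Republican.
# The value "Republican" follows the example in item 10 but is not confirmed by this card.
rep_view = vicarious[vicarious["annotator_political"] == "Republican"]
print(rep_view[["DEM_TOXIC", "IND_TOXIC"]].mean())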

Dataset Paper

Vicarious Offense and Noise Audit of Offensive Speech Classifiers: Unifying Human and Machine Disagreement on What is Offensive (EMNLP 2023): https://aclanthology.org/2023.emnlp-main.713

Citation

BibTeX:

@inproceedings{weerasooriya-etal-2023-vicarious,
    title = "Vicarious Offense and Noise Audit of Offensive Speech Classifiers: Unifying Human and Machine Disagreement on What is Offensive",
    author = "Weerasooriya, Tharindu  and
      Dutta, Sujan  and
      Ranasinghe, Tharindu  and
      Zampieri, Marcos  and
      Homan, Christopher  and
      KhudaBukhsh, Ashiqur",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.713",
    doi = "10.18653/v1/2023.emnlp-main.713",
    pages = "11648--11668",
    abstract = "Offensive speech detection is a key component of content moderation. However, what is offensive can be highly subjective. This paper investigates how machine and human moderators disagree on what is offensive when it comes to real-world social web political discourse. We show that (1) there is extensive disagreement among the moderators (humans and machines); and (2) human and large-language-model classifiers are unable to predict how other human raters will respond, based on their political leanings. For (1), we conduct a noise audit at an unprecedented scale that combines both machine and human responses. For (2), we introduce a first-of-its-kind dataset of vicarious offense. Our noise audit reveals that moderation outcomes vary wildly across different machine moderators. Our experiments with human moderators suggest that political leanings combined with sensitive issues affect both first-person and vicarious offense. The dataset is available through https://github.com/Homan-Lab/voiced.",
}

Dataset Card Contact

Tharindu Cyril Weerasooriya [[email protected]]
