Dataset Preview

The data has two string columns: input (a sentence frame to be completed, 10–18 characters) and reference (the expected completion, one of 9 category nouns). Each affirmative frame is paired with its negated counterpart.

| input | reference |
|---|---|
| A trout is a | fish |
| A trout is not a | fish |
| A trout is a | tool |
| A trout is not a | tool |
| A salmon is a | fish |
| A salmon is not a | fish |
| A salmon is a | flower |
| A salmon is not a | flower |
| An ant is an | insect |
| An ant is not an | insect |
| An ant is a | vegetable |
| An ant is not a | vegetable |
| A bee is an | insect |
| A bee is not an | insect |
| A bee is a | building |
| A bee is not a | building |
| A robin is a | bird |
| A robin is not a | bird |
| A robin is a | tree |
| A robin is not a | tree |
| A sparrow is a | bird |
| A sparrow is not a | bird |
| A sparrow is a | vehicle |
| A sparrow is not a | vehicle |
| An oak is a | tree |
| An oak is not a | tree |
| An oak is a | vehicle |
| An oak is not a | vehicle |
| A pine is a | tree |
| A pine is not a | tree |
| A pine is a | tool |
| A pine is not a | tool |
| A rose is a | flower |
| A rose is not a | flower |
| A rose is an | insect |
| A rose is not an | insect |
| A daisy is a | flower |
| A daisy is not a | flower |
| A daisy is a | bird |
| A daisy is not a | bird |
| A carrot is a | vegetable |
| A carrot is not a | vegetable |
| A carrot is a | fish |
| A carrot is not a | fish |
| A pea is a | vegetable |
| A pea is not a | vegetable |
| A pea is a | building |
| A pea is not a | building |
| A hammer is a | tool |
| A hammer is not a | tool |
| A hammer is an | insect |
| A hammer is not an | insect |
| A saw is a | tool |
| A saw is not a | tool |
| A saw is a | vegetable |
| A saw is not a | vegetable |
| A car is a | vehicle |
| A car is not a | vehicle |
| A car is a | tree |
| A car is not a | tree |
| A truck is a | vehicle |
| A truck is not a | vehicle |
| A truck is a | flower |
| A truck is not a | flower |
| A hotel is a | building |
| A hotel is not a | building |
| A hotel is a | fish |
| A hotel is not a | fish |
| A house is a | building |
| A house is not a | building |
| A house is a | bird |
| A house is not a | bird |
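
The repository is auto-converted to Parquet, so the rows above can be read with the datasets library. A minimal loading sketch, assuming the default configuration and a single "train" split (adjust to the repository's actual split names if they differ):

```python
# Minimal loading sketch; the split name "train" is an assumption and may need
# to be adjusted to the splits actually exposed by the repository.
from datasets import load_dataset

ds = load_dataset("SebastiaanBeekman/lm-diagnostics-negsimp", split="train")
print(ds[0])  # e.g. {'input': 'A trout is a', 'reference': 'fish'}
```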

Dataset Card for LM Diagnostics (negsimp) Clone

This repository contains a diagnostic dataset (negsimp, the NEG-SIMP simple-negation test set) from "What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models" by Allyson Ettinger.
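
Example Usage

Each affirmative frame (e.g. "A robin is a") is paired with its negated counterpart ("A robin is not a"), and the reference word is a good completion in only one of the two contexts. Below is a minimal sketch of how such items can be used to probe a masked language model's sensitivity to negation; the model choice (bert-base-uncased), the appended mask token, and the single-wordpiece scoring are illustrative assumptions rather than the paper's exact evaluation protocol.

```python
# Sketch: append a mask token to the prompt and compare the probability of the
# reference word in the affirmative vs. negated context. Assumes the reference
# word is a single wordpiece in the BERT vocabulary.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def reference_prob(prompt: str, reference: str) -> float:
    """Probability that the model fills the masked continuation of `prompt` with `reference`."""
    text = f"{prompt} {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    ref_id = tokenizer.convert_tokens_to_ids(reference)
    return torch.softmax(logits, dim=-1)[ref_id].item()

print(reference_prob("A robin is a", "bird"))      # affirmative context
print(reference_prob("A robin is not a", "bird"))  # negated context
```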

Licensing Information

The dataset is released under the MIT License.

Citation Information

@article{10.1162/tacl_a_00298,
    author = {Ettinger, Allyson},
    title = {What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models},
    journal = {Transactions of the Association for Computational Linguistics},
    volume = {8},
    pages = {34--48},
    year = {2020},
    month = {01},
    abstract = {Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event prediction---and, in particular, it shows clear insensitivity to the contextual impacts of negation.},
    issn = {2307-387X},
    doi = {10.1162/tacl_a_00298},
    url = {https://doi.org/10.1162/tacl_a_00298},
    eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00298/1923116/tacl_a_00298.pdf},
}