
WRAP -- A TACO-based Classifier For Inference and Information-Driven Argument Mining on Twitter

Introducing WRAP, a classification model built on AutoModelForSequenceClassification that assigns tweets to the four classes of the TACO dataset: Reason, Statement, Notification, and None. Designed for extracting information and inferences from Twitter data, WRAP builds on WRAPresentations, from which it takes its name. WRAPresentations extends the BERTweet-base architecture, whose embeddings were fine-tuned on augmented tweets using contrastive learning to better encode inference and information in tweets.

Class Semantics

The TACO framework revolves around the two key elements of an argument, as defined by the Cambridge Dictionary. It encodes inference as "a guess that you make or an opinion that you form based on the information that you have", and it leverages the definition of information as "facts or details about a person, company, product, etc.".

Taken together, WRAP identifies four distinct classes of tweets, grouped by which of these two components they contain:

  • Statement, which refers to unique cases where only the inference is presented as something that someone says or writes officially, or an action done to express an opinion.
  • Reason, which represents a full argument where the inference is based on direct information mentioned in the tweet, such as a source-reference or quotation, and thus reveals the author’s motivation to try to understand and to make judgments based on practical facts.
  • Notification, which refers to a tweet that limits itself to providing information, such as media channels promoting their latest articles.
  • None, a tweet that provides neither inference nor information.

In its entirety, WRAP can classify the following hierarchy for tweets:

(Figure: the class hierarchy in the component space spanned by inference and information.)

Usage

Using this model becomes easy when you have transformers installed:

pip install -U transformers

Then you can use the model to generate tweet classifications like this:

from transformers import pipeline

pipe = pipeline("text-classification", model="TomatenMarc/WRAP")
prediction = pipe("Huggingface is awesome")

print(prediction)
Notice: The tweets need to undergo preprocessing before classification.
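The card does not spell out the preprocessing pipeline. A minimal sketch, assuming BERTweet-style normalization (user mentions masked as @USER, links masked as HTTPURL); the exact preprocessing used for WRAP may differ:

```python
import re

def normalize_tweet(text: str) -> str:
    """Roughly normalize a tweet before classification.

    Assumes BERTweet-style placeholders (@USER, HTTPURL); this is an
    illustrative sketch, not the official WRAP preprocessing.
    """
    text = re.sub(r"https?://\S+", "HTTPURL", text)  # mask links
    text = re.sub(r"@\w+", "@USER", text)            # mask user mentions
    return re.sub(r"\s+", " ", text).strip()         # collapse whitespace

print(normalize_tweet("Check https://t.co/abc via @nasa"))
```

The normalized string is then passed to the pipeline shown above.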

Training

The final model underwent training using the entire shuffled ground truth dataset known as TACO, encompassing a total of 1734 tweets. This dataset showcases the distribution of topics as: #abortion (25.9%), #brexit (29.0%), #got (11.0%), #lotrrop (12.1%), #squidgame (12.7%), and #twittertakeover (9.3%). For training, we utilized SimpleTransformers.

Additionally, the category and class distribution of the dataset TACO is as follows:

Inference        865 (49.88%)     No-Inference       869 (50.12%)
Information      1081 (62.34%)    No-Information     653 (37.66%)

Reason           Statement        Notification       None
581 (33.50%)     284 (16.38%)     500 (28.84%)       369 (21.28%)

Notice: WRAP was trained to predict the four classes; the categories (inference/information) are aggregations of these classes according to whether they contain an inference or an information component.
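The aggregation can be checked directly from the class definitions above: Reason and Statement contain an inference, while Reason and Notification contain information:

```python
# Class counts from the TACO distribution above.
counts = {"Reason": 581, "Statement": 284, "Notification": 500, "None": 369}

# Categories are aggregations of classes by shared component.
inference = counts["Reason"] + counts["Statement"]        # 865
no_inference = counts["Notification"] + counts["None"]    # 869
information = counts["Reason"] + counts["Notification"]   # 1081
no_information = counts["Statement"] + counts["None"]     # 653

print(inference, no_inference, information, no_information)
```

These sums reproduce the category rows of the distribution table exactly.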

Dataloader

"data_loader": {
    "type": "torch.utils.data.dataloader.DataLoader",
    "args": {
        "batch_size": 8,
        "sampler": "torch.utils.data.sampler.RandomSampler"
    }
}

Parameters of the fit()-Method:

{
    "epochs": 5,
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 4e-05
    },
    "scheduler": "WarmupLinear",
    "warmup_steps": 66
}
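The warmup setting is consistent with roughly 6% of the total optimization steps, assuming one optimizer step per batch of 8 over the 1734 tweets for 5 epochs (our reading of the config above, not an official statement):

```python
import math

n_tweets, batch_size, epochs = 1734, 8, 5
steps_per_epoch = math.ceil(n_tweets / batch_size)  # 217
total_steps = steps_per_epoch * epochs              # 1085
warmup_fraction = 66 / total_steps                  # ~0.06
print(steps_per_epoch, total_steps, round(warmup_fraction, 3))
```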

Evaluation

We applied 6-fold (Closed-Topic) cross-validation to assess WRAP's in-domain performance, using the same dataset and parameters described in the Training section: we trained on k-1 splits and predicted on the k-th split.

Additionally, we assessed its ability to generalize across the six topics of TACO (Cross-Topic). Each topic was used once for testing, while the remaining five topics were used for training.
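The Cross-Topic protocol amounts to a leave-one-topic-out split. A sketch, where the data representation and topic labels are illustrative assumptions:

```python
def cross_topic_splits(tweets):
    """Yield (held_out_topic, train, test) triples, holding out one topic at a time.

    `tweets` is a list of (text, label, topic) triples; in the actual
    evaluation the six TACO topics play the role of `topic`.
    """
    topics = sorted({topic for _, _, topic in tweets})
    for held_out in topics:
        train = [t for t in tweets if t[2] != held_out]
        test = [t for t in tweets if t[2] == held_out]
        yield held_out, train, test

# Illustrative toy data, not actual TACO tweets.
data = [
    ("tweet a", "Reason", "#brexit"),
    ("tweet b", "None", "#got"),
    ("tweet c", "Statement", "#brexit"),
]
splits = list(cross_topic_splits(data))
```

Each test set contains exactly the tweets of the held-out topic, so no topic ever appears in both train and test.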

In total, the WRAP classifier performs as follows:

Binary Classification Tasks

Macro-F1        Inference    Information    Multi-Class
Closed-Topic    86.62%       86.30%         75.29%
Cross-Topic     86.27%       84.90%         73.54%

Multi-Class Classification Task

Micro-F1        Reason    Statement    Notification    None
Closed-Topic    78.14%    60.96%       79.36%          82.72%
Cross-Topic     77.05%    58.33%       78.45%          80.33%
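The multi-class Macro-F1 values in the binary table are consistent with the unweighted mean of the per-class scores above (our reading of the reported numbers, not an official derivation):

```python
# Per-class scores in order: Reason, Statement, Notification, None.
closed = [78.14, 60.96, 79.36, 82.72]
cross = [77.05, 58.33, 78.45, 80.33]

macro_closed = sum(closed) / len(closed)  # ~75.29
macro_cross = sum(cross) / len(cross)     # ~73.54
print(round(macro_closed, 2), round(macro_cross, 2))
```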


Licensing

WRAP © 2023 is licensed under CC BY-NC-SA 4.0

Citation

@inproceedings{feger-dietze-2024-bertweets,
    title = "{BERT}weet{'}s {TACO} Fiesta: Contrasting Flavors On The Path Of Inference And Information-Driven Argument Mining On {T}witter",
    author = "Feger, Marc  and
              Dietze, Stefan",
    editor = "Duh, Kevin  and
              Gomez, Helena  and
              Bethard, Steven",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-naacl.146",
    doi = "10.18653/v1/2024.findings-naacl.146",
    pages = "2256--2266"
}
Model size: 135M parameters (Safetensors; tensor types I64 and F32).