---
license: mit
datasets:
- Genius1237/TyDiP
language:
- en
- hi
- ko
- es
- ta
- fr
- vi
- ru
- af
- hu
metrics:
- accuracy
pipeline_tag: text-classification
---

# Multilingual Politeness Classification Model
This model is based on `xlm-roberta-large` and is finetuned on the English subset of the [TyDiP](https://github.com/Genius1237/TyDiP) dataset, as described in the original paper [here](https://aclanthology.org/2022.findings-emnlp.420/).

## Languages
In the paper, this model was evaluated on English and 9 other languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian). Given the model's strong performance on these and XLM-R's cross-lingual abilities, this finetuned model can likely be used for additional languages as well.

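As a quick illustration of such transfer, the classification pipeline shown under Usage below can be applied unchanged to a language outside the evaluated set. The German example here is my own, and its prediction has not been validated against annotated data:

```python
from transformers import pipeline

classifier = pipeline(task="text-classification", model="Genius1237/xlm-roberta-large-tydip")

# German was not part of the paper's evaluation; treat this as an
# illustrative zero-shot prediction, not a validated result.
print(classifier("Könnten Sie mir bitte ein Glas Wasser bringen?"))
```
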
## Evaluation
Politeness classification accuracy on the 10 languages of the TyDiP test set is reported below.

| Language | Accuracy |
|----------|----------|
| en | 0.892 |
| hi | 0.868 |
| ko | 0.784 |
| es | 0.840 |
| ta | 0.780 |
| fr | 0.820 |
| vi | 0.844 |
| ru | 0.668 |
| af | 0.856 |
| hu | 0.812 |

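For reference, a minimal sketch of how such per-language accuracy could be computed with the pipeline. The config name, split name, and the `text`/`labels` field names here are assumptions; check the [TyDiP dataset card](https://huggingface.co/datasets/Genius1237/TyDiP) for the actual schema and label encoding:

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline(task="text-classification", model="Genius1237/xlm-roberta-large-tydip")

# Assumed config/split/field names -- verify against the dataset card.
test_set = load_dataset("Genius1237/TyDiP", "hi", split="test")

predictions = [p["label"] for p in classifier(test_set["text"], truncation=True)]
references = ["polite" if label == 1 else "impolite" for label in test_set["labels"]]

accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)
print(f"hi accuracy: {accuracy:.3f}")
```
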
## Usage
You can use this model directly with a text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(task="text-classification", model="Genius1237/xlm-roberta-large-tydip")

# The second sentence is Hindi (mixed Latin/Devanagari script):
# "bring me a glass of water", a bare imperative.
sentences = ["Could you please get me a glass of water", "mere liye पानी का एक गिलास ले आओ "]

print(classifier(sentences))
# [{'label': 'polite', 'score': 0.9076159000396729}, {'label': 'impolite', 'score': 0.765066385269165}]
```
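
By default the pipeline returns only the highest-scoring label. In recent `transformers` versions, passing `top_k=None` returns the score of every label (older versions used `return_all_scores=True` instead):

```python
from transformers import pipeline

classifier = pipeline(task="text-classification", model="Genius1237/xlm-roberta-large-tydip")

# top_k=None returns scores for both labels rather than just the best one.
print(classifier("Could you please get me a glass of water", top_k=None))
```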

More advanced usage:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained('Genius1237/xlm-roberta-large-tydip')
model = AutoModelForSequenceClassification.from_pretrained('Genius1237/xlm-roberta-large-tydip')

text = "Could you please get me a glass of water"
encoded_input = tokenizer(text, return_tensors='pt')

# Argmax over the two class logits, mapped back to a label name.
with torch.no_grad():
    output = model(**encoded_input)
prediction = torch.argmax(output.logits).item()

print(model.config.id2label[prediction])
# polite
```
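
For batched inference with per-label probabilities, a sketch along these lines should work (padding lets variable-length sentences share one batch; softmax turns the logits into probabilities):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained('Genius1237/xlm-roberta-large-tydip')
model = AutoModelForSequenceClassification.from_pretrained('Genius1237/xlm-roberta-large-tydip')
model.eval()

texts = ["Could you please get me a glass of water", "Get me some water"]
# Pad and truncate so variable-length sentences stack into a single batch.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    logits = model(**batch).logits

# Softmax over the class dimension gives per-label probabilities.
probs = torch.softmax(logits, dim=-1)
for text, p in zip(texts, probs):
    label_id = int(p.argmax())
    print(f"{text!r}: {model.config.id2label[label_id]} ({p[label_id].item():.3f})")
```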

## Citation
```
@inproceedings{srinivasan-choi-2022-tydip,
    title = "{T}y{D}i{P}: A Dataset for Politeness Classification in Nine Typologically Diverse Languages",
    author = "Srinivasan, Anirudh  and
      Choi, Eunsol",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.420",
    doi = "10.18653/v1/2022.findings-emnlp.420",
    pages = "5723--5738",
    abstract = "We study politeness phenomena in nine typologically diverse languages. Politeness is an important facet of communication and is sometimes argued to be cultural-specific, yet existing computational linguistic study is limited to English. We create TyDiP, a dataset containing three-way politeness annotations for 500 examples in each language, totaling 4.5K examples. We evaluate how well multilingual models can identify politeness levels {--} they show a fairly robust zero-shot transfer ability, yet fall short of estimated human accuracy significantly. We further study mapping the English politeness strategy lexicon into nine languages via automatic translation and lexicon induction, analyzing whether each strategy{'}s impact stays consistent across languages. Lastly, we empirically study the complicated relationship between formality and politeness through transfer experiments. We hope our dataset will support various research questions and applications, from evaluating multilingual models to constructing polite multilingual agents.",
}
```