voorhs committed
Commit 51e2912 · 1 Parent(s): 7181ec6

add dataset card

Files changed (1)
README.md +52 -0
README.md CHANGED
@@ -42,3 +42,55 @@ configs:
  - split: intents
    path: intents/intents-*
---

# massive

This is a text classification dataset. It is intended for machine learning research and experimentation.

This dataset is obtained by reformatting another publicly available dataset to be compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).
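
The data follows AutoIntent's dict layout, which the conversion script in the Source section below produces: an `intents` list of `{id, name}` entries and a `train` list of `{utterance, label}` entries. The snippet below is only an illustrative sketch of that structure; the intent names and utterances are placeholders, not actual rows of this dataset.

```python
# Illustrative sketch of the record layout (values are placeholders, not real rows).
sample = {
    "intents": [
        {"id": 0, "name": "alarm_query"},
        {"id": 1, "name": "alarm_remove"},
    ],
    "train": [
        {"utterance": "do i have any alarms set for tomorrow", "label": 0},
        {"utterance": "cancel my seven am alarm", "label": 1},
    ],
}
```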

## Usage

It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from autointent import Dataset

massive = Dataset.from_datasets("AutoIntent/massive")
```

## Source

This dataset is taken from `mteb/amazon_massive_intent` and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from datasets import load_dataset
from autointent import Dataset


def convert_massive(massive_train):
    """Convert the MASSIVE train split into AutoIntent's dict format."""
    # Map each intent name to a stable integer id.
    intent_names = sorted(massive_train.unique("label"))
    name_to_id = dict(zip(intent_names, range(len(intent_names)), strict=False))
    n_classes = len(intent_names)

    classwise_utterance_records = [[] for _ in range(n_classes)]
    intents = [
        {
            "id": i,
            "name": name,
        }
        for i, name in enumerate(intent_names)
    ]

    # Group utterances by their integer intent id.
    for batch in massive_train.iter(batch_size=16, drop_last_batch=False):
        for txt, name in zip(batch["text"], batch["label"], strict=False):
            intent_id = name_to_id[name]
            target_list = classwise_utterance_records[intent_id]
            target_list.append({"utterance": txt, "label": intent_id})

    utterances = [rec for lst in classwise_utterance_records for rec in lst]
    return Dataset.from_dict({"intents": intents, "train": utterances})


massive = load_dataset("mteb/amazon_massive_intent", "en")
massive_converted = convert_massive(massive["train"])
```
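
As a quick sanity check (not part of the original conversion script, and assuming the variables from the snippet above are in scope), the per-intent utterance counts of the source split should match what ends up in the converted `train` split:

```python
from collections import Counter

# Count utterances per intent name in the source split; the converted dataset
# should contain the same totals under the corresponding integer ids.
per_intent = Counter(massive["train"]["label"])
print(f"{len(per_intent)} intents, {sum(per_intent.values())} utterances")
```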