---
dataset_info:
- config_name: default
  features:
  - name: utterance
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 540734
    num_examples: 11492
  - name: validation
    num_bytes: 95032
    num_examples: 2031
  - name: test
    num_bytes: 138211
    num_examples: 2968
  download_size: 378530
  dataset_size: 773977
- config_name: intents
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regexp_full_match
    sequence: 'null'
  - name: regexp_partial_match
    sequence: 'null'
  - name: description
    dtype: 'null'
  splits:
  - name: intents
    num_bytes: 2187
    num_examples: 58
  download_size: 3921
  dataset_size: 2187
- config_name: intentsqwen3-32b
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regex_full_match
    sequence: 'null'
  - name: regex_partial_match
    sequence: 'null'
  - name: description
    dtype: string
  splits:
  - name: intents
    num_bytes: 5694
    num_examples: 58
  download_size: 6157
  dataset_size: 5694
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: intents
  data_files:
  - split: intents
    path: intents/intents-*
- config_name: intentsqwen3-32b
  data_files:
  - split: intents
    path: intentsqwen3-32b/intents-*
task_categories:
- text-classification
language:
- en
---

# massive

This is a text classification dataset. It is intended for machine learning research and experimentation.

This dataset is obtained by reformatting another publicly available dataset to be compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).

## Usage

It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from autointent import Dataset

massive = Dataset.from_hub("AutoIntent/massive")
```
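The `default` config stores integer labels, while the `intents` config maps those ids to human-readable names. A minimal offline sketch of joining the two (the rows below are hypothetical examples shaped like the schemas above; real rows come from the hub):

```python
# Hypothetical rows shaped like the "intents" and "default" configs.
intents = [
    {"id": 0, "name": "alarm_query"},
    {"id": 1, "name": "alarm_remove"},
]
samples = [
    {"utterance": "wake me up at five am this week", "label": 0},
    {"utterance": "cancel my alarm", "label": 1},
]

# Build an id -> name lookup and attach readable intent names to samples.
id_to_name = {intent["id"]: intent["name"] for intent in intents}
named = [
    {"utterance": s["utterance"], "intent": id_to_name[s["label"]]}
    for s in samples
]
print(named[0]["intent"])  # alarm_query
```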

## Source

This dataset is taken from `mteb/amazon_massive_intent` and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from datasets import Dataset as HFDataset
from datasets import load_dataset

from autointent import Dataset
from autointent.schemas import Intent, Sample


def extract_intents_info(split: HFDataset) -> tuple[list[Intent], dict[str, int]]:
    """Extract intent metadata and build a name-to-id mapping."""
    intent_names = sorted(split.unique("label"))
    # These two intents are excluded; their samples are dropped in convert_massive.
    intent_names.remove("cooking_query")
    intent_names.remove("audio_volume_other")
    n_classes = len(intent_names)
    name_to_id = dict(zip(intent_names, range(n_classes), strict=False))
    intents_data = [Intent(id=i, name=intent_names[i]) for i in range(n_classes)]
    return intents_data, name_to_id


def convert_massive(split: HFDataset, name_to_id: dict[str, int]) -> list[Sample]:
    """Convert records to samples, skipping utterances of excluded intents."""
    return [Sample(utterance=s["text"], label=name_to_id[s["label"]]) for s in split if s["label"] in name_to_id]


if __name__ == "__main__":
    massive = load_dataset("mteb/amazon_massive_intent", "en")
    intents, name_to_id = extract_intents_info(massive["train"])
    train_samples = convert_massive(massive["train"], name_to_id)
    test_samples = convert_massive(massive["test"], name_to_id)
    validation_samples = convert_massive(massive["validation"], name_to_id)
    dataset = Dataset.from_dict(
        {"intents": intents, "train": train_samples, "test": test_samples, "validation": validation_samples}
    )
```
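The id assignment and filtering performed by the script can be sketched with plain strings (toy label values here; the real script operates on the full MASSIVE label set):

```python
# Toy label set, including the two intents the script excludes.
labels = ["transport_taxi", "cooking_query", "weather_query", "audio_volume_other"]

# Sort for deterministic ids, then drop the excluded intents.
intent_names = sorted(set(labels))
for dropped in ("cooking_query", "audio_volume_other"):
    intent_names.remove(dropped)

# Remaining intents get contiguous integer ids.
name_to_id = {name: i for i, name in enumerate(intent_names)}
print(name_to_id)  # {'transport_taxi': 0, 'weather_query': 1}

# Samples whose label was dropped are filtered out entirely.
samples = [
    {"text": "book a cab", "label": "transport_taxi"},
    {"text": "how do i boil eggs", "label": "cooking_query"},
]
kept = [s for s in samples if s["label"] in name_to_id]
```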