---
dataset_info:
- config_name: default
  features:
  - name: utterance
    dtype: string
  - name: label
    sequence: int64
  splits:
  - name: train
    num_bytes: 8999208
    num_examples: 2742
  - name: test
    num_bytes: 1255307
    num_examples: 378
  download_size: 22576550
  dataset_size: 10254515
- config_name: intents
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regex_full_match
    sequence: 'null'
  - name: regex_partial_match
    sequence: 'null'
  - name: description
    dtype: 'null'
  splits:
  - name: full_intents
    num_bytes: 1240
    num_examples: 29
  - name: intents
    num_bytes: 907
    num_examples: 21
  download_size: 8042
  dataset_size: 2147
- config_name: intentsqwen3-32b
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regex_full_match
    sequence: 'null'
  - name: regex_partial_match
    sequence: 'null'
  - name: description
    dtype: string
  splits:
  - name: intents
    num_bytes: 2497
    num_examples: 21
  download_size: 5062
  dataset_size: 2497
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
- config_name: intents
  data_files:
  - split: full_intents
    path: intents/full_intents-*
  - split: intents
    path: intents/intents-*
- config_name: intentsqwen3-32b
  data_files:
  - split: intents
    path: intentsqwen3-32b/intents-*
---
# events
This is a text classification dataset. It is intended for machine learning research and experimentation.

This dataset was obtained by reformatting another publicly available dataset to be compatible with our AutoIntent Library.
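Each sample pairs a raw utterance with a multi-label one-hot vector over the 21 retained intent classes. An illustrative (made-up) record might look like this:

```python
# Illustrative record only; the real utterances come from the source dataset
# and the label is a 21-dimensional multi-label one-hot vector.
sample = {
    "utterance": "Company X announces positive phase 3 trial results ...",
    "label": [0, 1] + [0] * 19,  # one entry per retained intent class
}
```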
## Usage
It is intended to be used with our AutoIntent Library:

```python
from autointent import Dataset

events = Dataset.from_hub("AutoIntent/events")
```
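The extra configs declared in the card metadata can also be read with the plain `datasets` library. A small sketch (config and split names are taken from the YAML above; the hub id is assumed to match the snippet before):

```python
from datasets import load_dataset

# Intent metadata: 21 retained classes with their ids and names.
intents = load_dataset("AutoIntent/events", "intents", split="intents")
print(intents[0])
```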
## Source
This dataset is taken from [knowledgator/events_classification_biotech](https://huggingface.co/datasets/knowledgator/events_classification_biotech)
and formatted with our AutoIntent Library:
"""Convert events dataset to autointent internal format and scheme."""
from datasets import Dataset as HFDataset
from datasets import load_dataset
from autointent import Dataset
from autointent.schemas import Intent
def extract_intents_data(events_dataset: HFDataset) -> list[Intent]:
"""Extract intent names and assign ids to them."""
intent_names = sorted({name for intents in events_dataset["train"]["all_labels"] for name in intents})
return [Intent(id=i,name=name) for i, name in enumerate(intent_names)]
def converting_mapping(example: dict, intents_data: list[Intent]) -> dict[str, str | list[int] | None]:
"""Extract utterance and OHE label and drop the rest."""
res = {
"utterance": example["content"],
"label": [
int(intent.name in example["all_labels"]) for intent in intents_data
]
}
if sum(res["label"]) == 0:
res["label"] = None
return res
def convert_events(events_split: HFDataset, intents_data: dict[str, int]) -> list[dict]:
"""Convert one split into desired format."""
events_split = events_split.map(
converting_mapping, remove_columns=events_split.features.keys(),
fn_kwargs={"intents_data": intents_data}
)
return [sample for sample in events_split if sample["utterance"] is not None]
def get_low_resource_classes_mask(ds: list[dict], intent_names: list[str], fraction_thresh: float = 0.01) -> list[bool]:
res = [0] * len(intent_names)
for sample in ds:
for i, indicator in enumerate(sample["label"]):
res[i] += indicator
for i in range(len(intent_names)):
res[i] /= len(ds)
return [(frac < fraction_thresh) for frac in res]
def remove_low_resource_classes(ds: list[dict], mask: list[bool]) -> list[dict]:
res = []
for sample in ds:
if sum(sample["label"]) == 1 and mask[sample["label"].index(1)]:
continue
sample["label"] = [
indicator for indicator, low_resource in
zip(sample["label"], mask, strict=True) if not low_resource
]
res.append(sample)
return res
def remove_oos(ds: list[dict]):
return [sample for sample in ds if sum(sample["label"]) != 0]
if __name__ == "__main__":
# `load_dataset` might not work
# fix is here: https://github.com/huggingface/datasets/issues/7248
events_dataset = load_dataset("knowledgator/events_classification_biotech", trust_remote_code=True)
intents_data = extract_intents_data(events_dataset)
train_samples = convert_events(events_dataset["train"], intents_data)
test_samples = convert_events(events_dataset["test"], intents_data)
intents_names = [intent.name for intent in intents_data]
mask = get_low_resource_classes_mask(train_samples, intents_names)
train_samples = remove_oos(remove_low_resource_classes(train_samples, mask))
test_samples = remove_oos(remove_low_resource_classes(test_samples, mask))
events_converted = Dataset.from_dict(
{"train": train_samples, "test": test_samples, "intents": intents_data}
)
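As a quick sanity check of the filtering logic (not part of the original script, and assuming the helper functions above are in scope): a class that covers less than 1% of the samples is masked out, and samples that carry only that class are dropped.

```python
# Hypothetical toy data: 199 samples of a frequent class and 1 sample of a rare class.
toy = [{"utterance": f"sample {i}", "label": [1, 0]} for i in range(199)]
toy.append({"utterance": "rare sample", "label": [0, 1]})

toy_mask = get_low_resource_classes_mask(toy, ["frequent", "rare"])  # -> [False, True]
toy_filtered = remove_oos(remove_low_resource_classes(toy, toy_mask))
assert len(toy_filtered) == 199 and all(s["label"] == [1] for s in toy_filtered)
```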