# WIATS: Weather-centric Intervention-Aware Time Series Multimodal Dataset
Data source:
- [California ISO]
## Dataset Structure
The dataset is organized into the following structure:
```
|-- subdataset1
|   |-- raw_data                      # Original data files
|   |-- time_series                   # Rule-based imputed data files
|   |   |-- all_version_1.parquet     # Time series data per subject; can be multivariate, stored as CSV, Parquet, etc.
|   |   |-- all_version_2.parquet
|   |   |-- ...
|   |   |-- id_info.json              # Metadata for each subject
|   |-- weather
|   |   |-- location_1
|   |   |   |-- raw_data
|   |   |   |   |-- daily_weather_raw_????.json
|   |   |   |   |-- ...
|   |   |   |   |-- daily_weather_????.csv
|   |   |   |   |-- ...
|   |   |   |   |-- hourly_weather_????.csv
|   |   |   |   |-- ...
|   |   |   |-- weather_report        # Can be flattened; use a regex to extract the version
|   |   |   |   |-- version_1
|   |   |   |   |   |-- xxx_weather_report_????.json
|   |   |   |   |   |-- ...
|   |   |   |   |-- version_2
|   |   |   |   |-- ...
|   |   |   |-- report_embedding      # Embeddings for the weather reports
|   |   |   |   |-- version_1
|   |   |   |   |   |-- xxx_report_embedding_????.pkl
|   |   |   |   |   |-- ...
|   |   |   |   |-- version_2
|   |   |   |   |-- ...
|   |   |-- location_2
|   |   |-- ...
|   |   |-- merged_report_embedding   # Merged embeddings across the required locations (optional)
|   |   |   |-- xxx_embeddings_????.pkl
|   |   |   |-- ...
|   |   |-- merged_general_report     # Merged general report across the required locations (optional)
|   |   |   |-- xxx_report.json
|   |   |   |-- ...
|   |-- scripts                       # Scripts for data processing, model training, and evaluation
|   |-- id_info.json                  # Metadata for the whole dataset, before preprocessing
|   |-- static_info.json              # Static information for this dataset: dataset description, channel information, and downtime reasons
|   |-- static_info_embeddings.pkl
|-- subdataset2
|-- ...
```
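As a minimal sketch of how the pieces fit together (the paths below follow the tree above but are otherwise illustrative; `pandas` is assumed to be installed):

```python
import json

import pandas as pd

# Illustrative paths; replace "subdataset1" and the version as needed.
ts = pd.read_parquet("subdataset1/time_series/all_version_1.parquet")
with open("subdataset1/time_series/id_info.json") as f:
    id_info = json.load(f)

print(ts.shape)           # multivariate time series for the subjects
print(list(id_info)[:5])  # a few of the subject ids described by the metadata
```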
## id_info.json Structure
The `id_info.json` file contains metadata for each subject, extracted from the raw dataset. The structure is as follows:
```
{
    "id_1": {
        "len": 1000,                  # Length of the time series data
        "sensor_downtime": {
            "1": {
                "time": ["yyyy-mm-dd hh:mm:ss", "yyyy-mm-dd hh:mm:ss"],
                "index": [start_index, end_index]
            },
            "2": {
                "time": ["yyyy-mm-dd hh:mm:ss", "yyyy-mm-dd hh:mm:ss"],
                "index": [start_index, end_index]
            },
            ...
        },
        "other_info_1": "value_1",    # Other subject-level information (customizable entries)
        "other_info_2": "value_2",
        ...
    },
    "id_2": ...
}
```
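The downtime spans can, for example, be turned into per-channel validity masks. This is only a sketch under the assumption that each channel entry holds a single `[start_index, end_index]` pair, as shown above; the path and subject id are illustrative:

```python
import json

import numpy as np

with open("subdataset1/time_series/id_info.json") as f:  # illustrative path
    id_info = json.load(f)

subject = id_info["id_1"]  # replace with a real subject id
masks = {}
for channel, span in subject.get("sensor_downtime", {}).items():
    valid = np.ones(subject["len"], dtype=bool)  # True = reading is usable
    start, end = span["index"]                   # assumed single [start, end] pair
    valid[start : end + 1] = False               # mask out the downtime window
    masks[channel] = valid
```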
## static_info.json Structure
The `static_info.json` file contains static information for the whole dataset. The structure is as follows:
```
{
    "general_info": "description of the dataset",
    "downtime_prompt": "",
    "channel_info": {
        "id_1": {
            "channel_1": "channel 1 is xxx",
            "channel_2": "channel 2 is xxx"
        },
        "id_2": {
            "channel_1": "channel 1 is xxx",
            "channel_2": "channel 2 is xxx"
        },
        ...
    }
}
```
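A minimal sketch for reading the static metadata and its precomputed embeddings (paths follow the tree above; the exact contents of the pickle are not documented here, so treat the loading step as an assumption and inspect the object):

```python
import json
import pickle

with open("subdataset1/static_info.json") as f:
    static_info = json.load(f)

# static_info_embeddings.pkl is assumed to be a standard pickle whose schema
# is not specified above; inspect the loaded object before using it.
with open("subdataset1/static_info_embeddings.pkl", "rb") as f:
    static_embeddings = pickle.load(f)

print(static_info["general_info"])
for subject_id, channels in static_info["channel_info"].items():
    print(subject_id, list(channels))
```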