datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
---|---|---|---|---|---|---|---|---|---|
math-extraction-comp/deepseek-ai__DeepSeek-R1-Distill-Qwen-1.5B | math-extraction-comp | 2025-01-26T13:29:41Z | 19 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-26T13:29:38Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: gold
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: subset
dtype: string
- name: lighteval-b200fe81_extracted_answer
dtype: string
- name: lighteval-b200fe81_score
dtype: float64
- name: qwen_extracted_answer
dtype: string
- name: qwen_score
dtype: float64
- name: harness_extracted_answer
dtype: string
- name: harness_score
dtype: float64
splits:
- name: train
num_bytes: 5546292
num_examples: 1324
download_size: 2496876
dataset_size: 5546292
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tacofoundation/cloudsen12 | tacofoundation | 2025-01-05T14:47:21Z | 1,070 | 0 | [
"language:en",
"license:cc0-1.0",
"modality:geospatial",
"region:us",
"clouds",
"sentinel-2",
"image-segmentation",
"deep-learning",
"remote-sensing"
] | [] | 2024-12-04T16:39:44Z | 0 | ---
license:
- cc0-1.0
language:
- en
tags:
- clouds
- sentinel-2
- image-segmentation
- deep-learning
- remote-sensing
pretty_name: cloudsen12plus
viewer: false
---
<div style="text-align: center; border: 1px solid #ddd; border-radius: 10px; padding: 15px; max-width: 250px; margin: auto; background-color: #f9f9f9;">
[](https://cloudsen12.github.io/)
<b><p>This dataset follows the TACO specification.</p></b>
</div>
# [cloudsen12plus](https://cloudsen12.github.io/)
**Website:** https://cloudsen12.github.io/
**Version:** 1.1.2
**The largest dataset of expert-labeled pixels for cloud and cloud shadow detection in Sentinel-2**
CloudSEN12+ version 1.1.0 is a significant extension of the CloudSEN12 dataset that doubles the number of
expert-reviewed labels, making it, by a large margin, the largest cloud detection dataset to
date for Sentinel-2. All labels from the previous version have been curated and refined, enhancing
the dataset's trustworthiness. This new release is licensed under CC0, which places it in the public
domain and allows anyone to use, modify, and distribute it without permission or attribution.
The images are padded from 509x509 to 512x512 and from 2000x2000 to 2048x2048 to ensure that the
patches are divisible by 32. The padding is filled with zeros on the left and bottom sides of the
image. For those who prefer traditional storage formats, GeoTIFF files are available in our
[ScienceDataBank](https://www.scidb.cn/en/detail?dataSetId=2036f4657b094edfbb099053d6024b08&version=V1)
repository.
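The padding convention described above can be sketched as follows (a minimal illustration with plain Python lists, not the exact code used to produce the dataset; `pad_patch` is a hypothetical helper name):

```python
def pad_patch(patch, target_rows, target_cols, pad_value=0):
    """Zero-pad a 2-D patch (list of rows) on the left and bottom sides
    until it reaches the target shape, mirroring the card's convention."""
    rows, cols = len(patch), len(patch[0])
    pad_left = target_cols - cols
    # Prepend zeros to each existing row (left side) ...
    padded = [[pad_value] * pad_left + list(row) for row in patch]
    # ... then append full rows of zeros (bottom side).
    padded += [[pad_value] * target_cols for _ in range(target_rows - rows)]
    return padded

patch = [[1] * 509 for _ in range(509)]
padded = pad_patch(patch, 512, 512)
print(len(padded), len(padded[0]))  # 512 512
```

Note that 2048 is not the smallest multiple of 32 above 2000 (that would be 2016); the 2000x2000 patches were padded to the stated 2048x2048 target.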
`CloudSEN12+` v.1.1.0 offers three distinct modes, tailored for diverse research and application needs:
- **`cloudsen12-l1c`**: Patches derived from Sentinel-2 Level-1C imagery, including high-quality labels, scribble annotations, and unlabeled data.
- **`cloudsen12-l2a`**: Similar to cloudsen12-l1c but based on Sentinel-2 Level-2A data as processed by Google Earth Engine.
- **`cloudsen12-extra`**: A supplementary collection of metadata to enhance contextual understanding of landscapes. Cloud masks from multiple sources have **NOT** been normalized to align with the CloudSEN12 class schema. This mode includes:
- **`elevation:`** Elevation data (meters) sourced from the Copernicus DEM GLO-30.
- **`lc10:`** ESA WorldCover 10m v100 land cover product.
- **`vv:`** Normalized Sentinel-1 Global Backscatter Model Land Surface (VV polarization).
- **`vh:`** Normalized Sentinel-1 Global Backscatter Model Land Surface (VH polarization).
- **`cloudmask_qa60:`** Cloud mask from Sentinel-2 Level-1C.
- **`cloudmask_sen2cor:`** Cloud mask from Sentinel-2 Level-2A.
- **`cloudmask_s2cloudless:`** Cloud mask generated by Sentinel Hub Cloud Detector.
- **`cloudmask_cloudscore_cs_v1:`** Cloud mask generated by [Pasquarella et al. 2023](https://openaccess.thecvf.com/content/CVPR2023W/EarthVision/html/Pasquarella_Comprehensive_Quality_Assessment_of_Optical_Satellite_Imagery_Using_Weakly_Supervised_CVPRW_2023_paper.html).
- **`cloudmask_cloudscore_cs_cdf_v1:`** Cloud mask generated by [Pasquarella et al. 2023](https://openaccess.thecvf.com/content/CVPR2023W/EarthVision/html/Pasquarella_Comprehensive_Quality_Assessment_of_Optical_Satellite_Imagery_Using_Weakly_Supervised_CVPRW_2023_paper.html).
- **`cloudmask_unetmobv2_v1:`** Cloud mask generated by [Aybar et al. 2022](https://www.nature.com/articles/s41597-022-01878-2).
- **`cloudmask_unetmobv2_v2:`** Cloud mask generated by [Aybar et al. 2024](https://www.sciencedirect.com/science/article/pii/S2352340924008163).
- **`cloudmask_sensei_v2:`** Cloud mask generated by [Alistair Francis 2024](https://ieeexplore.ieee.org/document/10505181).
- Changelog:
  - Version 1.1.2:
    - BUG FIX: cloudmask_qa60 is now stored as uint16 rather than uint8 to avoid overflow.
  - Version 1.1.1:
    - BUG FIX: VV and VH are now stored as int16 rather than uint16 to avoid overflow.
  - Version 1.1.0:
    - We save all GeoTIFF files with discard_lsb=2 to improve the compression ratio.
    - Fixed 2000x2000 rotated patches. The datapoints are now correctly oriented. Check the patches:
      - ROI_2526__20200709T105031_20200709T105719_T31UDQ
      - ROI_0070__20190708T130251_20190708T130252_T24MUA
      - ROI_4565__20200530T100029_20200530T100502_T32TQP
    - Improved the quality of the following patches:
      - ROI_1098__20200515T190909_20200515T191310_T11WPN
      - ROI_1735__20190814T163849_20190814T164716_T15SXS
      - ROI_0760__20190516T022551_20190516T022553_T56WMD
      - ROI_3696__20200419T075611_20200419T080344_T35MRN
      - ROI_2864__20170529T105621_20170529T110523_T31TCN
    - We removed the following patches due to poor quality:
      - ROI_3980__20190228T005641_20190228T005640_T58WDB
      - ROI_1489__20210228T070831_20210228T070834_T40TDP
- Consideration:
  - The `roi_id` field serves as a unique identifier for the geographical location of each patch. In other words, it links S2 images to
    a specific geographic location. However, the roi_id values of the 509x509 and 2000x2000 patches were
    not the same. For example, `ROI_0008` in the 509x509 patches is not the same location as
    `ROI_0008` in the 2000x2000 patches. In this version, we fixed this issue by adding the max roi_id value
    of the 509x509 patches to the 2000x2000 patches. In this way, the `roi_id` values of the 509x509 and
    2000x2000 patches are now unique. If users of 2000x2000 patches need to match the original roi_id published
    in the previous version, they can use the following formula:
    - `old_roi_id_2000 = new_roi_id_2000 - 12101`

    where `12101` is the max roi_id value of the 509x509 patches. We also report the previous roi_id as `old_roi_id`.
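A tiny helper for this conversion (assuming the offset 12101 stated above and the `ROI_NNNN` naming pattern; the example ID is hypothetical):

```python
OFFSET = 12101  # max roi_id among the 509x509 patches, per the card

def new_to_old_roi_id(roi_id: str) -> str:
    """Map a 2000x2000 patch's current roi_id back to the id
    published in the previous CloudSEN12 release."""
    num = int(roi_id.split("_")[1])
    return f"ROI_{num - OFFSET:04d}"

print(new_to_old_roi_id("ROI_12109"))  # ROI_0008
```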
<center>
<img src='https://cdn-uploads.huggingface.co/production/uploads/6402474cfa1acad600659e92/9UA4U3WObVeq7BAcf37-C.png' alt='drawing' width='80%'/>
</center>
*CloudSEN12+ spatial coverage. The terms p509 and p2000 denote the patch sizes 509 × 509 and 2000 × 2000,
respectively. `high`, `scribble`, and `nolabel` refer to the types of expert-labeled annotations.*
## Reproducible Example
<a target="_blank" href="https://colab.research.google.com/drive/1U9n40rwdnn73bdWruONA3hIs1-H3f74Q">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Load this dataset using the `tacoreader` library.
```python
import tacoreader
import rasterio as rio
import matplotlib.pyplot as plt

print(tacoreader.__version__)  # 0.5.3
# Remotely load the Cloud-Optimized Dataset
dataset = tacoreader.load("tacofoundation:cloudsen12-l1c")
#dataset = tacoreader.load("tacofoundation:cloudsen12-l2a")
#dataset = tacoreader.load("tacofoundation:cloudsen12-extra")
# Read a sample
sample_idx = 2422
s2_l1c = dataset.read(sample_idx).read(0)
s2_label = dataset.read(sample_idx).read(1)
# Retrieve the data
with rio.open(s2_l1c) as src, rio.open(s2_label) as dst:
s2_l1c_data = src.read([4, 3, 2], window=rio.windows.Window(0, 0, 512, 512))
s2_label_data = dst.read(window=rio.windows.Window(0, 0, 512, 512))
# Display
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(s2_l1c_data.transpose(1, 2, 0) / 3000)
ax[0].set_title("Sentinel-2 L1C")
ax[1].imshow(s2_label_data[0])
ax[1].set_title("Human Label")
plt.tight_layout()
plt.savefig("taco_check.png")
plt.close(fig)
```
<center>
<img src='https://cdn-uploads.huggingface.co/production/uploads/6402474cfa1acad600659e92/0nRv7sqMRMNY-TVkY2kh7.png' alt='drawing' width='70%'/>
</center>
## Sensor Information
The sensor related to this dataset: **sentinel2msi**
## Task
The task associated with this dataset: **semantic-segmentation**
## Original Data Repository
Source location of the raw data: **[https://huggingface.co/datasets/isp-uv-es/CloudSEN12Plus](https://huggingface.co/datasets/isp-uv-es/CloudSEN12Plus)**
## Discussion
Insights or clarifications about the dataset: **[https://huggingface.co/datasets/tacofoundation/cloudsen12/discussions](https://huggingface.co/datasets/tacofoundation/cloudsen12/discussions)**
## Split Strategy
How the dataset is divided for training, validation, and testing: **stratified**
## Scientific Publications
Publications that reference or describe the dataset.
### Publication 01
- **DOI**: [10.1038/s41597-022-01878-2](https://doi.org/10.1038/s41597-022-01878-2)
- **Summary**: CloudSEN12 first release. Only 509 x 509 patches.
- **BibTeX Citation**:
```bibtex
@article{aybar2022cloudsen12,
title={CloudSEN12, a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2},
author={Aybar, Cesar and Ysuhuaylas, Luis and Loja, Jhomira and Gonzales, Karen and Herrera, Fernando and Bautista, Lesly and Yali, Roy and Flores, Angie and Diaz, Lissette and Cuenca, Nicole and others},
journal={Scientific Data},
volume={9},
number={1},
pages={782},
year={2022},
publisher={Nature Publishing Group UK London}
}
```
### Publication 02
- **DOI**: [10.1109/IGARSS52108.2023.10282381](https://doi.org/10.1109/IGARSS52108.2023.10282381)
- **Summary**: Exploration of incorrect annotations in cloud semantic segmentation datasets.
- **BibTeX Citation**:
```bibtex
@inproceedings{aybar2023lessons,
title={Lessons Learned From Cloudsen12 Dataset: Identifying Incorrect Annotations in Cloud Semantic Segmentation Datasets},
author={Aybar, Cesar and Montero, David and Mateo-Garc{\'\i}a, Gonzalo and G{\'o}mez-Chova, Luis},
booktitle={IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium},
pages={892--895},
year={2023},
organization={IEEE}
}
```
### Publication 03
- **DOI**: [10.1016/j.dib.2024.110852](https://doi.org/10.1016/j.dib.2024.110852)
- **Summary**: Extended version of CloudSEN12, adding 2000 x 2000 patches to the dataset.
- **BibTeX Citation**:
```bibtex
@article{aybar2024cloudsen12+,
title={CloudSEN12+: The largest dataset of expert-labeled pixels for cloud and cloud shadow detection in Sentinel-2},
author={Aybar, Cesar and Bautista, Lesly and Montero, David and Contreras, Julio and Ayala, Daryl and Prudencio, Fernando and Loja, Jhomira and Ysuhuaylas, Luis and Herrera, Fernando and Gonzales, Karen and others},
journal={Data in Brief},
volume={56},
pages={110852},
year={2024},
publisher={Elsevier}
}
```
## Data Providers
Organizations or individuals responsible for the dataset.
|**Name**|**Role**|**URL**|
| :--- | :--- | :--- |
|Image & Signal Processing|host|[https://isp.uv.es/](https://isp.uv.es/)|
|European Space Agency (ESA)|producer|[https://www.esa.int/](https://www.esa.int/)|
## Curators
Responsible for structuring the dataset in the TACO format.
|**Name**|**Organization**|**URL**|
| :--- | :--- | :--- |
|Cesar Aybar|Image & Signal Processing|[https://csaybar.github.io/](https://csaybar.github.io/)|
## Labels
The dataset contains four classes: clear, thick cloud, thin cloud, and cloud shadow.
|**Name**|**Category**|**Description**|
| :--- | :--- | :--- |
|clear|0|Pixels without cloud and cloud shadow contamination. They are primarily identified using bands B4-B3-B2, B1-B12-B13, and the cirrus band.|
|thick cloud|1|Opaque clouds that block all the reflected light from the Earth's surface. We identify them by assuming clouds exhibit distinctive shapes and maintain higher reflectance values in bands B4-B3-B2, B1-B12-B13, and the cirrus band.|
|thin cloud|2|Semitransparent clouds that alter the surface spectral signal but still allow the background to be recognized. This is the hardest class to identify. We utilize CloudApp [1] to better understand the background, both with and without cloud cover.|
|cloud shadow|3|Dark pixels where light is occluded by thick or thin clouds. Cloud shadows depend on the presence of clouds and, by considering the solar position, we can identify and map these shadows through a reasoned projection of the cloud shape.|
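For quick reference, the class scheme above (plus the 99 no-data value used in `scribble` and `nolabel` patches, noted elsewhere in the card) can be written as a simple mapping; this is a convenience sketch, not an official API:

```python
# CloudSEN12 label values, per the table above; 99 marks no-data
CLOUDSEN12_CLASSES = {
    0: "clear",
    1: "thick cloud",
    2: "thin cloud",
    3: "cloud shadow",
    99: "no data",
}

def decode_labels(mask):
    """Map a flat list of label integers to their class names."""
    return [CLOUDSEN12_CLASSES.get(v, "unknown") for v in mask]

print(decode_labels([0, 1, 3, 99]))
# ['clear', 'thick cloud', 'cloud shadow', 'no data']
```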
## Optical Bands
The no-data value is always: **99**
### `tacofoundation:cloudsen12-l1c`
- `s2l1c`
|**Name**|**Common Name**|**Description**|**Center Wavelength** (nm)|**Full Width Half Max** (nm)|**Index**|**Scale Factor**|
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
|B01|Coastal aerosol|Band 1 - Coastal aerosol - 60m|443.5|17.0|0|0.0001|
|B02|Blue|Band 2 - Blue - 10m|496.5|53.0|1|0.0001|
|B03|Green|Band 3 - Green - 10m|560.0|34.0|2|0.0001|
|B04|Red|Band 4 - Red - 10m|664.5|29.0|3|0.0001|
|B05|Red edge 1|Band 5 - Vegetation red edge 1 - 20m|704.5|13.0|4|0.0001|
|B06|Red edge 2|Band 6 - Vegetation red edge 2 - 20m|740.5|13.0|5|0.0001|
|B07|Red edge 3|Band 7 - Vegetation red edge 3 - 20m|783.0|18.0|6|0.0001|
|B08|NIR|Band 8 - Near infrared - 10m|840.0|114.0|7|0.0001|
|B8A|Red edge 4|Band 8A - Vegetation red edge 4 - 20m|864.5|19.0|8|0.0001|
|B09|Water vapor|Band 9 - Water vapor - 60m|945.0|18.0|9|0.0001|
|B10|Cirrus|Band 10 - Cirrus - 60m|1375.5|31.0|10|N/A|
|B11|SWIR 1|Band 11 - Shortwave infrared 1 - 20m|1613.5|89.0|11|0.0001|
|B12|SWIR 2|Band 12 - Shortwave infrared 2 - 20m|2199.5|173.0|12|0.0001|
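Applying the scale factors from the table above converts stored digital numbers to reflectance (a sketch assuming the bands are stored as integer DNs; B10 has no scale factor and would be skipped):

```python
SCALE = 0.0001  # scale factor for the reflective bands listed above

def dn_to_reflectance(dn_values, scale=SCALE):
    """Convert raw digital numbers to top-of-atmosphere reflectance."""
    return [dn * scale for dn in dn_values]

reflectance = dn_to_reflectance([3000, 10000])  # roughly 0.3 and 1.0
```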
- `target`
The 'scribble' and 'nolabel' patches contain the value 99, which represents 'no data' in CloudSEN12.
|**Name**|**Common Name**|**Description**|**Center Wavelength**|**Full Width Half Max**|**Index**|**Scale Factor**|
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
|B01|Cloud Label|Cloud labels annotated by humans|-|-|0|1|
### `tacofoundation:cloudsen12-l2a`
- `s2l2a`
|**Band**|**Name**|**Description**|**Center Wavelength** (nm)|**Bandwidth** (nm)|**Index**|**Scale Factor**|
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
|B01|Coastal aerosol|Band 1 - Coastal aerosol - 60m|443.5|17.0|0|0.0001|
|B02|Blue|Band 2 - Blue - 10m|496.5|53.0|1|0.0001|
|B03|Green|Band 3 - Green - 10m|560.0|34.0|2|0.0001|
|B04|Red|Band 4 - Red - 10m|664.5|29.0|3|0.0001|
|B05|Red edge 1|Band 5 - Vegetation red edge 1 - 20m|704.5|13.0|4|0.0001|
|B06|Red edge 2|Band 6 - Vegetation red edge 2 - 20m|740.5|13.0|5|0.0001|
|B07|Red edge 3|Band 7 - Vegetation red edge 3 - 20m|783.0|18.0|6|0.0001|
|B08|NIR|Band 8 - Near infrared - 10m|840.0|114.0|7|0.0001|
|B8A|Red edge 4|Band 8A - Vegetation red edge 4 - 20m|864.5|19.0|8|0.0001|
|B09|Water vapor|Band 9 - Water vapor - 60m|945.0|18.0|9|0.0001|
|B11|SWIR 1|Band 11 - Shortwave infrared 1 - 20m|1613.5|89.0|10|0.0001|
|B12|SWIR 2|Band 12 - Shortwave infrared 2 - 20m|2199.5|173.0|11|0.0001|
|AOT|-|Aerosol Optical Thickness|-|-|12|0.001|
|WVP|-|Water Vapor Pressure. The height the water would occupy if the vapor were condensed into liquid and spread evenly across the column|-|-|13|0.001|
- `target`
The 'scribble' and 'nolabel' patches contain the value 99, which represents 'no data' in CloudSEN12.
|**Name**|**Common Name**|**Description**|**Center Wavelength**|**Full Width Half Max**|**Index**|**Scale Factor**|
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
|B01|Cloud Label|Cloud labels annotated by humans|-|-|0|1|
### `tacofoundation:cloudsen12-extra`
| **Band** | **Name** | **Description** | **Center Wavelength** (nm) | **Bandwidth** (nm) | **Index** | **Scale Factor** |
|-------------------------|----------------------------------|------------------------------------------------------------------------------------------------------------------|----------------------------|--------------------|-----------|------------------|
| elevation | Elevation | Elevation data (meters) sourced from the Copernicus DEM GLO-30. | - | - | 0 | 1 |
| lc10 | Landcover | ESA WorldCover 10m v100 land cover product. | - | - | 0 | 1 |
| vv | Vertical-Vertical Polarization | Normalized Sentinel-1 Global Backscatter Model (VV polarization). | - | - | 0 | 0.1 |
| vh | Vertical-Horizontal Polarization | Normalized Sentinel-1 Global Backscatter Model (VH polarization). | - | - | 0 | 0.1 |
| cloudmask_qa60 | QA60 | Cloud mask from Sentinel-2 Level-1C. | - | - | 0 | 1 |
| cloudmask_sen2cor | Sen2Cor | Cloud mask from Sentinel-2 Level-2A. | - | - | 0 | 1 |
| cloudmask_s2cloudless | S2Cloudless | Cloud mask generated by Sentinel Hub Cloud Detector. | - | - | 0 | 1 |
| cloudmask_cloudscore_cs_v1 | CloudScore v1 | Cloud mask generated by [Pasquarella et al. 2023](https://openaccess.thecvf.com/content/CVPR2023W/EarthVision/html/Pasquarella_Comprehensive_Quality_Assessment_of_Optical_Satellite_Imagery_Using_Weakly_Supervised_CVPRW_2023_paper.html) model. | - | - | 0 | 1 |
| cloudmask_cloudscore_cs_cdf_v1 | CloudScore CDF v1 | Cloud mask generated by [Pasquarella et al. 2023](https://openaccess.thecvf.com/content/CVPR2023W/EarthVision/html/Pasquarella_Comprehensive_Quality_Assessment_of_Optical_Satellite_Imagery_Using_Weakly_Supervised_CVPRW_2023_paper.html) model. | - | - | 0 | 1 |
| cloudmask_unetmobv2_v1 | UNetMobV2 v1 | Cloud mask generated by [Aybar et al. 2022](https://www.nature.com/articles/s41597-022-01878-2) model. | - | - | 0 | 1 |
| cloudmask_unetmobv2_v2 | UNetMobV2 v2 | Cloud mask generated by [Aybar et al. 2024](https://www.sciencedirect.com/science/article/pii/S2352340924008163) model.| - | - | 0 | 1 |
| cloudmask_sensei_v2 | Sensei v2 | Cloud mask generated by [Alistair Francis 2024](https://ieeexplore.ieee.org/document/10505181). | - | - | 0 | 1 |
## Additional metadata
In addition to the **`stac`** and **`rai`** fields, this dataset includes the following fields at the sample level.
| **ID** | **Description** |
|------------------------------|---------------------------------------------------------------------------------|
| `roi_id` | Unique identifier for the region of interest (ROI). |
| `old_roi_id` | Previous identifier for the region of interest, if applicable. See considerations section above. |
| `equi_id` | Identifier for the corresponding equi7grid system. |
| `equi_zone` | Zone or section within the equi7grid system. |
| `label_type` | Type of label assigned to the ROI. It can be: `high`, `scribble`, or `nolabel`. |
| `s2_id` | Identifier of the Sentinel-2 image. |
| `real_proj_shape` | Images in CloudSEN12 are padded with zeros (see description above). This field contains the shape of the original image. It can be: 2000 or 509. |
| `s2_mean_solar_azimuth_angle` | Mean solar azimuth angle for the Sentinel-2 image (in degrees). |
| `s2_mean_solar_zenith_angle` | Mean solar zenith angle for the Sentinel-2 image (in degrees). |
| `thick_percentage` | Percentage of thick clouds in the ROI estimated by the annotator for the `high` label. For `scribble` and `nolabel` labels, this value is derived from UNetMobV2-V1 predictions. |
| `thin_percentage` | Percentage of thin clouds in the ROI estimated by the annotator for the `high` label. For `scribble` and `nolabel` labels, this value is derived from UNetMobV2-V1 predictions. |
| `cloud_shadow_percentage` | Percentage of cloud shadows in the ROI estimated by the annotator for the `high` label. For `scribble` and `nolabel` labels, this value is derived from UNetMobV2-V1 predictions. |
| `clear_percentage` | Percentage of clear sky in the ROI estimated by the annotator for the `high` label. For `scribble` and `nolabel` labels, this value is derived from UNetMobV2-V1 predictions. |

|
xzhe121/comaprative_analysis_results.csv | xzhe121 | 2024-11-29T04:18:07Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-28T08:08:46Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: base_output
dtype: string
- name: pairrm_output
dtype: string
- name: llm_judge_output
dtype: string
splits:
- name: train
num_bytes: 30275
num_examples: 10
download_size: 38195
dataset_size: 30275
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shizi1011/xlam-function-calling-processed-1tool | shizi1011 | 2025-03-10T07:57:34Z | 19 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-06T06:59:56Z | 0 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answers
sequence: string
- name: schemas
sequence: string
- name: system
list:
- name: content
dtype: string
- name: role
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 5742846
num_examples: 3782
download_size: 1225273
dataset_size: 5742846
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RLAIF/STAR-TRAIN-math_llama-star-iter3 | RLAIF | 2025-03-15T19:01:48Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-15T19:01:46Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: full_solution
dtype: string
- name: is_correct
dtype: int64
- name: done
dtype: bool
- name: num_turns
dtype: int64
splits:
- name: train
num_bytes: 16206030.92375
num_examples: 3203
download_size: 2677615
dataset_size: 16206030.92375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Locutusque/Math-Evol-Instruct-v0.1 | Locutusque | 2025-02-15T13:27:01Z | 26 | 2 | [
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-08T05:55:08Z | 0 | ---
language:
- en
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2632269.0
num_examples: 1288
download_size: 1220108
dataset_size: 2632269.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
samsitol/so100_pretest | samsitol | 2025-06-10T15:40:30Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-10T15:40:22Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 773,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.base": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.head": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
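The `data_path` and `video_path` entries in the metadata above are Python format strings; an episode's file can be resolved like this (a small illustration, not part of LeRobot itself):

```python
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

# Resolve the parquet file for episode 1 in chunk 0
path = data_path.format(episode_chunk=0, episode_index=1)
print(path)  # data/chunk-000/episode_000001.parquet
```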
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
anichris/eval_act_so100_test | anichris | 2025-05-01T16:54:12Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-01T16:47:41Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 405,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
supergoose/flan_combined_task308_jeopardy_answer_generation_all | supergoose | 2025-03-05T21:55:57Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-05T21:55:39Z | 0 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 13108461
num_examples: 19439
download_size: 3743834
dataset_size: 13108461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sywang/AttributeByUnlearning | sywang | 2024-11-24T20:09:16Z | 106 | 0 | [
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2024-11-07T16:43:50Z | 0 | ---
license: cc-by-nc-sa-4.0
---
|
ganga4364/benchmark-stt-hs | ganga4364 | 2024-11-20T11:52:18Z | 8 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-20T11:52:15Z | 0 | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: uni
dtype: string
- name: wylie
dtype: string
- name: url
dtype: string
- name: dept
dtype: string
- name: grade
dtype: int64
- name: char_len
dtype: int64
- name: audio_len
dtype: float64
- name: exiled_year
dtype: string
- name: Age__Group
dtype: string
- name: Gender
dtype: string
- name: place_of_origin
dtype: string
- name: Original_ID
dtype: string
- name: Publishing_Year
dtype: string
- name: Name
dtype: string
splits:
- name: Publishing_Year
num_bytes: 1128010
num_examples: 1968
- name: Name
num_bytes: 1123387
num_examples: 1970
- name: Gender
num_bytes: 1138582
num_examples: 2000
- name: Age__Group
num_bytes: 797968
num_examples: 1397
- name: place_of_origin
num_bytes: 774699
num_examples: 1363
- name: exiled_year
num_bytes: 741026
num_examples: 1236
download_size: 1964568
dataset_size: 5703672
configs:
- config_name: default
data_files:
- split: Publishing_Year
path: data/Publishing_Year-*
- split: Name
path: data/Name-*
- split: Gender
path: data/Gender-*
- split: Age__Group
path: data/Age__Group-*
- split: place_of_origin
path: data/place_of_origin-*
- split: exiled_year
path: data/exiled_year-*
---
|
zijian2022/so100_test_1230vf | zijian2022 | 2024-12-30T20:56:01Z | 51 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2024-12-30T20:55:57Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 5,
"total_frames": 823,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
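The `data_path` and `video_path` templates in the `info.json` above can be expanded with standard Python string formatting. A small illustrative sketch resolving the files for episode 3 of chunk 0 (the paths are derived from the templates, not checked against the repository contents):

```python
# Templates copied from meta/info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

episode = {"episode_chunk": 0, "episode_index": 3}

# One parquet file per episode, one video per camera key.
parquet_file = data_path.format(**episode)
videos = [video_path.format(video_key=key, **episode)
          for key in ("observation.images.laptop", "observation.images.phone")]

print(parquet_file)  # data/chunk-000/episode_000003.parquet
```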
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
DogNeverSleep/TEST_ocr_1 | DogNeverSleep | 2025-05-12T13:20:22Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T13:20:13Z | 0 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: video_index
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: eval_method
dtype: string
- name: option
sequence: string
- name: task_type
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 191496
num_examples: 956
download_size: 70866
dataset_size: 191496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
caiyan123/merged_other_task | caiyan123 | 2025-06-24T21:34:04Z | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T20:52:10Z | 0 | ---
license: cc-by-4.0
---
# Dataset: merged_other_task
This dataset contains merged robot manipulation episodes from six different tasks:
- open_cabinet
- close_cabinet
- open_fridge
- close_fridge
- open_dishwasher
- close_dishwasher
Each row represents one frame of an episode, with the following columns:
- `action`: a list of float values representing robot joint control signals.
- `observation.state`: a list of float values representing the robot's observed state at the frame.
- `timestamp`: frame timestamp (float)
- `label_index`: task label, where 0 = "open", 1 = "close"
- `task`: task name string (e.g., "open_fridge")
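Because the list-valued columns (`action`, `observation.state`) are serialized as strings in the CSV, they need to be parsed back into lists of floats after loading. A rough stdlib-only sketch — the exact string serialization in `merged_other_task.csv` is an assumption, so the sample rows below are illustrative:

```python
import ast
import csv
import io

# Tiny stand-in for merged_other_task.csv (column layout per this card;
# the bracketed-list serialization of the vector columns is an assumption).
sample = io.StringIO(
    "action,observation.state,timestamp,label_index,task\n"
    '"[0.1, -0.2, 0.3]","[0.0, 0.1, 0.2]",0.033,0,open_fridge\n'
    '"[0.2, -0.1, 0.0]","[0.1, 0.2, 0.3]",0.066,1,close_fridge\n'
)

frames = []
for row in csv.DictReader(sample):
    frames.append({
        "action": ast.literal_eval(row["action"]),            # list of floats
        "state": ast.literal_eval(row["observation.state"]),  # list of floats
        "timestamp": float(row["timestamp"]),
        "label_index": int(row["label_index"]),               # 0 = open, 1 = close
        "task": row["task"],
    })

open_frames = [f for f in frames if f["label_index"] == 0]
print(len(open_frames))  # 1
```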
## Format
- File: `merged_other_task.csv`
- Rows: ~66,000
- Format: Tabular
- License: CC-BY-4.0
## License
This dataset is released under the [Creative Commons Attribution 4.0 License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/).
Please cite or credit the author if used in research or publication.
## Citation
Created by [caiyan123](https://huggingface.co/caiyan123).
|
supergoose/buzz_sources_277_xquery | supergoose | 2024-11-10T20:51:43Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-10T20:51:42Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source
dtype: string
- name: stack
dtype: string
splits:
- name: train
num_bytes: 56935
num_examples: 38
download_size: 22691
dataset_size: 56935
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
friedrichor/MSR-VTT | friedrichor | 2025-05-20T08:01:59Z | 332 | 0 | [
"task_categories:text-to-video",
"task_categories:text-retrieval",
"task_categories:video-classification",
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-video",
"text-retrieval",
"video-classification"
] | 2025-02-28T12:58:43Z | 0 | ---
configs:
- config_name: train_9k
data_files:
- split: train
path: "msrvtt_train_9k.json"
- config_name: train_7k
data_files:
- split: train
path: "msrvtt_train_7k.json"
- config_name: test_1k
data_files:
- split: test
path: "msrvtt_test_1k.json"
task_categories:
- text-to-video
- text-retrieval
- video-classification
language:
- en
size_categories:
- 1K<n<10K
---
[MSR-VTT](https://openaccess.thecvf.com/content_cvpr_2016/html/Xu_MSR-VTT_A_Large_CVPR_2016_paper.html) contains 10K video clips and 200K captions.
We adopt the standard `1K-A split` protocol, which was introduced in [JSFusion](https://openaccess.thecvf.com/content_ECCV_2018/html/Youngjae_Yu_A_Joint_Sequence_ECCV_2018_paper.html) and has since become the de facto benchmark split in the `Text-Video Retrieval` field.
Train:
- train_7k: 7,010 videos, 140,200 captions
- train_9k: 9,000 videos, 180,000 captions
Test:
- test_1k: 1,000 videos, 1,000 captions
---
## Citation
```bibtex
@inproceedings{xu2016msrvtt,
title={Msr-vtt: A large video description dataset for bridging video and language},
author={Xu, Jun and Mei, Tao and Yao, Ting and Rui, Yong},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2016}
}
```
|
datasets-CNRS/MPF | datasets-CNRS | 2025-03-29T21:40:30Z | 7 | 0 | [
"language:fra",
"license:cc-by-nc-sa-4.0",
"region:us"
] | [] | 2024-10-20T11:43:49Z | 0 | ---
language:
- fra
viewer: false
license: cc-by-nc-sa-4.0
---
> [!NOTE]
> Dataset origin: https://www.ortolang.fr/market/corpora/mpf
## Description
Scientific committee

Françoise Gadet (director), Emmanuelle Guérin, Amal Guha, Anaïs Moreno Kerdreux, Roberto Paternostro.
MPF is a corpus collected in the Paris region since 2010 from "young" populations with regular multicultural contacts. The English name comes from the context of its launch in 2010: a Franco-British project (ANR-FRBR-09-037-01, 2010-2014). After leaving that framework in 2014, it was supported by the DGLFLF (2014, 2016, 2018 and 2019 budgets), by Ortolang (2016 budget), and by the Canadian GTRC Le français à la mesure d'un continent (dir. F. Martineau, 2010-2017).

This corpus aims to document ongoing changes in French, in the form of a "contemporary urban vernacular", as well as the effects on French of contact with the languages of immigration. The guiding principle for collection was "communicative proximity": no recording was made with strangers (network ties, prior conversational history). The collection is therefore the work of 26 fieldworkers (whose first names appear in the interview titles), who reached 187 different speakers from 28 municipalities of Île-de-France and 4 Paris arrondissements. Based on their audible linguistic and language characteristics (and not on socio-demographic criteria), the recordings were classified into 3 categories: traditional interviews, proximity interviews, and ecological events (without an interviewer). The corpus comprises 106 recordings lasting between 6 and 153 minutes, transcribed with Praat (except 2 with Transcriber). The transcriptions have been revised and anonymized (text and sound). About 1,200,000 words are available on the site, amounting to 83 hours (see the transcription conventions). As of March 2023, the corpus had been the subject of 97 publications (see the PDF file of bibliographic references).
Each interview corresponds to five categories of files in the corpus:

- the sound file (.wav),
- the transcription (.TextGrid), which can be viewed aligned with the sound (.wav) using the Praat software,
- the same transcription (.docx), which can be viewed without alignment with the sound,
- a metadata file (.txt),
- one or more identification sheets for each speaker, where possible (.txt).
For an overview of the content of the transcriptions, docx (Word) versions are available in the "MPF_Docx" folder.

Created from this corpus, but usable with any corpus of TextGrids, the Python program extractionMots.py (address below) extracts from the corpus, for each "word" given as a parameter, all its occurrences in CSV format (interviews and speech turns - timing, speaker).

https://github.com/aghnanterre/extract_from_TextGrid/edit/master/README.txt

Note that the URL https://ct3.ortolang.fr/teiconvert/index-en.html offers several online conversion tools (for example, a TextGrid can be converted to .doc or .txt).
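The idea behind extractionMots.py can be sketched with the standard library alone. This is not the actual program — the tuple layout of the speech turns and the CSV columns below are assumptions for illustration — but it shows the word-occurrence extraction it describes:

```python
import csv
import io
import re

def extract_word(turns, word):
    """Collect every turn containing `word`.

    `turns` is assumed to be (file, speaker, start, end, text) tuples,
    e.g. as read from a TextGrid interval tier.
    """
    pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
    rows = []
    for fname, speaker, start, end, text in turns:
        if pattern.search(text):
            rows.append({"file": fname, "speaker": speaker,
                         "start": start, "end": end, "text": text})
    return rows

def to_csv(rows):
    """Serialize matched turns as CSV, one occurrence per line."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["file", "speaker", "start", "end", "text"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Two hypothetical speech turns from one interview.
turns = [
    ("Anais_1.TextGrid", "LOC1", 12.4, 14.1, "ouais grave c'est vrai"),
    ("Anais_1.TextGrid", "LOC2", 14.1, 15.0, "non pas vraiment"),
]
print(to_csv(extract_word(turns, "grave")))
```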
## Citation
```
@misc{11403/mpf/v4.1,
title = {MPF},
author = {},
url = {https://hdl.handle.net/11403/mpf/v4.1},
note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr},
  copyright = {Licence Creative Commons Attribution - Pas d'Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International},
year = {2023}
}
``` |
nicher92/verified_one | nicher92 | 2025-05-09T09:08:33Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-09T09:08:31Z | 0 | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: acsl_snippet
dtype: string
- name: c_code_snippet
dtype: string
- name: original_code
dtype: string
- name: raw_text
dtype: string
- name: verified
dtype: bool
- name: verified_goals
dtype: int64
- name: total_goals
dtype: int64
- name: error_cause
dtype: string
splits:
- name: train
num_bytes: 851817
num_examples: 466
download_size: 249878
dataset_size: 851817
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aishu15/aryaumeshl | aishu15 | 2025-02-19T02:24:53Z | 17 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-18T07:06:57Z | 0 | ---
license: apache-2.0
---
|
AiArtLab/e-shuushuu | AiArtLab | 2024-12-17T09:40:41Z | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-12-16T16:13:09Z | 0 | ---
license: apache-2.0
---
|
uzair921/LLAMA7B_GUM_EMBEDDINGS_LLM_RAG_75_openai | uzair921 | 2025-02-11T11:25:59Z | 8 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-11T11:20:51Z | 0 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-abstract
'2': I-abstract
'3': B-animal
'4': I-animal
'5': B-event
'6': I-event
'7': B-object
'8': I-object
'9': B-organization
'10': I-organization
'11': B-person
'12': I-person
'13': B-place
'14': I-place
'15': B-plant
'16': I-plant
'17': B-quantity
'18': I-quantity
'19': B-substance
'20': I-substance
'21': B-time
'22': I-time
splits:
- name: train
num_bytes: 284397
num_examples: 910
- name: validation
num_bytes: 213725
num_examples: 615
- name: test
num_bytes: 292655
num_examples: 807
download_size: 218223
dataset_size: 790777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
PrimeIntellect/SYNTHETIC-2-Base-v2-Math | PrimeIntellect | 2025-06-23T22:18:28Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T22:18:26Z | 0 | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: task_group
dtype: string
- name: task_type
dtype: string
- name: prompt
dtype: string
- name: verification_info
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 53203608
num_examples: 105055
download_size: 20636361
dataset_size: 53203608
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm3_gen2_run0_WXS_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-25T10:49:36Z | 30 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-25T10:49:34Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: TPP
dtype: float64
- name: MPP
dtype: float64
- name: FTP
dtype: float64
splits:
- name: train
num_bytes: 3930167
num_examples: 6000
download_size: 2378694
dataset_size: 3930167
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-ADA-batch-38 | ChavyvAkvar | 2025-06-04T10:34:58Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T10:33:54Z | 0 | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923454732
num_examples: 1000
download_size: 924401480
dataset_size: 923454732
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HYAIYN/so100_get_orange_10epi | HYAIYN | 2025-03-20T13:08:18Z | 44 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial",
"test"
] | [
"robotics"
] | 2025-03-20T13:06:13Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
- test
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 6885,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
jb-balaji/pick_place_calib | jb-balaji | 2025-06-15T09:16:53Z | 0 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-15T09:16:50Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# pick_place_calib
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
weqweasdas/qwen7b_grpo_ver2_step80_with_score_passn | weqweasdas | 2025-05-09T05:02:20Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-09T05:02:17Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: responses
sequence: string
- name: gt
dtype: string
- name: score
sequence: float64
splits:
- name: train
num_bytes: 109049813
num_examples: 1000
download_size: 32907140
dataset_size: 109049813
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_humaneval_qwen2.5-7b_t1.0_n8_tests_humaneval_qwen3-0.6b_t0.6_n1_think | test-gen | 2025-05-21T09:18:28Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-21T09:18:27Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 1525747
num_examples: 164
download_size: 559102
dataset_size: 1525747
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
MossProphet/so100_folding_testrun | MossProphet | 2025-06-04T14:22:47Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-04T14:22:42Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1170,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.External": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.Arm_left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.Arm_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Isaac2489/rosa | Isaac2489 | 2025-04-23T15:39:17Z | 20 | 0 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-04-21T20:22:09Z | 0 | ---
license: apache-2.0
---
|
1231czx/w2r125k_r2r0k_r185k_ep3_tmp10 | 1231czx | 2025-01-10T04:01:29Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-09T16:50:51Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 20226320
num_examples: 5000
download_size: 6840631
dataset_size: 20226320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andrewsiah/PersonaPromptPersonalLLM_813 | andrewsiah | 2024-11-15T06:01:21Z | 8 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-15T06:01:19Z | 0 | ---
dataset_info:
features:
- name: personaid_813_response_1_llama3_sfairx
dtype: float64
- name: personaid_813_response_2_llama3_sfairx
dtype: float64
- name: personaid_813_response_3_llama3_sfairx
dtype: float64
- name: personaid_813_response_4_llama3_sfairx
dtype: float64
- name: personaid_813_response_5_llama3_sfairx
dtype: float64
- name: personaid_813_response_6_llama3_sfairx
dtype: float64
- name: personaid_813_response_7_llama3_sfairx
dtype: float64
- name: personaid_813_response_8_llama3_sfairx
dtype: float64
- name: prompt
dtype: string
- name: subset
dtype: string
- name: prompt_id
dtype: int64
- name: response_1
dtype: string
- name: response_1_model
dtype: string
- name: response_2
dtype: string
- name: response_2_model
dtype: string
- name: response_3
dtype: string
- name: response_3_model
dtype: string
- name: response_4
dtype: string
- name: response_4_model
dtype: string
- name: response_5
dtype: string
- name: response_5_model
dtype: string
- name: response_6
dtype: string
- name: response_6_model
dtype: string
- name: response_7
dtype: string
- name: response_7_model
dtype: string
- name: response_8
dtype: string
- name: response_8_model
dtype: string
- name: response_1_gemma_2b
dtype: float64
- name: response_2_gemma_2b
dtype: float64
- name: response_3_gemma_2b
dtype: float64
- name: response_4_gemma_2b
dtype: float64
- name: response_5_gemma_2b
dtype: float64
- name: response_6_gemma_2b
dtype: float64
- name: response_7_gemma_2b
dtype: float64
- name: response_8_gemma_2b
dtype: float64
- name: response_1_gemma_7b
dtype: float64
- name: response_2_gemma_7b
dtype: float64
- name: response_3_gemma_7b
dtype: float64
- name: response_4_gemma_7b
dtype: float64
- name: response_5_gemma_7b
dtype: float64
- name: response_6_gemma_7b
dtype: float64
- name: response_7_gemma_7b
dtype: float64
- name: response_8_gemma_7b
dtype: float64
- name: response_1_mistral_raft
dtype: float64
- name: response_2_mistral_raft
dtype: float64
- name: response_3_mistral_raft
dtype: float64
- name: response_4_mistral_raft
dtype: float64
- name: response_5_mistral_raft
dtype: float64
- name: response_6_mistral_raft
dtype: float64
- name: response_7_mistral_raft
dtype: float64
- name: response_8_mistral_raft
dtype: float64
- name: response_1_mistral_ray
dtype: float64
- name: response_2_mistral_ray
dtype: float64
- name: response_3_mistral_ray
dtype: float64
- name: response_4_mistral_ray
dtype: float64
- name: response_5_mistral_ray
dtype: float64
- name: response_6_mistral_ray
dtype: float64
- name: response_7_mistral_ray
dtype: float64
- name: response_8_mistral_ray
dtype: float64
- name: response_1_mistral_weqweasdas
dtype: float64
- name: response_2_mistral_weqweasdas
dtype: float64
- name: response_3_mistral_weqweasdas
dtype: float64
- name: response_4_mistral_weqweasdas
dtype: float64
- name: response_5_mistral_weqweasdas
dtype: float64
- name: response_6_mistral_weqweasdas
dtype: float64
- name: response_7_mistral_weqweasdas
dtype: float64
- name: response_8_mistral_weqweasdas
dtype: float64
- name: response_1_llama3_sfairx
dtype: float64
- name: response_2_llama3_sfairx
dtype: float64
- name: response_3_llama3_sfairx
dtype: float64
- name: response_4_llama3_sfairx
dtype: float64
- name: response_5_llama3_sfairx
dtype: float64
- name: response_6_llama3_sfairx
dtype: float64
- name: response_7_llama3_sfairx
dtype: float64
- name: response_8_llama3_sfairx
dtype: float64
- name: response_1_oasst_deberta_v3
dtype: float64
- name: response_2_oasst_deberta_v3
dtype: float64
- name: response_3_oasst_deberta_v3
dtype: float64
- name: response_4_oasst_deberta_v3
dtype: float64
- name: response_5_oasst_deberta_v3
dtype: float64
- name: response_6_oasst_deberta_v3
dtype: float64
- name: response_7_oasst_deberta_v3
dtype: float64
- name: response_8_oasst_deberta_v3
dtype: float64
- name: response_1_beaver_7b
dtype: float64
- name: response_2_beaver_7b
dtype: float64
- name: response_3_beaver_7b
dtype: float64
- name: response_4_beaver_7b
dtype: float64
- name: response_5_beaver_7b
dtype: float64
- name: response_6_beaver_7b
dtype: float64
- name: response_7_beaver_7b
dtype: float64
- name: response_8_beaver_7b
dtype: float64
- name: response_1_oasst_pythia_7b
dtype: float64
- name: response_2_oasst_pythia_7b
dtype: float64
- name: response_3_oasst_pythia_7b
dtype: float64
- name: response_4_oasst_pythia_7b
dtype: float64
- name: response_5_oasst_pythia_7b
dtype: float64
- name: response_6_oasst_pythia_7b
dtype: float64
- name: response_7_oasst_pythia_7b
dtype: float64
- name: response_8_oasst_pythia_7b
dtype: float64
- name: response_1_oasst_pythia_1b
dtype: float64
- name: response_2_oasst_pythia_1b
dtype: float64
- name: response_3_oasst_pythia_1b
dtype: float64
- name: response_4_oasst_pythia_1b
dtype: float64
- name: response_5_oasst_pythia_1b
dtype: float64
- name: response_6_oasst_pythia_1b
dtype: float64
- name: response_7_oasst_pythia_1b
dtype: float64
- name: response_8_oasst_pythia_1b
dtype: float64
- name: id
dtype: int64
- name: rformatted_promptresponse_1
dtype: string
- name: rformatted_promptresponse_2
dtype: string
- name: rformatted_promptresponse_3
dtype: string
- name: rformatted_promptresponse_4
dtype: string
- name: rformatted_promptresponse_5
dtype: string
- name: rformatted_promptresponse_6
dtype: string
- name: rformatted_promptresponse_7
dtype: string
- name: rformatted_promptresponse_8
dtype: string
splits:
- name: train
num_bytes: 32665752
num_examples: 1000
download_size: 18448067
dataset_size: 32665752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "PersonaPromptPersonalLLM_813"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cpondoc/noisy-nf-10771 | cpondoc | 2025-03-13T07:24:55Z | 29 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-13T07:24:53Z | 0 | ---
dataset_info:
features:
- name: article_id
dtype: string
- name: topic
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 41229581
num_examples: 10771
download_size: 21161764
dataset_size: 41229581
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cm-upadhyay/TwinLlama-3.1-8B-results | cm-upadhyay | 2025-05-14T02:33:32Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-14T02:33:28Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
- name: answers
dtype: string
splits:
- name: test
num_bytes: 451430
num_examples: 334
download_size: 219631
dataset_size: 451430
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Asap7772/omnimath-hint-generator-0701 | Asap7772 | 2025-05-02T01:49:47Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T01:49:42Z | 0 | ---
dataset_info:
features:
- name: domain
sequence: string
- name: difficulty
dtype: float64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: note1
dtype: string
- name: note2
dtype: string
- name: note3
dtype: string
- name: note4
dtype: string
- name: note5
dtype: string
- name: all_hints
dtype: string
splits:
- name: train
num_bytes: 21768797
num_examples: 4428
download_size: 11776193
dataset_size: 21768797
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SadeqH/instruction-response-dataset | SadeqH | 2025-03-23T11:58:11Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-23T11:58:04Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: string
- name: statistics_text_generation_0
struct:
- name: input_tokens
dtype: int64
- name: output_tokens
dtype: int64
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 15506
num_examples: 10
download_size: 19729
dataset_size: 15506
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
swapnil7777/extracted_features_of_dermnet_images | swapnil7777 | 2024-12-27T00:45:14Z | 18 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-26T18:33:22Z | 0 | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1611701
num_examples: 1813
download_size: 678498
dataset_size: 1611701
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
robotchip/eval_act_gf5_l2_9 | robotchip | 2025-06-11T14:53:52Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-06-11T14:53:47Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 3,
"total_frames": 1336,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 60,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper",
"main_shoulder_lift"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper",
"main_shoulder_lift"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 60,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 60,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
kornwtp/karonese-ind-classification | kornwtp | 2025-03-04T15:48:04Z | 28 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-04T15:47:59Z | 0 | ---
dataset_info:
features:
- name: texts
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 67326
num_examples: 1000
download_size: 37513
dataset_size: 67326
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
5525FP/minipile-spigot-100000-1-percent | 5525FP | 2025-04-23T22:14:42Z | 23 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T22:12:34Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 579117525
num_examples: 100000
download_size: 308402394
dataset_size: 579117525
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Sanket366/friend-chat | Sanket366 | 2024-12-28T09:23:08Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-28T09:23:07Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1934484
num_examples: 13979
download_size: 1038521
dataset_size: 1934484
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neelabh17/new_news_exploded_prompt_n_75_d_perc_0_num_gen_10_Qwen2.5-3B-Instruct_dist_mcq | neelabh17 | 2025-05-17T18:35:52Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-17T18:35:49Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: topic
dtype: string
- name: news
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: option
sequence: string
- name: prompt
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
- name: response_1
dtype: string
- name: answer_1
dtype: string
- name: correct_1
dtype: int64
- name: response_2
dtype: string
- name: answer_2
dtype: string
- name: correct_2
dtype: int64
- name: response_3
dtype: string
- name: answer_3
dtype: string
- name: correct_3
dtype: int64
- name: response_4
dtype: string
- name: answer_4
dtype: string
- name: correct_4
dtype: int64
- name: response_5
dtype: string
- name: answer_5
dtype: string
- name: correct_5
dtype: int64
- name: response_6
dtype: string
- name: answer_6
dtype: string
- name: correct_6
dtype: int64
- name: response_7
dtype: string
- name: answer_7
dtype: string
- name: correct_7
dtype: int64
- name: response_8
dtype: string
- name: answer_8
dtype: string
- name: correct_8
dtype: int64
- name: response_9
dtype: string
- name: answer_9
dtype: string
- name: correct_9
dtype: int64
splits:
- name: train
num_bytes: 9379588
num_examples: 375
download_size: 2752185
dataset_size: 9379588
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/tts-rj-hi-karya-44100hz-part-22-quality | SayantanJoker | 2025-03-31T05:25:10Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-31T05:25:08Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float64
- name: utterance_pitch_std
dtype: float64
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 1970622
num_examples: 10000
download_size: 1125441
dataset_size: 1970622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fluently-sets/reasoning-1-1k | fluently-sets | 2024-12-21T16:38:14Z | 64 | 25 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"fluently-sets",
"sft",
"alpaca",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"question-answering"
] | 2024-12-21T16:16:43Z | 0 | ---
size_categories: 1K<n<10K
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 3605358
num_examples: 1150
download_size: 1678594
dataset_size: 3605358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
modalities: text
tags:
- fluently-sets
- sft
- alpaca
- synthetic
- distilabel
- rlaif
- datacraft
license: mit
task_categories:
- text-generation
- text2text-generation
- question-answering
language:
- en
---
# Reasoning-1 1K
## Short about
This dataset supports SFT training of LLMs in the Alpaca format.

The goal of the dataset is to teach LLMs to reason and to analyze their own mistakes through SFT training.

At 1.15K examples the dataset is quite small, so for effective training with SFTTrainer set *4-6* epochs instead of *1-3*.
*Made by Fluently Team ([@ehristoforu](https://huggingface.co/ehristoforu)) using [distilabel](https://github.com/argilla-io/distilabel) with love 🥰*
## Dataset structure
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("fluently-sets/reasoning-1-1k", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("fluently-sets/reasoning-1-1k")
```
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p> |
Supa-AI/STEM-en-ms | Supa-AI | 2025-01-03T04:13:44Z | 84 | 7 | [
"language:en",
"language:ms",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"mathematics",
"physics",
"llms",
"Malaysia",
"Asia"
] | [] | 2024-12-27T06:50:06Z | 0 | ---
license: cc-by-4.0
dataset_info:
- config_name: data-ms
features:
- name: file name
dtype: string
- name: IBSN
dtype: string
- name: subject
dtype: string
- name: topic
dtype: string
- name: Questions
dtype: string
- name: figures
sequence: image
- name: label
sequence: string
- name: Options
dtype: string
- name: Answers
dtype: string
splits:
- name: eval
num_bytes: 34663548
num_examples: 614
download_size: 34559856
dataset_size: 34663548
- config_name: data_en
features:
- name: FileName
dtype: string
- name: IBSN
dtype: string
- name: Subject
dtype: string
- name: Topic
dtype: string
- name: Questions
dtype: string
- name: Figures
sequence: image
- name: Label
sequence: string
- name: Options
dtype: string
- name: Answers
dtype: string
splits:
- name: eval
num_bytes: 34663548
num_examples: 614
download_size: 69119656
dataset_size: 69327096.0
tags:
- mathematics
- physics
- llms
- Malaysia
- Asia
size_categories:
- n<1K
configs:
- config_name: data_en
data_files:
- split: eval
path: data_en/train-*
- config_name: data_ms
data_files:
- split: eval
path: data_ms/train-*
language:
- en
- ms
---
# **A Bilingual Dataset for Evaluating Reasoning Skills in STEM Subjects**
This dataset provides a comprehensive evaluation set for tasks assessing reasoning skills in Science, Technology, Engineering, and Mathematics (STEM) subjects. It features questions in both English and Malay, catering to a diverse audience.
**Key Features**
* **Bilingual:** Questions are available in English and Malay, promoting accessibility for multilingual learners.
* **Visually Rich:** Questions are accompanied by figures to enhance understanding and support visual and contextual reasoning.
* **Focus on Reasoning:** The dataset emphasizes questions requiring logical reasoning and problem-solving skills, as opposed to simple recall of knowledge.
* **Real-World Context:** Questions are derived from real-world scenarios, such as past SPM (Sijil Pelajaran Malaysia) examinations, making them relatable to students.
**Dataset Structure**
The dataset is comprised of two configurations: `data_en` (English) and `data_ms` (Malay). Both configurations share the same features and structure.
**Data Fields**
* **FileName:** Unique identifier for the source file (alphanumeric).
* **IBSN:** International Standard Book Number of the source book (if available).
* **Subject:** Academic subject (e.g., Physics, Mathematics).
* **Topic:** Specific topic of the question within the subject (may be missing).
* **Questions:** Main body of the question or problem statement.
* **Figures:** List of associated image files related to the question (empty if no figures are present).
* **Label:** Original caption or description of each image in the `imgs` list.
* **Options:** Possible answer choices for the question, with keys (e.g., "A", "B", "C", "D") and corresponding text.
* **Answers:** Correct answer to the question, represented by the key of the correct option (e.g., "C").
---
## Data Instance Example
```json
{
Β Β "FileName": "FC064244",
Β Β "ISBN": "9786294703681",
Β Β "Subject": "Physics",
Β Β "Topic": "Measurement",
Β Β "Questions": "State the physical quantity that can be measured using the measuring device shown in Diagram 1.",
Β Β "Figures": [
Β Β Β Β {
Β Β Β Β Β Β "label": "Diagram 1",
Β Β Β Β Β Β "path": "FC064244_C1_Q12_ImageFile_0.png"
Β Β Β Β }
Β Β ],
Β Β "Options": {
Β Β Β Β "A": "Weight",
Β Β Β Β "B": "Mass",
Β Β Β Β "C": "Amount of substance",
Β Β Β Β "D": "Volume"
Β Β },
Β Β "Answers": "B"
}
```
**Data Split**
The dataset is split between Physics and Mathematics subjects, with some questions lacking topic categorization.
| Subject     | Instances with Topic | Instances without Topic | Total |
|-------------|----------------------|-------------------------|-------|
| Physics     | 316                  | 77                      | 393   |
| Mathematics | 32                   | 189                     | 221   |
**Known Limitations**
* **Subject Coverage:** The current version focuses on Physics and Mathematics. Future releases will include more STEM subjects.
* **Answer Accuracy:** Answers are extracted from various sources and may contain inaccuracies.
**Source**
The dataset is derived from a combination of resources, including:
* SPM past-year exams
* SPM mock exams
* Educational exercise books
**Data Acquisition Method**
* Optical Character Recognition (OCR) for text extraction
* Manual quality control (QC) to ensure data accuracy
**Versioning and Maintenance**
* **Current Version:** 1.0.0
* **Release Date:** December 27, 2024
* **Contact:** We welcome any feedback or corrections to improve the dataset quality.
---
# License
This dataset is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
---
# Getting Started
You can access the dataset on Hugging Face using the following commands:
```bash
# For English data
pip install datasets
from datasets import load_dataset
dataset = load_dataset("Supa-AI/STEM-en-ms", name="data_en")
# For Malay data
dataset = load_dataset("Supa-AI/STEM-en-ms", name="data_ms")
```
---
# Bilingual STEM Dataset LLM Leaderboard
This document summarizes the evaluation results for various language models based on **5-shot** and **First Token Accuracy**. The evaluation was conducted across four configurations:
| **Model** | **en\_withfigures** | **en\_withoutfigures** | **ms\_withfigures** | **ms\_withoutfigures** |
| --------------------------------- | ------------------- | ---------------------- | ------------------- | ---------------------- |
| **gemini-2.0-flash-exp** | **63.70%** | <ins>75.16%</ins> | **63.36%** | <ins>75.47%</ins> |
| **gemini-1.5-flash** | 49.66% | 67.39% | 50.00% | 64.28% |
| **Qwen/Qwen2-VL-72B-Instruct** | <ins>58.22%</ins> | 69.25% | <ins>57.53%</ins> | 63.66% |
| **gpt-4o** | 47.95% | 66.15% | 50.00% | 68.01% |
| **gpt-4o-mini** | 41.10% | 55.90% | 38.36% | 52.80% |
| **pixtral-large-2411** | 42.81% | 64.29% | 35.27% | 60.87% |
| **pixtral-12b-2409** | 24.66% | 48.45% | 24.66% | 39.13% |
| **DeepSeek-V3** | None | **79.19%** | None | **76.40%** |
| **Qwen2.5-72B-Instruct** | None | 74.53% | None | 72.98% |
| **Meta-Llama-3.3-70B-Instruct** | None | 67.08% | None | 58.07% |
| **Llama-3.2-90B-Vision-Instruct** | None | 65.22% | None | 58.07% |
| **sail/Sailor2-20B-Chat** | None | 66.46% | None | 61.68% |
| **mallam-small** | None | 61.49% | None | 55.28% |
| **mistral-large-latest** | None | 60.56% | None | 53.42% |
| **google/gemma-2-27b-it** | None | 58.07% | None | 57.76% |
| **SeaLLMs-v3-7B-Chat** | None | 50.93% | None | 45.96% |
---
## Notes on eval.py
`eval.py` is a template for evaluating large language models (LLMs); update the script to integrate your _API calls_ or local model logic.
- The "First Token Accuracy" metric highlights initial token prediction accuracy.
- The evaluation results are based on the specific dataset and methodology employed.
- Further analysis might be needed to determine the models' suitability for specific tasks.
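As a rough sketch of what "First Token Accuracy" measures, the metric can be computed as below. The function name and the list-of-tokens interface are illustrative assumptions, not taken from `eval.py`:

```python
def first_token_accuracy(predictions, references):
    """Fraction of predictions whose first token matches the reference's first token.

    `predictions` and `references` are parallel lists of already-tokenized
    sequences (lists of token strings or ids). Empty predictions never match.
    """
    if not references:
        raise ValueError("references must be non-empty")
    hits = 0
    for pred, ref in zip(predictions, references):
        if pred and ref and pred[0] == ref[0]:
            hits += 1
    return hits / len(references)


# Example: for multiple-choice questions, the first generated token is
# typically the answer letter, so this approximates answer accuracy.
preds = [["B"], ["A"], ["C"]]
refs = [["B"], ["D"], ["C"]]
print(first_token_accuracy(preds, refs))  # prints 0.6666666666666666
```

This is why the metric is cheap to evaluate for MCQ-style datasets like this one: only a single token per question needs to be checked.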
### Attribution for Evaluation Code
The `eval.py` script is based on work from the MMLU-Pro repository:
- Repository: [TIGER-AI-Lab/MMLU-Pro](https://github.com/TIGER-AI-Lab/MMLU-Pro)
- License: Apache License 2.0 (included in the `NOTICE` file)
---
# **Contributors**
- [**Gele**](https://huggingface.co/Geleliong)
- [**Ken Boon**](https://huggingface.co/caibcai)
- [**Wei Wen**](https://huggingface.co/WeiWen21)
|
Evan768/testEvan | Evan768 | 2024-12-27T12:18:25Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-26T12:44:41Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 3099451
num_examples: 6788
download_size: 1179658
dataset_size: 3099451
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ttimofeyka/qa-assistant-llama3.2 | Ttimofeyka | 2025-04-15T13:04:20Z | 16 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-15T13:03:48Z | 0 | ---
license: apache-2.0
---
|
villekuosmanen/pick_2_snackbars | villekuosmanen | 2025-05-18T17:20:04Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-18T11:58:45Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "arx5",
"total_episodes": 10,
"total_frames": 4759,
"total_tasks": 1,
"total_videos": 30,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
14
]
},
"observation.state": {
"dtype": "float32",
"shape": [
14
]
},
"observation.velocity": {
"dtype": "float32",
"shape": [
14
]
},
"observation.effort": {
"dtype": "float32",
"shape": [
14
]
},
"observation.eef_6d_pose": {
"dtype": "float32",
"shape": [
12
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 20.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 20.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 20.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
FerronHooi/my-distiset-737c164c | FerronHooi | 2024-12-16T21:00:17Z | 9 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [] | 2024-12-16T21:00:16Z | 0 | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': home-appliance
'1': pc-accessory
'2': mobile-device
'3': gaming-console
'4': accessory
'5': wearable-technology
splits:
- name: train
num_bytes: 2581
num_examples: 10
download_size: 4101
dataset_size: 2581
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-737c164c
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/FerronHooi/my-distiset-737c164c/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/FerronHooi/my-distiset-737c164c/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 2,
"text": "The new wireless charging pad is a game-changer for my busy lifestyle. I can simply place my phone on it and it starts charging instantly. No more tangled cables or worrying about running out of battery."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("FerronHooi/my-distiset-737c164c", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("FerronHooi/my-distiset-737c164c")
```
</details>
|
HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_1_for_gen_12 | HungVu2003 | 2025-04-17T10:02:14Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T01:19:17Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4244538
num_examples: 10000
download_size: 2182091
dataset_size: 4244538
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
klo1/seq_monkey | klo1 | 2025-01-27T13:37:03Z | 130 | 0 | [
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-27T12:45:25Z | 0 | ---
license: apache-2.0
---
|
emilbiju/Shortened-Plan-Execution-Data-Math | emilbiju | 2025-04-28T08:16:11Z | 20 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-28T01:45:03Z | 0 | ---
dataset_info:
- config_name: math
features:
- name: ProblemIdx
dtype: int64
- name: SystemPrompt
dtype: string
- name: UserPrompt
dtype: string
- name: ExpectedOutput
dtype: string
- name: FinalAnswer
dtype: string
- name: Phase
dtype: float64
- name: __index_level_0__
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 290847416
num_examples: 23223
- name: test
num_bytes: 33933612
num_examples: 2614
download_size: 46939746
dataset_size: 324781028
- config_name: math_eval
features:
- name: ProblemIdx
dtype: int64
- name: Question
dtype: string
- name: FinalAnswer
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 251741
num_examples: 1236
- name: test
num_bytes: 27640
num_examples: 137
download_size: 172116
dataset_size: 279381
configs:
- config_name: math
data_files:
- split: train
path: math/train-*
- split: test
path: math/test-*
- config_name: math_eval
data_files:
- split: train
path: math_eval/train-*
- split: test
path: math_eval/test-*
---
|
christinakopi/M2_preference_pairs_final | christinakopi | 2025-05-27T14:50:41Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-27T14:50:32Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 286255653
num_examples: 78264
download_size: 126708157
dataset_size: 286255653
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hcurbelo/Kichwa_ES_001 | hcurbelo | 2025-03-04T17:37:40Z | 15 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-04T17:35:53Z | 0 | ---
license: apache-2.0
---
|
math-extraction-comp/0-hero__Matter-0.2-7B-DPO | math-extraction-comp | 2025-01-25T21:10:53Z | 9 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-07T23:05:01Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: gold
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: subset
dtype: string
- name: lighteval-4cfbbf17_extracted_answer
dtype: string
- name: lighteval-4cfbbf17_score
dtype: float64
- name: lighteval-6e869ab5_extracted_answer
dtype: string
- name: lighteval-c24870ea_score
dtype: float64
- name: lighteval-7e794885_extracted_answer
dtype: string
- name: qwen_extracted_answer
dtype: string
- name: lighteval-0f21c935_extracted_answer
dtype: string
- name: lighteval-d5acdd53_extracted_answer
dtype: string
- name: lighteval-2018ed86_extracted_answer
dtype: string
- name: lighteval-6e869ab5_score
dtype: float64
- name: lighteval-2018ed86_score
dtype: float64
- name: lighteval-7e794885_score
dtype: float64
- name: harness_score
dtype: float64
- name: qwen_score
dtype: float64
- name: lighteval-d5acdd53_score
dtype: float64
- name: lighteval-c24870ea_extracted_answer
dtype: string
- name: lighteval-0f21c935_score
dtype: float64
- name: harness_extracted_answer
dtype: string
splits:
- name: train
num_bytes: 5097315
num_examples: 1324
download_size: 1276124
dataset_size: 5097315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ytan-ucr/mu_llava_bruce_lee | ytan-ucr | 2025-01-23T17:02:03Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-23T17:02:01Z | 0 | ---
dataset_info:
features:
- name: image_id
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1093505.0
num_examples: 100
download_size: 988901
dataset_size: 1093505.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
chendelong/goalsteps_cooking_14_fps | chendelong | 2024-11-26T15:33:32Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-26T15:32:02Z | 0 | ---
dataset_info:
features:
- name: video_uid
dtype: string
- name: goal
dtype: string
- name: num_steps
dtype: int32
- name: step_frames
sequence:
sequence: image
- name: step_descriptions
sequence: string
- name: step_timestamps
sequence:
sequence: float64
splits:
- name: val
num_bytes: 2458685086.0
num_examples: 67
download_size: 2458814662
dataset_size: 2458685086.0
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
---
|
stalaei/realmath_2025-2025-05 | stalaei | 2025-06-24T01:47:07Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T01:46:59Z | 0 | ---
dataset_info:
features:
- name: paper_link
dtype: string
- name: theorem
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: submission_date
dtype: string
splits:
- name: train
num_bytes: 273065944
num_examples: 611
download_size: 201759687
dataset_size: 273065944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/tts-rj-hi-karya-44100hz-part-33-quality | SayantanJoker | 2025-03-30T20:28:08Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-30T20:27:57Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float64
- name: utterance_pitch_std
dtype: float64
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 1964637
num_examples: 10000
download_size: 1124632
dataset_size: 1964637
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
supergoose/flan_combined_task902_deceptive_opinion_spam_classification | supergoose | 2025-03-05T21:56:51Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-05T21:56:49Z | 0 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 8965760
num_examples: 4757
download_size: 4171503
dataset_size: 8965760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kowndinya23/flan2021-held-out-v11 | kowndinya23 | 2025-05-11T07:28:57Z | 1 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T07:28:49Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 167613225.59223446
num_examples: 100000
- name: validation
num_bytes: 16761322.559223447
num_examples: 10000
download_size: 78640611
dataset_size: 184374548.1514579
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
sagarhv001/gamelore | sagarhv001 | 2025-04-10T05:15:19Z | 13 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T05:13:35Z | 0 | ---
license: apache-2.0
---
|
zjrwtxtechstudio/o1data66 | zjrwtxtechstudio | 2024-12-27T11:05:47Z | 16 | 0 | [
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"camel",
"question-answering"
] | [
"question-answering"
] | 2024-12-27T11:05:45Z | 0 | ---
authors:
- zjrwtxtechstudio
description: Question-Answer dataset generated by CAMEL O1DataGene
language:
- en
license: mit
pretty_name: zjrwtxtechstudio/o1data66
size_categories: <1MB
tags:
- camel
- question-answering
task_categories:
- question-answering
version: 0.1.0
--- |
jokoum/record-image-test | jokoum | 2025-06-15T10:37:50Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-14T09:31:46Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 2,
"total_frames": 3600,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
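As a quick sanity check, the metadata above can be used to derive per-episode statistics, such as frames per episode and episode duration. This is an illustrative sketch, not part of LeRobot itself:

```python
import json

# Minimal subset of the meta/info.json shown above
info = json.loads('{"total_episodes": 2, "total_frames": 3600, "fps": 30}')

# Frames are split evenly across episodes in this recording
frames_per_episode = info["total_frames"] / info["total_episodes"]
seconds_per_episode = frames_per_episode / info["fps"]

print(f"{frames_per_episode:.0f} frames per episode, {seconds_per_episode:.0f} s each")
```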
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
SatsukiVie/ECHO | SatsukiVie | 2025-05-12T09:43:41Z | 0 | 0 | [
"task_categories:question-answering",
"license:cc-by-nc-4.0",
"size_categories:10M<n<100M",
"region:us"
] | [
"question-answering"
] | 2025-05-12T09:42:44Z | 0 | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
size_categories:
- 10M<n<100M
--- |
uonyeka/uplimit-model-pooling | uonyeka | 2025-03-17T12:03:19Z | 8 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | [] | 2025-03-17T12:03:09Z | 0 | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: grouped_generation
sequence: string
- name: model_name
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_input_text_generation_1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: string
- name: raw_output_text_generation_1
dtype: string
- name: statistics_text_generation_0
struct:
- name: input_tokens
dtype: int64
- name: output_tokens
dtype: int64
- name: statistics_text_generation_1
struct:
- name: input_tokens
dtype: int64
- name: output_tokens
dtype: int64
splits:
- name: train
num_bytes: 8959
num_examples: 3
download_size: 22160
dataset_size: 8959
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for uplimit-model-pooling
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/uonyeka/uplimit-model-pooling/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/uonyeka/uplimit-model-pooling/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"distilabel_metadata": {
"raw_input_text_generation_0": [
{
"content": "Explain the concept of reinforcement learning.",
"role": "user"
}
],
"raw_input_text_generation_1": [
{
"content": "Explain the concept of reinforcement learning.",
"role": "user"
}
],
"raw_output_text_generation_0": "Sure, I\u0027d be happy to explain!\n\nReinforcement Learning (RL) is a type of machine learning where an agent learns to make decisions by performing actions in its environment and receiving rewards or penalties for those actions. The goal of the agent is to learn a policy that maximizes the cumulative reward over time. \n\nHere\u0027s how it works:\n\n1. **Environment**: The environment is the world in which the agent operates. It can be anything from a game board to a real-world robot navigating through a room.\n\n2. **Agent**: The agent is the entity that interacts with the environment. It takes",
"raw_output_text_generation_1": "Reinforcement Learning (RL) is an area of machine learning that focuses on how software agents should take actions in an environment to maximize some notion of cumulative reward. The goal of RL is to learn policies - algorithms that map states to actions - through trial and error interactions with the environment.\n\nIn reinforcement learning, an agent interacts with its environment over time, receiving feedback in the form of rewards or penalties for each action it takes. The objective is to find a policy that maximizes the total expected cumulative reward over time. This process involves several key concepts:\n\n1. **State**: The current situation or context of the agent.\n2. **",
"statistics_text_generation_0": {
"input_tokens": 7,
"output_tokens": 128
},
"statistics_text_generation_1": {
"input_tokens": 8,
"output_tokens": 128
}
},
"grouped_generation": [
"Sure, I\u0027d be happy to explain!\n\nReinforcement Learning (RL) is a type of machine learning where an agent learns to make decisions by performing actions in its environment and receiving rewards or penalties for those actions. The goal of the agent is to learn a policy that maximizes the cumulative reward over time. \n\nHere\u0027s how it works:\n\n1. **Environment**: The environment is the world in which the agent operates. It can be anything from a game board to a real-world robot navigating through a room.\n\n2. **Agent**: The agent is the entity that interacts with the environment. It takes",
"Reinforcement Learning (RL) is an area of machine learning that focuses on how software agents should take actions in an environment to maximize some notion of cumulative reward. The goal of RL is to learn policies - algorithms that map states to actions - through trial and error interactions with the environment.\n\nIn reinforcement learning, an agent interacts with its environment over time, receiving feedback in the form of rewards or penalties for each action it takes. The objective is to find a policy that maximizes the total expected cumulative reward over time. This process involves several key concepts:\n\n1. **State**: The current situation or context of the agent.\n2. **"
],
"instruction": "Explain the concept of reinforcement learning.",
"model_name": "Qwen/Qwen2.5-1.5B-Instruct"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("uonyeka/uplimit-model-pooling", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("uonyeka/uplimit-model-pooling")
```
</details>
|
MatthewsFace/diagnosticar-es_train | MatthewsFace | 2025-02-24T17:15:07Z | 56 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-18T17:23:36Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 54751
num_examples: 255
download_size: 23425
dataset_size: 54751
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dmeldrum6/c-sharp-linq-dataset | dmeldrum6 | 2024-12-21T13:11:46Z | 15 | 0 | [
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [] | 2024-12-19T21:08:21Z | 0 | ---
language:
- en
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': useful
'1': not-useful
splits:
- name: train
num_bytes: 28874
num_examples: 100
download_size: 14087
dataset_size: 28874
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for c-sharp-linq-dataset
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/dmeldrum6/c-sharp-linq-dataset/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/dmeldrum6/c-sharp-linq-dataset/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"labels": [
0,
1
],
"text": "The following C# LINQ query utilizes the GroupBy method to aggregate a list of employee records by department, then projects the result into a new anonymous type containing the department name and the average salary of employees within that department."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("dmeldrum6/c-sharp-linq-dataset", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("dmeldrum6/c-sharp-linq-dataset")
```
</details>
|
LucasChen/VAD-extracted-features | LucasChen | 2025-06-19T03:05:19Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-19T03:05:19Z | 0 | ---
license: apache-2.0
---
|
babs/OpenSLR-Nigerian-Accented-English | babs | 2024-10-16T20:23:37Z | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-16T20:20:19Z | 0 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 2138528084.6
num_examples: 3358
download_size: 1509463898
dataset_size: 2138528084.6
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
About this resource:
This data set contains transcribed high-quality audio of Nigerian English sentences recorded by volunteers in Lagos, Nigeria, and in London. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in that file.
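As a rough sketch, each row of line_index.tsv (an anonymized FileID and a transcription, separated by a tab) can be parsed as below. The sample rows and the FileID format are invented for illustration:

```python
import csv
import io

# Invented sample rows mimicking the line_index.tsv layout described above
sample_tsv = "NGF_0001\tGood morning, how are you?\nNGF_0002\tWelcome to Lagos.\n"

rows = list(csv.reader(io.StringIO(sample_tsv), delimiter="\t"))
index = {file_id: transcription for file_id, transcription in rows}

print(index["NGF_0001"])  # Good morning, how are you?
```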
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
See LICENSE file for license information.
Copyright 2018, 2019 Google, Inc.
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
fady21/fady-llama2-1k | fady21 | 2025-04-23T16:35:36Z | 23 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T16:35:30Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: Question
dtype: string
- name: Context
dtype: string
- name: Topic
dtype: string
- name: Answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1485574
num_examples: 1000
download_size: 673273
dataset_size: 1485574
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
YanNeu/RePOPE | YanNeu | 2025-06-06T14:59:37Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-06T14:59:35Z | 0 | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: category
dtype: string
- name: id
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
- name: image_source
dtype: string
splits:
- name: train
num_bytes: 809281
num_examples: 8185
download_size: 133693
dataset_size: 809281
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
introvoyz041/wills_trusts_and_estates | introvoyz041 | 2025-04-11T10:18:50Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-11T10:18:49Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1714844
num_examples: 577
download_size: 779238
dataset_size: 1714844
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Exmind2/Exmind2 | Exmind2 | 2025-02-20T23:17:21Z | 47 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-02-20T23:17:21Z | 0 | ---
license: apache-2.0
---
|
brandonyang/square_d1 | brandonyang | 2025-05-01T04:17:52Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T04:16:50Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 152400,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.agentview_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.robot0_eye_in_hand_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper_1, gripper_2"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
SanskritVoyager/Sanskrit-Literature-Raw | SanskritVoyager | 2025-06-16T12:55:10Z | 0 | 0 | [
"license:cc-by-nc-4.0",
"region:us"
] | [] | 2025-06-16T06:27:45Z | 0 | ---
license: cc-by-nc-4.0
---
|
relai-ai/angular-reasoning | relai-ai | 2025-04-21T16:07:52Z | 20 | 0 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2025-04-21T16:06:32Z | 0 | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
---
Samples in this benchmark were generated by RELAI using the following data source(s):
Data Source Name: Angular 17 Documentation
Data Source Link: https://v17.angular.io/docs
Data Source License: https://github.com/angular/angular/blob/main/LICENSE
Data Source Authors: Google
AI Benchmarks by Data Agents © 2025 RELAI.AI · Licensed under CC BY 4.0. Source: https://relai.ai |
PGLearn/PGLearn-Medium-2869_pegase | PGLearn | 2025-05-04T01:03:29Z | 0 | 0 | [
"task_categories:tabular-regression",
"license:cc-by-sa-4.0",
"region:us",
"energy",
"optimization",
"optimal_power_flow",
"power_grid"
] | [
"tabular-regression"
] | 2025-05-03T20:45:57Z | 0 | ---
license: cc-by-sa-4.0
tags:
- energy
- optimization
- optimal_power_flow
- power_grid
pretty_name: PGLearn Optimal Power Flow (2869_pegase)
task_categories:
- tabular-regression
dataset_info:
config_name: 2869_pegase
features:
- name: input/pd
sequence: float32
length: 1491
- name: input/qd
sequence: float32
length: 1491
- name: input/gen_status
sequence: bool
length: 510
- name: input/branch_status
sequence: bool
length: 4582
- name: input/seed
dtype: int64
- name: ACOPF/primal/vm
sequence: float32
length: 2869
- name: ACOPF/primal/va
sequence: float32
length: 2869
- name: ACOPF/primal/pg
sequence: float32
length: 510
- name: ACOPF/primal/qg
sequence: float32
length: 510
- name: ACOPF/primal/pf
sequence: float32
length: 4582
- name: ACOPF/primal/pt
sequence: float32
length: 4582
- name: ACOPF/primal/qf
sequence: float32
length: 4582
- name: ACOPF/primal/qt
sequence: float32
length: 4582
- name: ACOPF/dual/kcl_p
sequence: float32
length: 2869
- name: ACOPF/dual/kcl_q
sequence: float32
length: 2869
- name: ACOPF/dual/vm
sequence: float32
length: 2869
- name: ACOPF/dual/pg
sequence: float32
length: 510
- name: ACOPF/dual/qg
sequence: float32
length: 510
- name: ACOPF/dual/ohm_pf
sequence: float32
length: 4582
- name: ACOPF/dual/ohm_pt
sequence: float32
length: 4582
- name: ACOPF/dual/ohm_qf
sequence: float32
length: 4582
- name: ACOPF/dual/ohm_qt
sequence: float32
length: 4582
- name: ACOPF/dual/pf
sequence: float32
length: 4582
- name: ACOPF/dual/pt
sequence: float32
length: 4582
- name: ACOPF/dual/qf
sequence: float32
length: 4582
- name: ACOPF/dual/qt
sequence: float32
length: 4582
- name: ACOPF/dual/va_diff
sequence: float32
length: 4582
- name: ACOPF/dual/sm_fr
sequence: float32
length: 4582
- name: ACOPF/dual/sm_to
sequence: float32
length: 4582
- name: ACOPF/dual/slack_bus
dtype: float32
- name: ACOPF/meta/seed
dtype: int64
- name: ACOPF/meta/formulation
dtype: string
- name: ACOPF/meta/primal_objective_value
dtype: float32
- name: ACOPF/meta/dual_objective_value
dtype: float32
- name: ACOPF/meta/primal_status
dtype: string
- name: ACOPF/meta/dual_status
dtype: string
- name: ACOPF/meta/termination_status
dtype: string
- name: ACOPF/meta/build_time
dtype: float32
- name: ACOPF/meta/extract_time
dtype: float32
- name: ACOPF/meta/solve_time
dtype: float32
- name: DCOPF/primal/va
sequence: float32
length: 2869
- name: DCOPF/primal/pg
sequence: float32
length: 510
- name: DCOPF/primal/pf
sequence: float32
length: 4582
- name: DCOPF/dual/kcl_p
sequence: float32
length: 2869
- name: DCOPF/dual/pg
sequence: float32
length: 510
- name: DCOPF/dual/ohm_pf
sequence: float32
length: 4582
- name: DCOPF/dual/pf
sequence: float32
length: 4582
- name: DCOPF/dual/va_diff
sequence: float32
length: 4582
- name: DCOPF/dual/slack_bus
dtype: float32
- name: DCOPF/meta/seed
dtype: int64
- name: DCOPF/meta/formulation
dtype: string
- name: DCOPF/meta/primal_objective_value
dtype: float32
- name: DCOPF/meta/dual_objective_value
dtype: float32
- name: DCOPF/meta/primal_status
dtype: string
- name: DCOPF/meta/dual_status
dtype: string
- name: DCOPF/meta/termination_status
dtype: string
- name: DCOPF/meta/build_time
dtype: float32
- name: DCOPF/meta/extract_time
dtype: float32
- name: DCOPF/meta/solve_time
dtype: float32
- name: SOCOPF/primal/w
sequence: float32
length: 2869
- name: SOCOPF/primal/pg
sequence: float32
length: 510
- name: SOCOPF/primal/qg
sequence: float32
length: 510
- name: SOCOPF/primal/pf
sequence: float32
length: 4582
- name: SOCOPF/primal/pt
sequence: float32
length: 4582
- name: SOCOPF/primal/qf
sequence: float32
length: 4582
- name: SOCOPF/primal/qt
sequence: float32
length: 4582
- name: SOCOPF/primal/wr
sequence: float32
length: 4582
- name: SOCOPF/primal/wi
sequence: float32
length: 4582
- name: SOCOPF/dual/kcl_p
sequence: float32
length: 2869
- name: SOCOPF/dual/kcl_q
sequence: float32
length: 2869
- name: SOCOPF/dual/w
sequence: float32
length: 2869
- name: SOCOPF/dual/pg
sequence: float32
length: 510
- name: SOCOPF/dual/qg
sequence: float32
length: 510
- name: SOCOPF/dual/ohm_pf
sequence: float32
length: 4582
- name: SOCOPF/dual/ohm_pt
sequence: float32
length: 4582
- name: SOCOPF/dual/ohm_qf
sequence: float32
length: 4582
- name: SOCOPF/dual/ohm_qt
sequence: float32
length: 4582
- name: SOCOPF/dual/jabr
dtype:
array2_d:
shape:
- 4582
- 4
dtype: float32
- name: SOCOPF/dual/sm_fr
dtype:
array2_d:
shape:
- 4582
- 3
dtype: float32
- name: SOCOPF/dual/sm_to
dtype:
array2_d:
shape:
- 4582
- 3
dtype: float32
- name: SOCOPF/dual/va_diff
sequence: float32
length: 4582
- name: SOCOPF/dual/wr
sequence: float32
length: 4582
- name: SOCOPF/dual/wi
sequence: float32
length: 4582
- name: SOCOPF/dual/pf
sequence: float32
length: 4582
- name: SOCOPF/dual/pt
sequence: float32
length: 4582
- name: SOCOPF/dual/qf
sequence: float32
length: 4582
- name: SOCOPF/dual/qt
sequence: float32
length: 4582
- name: SOCOPF/meta/seed
dtype: int64
- name: SOCOPF/meta/formulation
dtype: string
- name: SOCOPF/meta/primal_objective_value
dtype: float32
- name: SOCOPF/meta/dual_objective_value
dtype: float32
- name: SOCOPF/meta/primal_status
dtype: string
- name: SOCOPF/meta/dual_status
dtype: string
- name: SOCOPF/meta/termination_status
dtype: string
- name: SOCOPF/meta/build_time
dtype: float32
- name: SOCOPF/meta/extract_time
dtype: float32
- name: SOCOPF/meta/solve_time
dtype: float32
splits:
- name: train
num_bytes: 189042853073
num_examples: 188545
- name: test
num_bytes: 47261465249
num_examples: 47137
download_size: 235716403370
dataset_size: 236304318322
configs:
- config_name: 2869_pegase
data_files:
- split: train
path: 2869_pegase/train-*
- split: test
path: 2869_pegase/test-*
default: true
---
|
GitBag/block-q-sharp_ds-distilled-qwen-1.5b-ppo-kl-1e-4-ec-0.001-good-1_hmmt-feb-24_eval_new_256 | GitBag | 2025-05-12T13:13:22Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T13:13:19Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: message_id
dtype: string
- name: responses
dtype: string
- name: processed_answer
dtype: string
- name: reward
dtype: bool
splits:
- name: train
num_bytes: 209991355
num_examples: 7680
download_size: 85983081
dataset_size: 209991355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/tts-rj-hi-karya-44100hz-part-33-quality-metadata | SayantanJoker | 2025-03-30T20:28:45Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-30T20:28:38Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float64
- name: utterance_pitch_std
dtype: float64
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 3030228
num_examples: 10000
download_size: 1092875
dataset_size: 3030228
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chaiamy/reddit_dataset_197 | chaiamy | 2025-03-25T13:26:44Z | 98 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-02-10T19:44:03Z | 0 | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** chaiamy/reddit_dataset_197
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5HSmU8zVqHRhVskecoyh2JWPD5erGuDRT6B6TBNtUwqJyopG
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: English. Because the data is collected in a decentralized manner, multilingual content may also be present.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the post or comment was created.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
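Since the dataset ships without fixed splits, one common approach is to partition records on the `datetime` field. A minimal sketch of a date-based split; the records below are hypothetical placeholders standing in for rows loaded via the Hugging Face `datasets` library:

```python
from datetime import datetime

# Hypothetical rows mimicking the dataset's fields.
records = [
    {"text": "post a", "dataType": "post", "datetime": "2024-05-01"},
    {"text": "comment b", "dataType": "comment", "datetime": "2025-01-15"},
    {"text": "comment c", "dataType": "comment", "datetime": "2025-03-20"},
]

def split_by_date(rows, cutoff):
    """Split rows into train/test using their 'datetime' field."""
    cutoff_dt = datetime.fromisoformat(cutoff)
    train = [r for r in rows if datetime.fromisoformat(r["datetime"]) < cutoff_dt]
    test = [r for r in rows if datetime.fromisoformat(r["datetime"]) >= cutoff_dt]
    return train, test

train, test = split_by_date(records, "2025-01-01")
print(len(train), len(test))  # 1 2
```

The cutoff date here is arbitrary; choose one that matches your evaluation window.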
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of social media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{chaiamy2025datauniversereddit_dataset_197,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={chaiamy},
year={2025},
url={https://huggingface.co/datasets/chaiamy/reddit_dataset_197},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 1043896
- **Date Range:** 2019-07-24T00:00:00Z to 2025-03-25T00:00:00Z
- **Last Updated:** 2025-03-25T13:26:44Z
### Data Distribution
- Posts: 2.43%
- Comments: 97.57%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/wallstreetbets | 298843 | 28.63% |
| 2 | r/Bitcoin | 188943 | 18.10% |
| 3 | r/CryptoCurrency | 173168 | 16.59% |
| 4 | r/solana | 87910 | 8.42% |
| 5 | r/CryptoMarkets | 77526 | 7.43% |
| 6 | r/ethtrader | 71839 | 6.88% |
| 7 | r/leagueoflegends | 42181 | 4.04% |
| 8 | r/balatro | 32235 | 3.09% |
| 9 | r/pokemon | 27081 | 2.59% |
| 10 | r/btc | 18807 | 1.80% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-02-10T19:45:26Z | 772395 | 772395 |
| 2025-02-14T07:08:28Z | 47300 | 819695 |
| 2025-02-17T19:08:39Z | 43473 | 863168 |
| 2025-02-27T16:36:36Z | 41093 | 904261 |
| 2025-03-13T14:38:00Z | 1431 | 905692 |
| 2025-03-14T08:38:07Z | 9781 | 915473 |
| 2025-03-15T02:38:15Z | 11402 | 926875 |
| 2025-03-15T20:38:23Z | 8293 | 935168 |
| 2025-03-16T14:38:30Z | 7677 | 942845 |
| 2025-03-17T08:38:36Z | 9781 | 952626 |
| 2025-03-18T02:26:22Z | 9838 | 962464 |
| 2025-03-18T20:26:28Z | 9052 | 971516 |
| 2025-03-19T14:26:35Z | 9232 | 980748 |
| 2025-03-19T15:56:41Z | 980 | 981728 |
| 2025-03-19T17:26:46Z | 893 | 982621 |
| 2025-03-19T18:56:51Z | 969 | 983590 |
| 2025-03-19T20:26:55Z | 997 | 984587 |
| 2025-03-19T21:57:01Z | 966 | 985553 |
| 2025-03-19T23:27:07Z | 605 | 986158 |
| 2025-03-20T00:57:14Z | 654 | 986812 |
| 2025-03-20T02:27:21Z | 851 | 987663 |
| 2025-03-20T03:57:32Z | 168 | 987831 |
| 2025-03-20T05:27:46Z | 600 | 988431 |
| 2025-03-20T06:57:51Z | 450 | 988881 |
| 2025-03-20T08:27:58Z | 493 | 989374 |
| 2025-03-20T09:58:02Z | 484 | 989858 |
| 2025-03-20T11:28:07Z | 693 | 990551 |
| 2025-03-20T12:58:17Z | 641 | 991192 |
| 2025-03-20T14:28:35Z | 657 | 991849 |
| 2025-03-20T15:58:41Z | 872 | 992721 |
| 2025-03-20T17:29:00Z | 954 | 993675 |
| 2025-03-20T18:59:07Z | 693 | 994368 |
| 2025-03-20T20:29:15Z | 704 | 995072 |
| 2025-03-20T21:59:36Z | 752 | 995824 |
| 2025-03-20T23:29:45Z | 773 | 996597 |
| 2025-03-21T00:59:52Z | 869 | 997466 |
| 2025-03-21T02:29:56Z | 518 | 997984 |
| 2025-03-21T04:00:01Z | 620 | 998604 |
| 2025-03-21T05:30:06Z | 427 | 999031 |
| 2025-03-21T07:00:11Z | 468 | 999499 |
| 2025-03-21T08:30:17Z | 267 | 999766 |
| 2025-03-21T10:00:26Z | 177 | 999943 |
| 2025-03-21T11:30:43Z | 672 | 1000615 |
| 2025-03-21T13:00:48Z | 750 | 1001365 |
| 2025-03-21T14:30:58Z | 454 | 1001819 |
| 2025-03-21T16:01:11Z | 590 | 1002409 |
| 2025-03-21T19:25:32Z | 203 | 1002612 |
| 2025-03-22T13:26:15Z | 6930 | 1009542 |
| 2025-03-23T07:26:23Z | 8551 | 1018093 |
| 2025-03-24T01:26:31Z | 9212 | 1027305 |
| 2025-03-24T19:26:38Z | 8629 | 1035934 |
| 2025-03-25T13:26:44Z | 7962 | 1043896 |
|
jkcho/QA-Dataset-mini | jkcho | 2025-03-23T10:59:06Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-23T10:28:14Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9800
num_examples: 48
download_size: 8321
dataset_size: 9800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_qwen2.5-3b_t0.1_n8_tests_mbpp_qwen3-8b_t0.6_n1_think | test-gen | 2025-05-16T16:11:30Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-16T16:11:29Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5589027
num_examples: 500
download_size: 1067799
dataset_size: 5589027
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
korbih/ui-sensei-iteration-0-20250408_114450-complete | korbih | 2025-04-08T09:54:24Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-08T09:54:21Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: trial_number
dtype: int32
- name: task_description
dtype: string
- name: start_url
dtype: string
- name: is_success
dtype: bool
- name: is_shortest
dtype: bool
- name: evaluator_thoughts
dtype: string
- name: evaluator_status
dtype: string
- name: run_error
dtype: string
- name: step_index
dtype: int32
- name: url_at_step
dtype: string
- name: prompt
dtype: string
- name: action
dtype: string
- name: screenshot
dtype: image
splits:
- name: train
num_bytes: 20051902.0
num_examples: 90
download_size: 5882938
dataset_size: 20051902.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Self-GRIT/open-hermes-2.5-sft-active-retrieval-sample-300k-retrieval-llama3-infer-query-ref | Self-GRIT | 2024-10-04T02:20:28Z | 19 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-04T02:20:25Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
splits:
- name: train
num_bytes: 188138596
num_examples: 33454
download_size: 43519419
dataset_size: 188138596
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
deepcopy/text_recognition_en_zh_small_250k | deepcopy | 2025-06-20T12:26:50Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-20T12:23:19Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1954585583.0
num_examples: 250000
- name: test
num_bytes: 32178006.726
num_examples: 2894
- name: val
num_bytes: 31484751.68
num_examples: 2910
download_size: 2799453221
dataset_size: 2018248341.4060001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
# Dataset Card for "text_recognition_en_zh_small_250k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_0e54ffda-419c-4cac-b017-82089ffb0db7 | argilla-internal-testing | 2025-01-21T08:38:16Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-21T08:38:15Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.2_num-company_3_dataset_1_for_gen_4 | HungVu2003 | 2025-04-29T17:55:18Z | 20 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T17:55:06Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3251221
num_examples: 12499
download_size: 1731953
dataset_size: 3251221
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zekeZZ/medmcqa-gen-by-zephyr-ft-gpqa-all | zekeZZ | 2025-01-14T02:00:47Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-06T20:28:03Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: opa
dtype: string
- name: opb
dtype: string
- name: opc
dtype: string
- name: opd
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: gen_answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
splits:
- name: train
num_bytes: 12185614
num_examples: 10000
download_size: 7682002
dataset_size: 12185614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reasoning-proj/j_c_dfiltered_DeepSeek-R1-Distill-Llama-8B_madversarial_continue_with_wrong_reasoning_t50 | reasoning-proj | 2025-05-11T10:08:41Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T10:08:35Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
- name: mutated_answer_content
dtype: string
- name: continuation_1
dtype: string
- name: complete_answer_1
dtype: string
- name: continuation_2
dtype: string
- name: complete_answer_2
dtype: string
- name: continuation_3
dtype: string
- name: complete_answer_3
dtype: string
- name: continuation_4
dtype: string
- name: complete_answer_4
dtype: string
- name: continuation_5
dtype: string
- name: complete_answer_5
dtype: string
- name: continuation_6
dtype: string
- name: complete_answer_6
dtype: string
- name: continuation_7
dtype: string
- name: complete_answer_7
dtype: string
- name: continuation_8
dtype: string
- name: complete_answer_8
dtype: string
- name: continuation_model
dtype: string
- name: verifier_score_1
dtype: int64
- name: verifier_score_2
dtype: int64
- name: verifier_score_3
dtype: int64
- name: verifier_score_4
dtype: int64
- name: verifier_score_5
dtype: int64
- name: verifier_score_6
dtype: int64
- name: verifier_score_7
dtype: int64
- name: verifier_score_8
dtype: int64
splits:
- name: train
num_bytes: 114180685
num_examples: 600
download_size: 47377219
dataset_size: 114180685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PRINCEEMMANUEL/AMAZON_FFR_TVT | PRINCEEMMANUEL | 2025-03-24T16:36:02Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-17T18:16:28Z | 0 | ---
dataset_info:
features:
- name: Text_CL
dtype: string
- name: Summary_CL
dtype: string
- name: Label_CL
dtype: int64
splits:
- name: train
num_bytes: 1987993.7
num_examples: 7000
- name: test
num_bytes: 425998.65
num_examples: 1500
- name: valid
num_bytes: 425998.65
num_examples: 1500
download_size: 1681337
dataset_size: 2839991.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
Samll/math_early_answering_data_v2 | Samll | 2025-01-20T11:08:51Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-20T10:58:53Z | 0 | ---
dataset_info:
features:
- name: Original_Prompt
dtype: string
- name: Original_COT
dtype: string
- name: Original_CoT_answer
dtype: string
- name: Truncated_Input
dtype: string
- name: Truncated_Input_response
dtype: string
- name: Truncated_Input_extracted_answer
dtype: string
- name: Truncated_CoT_length
dtype: int64
- name: Correct_Answer
dtype: string
splits:
- name: train
num_bytes: 114180418
num_examples: 27167
download_size: 6711388
dataset_size: 114180418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AmarHelio/record-test19 | AmarHelio | 2025-06-15T05:48:10Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-15T05:47:19Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 10,
"total_frames": 3778,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
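The `data_path` and `video_path` entries in `info.json` are Python format-string templates with zero-padded fields. A sketch of resolving them for a given episode, assuming `chunks_size` episodes per chunk as the metadata states:

```python
# Templates copied from the info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
chunks_size = 1000

def resolve_paths(episode_index, video_key="observation.images.front"):
    """Map an episode index to its parquet and video file paths."""
    chunk = episode_index // chunks_size
    return (
        data_path.format(episode_chunk=chunk, episode_index=episode_index),
        video_path.format(episode_chunk=chunk, video_key=video_key,
                          episode_index=episode_index),
    )

d, v = resolve_paths(7)
print(d)  # data/chunk-000/episode_000007.parquet
print(v)  # videos/chunk-000/observation.images.front/episode_000007.mp4
```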
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
infinite-dataset-hub/OvarianUltrasoundFeatureExtraction | infinite-dataset-hub | 2024-11-16T10:38:43Z | 14 | 1 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2024-11-16T10:38:42Z | 0 | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# OvarianUltrasoundFeatureExtraction
tags: Regression, Feature Learning, Gynecological Imaging
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'OvarianUltrasoundFeatureExtraction' dataset comprises high-resolution ultrasound images of the ovaries from various gynecological imaging sources. Each image is annotated with key features and corresponding labels, such as follicular cysts, corpus luteum, and endometriomas. This dataset serves as a basis for regression and feature learning models aimed at assisting radiologists and machine learning practitioners in the diagnosis and treatment planning for ovarian pathologies. The annotations are performed by expert radiologists and are used for supervised learning tasks in gynecological healthcare.
**CSV Content Preview:**
```
ImageID,ImageURL,Labels,Feature1,Feature2,Feature3
Ultrasound001,http://example.com/images/ultrasound001.jpg,["FollicularCyst"],3.2,14.8,0.5
Ultrasound002,http://example.com/images/ultrasound002.jpg,["CorpusLuteum"],5.1,12.3,0.2
Ultrasound003,http://example.com/images/ultrasound003.jpg,["Endometrioma"],4.6,13.5,0.8
Ultrasound004,http://example.com/images/ultrasound004.jpg,["Normal"],2.9,10.0,0.1
Ultrasound005,http://example.com/images/ultrasound005.jpg,["PolycysticOvary"],6.2,15.2,0.3
```
In this dataset, the 'ImageURL' column links to the ultrasound images, the 'Labels' column contains the list of identified features, and the 'Feature1', 'Feature2', and 'Feature3' columns represent extracted features from the images that could be used for machine learning tasks. The feature values are indicative of certain characteristics of the ultrasound images, such as the size and number of cysts, which are important for classification and regression tasks in ovarian ultrasound analysis.
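The `Labels` column stores a stringified Python list, which can be parsed back with `ast.literal_eval`. A sketch using only the preview rows above (truncated to two rows for brevity):

```python
import ast
import csv
import io

# CSV content mirroring the preview above.
csv_text = """ImageID,ImageURL,Labels,Feature1,Feature2,Feature3
Ultrasound001,http://example.com/images/ultrasound001.jpg,["FollicularCyst"],3.2,14.8,0.5
Ultrasound002,http://example.com/images/ultrasound002.jpg,["CorpusLuteum"],5.1,12.3,0.2
"""

rows = []
for row in csv.DictReader(io.StringIO(csv_text)):
    row["Labels"] = ast.literal_eval(row["Labels"])  # '["FollicularCyst"]' -> list
    for col in ("Feature1", "Feature2", "Feature3"):
        row[col] = float(row[col])
    rows.append(row)

print(rows[0]["Labels"])  # ['FollicularCyst']
```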
**Source of the data:**
The dataset was generated with the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct from the query 'ultrasound':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=ultrasound&dataset=OvarianUltrasoundFeatureExtraction&tags=Regression,+Feature+Learning,+Gynecological+Imaging
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
omercann/twinup | omercann | 2024-12-30T14:20:55Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-30T14:20:27Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 3242.75
num_examples: 7
- name: test
num_bytes: 484
num_examples: 1
download_size: 11210
dataset_size: 3726.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
obiwan96/obiwan96owm_raw_v3__100000_120000 | obiwan96 | 2025-02-26T19:39:10Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-26T15:16:21Z | 0 | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
splits:
- name: train
num_bytes: 210328396
num_examples: 20000
download_size: 95116250
dataset_size: 210328396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ashraq/fashion-product-images-small | ashraq | 2022-11-01T20:25:52Z | 1,194 | 31 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-11-01T20:22:50Z | 1 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: gender
dtype: string
- name: masterCategory
dtype: string
- name: subCategory
dtype: string
- name: articleType
dtype: string
- name: baseColour
dtype: string
- name: season
dtype: string
- name: year
dtype: float64
- name: usage
dtype: string
- name: productDisplayName
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 546202015.44
num_examples: 44072
download_size: 271496441
dataset_size: 546202015.44
---
# Dataset Card for "fashion-product-images-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-small) |
FlippyDora/numia_prompt_reward_iter1_0-2000 | FlippyDora | 2025-03-11T16:23:34Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T16:23:33Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: responses
sequence: string
- name: gt
dtype: string
- name: problem
dtype: string
- name: rewards
sequence: float64
splits:
- name: train
num_bytes: 28393319
num_examples: 2000
download_size: 10490498
dataset_size: 28393319
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hhhFuture/class_stand | hhhFuture | 2024-12-04T06:50:17Z | 15 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-12-04T06:50:17Z | 0 | ---
license: apache-2.0
---
|
facebook/feint6k | facebook | 2024-10-09T21:07:02Z | 16 | 0 | [
"license:cc-by-nc-4.0",
"region:us"
] | [] | 2024-10-09T21:07:02Z | 0 | ---
license: cc-by-nc-4.0
---
|