datasetId (large_string, lengths 6–116) | author (large_string, lengths 2–42) | last_modified (large_string date, 2021-04-29 15:34:29 – 2025-06-25 02:40:10) | downloads (int64, 0–3.97M) | likes (int64, 0–7.74k) | tags (large list, lengths 1–7.92k) | task_categories (large list, lengths 0–48) | createdAt (large_string date, 2022-03-02 23:29:22 – 2025-06-25 00:32:52) | trending_score (float64, 0–64) | card (large_string, lengths 31–1.01M)
---|---|---|---|---|---|---|---|---|---|
kangin/test_data | kangin | 2024-10-11T10:39:58Z | 21 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-11T10:39:47Z | 0 | ---
dataset_info:
features:
- name: QUESTION
dtype: string
- name: ANSWER
dtype: string
splits:
- name: train
num_bytes: 87269
num_examples: 430
download_size: 40593
dataset_size: 87269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/stackexchange_avp | mlfoundations-dev | 2024-12-23T17:34:04Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-11T08:04:37Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 136780748
num_examples: 20396
download_size: 71098919
dataset_size: 136780748
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_humaneval_qwen2.5-3b_t0.1_n8_tests_humaneval_qwen3-1.7b_t0.7_n1 | test-gen | 2025-05-16T15:26:35Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-16T15:26:33Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 2296379
num_examples: 164
download_size: 394677
dataset_size: 2296379
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
alea-institute/kl3m-data-pacer-ca7 | alea-institute | 2025-04-11T01:55:19Z | 9 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.07854",
"arxiv:2503.17247",
"region:us"
] | [] | 2025-02-08T04:08:16Z | 0 | ---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 401085889
num_examples: 6207
download_size: 68052585
dataset_size: 401085889
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# KL3M Data Project
> **Note**: This page provides general information about the KL3M Data Project. Additional details specific to this dataset will be added in future updates. For complete information, please visit the [GitHub repository](https://github.com/alea-institute/kl3m-data) or refer to the [KL3M Data Project paper](https://arxiv.org/abs/2504.07854).
## Description
This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models.
## Dataset Details
- **Format**: Parquet files containing document text and metadata
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents
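As a quick check of the `tokens` field described above, the pre-tokenized IDs can be decoded back to text. A minimal sketch, assuming the tokenizer loads through `transformers.AutoTokenizer` and the dataset loads directly with `datasets.load_dataset`:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the KL3M tokenizer and this dataset's train split
tokenizer = AutoTokenizer.from_pretrained("alea-institute/kl3m-004-128k-cased")
dataset = load_dataset("alea-institute/kl3m-data-pacer-ca7", split="train")

# Decode the pre-tokenized representation of the first document back to text
example = dataset[0]
print(example["identifier"], example["mime_type"])
print(tokenizer.decode(example["tokens"])[:500])
```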
## Abstract
Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract. This creates potential risk for users and developers due to this uncertain legal status. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright or breach of contract.
The foundation of this project is a corpus of over 132 million documents and trillions of tokens spanning 16 different sources that have been verified to meet the strict copyright and licensing protocol detailed in the project. We are releasing the entire pipeline, including:
1. The source code to acquire and process these documents
2. The original document formats with associated provenance and metadata
3. Extracted content in a standardized format
4. Pre-tokenized representations of the documents
5. Various mid- and post-train resources such as question-answer, summarization, conversion, drafting, classification, prediction, and conversational data
All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models.
## Legal Basis
This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundation:
- Public domain materials
- US government works
- Open access content under permissive licenses
- Content explicitly licensed for AI training
## Papers
For more information about the KL3M Data Project, please refer to:
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854)
- [KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications](https://arxiv.org/abs/2503.17247)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{bommarito2025kl3mdata,
title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
year={2025},
eprint={2504.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{bommarito2025kl3m,
title={KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications},
author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
year={2025},
eprint={2503.17247},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## About ALEA
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at [https://aleainstitute.ai/](https://aleainstitute.ai/). |
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-sl-unrevised_NoQuant_32_16_0.01_64_BestF1 | ferrazzipietro | 2024-11-25T11:29:19Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-25T11:29:17Z | 0 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 149854
num_examples: 101
- name: test
num_bytes: 1063090
num_examples: 654
download_size: 242559
dataset_size: 1212944
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
michsethowusu/twi-speech-text-parallel-synthetic-1m-part003 | michsethowusu | 2025-06-15T15:17:29Z | 0 | 0 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_ids:keyword-spotting",
"multilinguality:monolingual",
"language:aka",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"speech",
"aka",
"twi",
"ghana",
"african-languages",
"low-resource",
"parallel-corpus",
"synthetic-data",
"largest-twi-dataset"
] | [
"automatic-speech-recognition",
"text-to-speech"
] | 2025-06-15T13:28:04Z | 0 | ---
language:
- aka
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
task_ids:
- keyword-spotting
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
modalities:
- audio
- text
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
config_name: default
splits:
- name: train
num_bytes: 0
num_examples: {len(data)}
download_size: 0
dataset_size: 0
tags:
- speech
- aka
- twi
- ghana
- african-languages
- low-resource
- parallel-corpus
- synthetic-data
- largest-twi-dataset
pretty_name: Twi Speech-Text Parallel Dataset - Part 3 of 5
---
# Twi Speech-Text Parallel Dataset - Part 3 of 5
## 🎉 The Largest Speech Dataset for Twi Language
This dataset contains **part 3 of the largest speech dataset for the Twi language**, featuring **1 million speech-to-text pairs** split across 5 parts (approximately 200,000 samples each). This represents a groundbreaking resource for Twi (Akan), a language spoken primarily in Ghana.
### 🚀 Breaking the Low-Resource Language Barrier
This publication demonstrates that **African languages don't have to remain low-resource**. Through creative synthetic data generation techniques, we've produced the largest collection of AI training data for speech-to-text models in Twi, proving that innovative approaches can build the datasets African languages need.
## 📊 Complete Dataset Series (1M Total Samples)
| Part | Repository | Samples | Status |
|------|------------|---------|--------|
| Part 1 | `michsethowusu/twi-speech-text-parallel-synthetic-1m-part001` | ~200,000 | ✅ Available |
| Part 2 | `michsethowusu/twi-speech-text-parallel-synthetic-1m-part002` | ~200,000 | ✅ Available |
| **Part 3** | `michsethowusu/twi-speech-text-parallel-synthetic-1m-part003` | ~200,000 | **🔥 THIS PART** |
| Part 4 | `michsethowusu/twi-speech-text-parallel-synthetic-1m-part004` | ~200,000 | ✅ Available |
| Part 5 | `michsethowusu/twi-speech-text-parallel-synthetic-1m-part005` | ~200,000 | ✅ Available |
### Dataset Summary
- **Language**: Twi/Akan - `aka`
- **Total Dataset Size**: 1,000,000 speech-text pairs
- **This Part**: {len(data):,} audio files (filtered, >1KB)
- **Task**: Speech Recognition, Text-to-Speech
- **Format**: WAV audio files with corresponding text transcriptions
- **Generation Method**: Synthetic data generation
- **Modalities**: Audio + Text
## 🎯 Supported Tasks
- **Automatic Speech Recognition (ASR)**: Train models to convert Twi speech to text
- **Text-to-Speech (TTS)**: Use parallel data for TTS model development
- **Speech-to-Speech Translation**: Cross-lingual speech applications
- **Keyword Spotting**: Identify specific Twi words in audio
- **Phonetic Analysis**: Study Twi pronunciation patterns
- **Language Model Training**: Large-scale Twi language understanding
## 📁 Dataset Structure
### Data Fields
- `audio`: Audio file in WAV format (synthetically generated)
- `text`: Corresponding text transcription in Twi
### Data Splits
This part contains a single training split with {len(data):,} filtered audio-text pairs (small/corrupted files removed).
### Loading the Complete Dataset
```python
from datasets import load_dataset, concatenate_datasets
# Load all parts of the dataset
parts = []
for i in range(1, 6):
part_name = f"michsethowusu/twi-speech-text-parallel-synthetic-1m-part{i:03d}"
part = load_dataset(part_name, split="train")
parts.append(part)
# Combine all parts into one dataset
complete_dataset = concatenate_datasets(parts)
print(f"Complete dataset size: {{len(complete_dataset):,}} samples")
```
### Loading Just This Part
```python
from datasets import load_dataset
# Load only this part
dataset = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part003", split="train")
print(f"Part 3 dataset size: {{len(dataset):,}} samples")
```
## 🛠️ Dataset Creation
### Methodology
This dataset was created using **synthetic data generation techniques**, specifically designed to overcome the challenge of limited speech resources for African languages. The approach demonstrates how AI can be used to bootstrap language resources for underrepresented languages.
### Data Processing Pipeline
1. **Text Generation**: Synthetic Twi sentences generated
2. **Speech Synthesis**: Text-to-speech conversion using advanced models
3. **Quality Filtering**: Files smaller than 1KB removed to ensure quality
4. **Alignment Verification**: Audio-text alignment validated
5. **Format Standardization**: Consistent WAV format and text encoding
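A minimal sketch of the size-based quality filter from step 3, assuming a local directory of generated WAV files (the directory name here is hypothetical):
```python
from pathlib import Path

MIN_BYTES = 1024  # files smaller than 1KB are treated as corrupted/empty

# Hypothetical local layout: one flat directory of generated WAV files
wav_dir = Path("generated_wavs")
kept, dropped = [], []
for wav in sorted(wav_dir.glob("*.wav")):
    (kept if wav.stat().st_size >= MIN_BYTES else dropped).append(wav)

print(f"kept {len(kept)} files, dropped {len(dropped)} below {MIN_BYTES} bytes")
```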
### Technical Details
- **Audio Format**: WAV files, various sample rates
- **Text Encoding**: UTF-8
- **Language Code**: `aka` (ISO 639-3)
- **Filtering**: Minimum file size 1KB to remove corrupted/empty files
## 🌍 Impact and Applications
### Breaking Language Barriers
This dataset represents a paradigm shift in how we approach low-resource African languages:
- **Scalability**: Proves synthetic generation can create large datasets
- **Accessibility**: Makes Twi ASR/TTS development feasible
- **Innovation**: Demonstrates creative solutions for language preservation
- **Reproducibility**: Methodology can be applied to other African languages
### Use Cases
- **Educational Technology**: Twi language learning applications
- **Accessibility**: Voice interfaces for Twi speakers
- **Cultural Preservation**: Digital archiving of Twi speech patterns
- **Research**: Phonetic and linguistic studies of Twi
- **Commercial Applications**: Voice assistants for Ghanaian markets
## ⚠️ Considerations for Using the Data
### Social Impact
**Positive Impact:**
- Advances language technology for underrepresented communities
- Supports digital inclusion for Twi speakers
- Contributes to cultural and linguistic preservation
- Enables development of Twi-language AI applications
### Limitations and Biases
- **Synthetic Nature**: Generated data may not capture all nuances of natural speech
- **Dialect Coverage**: May not represent all regional Twi dialects equally
- **Speaker Diversity**: Limited to synthesis model characteristics
- **Domain Coverage**: Vocabulary limited to training data scope
- **Audio Quality**: Varies across synthetic generation process
### Ethical Considerations
- Data created with respect for Twi language and culture
- Intended to support, not replace, natural language preservation efforts
- Users should complement with natural speech data when possible
## 📚 Technical Specifications
### Audio Specifications
- **Format**: WAV
- **Channels**: Mono
- **Sample Rate**: 16kHz
- **Bit Depth**: 16-bit
- **Duration**: Variable per sample
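A minimal sketch of decoding samples at the documented 16 kHz rate; streaming is used here only to avoid downloading the full ~200k-sample part up front:
```python
from datasets import load_dataset, Audio

# Stream this part rather than downloading everything at once
ds = load_dataset(
    "michsethowusu/twi-speech-text-parallel-synthetic-1m-part003",
    split="train", streaming=True,
)
ds = ds.cast_column("audio", Audio(sampling_rate=16000))  # decode at 16 kHz

sample = next(iter(ds))
print(sample["text"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```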
### Quality Assurance
- Minimum file size: 1KB (corrupted files filtered)
- Text-audio alignment verified
- UTF-8 encoding validation
- Duplicate removal across parts
## 📄 License and Usage
### Licensing Information
This dataset is released under the **Creative Commons Attribution 4.0 International License (CC BY 4.0)**.
**You are free to:**
- Share: Copy and redistribute the material
- Adapt: Remix, transform, and build upon the material
- Commercial use: Use for commercial purposes
**Under the following terms:**
- Attribution: Give appropriate credit and indicate if changes were made
## 🙏 Acknowledgments
- **Original Audio Production**: The Ghana Institute of Linguistics, Literacy and Bible Translation in partnership with Davar Partners
- **Audio Processing**: MMS-300M-1130 Forced Aligner
- **Synthetic Generation**: Advanced text-to-speech synthesis pipeline
- **Community**: Twi language speakers and researchers who inspire this work
## 📖 Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{twi_speech_parallel_1m_2025,
  title={Twi Speech-Text Parallel Dataset: The Largest Speech Dataset for Twi Language},
  author={Owusu, Michael Seth},
  year={2025},
  publisher={Hugging Face},
  note={1 Million synthetic speech-text pairs across 5 parts},
  url={https://huggingface.co/datasets/michsethowusu/twi-speech-text-parallel-synthetic-1m-part003}
}
```
For the complete dataset series:
```bibtex
@dataset{twi_speech_complete_series_2025,
  title={Complete Twi Speech-Text Parallel Dataset Series (1M samples)},
  author={Owusu, Michael Seth},
  year={2025},
  publisher={Hugging Face},
  note={Parts 001-005, approximately 200,000 samples each},
  url={https://huggingface.co/michsethowusu}
}
```
## 📞 Contact and Support
- **Repository Issues**: Open an issue in this dataset repository
- **General Questions**: Contact through Hugging Face profile
- **Collaboration**: Open to partnerships for African language AI development
## 🔗 Related Resources
- [Complete Dataset Series](https://huggingface.co/michsethowusu)
- [Twi Language Resources](https://huggingface.co/models?language=aka)
---
**🌟 Star this dataset if it helps your research!**
**🔄 Share to support African language AI development!**
""" |
yujunzhou/LabSafety_Bench | yujunzhou | 2025-06-08T08:50:37Z | 152 | 2 | [
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2410.14182",
"region:us",
"chemistry",
"biology",
"synthetic",
"physics",
"lab-safety"
] | [
"question-answering"
] | 2024-10-19T02:13:46Z | 0 | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- question-answering
tags:
- chemistry
- biology
- synthetic
- physics
- lab-safety
dataset_info:
features:
- name: Question
dtype: string
- name: Explanation
dtype: string
- name: Correct Answer
dtype: string
- name: Category
sequence: string
- name: Topic
dtype: string
- name: Level
dtype: string
- name: Decoded Image
dtype: image
- name: Image Path
dtype: string
- name: Scenario
dtype: string
- name: LabSafety_Related_Issues
struct:
- name: Most_Common_Hazards
sequence: string
- name: Improper_Operation_Issues
sequence: string
- name: Negative_Lab_Environment_Impacts
sequence: string
- name: Most_Likely_Safety_Incidents
sequence: string
- name: SubCategory
dtype: string
- name: Decisions
sequence:
struct:
- name: Decision
dtype: string
- name: Consequence
dtype: string
- name: Subject
dtype: string
splits:
- name: QA
num_bytes: 958038
num_examples: 632
- name: QA_I
num_bytes: 19003036
num_examples: 133
- name: sampledQA
num_bytes: 120743
num_examples: 80
- name: sampledQA_I
num_bytes: 1597973
num_examples: 20
- name: scenario
num_bytes: 1718214
num_examples: 404
download_size: 13591675
dataset_size: 23398004
configs:
- config_name: MCQ
data_files:
- split: QA
path: data/QA-*
- split: QA_I
path: data/QA_I-*
- split: sampledQA
path: data/sampledQA-*
- split: sampledQA_I
path: data/sampledQA_I-*
- config_name: scenario
data_files:
- split: scenario
path: data/scenarios*
---
# LabSafety Bench: Benchmarking LLMs on Safety Issues in Scientific Labs
## Dataset Description
LabSafety Bench is a comprehensive evaluation framework designed to rigorously assess the trustworthiness of large language models in laboratory settings. The benchmark includes two main evaluation components:
- **Multiple-Choice Questions (MCQs):**
A set of 765 questions derived from authoritative lab safety protocols, including 632 text-only questions and 133 multimodal questions. These questions enable standard evaluation of language models and vision-language models in handling lab safety issues.
- **Real-World Scenario Evaluations:**
A collection of 404 realistic laboratory scenarios that yield a total of 3128 open-ended questions. These scenarios are organized into two tests:
- **Hazards Identification Test:** Models identify all potential hazards in a given scenario.
- **Consequence Identification Test:** Models predict the outcomes of executing specific hazardous actions.
This dual-component design provides a multifaceted evaluation of model performance in both structured multiple-choice and open-ended, real-world safety contexts.
## Paper Information
- **Paper:** [https://arxiv.org/abs/2410.14182](https://arxiv.org/abs/2410.14182)
- **Code:** [https://github.com/YujunZhou/LabSafety-Bench](https://github.com/YujunZhou/LabSafety-Bench)
- **Project:** [https://yujunzhou.github.io/LabSafetyBench.github.io/](https://yujunzhou.github.io/LabSafetyBench.github.io/)
## Available Configurations
This dataset is published with two configurations to cater to different evaluation needs:
- **MCQ:**
Contains only the QA-related splits. Available splits include:
- **QA:** 632 text-only examples.
- **QA_I:** 133 multimodal examples.
- **sampledQA:** 80 text-only examples for human evaluation or validation.
- **sampledQA_I:** 20 multimodal examples for human evaluation or validation.
- **scenario:**
Contains only the scenario-related split, which includes additional fields (e.g., "Scenario", "LabSafety_Related_Issues", "SubCategory", and "Decisions") to capture detailed safety issue information.
- **scenario:** 404 examples with scenario-specific data.
When loading the dataset using the Hugging Face Datasets library, specify the configuration with the `name` parameter:
```python
from datasets import load_dataset
# Load MCQ configuration
MCQ_dataset = load_dataset("yujunzhou/LabSafety_Bench", name="MCQ")
# Load scenario configuration
scenario_dataset = load_dataset("yujunzhou/LabSafety_Bench", name="scenario")
```
## Dataset Usage
### Data Downloading
For the **MCQ** configuration, the data examples are divided into four splits:
- **QA:** 632 text-only examples.
- **QA_I:** 133 multimodal examples.
- **sampledQA:** 80 text-only examples.
- **sampledQA_I:** 20 multimodal examples.
For the **scenario** configuration, the dataset contains a single split:
- **scenario:** 404 examples with detailed scenario-specific fields.
Download the dataset as follows:
```python
from datasets import load_dataset
# Load all MCQ splits (MCQ configuration)
MCQ_dataset = load_dataset("yujunzhou/LabSafety_Bench", name="MCQ")
# Or load a specific split
QA_split = load_dataset("yujunzhou/LabSafety_Bench", name="MCQ", split="QA")
# Load scenario configuration
scenario_dataset = load_dataset("yujunzhou/LabSafety_Bench", name="scenario", split="scenario")
```
### Data Format
For the **MCQ** configuration, each data item is a dictionary with the following keys:
```json
{
"Question": "A multiple-choice question with four options",
"Explanation": "An explanation why the correct answer is correct and why the other options are incorrect",
"Correct Answer": "A single option from 'A', 'B', 'C', or 'D'",
"Category": ["List", "of", "categories"],
"Topic": "A brief description of the hazardous substance or equipment",
"Level": "Easy or Hard",
"Image Path": "Path to the image (None for text-only questions)",
"Decoded Image": "The actual image (for multimodal questions)"
}
```
For the **scenario** configuration, each example includes additional keys:
```json
{
"Scenario": "A detailed description of a lab safety scenario",
"LabSafety_Related_Issues": {
"Most_Common_Hazards": ["List", "of", "hazards"],
"Improper_Operation_Issues": ["List", "of", "issues"],
"Negative_Lab_Environment_Impacts": ["List", "of", "impacts"],
"Most_Likely_Safety_Incidents": ["List", "of", "incidents"]
},
"Topic": "A brief description of the hazardous substance or equipment",
"SubCategory": "A subcategory label",
"Decisions": [
{
"Decision": "A decision description",
"Consequence": "The consequence of that decision"
},
...
  ],
  "Subject": "A subject label"
}
```
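A minimal sketch of reading these fields after loading, using the configurations and field names shown above:
```python
from datasets import load_dataset

# Load one MCQ example and one scenario example
qa = load_dataset("yujunzhou/LabSafety_Bench", name="MCQ", split="QA")
scenario = load_dataset("yujunzhou/LabSafety_Bench", name="scenario", split="scenario")

ex = qa[0]
print(ex["Question"], "->", ex["Correct Answer"])

sc = scenario[0]
print(sc["Topic"])
print(sc["LabSafety_Related_Issues"]["Most_Common_Hazards"])
for decision in sc["Decisions"]:
    print("-", decision["Decision"], "=>", decision["Consequence"])
```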
## Model Evaluation
To evaluate a model on the dataset, please refer to our [GitHub repository](https://github.com/YujunZhou/LabSafety-Bench).
## Disclaimer
This dataset is designed to evaluate the safety awareness of large language models (LLMs) in scientific laboratory environments. While every effort has been made to ensure the questions and scenarios cover common safety concerns, they are not exhaustive. Model performance on this dataset should not be interpreted as a guarantee of real-world safety.
Users are responsible for independently verifying the dataset content before applying it in any critical settings. The creators and affiliated institutions assume no liability for any direct or indirect consequences arising from its use.
## How to Cite
If you use this dataset in your research or projects, please cite it as follows:
```
@misc{zhou2024labsafetybenchbenchmarkingllms,
title={LabSafety Bench: Benchmarking LLMs on Safety Issues in Scientific Labs},
author={Yujun Zhou and Jingdong Yang and Kehan Guo and Pin-Yu Chen and Tian Gao and Werner Geyer and Nuno Moniz and Nitesh V Chawla and Xiangliang Zhang},
year={2024},
eprint={2410.14182},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.14182},
}
```
|
gustavkeppler/so101_test | gustavkeppler | 2025-05-09T13:59:03Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-05-09T12:49:01Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 2,
"total_frames": 1786,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
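The `data_path` and `video_path` templates above resolve per episode. A minimal sketch of that resolution, assuming chunk assignment is `episode_index // chunks_size` (an assumption based on the template names, not confirmed by this card):
```python
# Resolve parquet/video paths for an episode, per the templates in meta/info.json
chunks_size = 1000

def episode_paths(episode_index: int, video_key: str = "observation.images.wrist"):
    episode_chunk = episode_index // chunks_size  # assumed chunking rule
    data = f"data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
    video = f"videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
    return data, video

print(episode_paths(0))
# ('data/chunk-000/episode_000000.parquet',
#  'videos/chunk-000/observation.images.wrist/episode_000000.mp4')
```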
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
MatteoKhan/vietnamese | MatteoKhan | 2025-03-03T17:33:37Z | 18 | 0 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:token-classification",
"language:vi",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"vietnam",
"textile",
"airline",
"banking",
"energy",
"dataset",
"information extraction"
] | [
"question-answering",
"text-classification",
"token-classification"
] | 2025-03-03T16:20:57Z | 0 | ---
task_categories:
- question-answering
- text-classification
- token-classification
size_categories:
- n<1K
language:
- vi
- en
tags:
- vietnam
- textile
- airline
- banking
- energy
- dataset
- information extraction
pretty_name: Vietnamese Industries Insights
license: cc-by-4.0
---
<div align="center">
<img src="./1730467754863.jpeg" alt="Matteo Khan" width="200" style="border-radius: 50%; margin-bottom: 20px;">
<h1>Vietnamese Industries Insights</h1>
<a href="https://tw3partners.fr/fr/accueil/" target="_blank">
<img src="https://img.shields.io/badge/TW3_Partners-Visit_Website-2ea44f" alt="TW3 Partners" style="margin-bottom: 20px;">
</a>
</div>
## About Me
I'm **Matteo Khan**, a computer science apprentice at [TW3 Partners](https://tw3partners.fr/fr/accueil/), specializing in Generative AI and NLP. My focus is on creating datasets that improve AI's ability to process complex technical documents.
You can connect with me on LinkedIn: [Matteo Khan](https://www.linkedin.com/in/matteo-khan-a10309263/)
## Dataset Details
### Purpose / Mục Đích
<table>
<tr>
<td><strong>Tiếng Việt:</strong></td>
<td>Bộ dữ liệu này được tạo ra nhằm cung cấp cái nhìn tổng quan về các ngành công nghiệp chủ chốt của Việt Nam, bao gồm dệt may, hàng không, ngân hàng và năng lượng. Nó hỗ trợ phân tích thị trường, dự báo xu hướng kinh tế và nghiên cứu chính sách công.</td>
</tr>
<tr>
<td><strong>English:</strong></td>
<td>This dataset is designed to offer comprehensive insights into key industrial sectors in Vietnam, including textiles, aviation, banking, and energy. It supports market analysis, economic forecasting, and policy research.</td>
</tr>
</table>
### Source Data / Nguồn Dữ Liệu
<table>
<tr>
<td><strong>Tiếng Việt:</strong></td>
<td>Dữ liệu được thu thập từ nhiều nguồn đáng tin cậy như báo cáo chính thức, thống kê của chính phủ, nghiên cứu thị trường và các bài báo chuyên ngành. Mỗi nguồn cung cấp các chỉ số và thông tin quan trọng liên quan đến từng ngành.</td>
</tr>
<tr>
<td><strong>English:</strong></td>
<td>The data is compiled from reliable sources such as official reports, government statistics, market research studies, and specialized articles. Each source provides key metrics and critical insights related to each sector.</td>
</tr>
</table>
### Data Processing / Xử Lý Dữ Liệu
<table>
<tr>
<td><strong>Tiếng Việt:</strong></td>
<td>
<ul>
<li><strong>Hỗ trợ Ngôn Ngữ:</strong> Dữ liệu bao gồm nội dung bằng tiếng Việt và tiếng Anh nhằm phục vụ đối tượng toàn cầu.</li>
<li><strong>Phân Loại Ngành:</strong>
<ul>
<li><strong>Dệt may:</strong> Sản lượng, xuất khẩu, số liệu lao động.</li>
<li><strong>Hàng không:</strong> Dữ liệu về chuyến bay, đội bay, hiệu suất hoạt động.</li>
<li><strong>Ngân hàng:</strong> Báo cáo tài chính, chỉ số hoạt động, quy định.</li>
<li><strong>Năng lượng:</strong> Sản xuất, tiêu thụ, phân phối năng lượng.</li>
</ul>
</li>
<li><strong>Chuẩn hóa Dữ Liệu:</strong> Các nguồn dữ liệu được xử lý và chuẩn hóa thành định dạng chung để dễ dàng phân tích và tích hợp.</li>
</ul>
</td>
</tr>
<tr>
<td><strong>English:</strong></td>
<td>
<ul>
<li><strong>Language Support:</strong> The dataset includes content in both Vietnamese and English to cater to a global audience.</li>
<li><strong>Sector Classification:</strong>
<ul>
<li><strong>Textiles:</strong> Production figures, export data, and workforce statistics.</li>
<li><strong>Aviation:</strong> Flight information, fleet details, and operational metrics.</li>
<li><strong>Banking:</strong> Financial reports, performance indicators, and regulatory guidelines.</li>
<li><strong>Energy:</strong> Production, consumption, and distribution data.</li>
</ul>
</li>
<li><strong>Data Standardization:</strong> Data from various sources is processed and normalized into a unified format for seamless analysis and integration.</li>
</ul>
</td>
</tr>
</table>
### Data Format / Định Dạng Dữ Liệu
<table>
<tr>
<td><strong>Tiếng Việt:</strong></td>
<td>Bộ dữ liệu được cung cấp ở nhiều định dạng, bao gồm các file CSV riêng cho từng ngành và file parquet tổng hợp, phục vụ cho việc học máy đa mô hình.</td>
</tr>
<tr>
<td><strong>English:</strong></td>
<td>The dataset is available in multiple formats, including individual CSV files for each sector and a consolidated parquet file for multimodal machine learning applications.</td>
</tr>
</table>
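A minimal loading sketch for the consolidated parquet file described above; the `train` split name is an assumption (it is the usual default for parquet-backed cards):
```python
from datasets import load_dataset

# Load the consolidated parquet version from the Hub (split name assumed)
dataset = load_dataset("MatteoKhan/vietnamese", split="train")
print(dataset)
```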
### Dataset Usage / Ứng Dụng Bộ Dữ Liệu
<table>
<tr>
<td><strong>Tiếng Việt:</strong></td>
<td>Bộ dữ liệu này lý tưởng cho việc phát triển các mô hình AI trong phân tích ngành công nghiệp, dự báo kinh tế và nghiên cứu chính sách. Nó cũng hỗ trợ việc đào tạo các hệ thống xử lý ngôn ngữ tự nhiên và trích xuất thông tin.</td>
</tr>
<tr>
<td><strong>English:</strong></td>
<td>This dataset is ideal for developing AI models focused on industry analysis, economic forecasting, and policy research. It also supports training natural language processing and information extraction systems.</td>
</tr>
</table> |
xxizhouu/red_teaming_qa | xxizhouu | 2024-11-06T15:05:32Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-06T15:04:30Z | 0 | ---
dataset_info:
features:
- name: user_input
dtype: string
- name: response
dtype: string
splits:
- name: 10_sample
num_bytes: 1617.3
num_examples: 9
download_size: 3422
dataset_size: 1617.3
configs:
- config_name: default
data_files:
- split: 10_sample
path: data/10_sample-*
---
|
frnka/dmp-qa-with-context | frnka | 2025-04-28T21:12:27Z | 28 | 0 | [
"language:en",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/5270",
"region:us"
] | [] | 2024-12-28T18:30:05Z | 0 | ---
dataset_info:
features:
- name: file
dtype: string
- name: section
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: hash
dtype: string
- name: context
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 80866410
num_examples: 17357
download_size: 38490547
dataset_size: 80866410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
pretty_name: Data management plan QAs with generated context
---
# Data management questions and answers with generated context
Questions and answers from [dmp-qa](https://huggingface.co/datasets/frnka/dmp-qa) with generated context
Attribution `Improved with Qwen` should be displayed when using this data for finetuning.
The combined length of the generated context and answer is around 700 tokens.
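A minimal loading sketch, using the field names from the frontmatter above:
```python
from datasets import load_dataset

dataset = load_dataset("frnka/dmp-qa-with-context", split="train")
ex = dataset[0]
print(ex["section"], "|", ex["question"])
print(ex["answer"][:200])
print(ex["context"][:200])
```
|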
AkitoP/MiSide-Japanese | AkitoP | 2025-01-01T21:37:44Z | 41 | 22 | [
"task_categories:text-to-speech",
"language:ja",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-to-speech"
] | 2025-01-01T15:44:41Z | 0 | ---
license: apache-2.0
task_categories:
- text-to-speech
language:
- ja
size_categories:
- 1K<n<10K
--- |
llm-jp/magpie-sft-v1.0 | llm-jp | 2024-11-13T18:54:02Z | 129 | 11 | [
"task_categories:text-generation",
"language:ja",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.08464",
"region:us"
] | [
"text-generation"
] | 2024-11-13T18:02:16Z | 0 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ja
size_categories:
- 100K<n<1M
---
# magpie-sft-v1.0
This repository provides an instruction-tuning dataset developed by LLM-jp, a collaborative project launched in Japan.
This is a dataset of instruction and response pairs created using the [Magpie](https://arxiv.org/abs/2406.08464) method.
[cyberagent/calm3-22b-chat](https://huggingface.co/cyberagent/calm3-22b-chat) was used for generating the instructions, and [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) was used for generating the responses.
## Send Questions to
llm-jp(at)nii.ac.jp
## Model Card Authors
The names are listed in alphabetical order.
Hirokazu Kiyomaru and Takashi Kodama. |
kaiwenw/nov2_aft_gpt4o_1.1 | kaiwenw | 2024-11-03T07:39:50Z | 23 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-03T07:39:48Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: chosen_score
dtype: float64
- name: rejected
dtype: string
- name: rejected_score
dtype: float64
splits:
- name: train
num_bytes: 26507779
num_examples: 3410
- name: validation
num_bytes: 1398384
num_examples: 183
download_size: 18743233
dataset_size: 27906163
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
HennersBro98/feedbackCollectionReferenceRunInput | HennersBro98 | 2024-10-28T13:50:33Z | 19 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-28T13:50:09Z | 0 | ---
dataset_info:
features:
- name: orig_criteria
dtype: string
- name: orig_score4_description
dtype: string
- name: orig_response
dtype: string
- name: orig_instruction
dtype: string
- name: orig_score
dtype: string
- name: orig_score1_description
dtype: string
- name: orig_feedback
dtype: string
- name: orig_score2_description
dtype: string
- name: orig_score3_description
dtype: string
- name: input
dtype: string
- name: orig_score5_description
dtype: string
- name: orig_reference_answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: split
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 998959366
num_examples: 100952
download_size: 487022951
dataset_size: 998959366
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
edwhu/eval_koch_test | edwhu | 2024-10-24T00:42:19Z | 21 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"tutorial",
"eval"
] | [
"robotics"
] | 2024-10-24T00:42:08Z | 0 | ---
task_categories:
- robotics
tags:
- LeRobot
- tutorial
- eval
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
BookingCare/PL2-TT13-2023-BYT-KCBTYC | BookingCare | 2025-01-10T08:19:29Z | 53 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-08T10:25:56Z | 0 | ---
dataset_info:
features:
- name: Tên dịch vụ
dtype: string
- name: Giá tối thiểu
dtype: float64
- name: Giá tối đa
dtype: float64
- name: Ghi chú
dtype: string
splits:
- name: train
num_bytes: 166346
num_examples: 1813
download_size: 74561
dataset_size: 166346
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Price framework for on-demand technical services and laboratory tests
Issued with Circular No. 13/2023/TT-BYT dated 29 June 2023 of the Ministry of Health
|
dgambettaphd/prompt_wxs_3000doc | dgambettaphd | 2025-04-29T00:00:06Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-28T23:59:59Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: synt
dtype: int64
- name: dataset
dtype: string
- name: id_doc
dtype: int64
- name: gen
dtype: int64
splits:
- name: train
num_bytes: 78008814
num_examples: 9000
download_size: 45071194
dataset_size: 78008814
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ramu143/guanaco-llama2-2k | Ramu143 | 2025-02-16T05:05:20Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-11T06:57:57Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3211457
num_examples: 2000
download_size: 1887235
dataset_size: 3211457
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
YufeiWeng/DaDaoZhengFeng-Dataset | YufeiWeng | 2025-02-06T08:51:32Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-06T08:40:01Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 22220728
num_examples: 2392
- name: test
num_bytes: 2183177
num_examples: 235
download_size: 15206091
dataset_size: 24403905
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
LocalDoc/Azerbaijani-sts13-sts | LocalDoc | 2024-10-27T03:36:09Z | 32 | 0 | [
"task_categories:sentence-similarity",
"language:az",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"sentence-similarity"
] | 2024-10-27T03:32:08Z | 0 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: scaled_score
dtype: float64
splits:
- name: train
num_bytes: 236280
num_examples: 1496
download_size: 131507
dataset_size: 236280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- sentence-similarity
language:
- az
size_categories:
- 1K<n<10K
--- |
cdactvm/kannada_new_data_v3 | cdactvm | 2024-12-19T07:43:06Z | 48 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-19T07:13:28Z | 0 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1050402724.38
num_examples: 41130
- name: valid
num_bytes: 58228085.09
num_examples: 2285
- name: test
num_bytes: 57446546.54
num_examples: 2285
download_size: 1147604152
dataset_size: 1166077356.01
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
TAUR-dev/solution-trees__short-and-wide_p3_batch44 | TAUR-dev | 2025-03-13T22:41:17Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-13T22:41:01Z | 0 | ---
dataset_info:
features:
- name: solution
dtype: string
- name: question
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: gemini_thinking_trajectory
dtype: string
- name: gemini_attempt
dtype: string
- name: deepseek_thinking_trajectory
dtype: string
- name: deepseek_attempt
dtype: string
- name: gemini_grade
dtype: string
- name: gemini_grade_reason
dtype: string
- name: deepseek_grade
dtype: string
- name: deepseek_grade_reason
dtype: string
- name: trees
dtype: string
splits:
- name: train
num_bytes: 675164309
num_examples: 1000
download_size: 134067660
dataset_size: 675164309
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
claudiogsc/emnist_balanced | claudiogsc | 2025-02-14T01:52:42Z | 59 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-13T19:47:06Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': 45
'1': 36
'2': 43
'3': 15
'4': 4
'5': 42
'6': 26
'7': 32
'8': 20
'9': 1
'10': 46
'11': 13
'12': 24
'13': 12
'14': 5
'15': 17
'16': 3
'17': 40
'18': 14
'19': 44
'20': 19
'21': 29
'22': 25
'23': 35
'24': 28
'25': 23
'26': 22
'27': 34
'28': 9
'29': 30
'30': 38
'31': 39
'32': 37
'33': 31
'34': 16
'35': 7
'36': 2
'37': 8
'38': 10
'39': 6
'40': 27
'41': 33
'42': 11
'43': 18
'44': 41
'45': 0
'46': 21
splits:
- name: train
num_bytes: 48067531.0
num_examples: 112800
- name: test
num_bytes: 8030973.0
num_examples: 18800
download_size: 53292150
dataset_size: 56098504.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
c-ho/ub_opus_docs_string_match_bll | c-ho | 2025-05-06T15:48:54Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T14:49:43Z | 0 | ---
dataset_info:
features:
- name: doc_id
dtype: string
- name: doc_title
dtype: string
- name: doc_lang
dtype: string
- name: doc_type
dtype: string
- name: doc_desc_list
sequence: string
- name: ddc
dtype: string
- name: doc_subject_list
sequence: string
- name: bll_match_id
sequence: string
- name: bll_match_literals
sequence:
sequence: string
- name: bll_superclasses
sequence:
sequence: string
- name: bll_superclass_literals
sequence:
sequence:
sequence: string
- name: bll_top_node
sequence: string
splits:
- name: train
num_bytes: 13291644
num_examples: 4476
download_size: 4495045
dataset_size: 13291644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_1_for_gen_15 | HungVu2003 | 2025-05-04T00:06:00Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T00:05:59Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2350942
num_examples: 12500
download_size: 1279479
dataset_size: 2350942
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen5_W_doc1000_synt64_SYN64 | dgambettaphd | 2025-03-29T11:45:01Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-29T11:44:57Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: TPP
dtype: float64
- name: MPP
dtype: float64
- name: FTP
dtype: float64
splits:
- name: train
num_bytes: 29186990
num_examples: 9000
download_size: 16887302
dataset_size: 29186990
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wheresmyhair/ultrachat_autoif_promptonly | wheresmyhair | 2025-04-09T09:34:11Z | 17 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-09T09:33:44Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 483495679.8472621
num_examples: 758953
download_size: 58482895
dataset_size: 483495679.8472621
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ultrachat_autoif_promptonly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ducalt/jcrrag | ducalt | 2025-05-21T02:33:45Z | 0 | 0 | [
"task_categories:question-answering",
"language:ja",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"jcrrag",
"japaneserag",
"llmrag",
"rageval",
"rag-evaluation"
] | [
"question-answering"
] | 2025-05-20T08:14:40Z | 0 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- ja
tags:
- jcrrag
- japaneserag
- llmrag
- rageval
- rag-evaluation
pretty_name: JCrRAG
size_categories:
- 10K<n<100K
---
# JCrRAG: LLM Japanese RAG performance evaluation
This is a benchmark for evaluating the Japanese RAG performance of LLMs.
The benchmark contains 20,000 data records.
Each record has the following format:
(Context, Question, GroundtruthAnswer)
where Context is the context passed to the LLM for RAG evaluation.
Evaluation script:
https://github.com/ducalt/jcrrageval
# JCrRAG Benchmark
A benchmark for evaluating the RAG performance of LLMs in Japanese.
It contains 20,000 (Context, Question, GroundtruthAnswer) triples.
When passing a record to an LLM, wrap it in a prompt like the following (kept in Japanese, as that is the evaluation language):
```
あなたはバーチャルアシスタントであり、提供された1つ以上の段落の情報に基づいて質問に回答する役割があります。以下の条件に従って質問に回答してください:
1) 回答は正確で完全でなければなりません。
2) 提供された段落の情報のみを使用してください。
3) 段落に回答が含まれていない場合、適切な説明をしてください。
質問: {Question}
段落: {Context}
```
Automatic evaluation script:
https://github.com/ducalt/jcrrageval
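A minimal sketch of filling the prompt template above from one record, assuming the record fields are named `Question` and `Context` (the capitalization is an assumption based on the format description):
```python
PROMPT_TEMPLATE = """あなたはバーチャルアシスタントであり、提供された1つ以上の段落の情報に基づいて質問に回答する役割があります。以下の条件に従って質問に回答してください:
1) 回答は正確で完全でなければなりません。
2) 提供された段落の情報のみを使用してください。
3) 段落に回答が含まれていない場合、適切な説明をしてください。
質問: {question}
段落: {context}"""

def build_prompt(record: dict) -> str:
    # Record format: (Context, Question, GroundtruthAnswer)
    return PROMPT_TEMPLATE.format(question=record["Question"], context=record["Context"])
```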
|
amazon/CodePrefBench | amazon | 2024-11-25T23:04:54Z | 32 | 1 | [
"task_categories:other",
"language:code",
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"arxiv:2410.03837",
"region:us",
"code"
] | [
"other"
] | 2024-11-25T22:45:01Z | 0 | ---
license: cc-by-nc-4.0
task_categories:
- other
language:
- code
tags:
- code
pretty_name: CodePrefBench
size_categories:
- 1K<n<10K
---
# CodePreference
- **Homepage:** https://llm-code-preference.github.io/
- **Repository:** https://github.com/amazon-science/llm-code-preference
- **Paper:** [Link](https://arxiv.org/abs/2410.03837)
## Data Fields
* `task_id` (`string`): The unique identifier for the task.
* `instruction` (`string`): The instruction prompt to write code.
* `choices` (`List[string]`): Two responses where one is preferred over the other.
* `gt_choice` (`int`): `0` or `1` indicating the preferred choice.
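Given these fields, scoring a judge reduces to comparing its picked index against `gt_choice`. A minimal sketch (the demo record and trivial judge below are illustrative, not from the dataset):
```python
def preference_accuracy(records, judge):
    """records: iterable of dicts with 'instruction', 'choices', 'gt_choice';
    judge: callable returning 0 or 1 for the preferred choice."""
    correct, total = 0, 0
    for rec in records:
        pred = judge(rec["instruction"], rec["choices"])
        correct += int(pred == rec["gt_choice"])
        total += 1
    return correct / max(total, 1)

# Illustrative record and a trivial judge that always picks choice 0
demo = [{
    "instruction": "Write add(a, b).",
    "choices": ["def add(a, b): return a + b", "def add(a, b): return a - b"],
    "gt_choice": 0,
}]
print(preference_accuracy(demo, lambda instr, choices: 0))
```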
## Usage
```python
# Environment setup
git clone https://github.com/amazon-science/llm-code-preference.git
cd llm-code-preference
pip install -r requirements.txt
# Evaluation
## OpenAI server
python codefavor/evaluate.py --model-id "gpt-4o-2024-05-13" --model-type openai --concurrency 80
## Other OpenAI-compatible servers (vLLM, DeepSeek APIs, etc.)
python codefavor/evaluate.py --model-id "google/gemma-2-27b-it" --model-type openai --concurrency 80 --model-url http://localhost:8000/v1
## Claude models at Bedrock
python codefavor/evaluate.py --model-id "anthropic.claude-3-sonnet-20240229-v1:0" --model-type bedrock --concurrency 10
## Pairwise RM
python codefavor/evaluate.py --model-id "./models/mix-cls-mistral-7b-it_bs32_ep1_lr5e-6-l3-70b/checkpoint-688" --model-type pair-rm
```
## Citation
```bibtex
@article{liu2024learning,
title = {Learning Code Preference via Synthetic Evolution},
author = {Liu, Jiawei and Nguyen, Thanh and Shang, Mingyue and Ding, Hantian and Li, Xiaopeng and Yu, Yu and Kumar, Varun and Wang, Zijian},
journal = {arXiv preprint arXiv:2410.03837},
year = {2024},
}
```
|
DataTonic/synthetic-climate-disinfo-dataset-qwen | DataTonic | 2025-02-09T02:58:46Z | 23 | 0 | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"climate"
] | [
"text-classification",
"zero-shot-classification"
] | 2025-02-09T02:57:31Z | 0 | ---
dataset_info:
features:
- name: quote
dtype: string
- name: label
dtype: string
- name: source
dtype: string
- name: url
dtype: string
- name: language
dtype: string
- name: subsource
dtype: string
- name: id
dtype: 'null'
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1581403.3293838862
num_examples: 3753
download_size: 818136
dataset_size: 1581403.3293838862
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- text-classification
- zero-shot-classification
language:
- en
tags:
- climate
--- |
giulio98/LongBench-1024 | giulio98 | 2025-04-23T14:19:32Z | 19 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T13:55:24Z | 0 | ---
dataset_info:
- config_name: 2wikimqa
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1248447
num_examples: 200
download_size: 739505
dataset_size: 1248447
- config_name: gov_report
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1997556
num_examples: 200
download_size: 1007167
dataset_size: 1997556
- config_name: hotpotqa
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1274125
num_examples: 200
download_size: 748688
dataset_size: 1274125
- config_name: lcc
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 4476193
num_examples: 500
download_size: 1598848
dataset_size: 4476193
- config_name: multi_news
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1390408
num_examples: 200
download_size: 787135
dataset_size: 1390408
- config_name: multifieldqa_en
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 928882
num_examples: 150
download_size: 508540
dataset_size: 928882
- config_name: musique
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1269705
num_examples: 200
download_size: 742129
dataset_size: 1269705
- config_name: narrativeqa
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1251609
num_examples: 200
download_size: 637879
dataset_size: 1251609
- config_name: passage_count
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1281074
num_examples: 200
download_size: 566698
dataset_size: 1281074
- config_name: passage_retrieval_en
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1551997
num_examples: 200
download_size: 951244
dataset_size: 1551997
- config_name: qasper
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1467860
num_examples: 200
download_size: 687278
dataset_size: 1467860
- config_name: qmsum
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1329082
num_examples: 200
download_size: 592711
dataset_size: 1329082
- config_name: repobench-p
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 9572916
num_examples: 500
download_size: 3524829
dataset_size: 9572916
- config_name: samsum
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1346304
num_examples: 200
download_size: 801864
dataset_size: 1346304
- config_name: trec
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 1458615
num_examples: 200
download_size: 557331
dataset_size: 1458615
- config_name: triviaqa
features:
- name: input
dtype: string
- name: context
dtype: string
- name: answers
list: string
- name: length
dtype: int32
- name: dataset
dtype: string
- name: language
dtype: string
- name: all_classes
list: string
- name: _id
dtype: string
- name: question
dtype: string
- name: answer_prefix
dtype: string
- name: task
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
- name: retrieved_k
dtype: int64
- name: budget_tokens
dtype: int64
splits:
- name: test
num_bytes: 2498571
num_examples: 200
download_size: 1549550
dataset_size: 2498571
configs:
- config_name: 2wikimqa
data_files:
- split: test
path: 2wikimqa/test-*
- config_name: gov_report
data_files:
- split: test
path: gov_report/test-*
- config_name: hotpotqa
data_files:
- split: test
path: hotpotqa/test-*
- config_name: lcc
data_files:
- split: test
path: lcc/test-*
- config_name: multi_news
data_files:
- split: test
path: multi_news/test-*
- config_name: multifieldqa_en
data_files:
- split: test
path: multifieldqa_en/test-*
- config_name: musique
data_files:
- split: test
path: musique/test-*
- config_name: narrativeqa
data_files:
- split: test
path: narrativeqa/test-*
- config_name: passage_count
data_files:
- split: test
path: passage_count/test-*
- config_name: passage_retrieval_en
data_files:
- split: test
path: passage_retrieval_en/test-*
- config_name: qasper
data_files:
- split: test
path: qasper/test-*
- config_name: qmsum
data_files:
- split: test
path: qmsum/test-*
- config_name: repobench-p
data_files:
- split: test
path: repobench-p/test-*
- config_name: samsum
data_files:
- split: test
path: samsum/test-*
- config_name: trec
data_files:
- split: test
path: trec/test-*
- config_name: triviaqa
data_files:
- split: test
path: triviaqa/test-*
---
|
triton7777/eval_act_so100_test3 | triton7777 | 2025-01-31T11:47:01Z | 36 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial",
"eval"
] | [
"robotics"
] | 2025-01-31T11:46:51Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
- eval
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 11742,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 360,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 360,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
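As a minimal sketch (not part of the original card), the `data_path` and `video_path` templates above follow standard Python `str.format` semantics, with the chunk index assumed to be `episode_index // chunks_size`:

```python
# Resolve per-episode file paths from the templates in meta/info.json.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000
episode_index = 7
episode_chunk = episode_index // chunks_size  # 0 for the first 1000 episodes

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
# -> data/chunk-000/episode_000007.parquet
print(video_path.format(episode_chunk=episode_chunk,
                        episode_index=episode_index,
                        video_key="observation.images.laptop"))
# -> videos/chunk-000/observation.images.laptop/episode_000007.mp4
```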
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ankits0052/webly_label_learning | ankits0052 | 2025-06-13T19:34:01Z | 0 | 0 | [
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-13T19:30:49Z | 0 | ---
license: cc-by-nc-sa-4.0
---
# Learning Sounds from Webly Labeled Data
GitHub repo for "Learning Sound Events from Webly Labeled Data", Anurag Kumar, Ankit Shah, Bhiksha Raj and Alexander Hauptmann, accepted
at the 28th International Joint Conference on Artificial Intelligence **(IJCAI), 2019**.
## Data
Audio files available here - https://drive.google.com/file/d/1_Bs-zLWVO1R6ajIvfq281IPIcs2Fqy2K/view?usp=sharing
https://drive.google.com/file/d/1QGzynNDxlS1fwpOfOXL79BysIhZbwsTE/view?usp=sharing
## PDF for Paper
Paper - https://www.ijcai.org/proceedings/2019/0384.pdf
## BibTex for Citation
```
@inproceedings{ijcai2019-384,
title = {Learning Sound Events from Webly Labeled Data},
author = {Kumar, Anurag and Shah, Ankit and Hauptmann, Alexander and Raj, Bhiksha},
booktitle = {Proceedings of the Twenty-Eighth International Joint Conference on
Artificial Intelligence, {IJCAI-19}},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
pages = {2772--2778},
year = {2019},
month = {7},
doi = {10.24963/ijcai.2019/384},
url = {https://doi.org/10.24963/ijcai.2019/384},
}
```
|
sarvamai/mmlu-indic | sarvamai | 2025-05-23T09:13:30Z | 98 | 0 | [
"task_categories:question-answering",
"language:bn",
"language:en",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2024-10-23T09:10:05Z | 0 | ---
language:
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license: mit
task_categories:
- question-answering
pretty_name: Indic MMLU
dataset_info:
- config_name: bn
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 9981094
dataset_size: 14523284.181818182
- config_name: bn_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 5621536
num_examples: 13894
download_size: 3111687
dataset_size: 5621536
- config_name: en
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 7046354
dataset_size: 14523284.181818182
- config_name: gu
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 5048638
dataset_size: 14523284.181818182
- config_name: gu_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 5948327
num_examples: 14004
download_size: 3281363
dataset_size: 5948327
- config_name: hi
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 5060941
dataset_size: 14523284.181818182
- config_name: hi_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 6192639
num_examples: 13913
download_size: 3308477
dataset_size: 6192639
- config_name: kn
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 5391445
dataset_size: 14523284.181818182
- config_name: kn_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 6507374
num_examples: 14005
download_size: 3391672
dataset_size: 6507374
- config_name: ml
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 5422573
dataset_size: 14523284.181818182
- config_name: ml_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 6666429
num_examples: 13991
download_size: 3527459
dataset_size: 6666429
- config_name: mr
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 5205467
dataset_size: 14523284.181818182
- config_name: mr_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 5949755
num_examples: 13904
download_size: 3339832
dataset_size: 5949755
- config_name: or
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 4830686
dataset_size: 14523284.181818182
- config_name: or_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 6088902
num_examples: 13979
download_size: 3235693
dataset_size: 6088902
- config_name: pa
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 4959729
dataset_size: 14523284.181818182
- config_name: pa_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 6072164
num_examples: 13946
download_size: 3375598
dataset_size: 6072164
- config_name: ta
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 5621280
dataset_size: 14523284.181818182
- config_name: ta_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 6178662
num_examples: 13096
download_size: 3264376
dataset_size: 6178662
- config_name: te
features:
- name: question
dtype: string
- name: answer
dtype: int64
- name: choices
sequence: string
splits:
- name: validation
num_bytes: 239800.54545454544
num_examples: 285
- name: test
num_bytes: 14283483.636363637
num_examples: 14042
download_size: 5233340
dataset_size: 14523284.181818182
- config_name: te_roman
features:
- name: answer
dtype: int64
- name: language
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
splits:
- name: test
num_bytes: 6365080
num_examples: 13989
download_size: 3407740
dataset_size: 6365080
configs:
- config_name: bn
data_files:
- split: validation
path: bn/validation-*
- split: test
path: bn/test-*
- config_name: bn_roman
data_files:
- split: test
path: bn_roman/test-*
- config_name: en
data_files:
- split: validation
path: en/validation-*
- split: test
path: en/test-*
- config_name: gu
data_files:
- split: validation
path: gu/validation-*
- split: test
path: gu/test-*
- config_name: gu_roman
data_files:
- split: test
path: gu_roman/test-*
- config_name: hi
data_files:
- split: validation
path: hi/validation-*
- split: test
path: hi/test-*
- config_name: hi_roman
data_files:
- split: test
path: hi_roman/test-*
- config_name: kn
data_files:
- split: validation
path: kn/validation-*
- split: test
path: kn/test-*
- config_name: kn_roman
data_files:
- split: test
path: kn_roman/test-*
- config_name: ml
data_files:
- split: validation
path: ml/validation-*
- split: test
path: ml/test-*
- config_name: ml_roman
data_files:
- split: test
path: ml_roman/test-*
- config_name: mr
data_files:
- split: validation
path: mr/validation-*
- split: test
path: mr/test-*
- config_name: mr_roman
data_files:
- split: test
path: mr_roman/test-*
- config_name: or
data_files:
- split: validation
path: or/validation-*
- split: test
path: or/test-*
- config_name: or_roman
data_files:
- split: test
path: or_roman/test-*
- config_name: pa
data_files:
- split: validation
path: pa/validation-*
- split: test
path: pa/test-*
- config_name: pa_roman
data_files:
- split: test
path: pa_roman/test-*
- config_name: ta
data_files:
- split: validation
path: ta/validation-*
- split: test
path: ta/test-*
- config_name: ta_roman
data_files:
- split: test
path: ta_roman/test-*
- config_name: te
data_files:
- split: validation
path: te/validation-*
- split: test
path: te/test-*
- config_name: te_roman
data_files:
- split: test
path: te_roman/test-*
---
# Indic MMLU Dataset
A multilingual version of the [Massive Multitask Language Understanding (MMLU) benchmark](https://huggingface.co/datasets/cais/mmlu), translated from English into 10 Indian languages.
This version contains the translations of the development and test sets only.
### Languages Covered
The dataset includes translations in the following languages:
- Bengali (bn)
- Gujarati (gu)
- Hindi (hi)
- Kannada (kn)
- Marathi (mr)
- Malayalam (ml)
- Oriya (or)
- Punjabi (pa)
- Tamil (ta)
- Telugu (te)
### Task Format
Each example is a multiple-choice question containing:
- `question`: Question text in target language
- `choices`: List of four possible answers (A, B, C, D) in target language
- `answer`: Correct answer index (0-3)
- `language`: ISO 639-1 language code
## Dataset Statistics
- Validation (dev in the original): ~280 examples per language
- Test: ~14k examples per language
## Usage
```python
from datasets import load_dataset
# we do not maintain subject groupings; pass a language config, e.g. "hi"
dataset = load_dataset("sarvamai/mmlu-indic", "hi")
```
## Known Limitations
- Technical terminology may be challenging to translate precisely
- Some subjects (like US Law) may have concepts without direct equivalents
- Cultural and educational system differences may affect question relevance
## License
This dataset follows the same license as the original MMLU dataset.
## Acknowledgments
- Original MMLU dataset creators. |
Shwetasingh123/llama_8b_MATH_wd_rewards | Shwetasingh123 | 2025-01-05T22:28:03Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-05T22:28:02Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_chain
dtype: string
- name: unique_id
dtype: string
- name: epoch
dtype: int64
- name: reward
dtype: float64
splits:
- name: train
num_bytes: 3826056
num_examples: 800
download_size: 2582920
dataset_size: 3826056
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jz12345/physics_estimation | jz12345 | 2025-05-20T02:26:25Z | 19 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-12T07:58:29Z | 0 | ---
license: apache-2.0
---
|
valurank/offensive-multi | valurank | 2022-10-25T09:57:14Z | 40 | 1 | [
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:derived",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 0 | ---
language:
- en
license: other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- derived
task_categories:
- text-classification
---
# Dataset Card for offensive-multi
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
## Dataset Description
### Dataset Summary
This dataset contains a collection of text labeled as offensive (class 1) or not (class 0).
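For example, it can be loaded with the `datasets` library (a minimal sketch; inspect the returned object for the exact split and column names):

```python
from datasets import load_dataset

# Aggregated offensive-language data; label 1 = offensive, label 0 = not offensive.
ds = load_dataset("valurank/offensive-multi")
print(ds)  # shows the available splits and columns
```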
## Dataset Creation
The dataset was created by aggregating multiple publicly available datasets.
### Source Data
The following datasets were used:
* https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lowercasing and removing mentions and URLs. Dropped instances labeled as 'hate speech'
* https://sites.google.com/site/offensevalsharedtask/olid - Tweet text cleaned by lowercasing and removing mentions and URLs. Used 'subtask_a' column for labeling.
|
fercan/countrypopulations | fercan | 2025-03-25T06:28:55Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-25T06:16:55Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 4414660.7991072405
num_examples: 41130
- name: test
num_bytes: 490625.2008927595
num_examples: 4571
download_size: 1124636
dataset_size: 4905286.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jaeyong2/Ko-Thai-Eval | jaeyong2 | 2025-01-24T06:44:43Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-24T06:44:42Z | 0 | ---
dataset_info:
features:
- name: ko
dtype: string
- name: en
dtype: string
- name: th
dtype: string
splits:
- name: train
num_bytes: 1301570
num_examples: 2009
download_size: 692079
dataset_size: 1301570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Romoamigo/SWE-Bench-MultilingualJAVAFiltered | Romoamigo | 2025-05-24T20:47:06Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-24T20:42:36Z | 0 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: instance_id
dtype: string
- name: base_commit
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: version
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: PASS_TO_PASS
sequence: string
splits:
- name: test
num_bytes: 478444.37333333335
num_examples: 43
download_size: 178776
dataset_size: 478444.37333333335
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ilyasor/data-scraping-exercice | ilyasor | 2025-05-23T21:36:42Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T17:30:27Z | 0 | ---
dataset_info:
features:
- name: auteur
dtype: string
- name: date
dtype: string
- name: commentaire
dtype: string
splits:
- name: train
num_bytes: 2889
num_examples: 5
download_size: 4110
dataset_size: 2889
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-scraping-exercice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ToxicityPrompts/eval_bench_xsafety_aegis_gpt_4o | ToxicityPrompts | 2024-11-23T19:19:48Z | 21 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-23T19:19:46Z | 0 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: language
dtype: string
- name: category
dtype: string
- name: prompt_result
dtype: string
splits:
- name: test
num_bytes: 568531
num_examples: 2800
download_size: 200347
dataset_size: 568531
---
# Dataset Card for "eval_bench_xsafety_aegis_gpt_4o"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ai4ophth/AIREADI_VQA_dataset_sample | ai4ophth | 2025-03-14T09:12:00Z | 12 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-28T11:01:10Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct answer
dtype: string
- name: options
sequence: string
- name: patient ID
dtype: string
- name: has_diabetes
dtype: string
- name: fundus_left
dtype: image
- name: fundus_right
dtype: image
- name: oct_left_0
dtype: image
- name: oct_left_1
dtype: image
- name: oct_left_2
dtype: image
- name: oct_left_3
dtype: image
- name: oct_left_4
dtype: image
- name: oct_left_5
dtype: image
- name: oct_left_6
dtype: image
- name: oct_left_7
dtype: image
- name: oct_left_8
dtype: image
- name: oct_left_9
dtype: image
- name: oct_left_10
dtype: image
- name: oct_left_11
dtype: image
- name: oct_left_12
dtype: image
- name: oct_left_13
dtype: image
- name: oct_left_14
dtype: image
- name: oct_left_15
dtype: image
- name: oct_left_16
dtype: image
- name: oct_left_17
dtype: image
- name: oct_left_18
dtype: image
- name: oct_left_19
dtype: image
- name: oct_left_20
dtype: image
- name: oct_left_21
dtype: image
- name: oct_left_22
dtype: image
- name: oct_left_23
dtype: image
- name: oct_left_24
dtype: image
- name: oct_left_25
dtype: image
- name: oct_left_26
dtype: image
- name: oct_left_27
dtype: image
- name: oct_left_28
dtype: image
- name: oct_left_29
dtype: image
- name: oct_left_30
dtype: image
- name: oct_left_31
dtype: image
- name: oct_left_32
dtype: image
- name: oct_left_33
dtype: image
- name: oct_left_34
dtype: image
- name: oct_left_35
dtype: image
- name: oct_left_36
dtype: image
- name: oct_left_37
dtype: image
- name: oct_left_38
dtype: image
- name: oct_left_39
dtype: image
- name: oct_left_40
dtype: image
- name: oct_left_41
dtype: image
- name: oct_left_42
dtype: image
- name: oct_left_43
dtype: image
- name: oct_left_44
dtype: image
- name: oct_left_45
dtype: image
- name: oct_left_46
dtype: image
- name: oct_left_47
dtype: image
- name: oct_left_48
dtype: image
- name: oct_left_49
dtype: image
- name: oct_left_50
dtype: image
- name: oct_left_51
dtype: image
- name: oct_left_52
dtype: image
- name: oct_left_53
dtype: image
- name: oct_left_54
dtype: image
- name: oct_left_55
dtype: image
- name: oct_left_56
dtype: image
- name: oct_left_57
dtype: image
- name: oct_left_58
dtype: image
- name: oct_left_59
dtype: image
- name: oct_left_60
dtype: image
- name: oct_right_0
dtype: image
- name: oct_right_1
dtype: image
- name: oct_right_2
dtype: image
- name: oct_right_3
dtype: image
- name: oct_right_4
dtype: image
- name: oct_right_5
dtype: image
- name: oct_right_6
dtype: image
- name: oct_right_7
dtype: image
- name: oct_right_8
dtype: image
- name: oct_right_9
dtype: image
- name: oct_right_10
dtype: image
- name: oct_right_11
dtype: image
- name: oct_right_12
dtype: image
- name: oct_right_13
dtype: image
- name: oct_right_14
dtype: image
- name: oct_right_15
dtype: image
- name: oct_right_16
dtype: image
- name: oct_right_17
dtype: image
- name: oct_right_18
dtype: image
- name: oct_right_19
dtype: image
- name: oct_right_20
dtype: image
- name: oct_right_21
dtype: image
- name: oct_right_22
dtype: image
- name: oct_right_23
dtype: image
- name: oct_right_24
dtype: image
- name: oct_right_25
dtype: image
- name: oct_right_26
dtype: image
- name: oct_right_27
dtype: image
- name: oct_right_28
dtype: image
- name: oct_right_29
dtype: image
- name: oct_right_30
dtype: image
- name: oct_right_31
dtype: image
- name: oct_right_32
dtype: image
- name: oct_right_33
dtype: image
- name: oct_right_34
dtype: image
- name: oct_right_35
dtype: image
- name: oct_right_36
dtype: image
- name: oct_right_37
dtype: image
- name: oct_right_38
dtype: image
- name: oct_right_39
dtype: image
- name: oct_right_40
dtype: image
- name: oct_right_41
dtype: image
- name: oct_right_42
dtype: image
- name: oct_right_43
dtype: image
- name: oct_right_44
dtype: image
- name: oct_right_45
dtype: image
- name: oct_right_46
dtype: image
- name: oct_right_47
dtype: image
- name: oct_right_48
dtype: image
- name: oct_right_49
dtype: image
- name: oct_right_50
dtype: image
- name: oct_right_51
dtype: image
- name: oct_right_52
dtype: image
- name: oct_right_53
dtype: image
- name: oct_right_54
dtype: image
- name: oct_right_55
dtype: image
- name: oct_right_56
dtype: image
- name: oct_right_57
dtype: image
- name: oct_right_58
dtype: image
- name: oct_right_59
dtype: image
- name: oct_right_60
dtype: image
splits:
- name: train
num_bytes: 680913787.0
num_examples: 10
download_size: 681110964
dataset_size: 680913787.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChuGyouk/medical-reasoning-train-kormedmcqa | ChuGyouk | 2025-03-21T06:06:33Z | 147 | 7 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-17T16:00:27Z | 2 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: year
dtype: int64
- name: period
dtype: int64
- name: q_number
dtype: int64
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: answer
dtype: int64
- name: thinking
dtype: string
- name: response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 76942094.0
num_examples: 8751
download_size: 38580816
dataset_size: 76942094.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Information
This data includes the training data from KorMedMCQA, as well as a portion of the training data from the **additional KorMedMCQA support set (private)**.
This dataset is based on the responses generated by the *gemini-flash-thinking-exp-01-21* model and has undergone **MANUAL rejection sampling**. |
40umov/dostoevsky_3.5k | 40umov | 2024-12-04T20:52:29Z | 73 | 2 | [
"task_categories:text-generation",
"language:ru",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2024-12-01T11:02:03Z | 0 | ---
license: unknown
task_categories:
- text-generation
language:
- ru
size_categories:
- 1K<n<10K
--- |
Fern1221/mytest | Fern1221 | 2025-06-20T05:20:18Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-20T05:20:14Z | 0 | ---
dataset_info:
features:
- name: summary
dtype: string
- name: role
dtype: string
- name: meeting_date
dtype: string
- name: transcript
list:
- name: content
dtype: string
- name: speaker
dtype: string
- name: summary_actions
list:
- name: action
dtype: string
- name: deadline
dtype: string
- name: owner
dtype: string
- name: name_mapping
dtype: string
splits:
- name: train
num_bytes: 2442886
num_examples: 606
download_size: 960288
dataset_size: 2442886
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/R2MEDBiologyRetrieval | mteb | 2025-06-19T19:26:11Z | 0 | 0 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:LM-generated and reviewed",
"multilinguality:monolingual",
"source_datasets:R2MED/Biology",
"language:eng",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.14558",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2025-06-19T19:26:00Z | 0 | ---
annotations_creators:
- LM-generated and reviewed
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
source_datasets:
- R2MED/Biology
task_categories:
- text-retrieval
task_ids:
- document-retrieval
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 22213180
num_examples: 57359
download_size: 11007074
dataset_size: 22213180
- config_name: qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 22877
num_examples: 374
download_size: 8221
dataset_size: 22877
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 54910
num_examples: 103
download_size: 38058
dataset_size: 54910
configs:
- config_name: corpus
data_files:
- split: test
path: corpus/test-*
- config_name: qrels
data_files:
- split: test
path: qrels/test-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">R2MEDBiologyRetrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Biology retrieval dataset.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Medical |
| Reference | https://huggingface.co/datasets/R2MED/Biology |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_task("R2MEDBiologyRetrieval")
evaluator = mteb.MTEB([task])
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` task check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{li2025r2med,
author = {Li, Lei and Zhou, Xiao and Liu, Zheng},
journal = {arXiv preprint arXiv:2505.14558},
title = {R2MED: A Benchmark for Reasoning-Driven Medical Retrieval},
year = {2025},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022}
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("R2MEDBiologyRetrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
Aizhee/phishing_02 | Aizhee | 2025-04-08T13:23:26Z | 15 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"cybersecurity",
"phishing"
] | [] | 2025-04-08T11:37:22Z | 0 | ---
license: mit
tags:
- cybersecurity
- phishing
--- |
Adriiiii24/empresascsv | Adriiiii24 | 2025-01-16T16:07:57Z | 25 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-16T16:07:55Z | 0 | ---
dataset_info:
features:
- name: ID
dtype: int64
- name: Nombre
dtype: string
- name: Sector
dtype: string
- name: Ubicación
dtype: string
- name: Ingresos
dtype: int64
splits:
- name: train
num_bytes: 643
num_examples: 10
download_size: 2812
dataset_size: 643
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HaloNU/rifki-ds | HaloNU | 2024-12-08T22:46:17Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-08T17:28:05Z | 0 | ---
dataset_info:
features:
- name: soru
dtype: string
- name: yanıt
dtype: string
- name: data_lenght
dtype: int64
splits:
- name: train
num_bytes: 4119793.615603645
num_examples: 2809
- name: Validation
num_bytes: 1031048.3843963554
num_examples: 703
download_size: 2492988
dataset_size: 5150842.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: Validation
path: data/Validation-*
---
|
cruxeval-org/cruxeval | cruxeval-org | 2024-01-23T23:20:31Z | 8,186 | 16 | [
"task_categories:text2text-generation",
"language:code",
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2401.03065",
"region:us",
"code-generation"
] | [
"text2text-generation"
] | 2023-11-28T07:55:06Z | 1 | ---
license: mit
language:
- code
task_categories:
- text2text-generation
tags:
- code-generation
pretty_name: CRUXEval
---
<h1 align="center"> CRUXEval: Code Reasoning, Understanding, and Execution Evaluation </h1>
<p align="center">
<a href="https://crux-eval.github.io/">🏠 Home Page</a> •
<a href="https://github.com/facebookresearch/cruxeval">💻 GitHub Repository </a> •
<a href="https://crux-eval.github.io/leaderboard.html">🏆 Leaderboard</a> •
<a href="https://crux-eval.github.io/demo.html">🔎 Sample Explorer</a>
</p>

CRUXEval (**C**ode **R**easoning, **U**nderstanding, and e**X**ecution **Eval**uation) is a benchmark of 800 Python functions and input-output pairs. The benchmark consists of two tasks, CRUXEval-I (input prediction) and CRUXEval-O (output prediction).
The benchmark was constructed as follows: first, we use [Code Llama 34B](https://huggingface.co/codellama/CodeLlama-34b-hf) to generate a large set of functions and inputs. The outputs are generated by executing the functions on the inputs. Second, we filter the set so that our benchmark only consists of short problems with low computation and memory requirements, problems which a good human programmer should be able to do without extra memory in a minute or so. Third, we randomly select 800 samples passing the filter, ensuring the benchmark is both small enough to easily run but large enough to reliably see performance differences among various models.
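The benchmark can be loaded with the `datasets` library (a minimal sketch; the official evaluation harness lives in the GitHub repository):

```python
from datasets import load_dataset

# 800 samples; each pairs a Python function with an input-output example,
# used for input prediction (CRUXEval-I) and output prediction (CRUXEval-O).
cruxeval = load_dataset("cruxeval-org/cruxeval")
print(cruxeval)  # inspect the available split(s) and fields
```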
## Dataset Description
- **Homepage:** https://crux-eval.github.io/
- **Repository:** https://github.com/facebookresearch/cruxeval
- **Paper:** https://arxiv.org/abs/2401.03065
- **Leaderboard:** https://crux-eval.github.io/leaderboard.html
## Additional Information
### Licensing Information
CRUXEval is [MIT licensed](https://github.com/facebookresearch/cruxeval/blob/main/LICENSE).
### Citation Information
```
@article{gu2024cruxeval,
title={CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution},
author={Alex Gu and Baptiste Rozière and Hugh Leather and Armando Solar-Lezama and Gabriel Synnaeve and Sida I. Wang},
year={2024},
journal = {arXiv preprint arXiv:2401.03065},
}
``` |
cuotra/fake_banking | cuotra | 2025-03-01T18:20:02Z | 65 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T19:43:18Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 44559
num_examples: 135
- name: test
num_bytes: 32500
num_examples: 90
download_size: 45643
dataset_size: 77059
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
lucayan/my-first-robot-dataset | lucayan | 2025-06-14T21:22:22Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-14T18:47:09Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 30,
"total_frames": 1153,
"total_tasks": 1,
"total_videos": 60,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:30"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
18
],
"names": null
},
"action": {
"dtype": "float32",
"shape": [
4
],
"names": [
"delta_x_ee",
"delta_y_ee",
"delta_z_ee",
"gripper_delta"
]
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"complementary_info.discrete_penalty": {
"dtype": "float32",
"shape": [
1
],
"names": [
"discrete_penalty"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
3,
128,
128
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 128,
"video.width": 128,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
3,
128,
128
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 128,
"video.width": 128,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mansaripo/nytimes_lawsuit_verbatim_256 | mansaripo | 2025-03-28T14:24:54Z | 20 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-28T14:24:52Z | 0 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: text
dtype: string
- name: input_text
dtype: string
- name: target_text
dtype: string
splits:
- name: test
num_bytes: 2269741.0
num_examples: 99
download_size: 1339590
dataset_size: 2269741.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Lots-of-LoRAs/task251_spl_translation_en_fi | Lots-of-LoRAs | 2025-01-03T18:03:54Z | 14 | 0 | [
"task_categories:text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2204.07705",
"arxiv:2407.00066",
"region:us"
] | [
"text-generation"
] | 2025-01-03T18:03:53Z | 0 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
task_categories:
- text-generation
pretty_name: task251_spl_translation_en_fi
dataset_info:
config_name: plain_text
features:
- name: input
dtype: string
- name: output
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 287
- name: valid
num_examples: 36
- name: test
num_examples: 36
---
# Dataset Card for Natural Instructions (https://github.com/allenai/natural-instructions) Task: task251_spl_translation_en_fi
## Dataset Description
- **Homepage:** https://github.com/allenai/natural-instructions
- **Paper:** https://arxiv.org/abs/2204.07705
- **Paper:** https://arxiv.org/abs/2407.00066
- **Point of Contact:** [Rickard Brüel Gabrielsson](mailto:[email protected])
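A minimal loading sketch (the split and column names below are taken from the dataset metadata above):

```python
from datasets import load_dataset

# English -> Finnish SPL translation task from Super-NaturalInstructions.
task = load_dataset("Lots-of-LoRAs/task251_spl_translation_en_fi")
example = task["train"][0]
print(example["input"])   # source text to translate
print(example["output"])  # reference translation
```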
## Additional Information
### Citation Information
The following paper introduces the corpus in detail. If you use the corpus in published work, please cite it:
```bibtex
@misc{wang2022supernaturalinstructionsgeneralizationdeclarativeinstructions,
title={Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and Anjana Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and Mehrad Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddhartha Mishra and Sujan Reddy and Sumanta Patro and Tanay Dixit and Xudong Shen and Chitta Baral and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi and Daniel Khashabi},
year={2022},
eprint={2204.07705},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2204.07705},
}
```
More details can also be found in the following paper:
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
```
### Contact Information
For any comments or questions, please email [Rickard Brüel Gabrielsson](mailto:[email protected])
|
kyama0321/mnist | kyama0321 | 2024-12-19T08:06:57Z | 19 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-19T08:06:48Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 17269200.0
num_examples: 60000
download_size: 15968323
dataset_size: 17269200.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
math-extraction-comp/jeffmeloy__Qwen2.5-7B-nerd-uncensored-v1.2 | math-extraction-comp | 2025-01-12T11:32:46Z | 21 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-11T11:31:15Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: gold
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: subset
dtype: string
- name: lighteval-c24870ea_extracted_answer
dtype: string
- name: lighteval-c24870ea_score
dtype: float64
- name: harness_score
dtype: float64
- name: qwen_extracted_answer
dtype: string
- name: lighteval-0f21c935_score
dtype: float64
- name: qwen_score
dtype: float64
- name: lighteval-0f21c935_extracted_answer
dtype: string
- name: harness_extracted_answer
dtype: string
splits:
- name: train
num_bytes: 2684872
num_examples: 1324
download_size: 1262287
dataset_size: 2684872
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JetBrains-Research/mbpp_diffs_ext_SR | JetBrains-Research | 2025-05-09T14:25:05Z | 72 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-16T10:40:11Z | 0 | ---
dataset_info:
- config_name: medium
features:
- name: qid
dtype: string
- name: old_code
dtype: string
- name: new_code
dtype: string
- name: old_exec_results
sequence: string
- name: new_exec_results
sequence: string
- name: old_id
dtype: string
- name: new_id
dtype: string
- name: diff_format
dtype: string
- name: diff
dtype: string
splits:
- name: train
num_bytes: 65733393
num_examples: 55375
download_size: 10427075
dataset_size: 65733393
- config_name: small
features:
- name: qid
dtype: string
- name: old_code
dtype: string
- name: new_code
dtype: string
- name: old_exec_results
sequence: string
- name: new_exec_results
sequence: string
- name: diff_format
dtype: string
- name: diff
dtype: string
splits:
- name: train
num_bytes: 1725625
num_examples: 1249
download_size: 372709
dataset_size: 1725625
configs:
- config_name: medium
data_files:
- split: train
path: medium/train-*
- config_name: small
data_files:
- split: train
path: small/train-*
---
## Dataset Description
- This dataset contains pairs of similar Python code solutions for MBPP problems.
- Solutions come from the [MBPP solutions dataset](https://huggingface.co/datasets/JetBrains-Research/mbpp_w_stacktraces)
- All pairs in the dataset have a `rapidfuzz.fuzz.ratio` score > 0.75
- Execution results and stacktraces are included in this dataset
- For each pair, an Extended Search-Replace [WIP] diff is calculated (a loading sketch follows below)
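A minimal loading sketch (the config, split, and column names below are taken from the metadata above):

```python
from datasets import load_dataset

# Two configs are available: "small" (1,249 pairs) and "medium" (55,375 pairs).
pairs = load_dataset("JetBrains-Research/mbpp_diffs_ext_SR", "small", split="train")

row = pairs[0]
print(row["qid"])          # MBPP problem id
print(row["diff_format"])  # diff representation used for this pair
print(row["diff"])         # diff between row["old_code"] and row["new_code"]
```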
|
RyanYr/reflect_mini8bSFTt2_mini8BSFTt1_om2g8kom2AG40k_iPSDPiter1_it1_crtc | RyanYr | 2025-01-20T07:46:00Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-20T07:45:48Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
splits:
- name: train
num_bytes: 1176367696
num_examples: 67473
download_size: 360142026
dataset_size: 1176367696
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rweics5cs7/exo3-original-MP-DocVQA-text | rweics5cs7 | 2025-06-05T18:30:19Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T18:29:58Z | 0 | ---
dataset_info:
config_name: corpus
features:
- name: corpus-id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1302324
num_examples: 741
download_size: 746965
dataset_size: 1302324
configs:
- config_name: corpus
data_files:
- split: train
path: corpus/train-*
---
|
pragsri8/Ultrafeedback_Improved-Degraded-QRNeutrals_SubSampled_Unfiltered_probA | pragsri8 | 2025-05-22T11:09:44Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-22T11:09:07Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: neutral
dtype: bool
- name: prob_A
dtype: float64
splits:
- name: train
num_bytes: 1111491921.980714
num_examples: 238614
download_size: 675624236
dataset_size: 1111491921.980714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
drivaerstar/DrivAerStar-Review | drivaerstar | 2025-05-16T10:41:50Z | 166 | 0 | [
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-13T03:21:53Z | 0 | ---
license: cc-by-nc-sa-4.0
language:
- en
size_categories:
- 10K<n<100K
---
# DrivAerStar: An Industrial-Grade CFD Dataset for Vehicle Aerodynamic Optimization
Vehicle aerodynamics optimization is fundamental to automotive engineering, achieving drag reduction, noise minimization, and vehicle body stability through complex fluid dynamics simulations. Traditional approaches rely on computationally expensive Computational Fluid Dynamics (CFD) simulations that limit design exploration, or on simplified models that compromise accuracy. Machine learning methods offer promising alternatives but require high-fidelity training data that has been largely unavailable in the public domain. The gap between academic machine learning research and industrial CFD applications remains unbridged due to the absence of datasets meeting rigorous engineering standards. Here we present DrivAerStar, a comprehensive and reproducible dataset of 12,000 high-precision automotive CFD simulations, created from 3 basic rear designs and 20 fine-tuned Computer Aided Design (CAD) parameters via the Free Form Deformation (FFD) algorithm, with all configurations simulated using the industry-standard STAR-CCM+® software. Unlike existing datasets, DrivAerStar provides complete engineering data that has been thoroughly validated against wind tunnel experiments with discrepancies below 5%, including aerodynamic coefficients, surface pressures, and velocity fields. Our benchmarks demonstrate that machine learning models trained on this dataset achieve industrial-grade prediction accuracy while reducing computational costs by orders of magnitude. This dataset establishes a foundation for data-driven aerodynamic design methodologies that can transform automotive development processes. Beyond automotive applications, DrivAerStar represents a paradigm for integrating high-fidelity, industrial-grade, physics-based simulations with artificial intelligence, potentially extending to diverse engineering disciplines where computational constraints currently limit design optimization.
# License
This dataset is provided under the CC BY-NC-SA 4.0 license, please see License.txt for full license text. |
Nexdata/100000_Fine-Tuning_text_data_set_for_Dutch_LLM_General_Domain_SFT | Nexdata | 2025-04-25T03:10:28Z | 23 | 0 | [
"language:nl",
"license:cc-by-nd-4.0",
"region:us"
] | [] | 2025-02-11T08:42:47Z | 0 | ---
license: cc-by-nd-4.0
language:
- nl
---
## Description
This dataset is just a sample of the 100,000-entry fine-tuning text dataset for Dutch LLM general-domain SFT (paid dataset). It contains 12 types of SFT QA, with an accuracy of no less than 95%. All prompts are manually written to ensure diversity coverage.
For more details & to download the rest of the dataset (paid), please refer to the link: https://www.nexdata.ai/datasets/llm?source=Huggingface
# Specifications
## Content:
Contains 12 types of SFT QA
## Category:
Brainstorming, Chat, Classification, ClosedQA, Code, Extract, Generation, OpenQA, Reason, Rewrite, Summarization, Other, etc.
## Quantity of Data:
100,000
## Format:
xlsx
# Licensing Information
Commercial License |
olcaybicak/nova | olcaybicak | 2025-02-18T06:31:27Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-18T06:30:40Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 543.3333333333334
num_examples: 2
- name: test
num_bytes: 268
num_examples: 1
download_size: 6955
dataset_size: 811.3333333333334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
sarahpann/mlm_cls_skywork | sarahpann | 2024-12-28T06:09:26Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-27T04:08:26Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 403526180
num_examples: 69314
- name: test
num_bytes: 45270930
num_examples: 7702
download_size: 192829311
dataset_size: 448797110
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
codplus/whatsapp-download-links | codplus | 2025-04-28T17:54:13Z | 20 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-28T17:48:58Z | 0 | ---
license: apache-2.0
---
|
alea-institute/kl3m-data-pacer-iand | alea-institute | 2025-04-11T01:46:00Z | 8 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.07854",
"arxiv:2503.17247",
"region:us"
] | [] | 2025-02-15T17:31:36Z | 0 | ---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 955312077
num_examples: 39265
download_size: 185742204
dataset_size: 955312077
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# KL3M Data Project
> **Note**: This page provides general information about the KL3M Data Project. Additional details specific to this dataset will be added in future updates. For complete information, please visit the [GitHub repository](https://github.com/alea-institute/kl3m-data) or refer to the [KL3M Data Project paper](https://arxiv.org/abs/2504.07854).
## Description
This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models.
## Dataset Details
- **Format**: Parquet files containing document text and metadata
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents
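As a minimal sketch (not an official loader), the pre-tokenized rows can be decoded back to text with that tokenizer, assuming it loads through `transformers`:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alea-institute/kl3m-004-128k-cased")
ds = load_dataset("alea-institute/kl3m-data-pacer-iand", split="train", streaming=True)

row = next(iter(ds))
print(row["identifier"], row["mime_type"])
print(tokenizer.decode(row["tokens"])[:300])  # first 300 characters of the document
```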
## Abstract
Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract. This creates potential risk for users and developers due to this uncertain legal status. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright or breach of contract.
The foundation of this project is a corpus of over 132 million documents and trillions of tokens spanning 16 different sources that have been verified to meet the strict copyright and licensing protocol detailed in the project. We are releasing the entire pipeline, including:
1. The source code to acquire and process these documents
2. The original document formats with associated provenance and metadata
3. Extracted content in a standardized format
4. Pre-tokenized representations of the documents
5. Various mid- and post-train resources such as question-answer, summarization, conversion, drafting, classification, prediction, and conversational data
All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models.
## Legal Basis
This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundation:
- Public domain materials
- US government works
- Open access content under permissive licenses
- Content explicitly licensed for AI training
## Papers
For more information about the KL3M Data Project, please refer to:
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854)
- [KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications](https://arxiv.org/abs/2503.17247)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{bommarito2025kl3mdata,
title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
year={2025},
eprint={2504.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{bommarito2025kl3m,
title={KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications},
author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
year={2025},
eprint={2503.17247},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## About ALEA
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at [https://aleainstitute.ai/](https://aleainstitute.ai/). |
younghyopark/DualPanda_upright_mug_20250414_111840 | younghyopark | 2025-04-14T15:21:32Z | 28 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"teleop",
"success"
] | [
"robotics"
] | 2025-04-14T15:18:48Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- teleop
- success
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "DualPanda",
"total_episodes": 1,
"total_frames": 326,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": null,
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
18
],
"names": [
"l_robot/joint1",
"l_robot/joint2",
"l_robot/joint3",
"l_robot/joint4",
"l_robot/joint5",
"l_robot/joint6",
"l_robot/joint7",
"l_robot/finger_joint1",
"l_robot/finger_joint2",
"r_robot/joint1",
"r_robot/joint2",
"r_robot/joint3",
"r_robot/joint4",
"r_robot/joint5",
"r_robot/joint6",
"r_robot/joint7",
"r_robot/finger_joint1",
"r_robot/finger_joint2"
]
},
"observation.environment_state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"cambridge_mug_px",
"cambridge_mug_py",
"cambridge_mug_pz",
"cambridge_mug_qw",
"cambridge_mug_qx",
"cambridge_mug_qy",
"cambridge_mug_qz"
]
},
"observation.qvel": {
"dtype": "float32",
"shape": [
18
],
"names": [
"l_robot/joint1",
"l_robot/joint2",
"l_robot/joint3",
"l_robot/joint4",
"l_robot/joint5",
"l_robot/joint6",
"l_robot/joint7",
"l_robot/finger_joint1",
"l_robot/finger_joint2",
"r_robot/joint1",
"r_robot/joint2",
"r_robot/joint3",
"r_robot/joint4",
"r_robot/joint5",
"r_robot/joint6",
"r_robot/joint7",
"r_robot/finger_joint1",
"r_robot/finger_joint2"
]
},
"observation.env_qvel": {
"dtype": "float32",
"shape": [
6
],
"names": [
"cambridge_mug_vx",
"cambridge_mug_vy",
"cambridge_mug_vz",
"cambridge_mug_wx",
"cambridge_mug_wy",
"cambridge_mug_wz"
]
},
"observation.ee_pose": {
"dtype": "float32",
"shape": [
56
],
"names": [
"r_robot_left_finger_tip_x",
"r_robot_left_finger_tip_y",
"r_robot_left_finger_tip_z",
"r_robot_left_finger_tip_qw",
"r_robot_left_finger_tip_qx",
"r_robot_left_finger_tip_qy",
"r_robot_left_finger_tip_qz",
"r_robot_right_finger_tip_x",
"r_robot_right_finger_tip_y",
"r_robot_right_finger_tip_z",
"r_robot_right_finger_tip_qw",
"r_robot_right_finger_tip_qx",
"r_robot_right_finger_tip_qy",
"r_robot_right_finger_tip_qz",
"r_robot_left_finger_base_x",
"r_robot_left_finger_base_y",
"r_robot_left_finger_base_z",
"r_robot_left_finger_base_qw",
"r_robot_left_finger_base_qx",
"r_robot_left_finger_base_qy",
"r_robot_left_finger_base_qz",
"r_robot_right_finger_base_x",
"r_robot_right_finger_base_y",
"r_robot_right_finger_base_z",
"r_robot_right_finger_base_qw",
"r_robot_right_finger_base_qx",
"r_robot_right_finger_base_qy",
"r_robot_right_finger_base_qz",
"l_robot_left_finger_tip_x",
"l_robot_left_finger_tip_y",
"l_robot_left_finger_tip_z",
"l_robot_left_finger_tip_qw",
"l_robot_left_finger_tip_qx",
"l_robot_left_finger_tip_qy",
"l_robot_left_finger_tip_qz",
"l_robot_right_finger_tip_x",
"l_robot_right_finger_tip_y",
"l_robot_right_finger_tip_z",
"l_robot_right_finger_tip_qw",
"l_robot_right_finger_tip_qx",
"l_robot_right_finger_tip_qy",
"l_robot_right_finger_tip_qz",
"l_robot_left_finger_base_x",
"l_robot_left_finger_base_y",
"l_robot_left_finger_base_z",
"l_robot_left_finger_base_qw",
"l_robot_left_finger_base_qx",
"l_robot_left_finger_base_qy",
"l_robot_left_finger_base_qz",
"l_robot_right_finger_base_x",
"l_robot_right_finger_base_y",
"l_robot_right_finger_base_z",
"l_robot_right_finger_base_qw",
"l_robot_right_finger_base_qx",
"l_robot_right_finger_base_qy",
"l_robot_right_finger_base_qz"
]
},
"action": {
"dtype": "float32",
"shape": [
16
],
"names": [
"l_robot/actuator1",
"l_robot/actuator2",
"l_robot/actuator3",
"l_robot/actuator4",
"l_robot/actuator5",
"l_robot/actuator6",
"l_robot/actuator7",
"l_robot//unnamed_actuator_7",
"r_robot/actuator1",
"r_robot/actuator2",
"r_robot/actuator3",
"r_robot/actuator4",
"r_robot/actuator5",
"r_robot/actuator6",
"r_robot/actuator7",
"r_robot//unnamed_actuator_7"
]
},
"action.fingertip_target": {
"dtype": "float32",
"shape": [
24
],
"names": [
"right_lb_target_x",
"right_lb_target_y",
"right_lb_target_z",
"right_lf_target_x",
"right_lf_target_y",
"right_lf_target_z",
"right_rb_target_x",
"right_rb_target_y",
"right_rb_target_z",
"right_rf_target_x",
"right_rf_target_y",
"right_rf_target_z",
"left_lb_target_x",
"left_lb_target_y",
"left_lb_target_z",
"left_lf_target_x",
"left_lf_target_y",
"left_lf_target_z",
"left_rb_target_x",
"left_rb_target_y",
"left_rb_target_z",
"left_rf_target_x",
"left_rf_target_y",
"left_rf_target_z"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
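As a minimal sketch, a single episode's table can be fetched by filling in the `data_path` template above (reading `hf://` paths with pandas relies on `huggingface_hub` being installed, which is an assumption here):

```python
import pandas as pd

repo = "younghyopark/DualPanda_upright_mug_20250414_111840"
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

# Episode 0 lives in chunk 0 -> data/chunk-000/episode_000000.parquet
path = data_path.format(episode_chunk=0, episode_index=0)
df = pd.read_parquet(f"hf://datasets/{repo}/{path}")
print(df[["timestamp", "frame_index"]].head())
```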
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Luffytaro-1/asr_en_ar_switch_split_54 | Luffytaro-1 | 2025-02-12T18:55:05Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-12T18:55:02Z | 0 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 8221351.0
num_examples: 104
download_size: 7612637
dataset_size: 8221351.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
danaaubakirova/svla_so100_task4_v3_multiple_test | danaaubakirova | 2025-05-12T14:37:01Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-12T14:36:56Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 3,
"total_frames": 1892,
"total_tasks": 3,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Aven9208/record-test | Aven9208 | 2025-06-22T14:35:58Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-22T14:35:50Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 2,
"total_frames": 600,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
oliv420/FormationClassification | oliv420 | 2025-01-29T11:57:33Z | 13 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-29T11:53:38Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 658510528.604
num_examples: 2788
download_size: 657095562
dataset_size: 658510528.604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_MoreUniqueResponseNoGT | YuchenLi01 | 2025-05-19T20:33:26Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-19T20:33:12Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 147936379
num_examples: 40729
- name: test
num_bytes: 8647498
num_examples: 2358
download_size: 38653536
dataset_size: 156583877
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
davidsmandrade/Ileoro-pt | davidsmandrade | 2025-06-08T05:44:25Z | 0 | 0 | [
"task_categories:question-answering",
"language:pt",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"region:us",
"african"
] | [
"question-answering"
] | 2025-06-08T05:41:26Z | 0 | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
language:
- pt
tags:
- african
pretty_name: Ileoro-pt v1.0
size_categories:
- 10K<n<100K
---
# Ileoro-pt
**Ileoro-pt** is a synthetic question-answering dataset on the History of Africa in Portuguese, based on the UNESCO collection.
- 29,000 question-paragraph pairs
- Generated by LLMs (GPT-4)
## License
This dataset is licensed under Creative Commons CC-BY-NC 4.0. |
Yuyeong/rw_pubmed_nbw_300_cycle | Yuyeong | 2025-04-22T04:58:13Z | 11 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-22T04:52:06Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
- name: group_idx
dtype: int64
- name: node_idx
dtype: int64
splits:
- name: train_seed0
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed0
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed0
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed1
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed1
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed1
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed2
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed2
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed2
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed3
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed3
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed3
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed4
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed4
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed4
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed5
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed5
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed5
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed6
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed6
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed6
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed7
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed7
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed7
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed8
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed8
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed8
num_bytes: 171028.2614981995
num_examples: 1972
- name: train_seed9
num_bytes: 136796590.7003601
num_examples: 1577300
- name: validation_seed9
num_bytes: 171028.2614981995
num_examples: 1972
- name: test_seed9
num_bytes: 171028.2614981995
num_examples: 1972
download_size: 561668625
dataset_size: 1371386472.233565
configs:
- config_name: default
data_files:
- split: train_seed0
path: data/train_seed0-*
- split: validation_seed0
path: data/validation_seed0-*
- split: test_seed0
path: data/test_seed0-*
- split: train_seed1
path: data/train_seed1-*
- split: validation_seed1
path: data/validation_seed1-*
- split: test_seed1
path: data/test_seed1-*
- split: train_seed2
path: data/train_seed2-*
- split: validation_seed2
path: data/validation_seed2-*
- split: test_seed2
path: data/test_seed2-*
- split: train_seed3
path: data/train_seed3-*
- split: validation_seed3
path: data/validation_seed3-*
- split: test_seed3
path: data/test_seed3-*
- split: train_seed4
path: data/train_seed4-*
- split: validation_seed4
path: data/validation_seed4-*
- split: test_seed4
path: data/test_seed4-*
- split: train_seed5
path: data/train_seed5-*
- split: validation_seed5
path: data/validation_seed5-*
- split: test_seed5
path: data/test_seed5-*
- split: train_seed6
path: data/train_seed6-*
- split: validation_seed6
path: data/validation_seed6-*
- split: test_seed6
path: data/test_seed6-*
- split: train_seed7
path: data/train_seed7-*
- split: validation_seed7
path: data/validation_seed7-*
- split: test_seed7
path: data/test_seed7-*
- split: train_seed8
path: data/train_seed8-*
- split: validation_seed8
path: data/validation_seed8-*
- split: test_seed8
path: data/test_seed8-*
- split: train_seed9
path: data/train_seed9-*
- split: validation_seed9
path: data/validation_seed9-*
- split: test_seed9
path: data/test_seed9-*
---
|
mscs23021/CSALT_FLEURS | mscs23021 | 2025-05-10T14:41:28Z | 69 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-24T16:50:06Z | 0 | ---
license: apache-2.0
dataset_info:
features:
- name: audio
dtype: audio
- name: pseudo_transcript
dtype: string
- name: confidence
dtype: float64
splits:
- name: train
num_bytes: 769744151.125
num_examples: 2127
download_size: 693246342
dataset_size: 769744151.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
oof-baroomf/s1K_tokenized | oof-baroomf | 2025-02-21T21:49:18Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-21T21:49:16Z | 0 | ---
dataset_info:
features:
- name: solution
dtype: string
- name: question
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
dtype: 'null'
- name: thinking_trajectories
sequence: string
- name: attempt
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24972243
num_examples: 1000
download_size: 10599163
dataset_size: 24972243
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chupei/cc_ru | chupei | 2024-12-27T02:56:36Z | 15 | 0 | [
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-27T02:56:14Z | 0 | ---
license: cc-by-nc-4.0
---
|
BobBoris/reuters_articles | BobBoris | 2024-11-19T12:47:17Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-19T12:47:14Z | 0 | ---
dataset_info:
features:
- name: title
dtype: string
- name: body
dtype: string
splits:
- name: train
num_bytes: 13792576
num_examples: 17262
- name: validation
num_bytes: 1870389
num_examples: 2158
- name: test
num_bytes: 1379190
num_examples: 2158
download_size: 10073414
dataset_size: 17042155
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_24a3aeba-aba0-4ffc-9b41-887e01db97cb | argilla-internal-testing | 2024-10-29T10:25:36Z | 20 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-29T10:25:35Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DarwinAnim8or/greentext | DarwinAnim8or | 2023-01-24T18:32:57Z | 71 | 5 | [
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"grug",
"internet",
"greentext"
] | [
"text2text-generation"
] | 2022-06-28T14:44:54Z | 0 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- machine-generated
license:
- unknown
multilinguality:
- monolingual
pretty_name: Greentext Dataset
size_categories: []
source_datasets: []
tags:
- grug
- internet
- greentext
task_categories:
- text2text-generation
task_ids: []
---
# Greentext Dataset
This is content pulled from various archives to create a "greentext bot" of sorts using GPT-JT-8Bit.
Really, just a dumb joke I made with some friends.
## Biases & Limitations
This dataset contains artifacts such as literal `\n` markers and stray unicode escapes like `u2019` that need to be filtered out manually (see the sketch below).
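A minimal cleanup sketch under those assumptions (the escape patterns are heuristic, based on the examples above):

```python
import re

def clean_greentext(text: str) -> str:
    # Turn literal backslash-n markers back into real newlines.
    text = text.replace("\\n", "\n")
    # Decode stray unicode escapes like "u2019" / "\u2019" (right single quote).
    # Heuristic: any (optionally backslash-prefixed) 'u' followed by 4 hex digits.
    return re.sub(r"\\?u([0-9a-fA-F]{4})",
                  lambda m: chr(int(m.group(1), 16)), text)

print(clean_greentext(">be me\\ndidnu2019t expect this"))  # >be me / didn't expect this
```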
Needless to say, this dataset contains *many* instances of profanity & biases, as it is trained on data from hell.
I don't recommend actually using any of this. |
DanqingZ/tic_tac_toe_5_raw_5 | DanqingZ | 2025-06-15T08:53:02Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-15T08:52:52Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 5,
"total_frames": 1787,
"total_tasks": 1,
"total_videos": 15,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.on_robot": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.side_view": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
SylvanL/Traditional-Chinese-Medicine-Dataset-Pretrain | SylvanL | 2024-10-12T11:06:59Z | 125 | 23 | [
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"medical"
] | [
"text-generation"
] | 2024-09-28T00:42:05Z | 1 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 1B<n<10B
---
# Honoring the Ancient, Embracing the New; Profound Virtue, Refined Skill
---
# Dataset Introduction
# High-Quality Traditional Chinese Medicine Dataset from Non-Internet Sources - Pretraining
This dataset was carefully constructed with a substantial investment of labor and resources, with the mission of helping to build a high-quality Chinese LLM community.
It contains roughly 1 GB of high-quality content covering every area of traditional Chinese medicine (TCM): clinical cases, classics by renowned physicians, medical encyclopedia entries, terminology explanations, and more, with comprehensive coverage and balanced proportions.
The dataset consists mainly of internal data from non-internet sources; 99% of the content is Simplified Chinese, with excellent quality and considerable information density.
Note: this dataset is intended only for pretraining or continued pretraining. For the QA dataset targeting SFT/IFT, see: SylvanL/Traditional-Chinese-Medicine-Dataset-SFT
All dataset files can be read directly by LLaMA-Factory using the information in `dataset_info.json` (which must sit in the same directory as the dataset files), as a list of dicts: `[{"text": "..."}, ...]`; a sketch of a matching entry is shown below.
For any questions, please contact: [email protected]
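A minimal sketch of such a `dataset_info.json` entry, following LLaMA-Factory's plain-text pretraining convention (the entry name and file name here are illustrative):

```json
{
  "tcm_books_pretrain": {
    "file_name": "CPT_tcmBooks_source1_146244.json",
    "columns": {
      "prompt": "text"
    }
  }
}
```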
| **File name** | **Data details** | **Notes** | **Previously released?** |
|----------------------------------|----------------------------------|----------------------------------|----------------------------------|
| CPT_tcmKnowledge_source1_17921 | Contains 17,921 structured entries from the database of the "China Traditional Chinese Medicine Information Query Platform", covering encyclopedia entries on diseases, symptoms, medical aesthetics, drugs, Chinese medicinal materials, health products, formulas, medicinal diet therapy, acupuncture points, terminology, and more. All content has been manually proofread and is of very high quality. | No minHash performed; minHash is considered unnecessary. | Internet-sourced data, internally processed and proofread. |
| CPT_tcmKnowledge_source2_12889 | Contains 12,889 explanations covering, but not limited to, the diseases and terms included in ICD-10 terminology and the Chinese national TCM standards, along with detailed explanations of common formulas, Chinese medicinal materials, acupuncture point combinations, and other TCM terms and concepts. Entirely human-edited and of very high quality. | No minHash performed; minHash is considered unnecessary. | Internal data, never previously released. |
| CPT_tcmBooks_source1_146244 | Contains 146,244 passages from 688 textbooks, monographs, and classics commonly used in the TCM field, extracted from original files in PDF, Word, image, HTML, and CHM formats. See Appendix 1 for the full book list. | No minHash performed; minHash is considered unnecessary. | Internet-sourced data, internally processed and proofread. |
| CPT_medicalRecord_source1_61486 | Contains 61,486 clinical consultation records from high-level licensed TCM physicians, written as high-quality natural-language case records that include patient intake guidance, symptoms, chief complaint, diagnosis, formula, herbs, and the other essentials of a prescription. Entirely human-edited and of very high quality. | No minHash performed; minHash is considered unnecessary. All case records have been manually de-identified. | Internal data, never previously released. |
| CPT_medicalRecord_source2_15307 | Contains 15,127 clinical consultation records from a well-known professor, written as high-quality natural-language case records that include patient intake guidance, symptoms, chief complaint, diagnosis, formula, herbs, and the other essentials of a prescription. Entirely human-edited and of very high quality. | No minHash performed; minHash is considered unnecessary. Case records were de-identified with the "Qwen/Qwen2.5-14B-Instruct" model (see Appendix 2 for the prompt engineering) and then manually re-checked. | Internal data, never previously released. |
| CPT_medicalRecord_source3_230000 | Contains 230,000 archived case records obtained from a TCM hospital's EMR system, with content selected or typed in by the physicians themselves (or their students) during in-person consultations. Assembled via rule-based table joins; quality is acceptable. | Internal data, never previously released. No minHash performed; minHash may be applied at your discretion. The rule-based construction of the case records did not involve any patient-privacy fields. | Internal data, never previously released. |
| CPT_medicalRecord_source4_48665 | Contains 48,665 clinical consultation records from a well-known professor, written as high-quality natural-language case records that include patient intake guidance, symptoms, chief complaint, diagnosis, syndrome pattern, formula, herbs, and the other essentials of a prescription. Entirely human-edited and of very high quality. | No minHash performed; minHash is considered unnecessary. All case records have been manually de-identified. | Internal data, never previously released. |
---
## Appendix 1: Complete List of TCM Books
| **Title** | **Tags** | **Entries** |
|------------|---------|------|
| 中医临床诊疗术语 | 中医 | 4566 |
| 方剂学 | 教材 中医 | 3029 |
| 中医名词词典 | 使用手册 中医 | 2695 |
| 冯氏锦囊秘录 | 著作 中医 文言文 | 2357 |
| 医宗金鉴 | 著作 中医 文言文 | 2262 |
| 备急千金要方 | 著作 中医 文言文 | 1800 |
| 诸病源候论 | 著作 中医 文言文 | 1737 |
| 证类本草 | 著作 中医 文言文 | 1694 |
| 古今医统大全 | 著作 中医 文言文 | 1643 |
| 默克家庭诊疗手册 | 教材 西医 | 1549 |
| 奇效简便良方 | 著作 中医 文言文 | 1522 |
| 夏桂成实用中医妇科学 | 著作 中医 | 1490 |
| 圣济总录 | 著作 中医 文言文 | 1393 |
| 疡医大全 | 著作 中医 文言文 | 1308 |
| 中药学 | 教材 中医 | 1255 |
| 华佗神方 | 著作 中医 文言文 | 1214 |
| 本草分经 | 著作 中医 文言文 | 1152 |
| 三因极一病证方论 | 著作 中医 文言文 | 1145 |
| 千金翼方 | 著作 中医 文言文 | 1140 |
| 中医内科学 | 中医 | 1139 |
| 外台秘要 | 著作 中医 文言文 | 1092 |
| 医学入门 | 著作 中医 文言文 | 1063 |
| 妇人大全良方 | 著作 中医 文言文 | 1025 |
| 茶饮保健 | 使用手册 中医 | 1011 |
| 是斋百一选方 | 著作 中医 文言文 | 968 |
| 中医词典 | 使用手册 中医 文言文 | 963 |
| 仁术便览 | 著作 中医 文言文 | 908 |
| 新修本草 | 著作 中医 文言文 | 886 |
| 奇方类编 | 著作 中医 文言文 | 837 |
| 医方考 | 著作 中医 文言文 | 836 |
| 太平惠民和剂局方 | 著作 中医 文言文 | 819 |
| 中医食疗学 | 教材 中医 | 805 |
| 中医基础理论 | 指南 中医 | 782 |
| 预防医学 | 教材 西医 | 733 |
| 儒门事亲 | 著作 中医 文言文 | 726 |
| 女科经纶 | 著作 中医 文言文 | 720 |
| 名医别录 | 著作 中医 文言文 | 718 |
| 本草易读 | 著作 中医 文言文 | 712 |
| 针灸治疗学.epub | 教材 中医 | 703 |
| 针灸大成 | 著作 中医 文言文 | 695 |
| 医学纲目 | 著作 中医 文言文 | 689 |
| 药性切用 | 著作 中医 文言文 | 688 |
| 医述 | 著作 中医 医案 文言文 | 683 |
| 本经逢原 | 著作 中医 文言文 | 683 |
| 金匮悬解 | 著作 中医 文言文 | 652 |
| 圆运动的古中医学 | 著作 中医 | 650 |
| 本草从新 | 著作 中医 文言文 | 648 |
| 本草纲目 | 著作 中医 文言文 | 640 |
| 实用免疫细胞与核酸 | 教材 西医 | 622 |
| 家庭医学百科-医疗康复篇 | 使用手册 西医 家庭 | 612 |
| 伤寒悬解 | 著作 中医 文言文 | 612 |
| 得配本草 | 著作 中医 文言文 | 611 |
| 本草撮要 | 著作 中医 文言文 | 603 |
| 人体解剖学 | 教材 西医 | 587 |
| 医学心悟 | 著作 中医 文言文 | 568 |
| 幼幼新书 | 著作 中医 文言文 | 548 |
| 药理学 | 教材 西医 | 543 |
| 生理学 | 教材 西医 | 542 |
| 景岳全书 | 著作 中医 文言文 | 537 |
| 证治准绳·幼科 | 著作 中医 文言文 | 537 |
| 医学衷中参西录 | 著作 中医 医案 | 535 |
| 本草求真 | 著作 中医 文言文 | 533 |
| 饮膳正要 | 著作 中医 文言文 | 512 |
| 中医药膳学 | 著作 中医 | 511 |
| 中医诊断学 | 教材 中医 | 507 |
| 普济方·针灸 | 著作 中医 文言文 | 502 |
| 保健药膳 | 使用手册 中医 | 500 |
| 滇南本草 | 著作 中医 文言文 | 497 |
| 急救广生集 | 著作 中医 文言文 | 484 |
| 传染病 | 教材 西医 | 478 |
| 伤寒杂病论 | 著作 中医 文言文 | 474 |
| 针灸学 | 教材 中医 | 472 |
| 张氏医通 | 著作 中医 文言文 | 468 |
| 竹林女科证治 | 著作 中医 文言文 | 467 |
| 本草经集注 | 著作 中医 文言文 | 464 |
| 医学摘粹 | 著作 中医 文言文 | 463 |
| 生物化学与分子生物学 | 教材 西医 | 461 |
| 外科全生集 | 著作 中医 医案 文言文 | 459 |
| 本草便读 | 著作 中医 文言文 | 458 |
| 本草备要 | 著作 中医 文言文 | 450 |
| 中医疾病预测 | 使用手册 中医 | 448 |
| 明医指掌 | 著作 中医 文言文 | 437 |
| 增广和剂局方药性总论 | 著作 中医 文言文 | 436 |
| 本草蒙筌 | 著作 中医 文言文 | 436 |
| 中国医学通史 | 教材 中医 | 435 |
| 本草衍义 | 著作 中医 文言文 | 428 |
| 针灸神书 | 著作 中医 文言文 | 425 |
| 外科理例 | 著作 中医 文言文 | 420 |
| 目经大成 | 著作 中医 文言文 | 413 |
| 医院药学 | 教材 西医 | 409 |
| 回生集 | 著作 中医 文言文 | 407 |
| 温病学 | 教材 中医 | 401 |
| 急诊医学 | 教材 西医 | 399 |
| 孙文垣医案 | 著作 中医 医案 文言文 | 398 |
| 病理学 | 教材 西医 | 396 |
| 本草乘雅半偈 | 著作 中医 文言文 | 394 |
| 类证治裁 | 著作 中医 医案 文言文 | 392 |
| 神经精神疾病诊断学 | 教材 西医 | 385 |
| 中国幽门螺杆菌研究 | 教材 西医 | 384 |
| 外科心法要诀 | 著作 中医 文言文 | 383 |
| 类经 | 著作 中医 文言文 | 374 |
| 顾松园医镜 | 著作 中医 文言文 | 366 |
| 本草择要纲目 | 著作 中医 文言文 | 366 |
| 神农本草经 | 著作 中医 文言文 | 363 |
| 医方论 | 著作 中医 文言文 | 358 |
| 博济方 | 著作 中医 文言文 | 357 |
| 本草害利 | 著作 中医 文言文 | 356 |
| 中药炮制 | 使用手册 中医 | 353 |
| 明目至宝 | 著作 中医 文言文 | 352 |
| 寿世保元 | 著作 中医 文言文 | 352 |
| 症因脉治 | 著作 中医 文言文 | 350 |
| 实验动物科学 | 教材 西医 | 346 |
| 中医养生学.epub | 教材 中医 | 344 |
| 本草图经 | 著作 中医 文言文 | 338 |
| 临床营养学 | 教材 西医 | 336 |
| 中成药临床应用指南 | 肛肠疾病 指南 中医 | 335 |
| 医学心理学 | 教材 西医 | 334 |
| 外科启玄 | 著作 中医 文言文 | 333 |
| 饮食须知 | 著作 中医 文言文 | 332 |
| 中成药临床应用指南 | 眼科疾病 指南 中医 | 329 |
| 临床生物化学 | 教材 西医 | 327 |
| 医学微生物学 | 教材 西医 | 319 |
| 雷公炮制药性解 | 著作 中医 文言文 | 319 |
| 药笼小品 | 著作 中医 文言文 | 316 |
| 病理生理学 | 教材 西医 | 311 |
| 医学集成 | 著作 中医 文言文 | 310 |
| 济阴纲目 | 著作 中医 文言文 | 307 |
| 女科证治准绳 | 著作 中医 文言文 | 298 |
| 医学免疫学 | 教材 西医 | 298 |
| 常见中老年疾病防治 | 使用手册 中医 中老年人 | 297 |
| 伤寒括要 | 著作 中医 文言文 | 293 |
| 玉楸药解 | 著作 中医 文言文 | 291 |
| 细胞和分子免疫学 | 教材 西医 | 291 |
| 续名医类案 | 著作 中医 医案 文言文 | 290 |
| 中成药临床应用指南 | 肾与膀胱疾病 指南 中医 | 283 |
| 中成药临床应用指南 | 心血管疾病 指南 中医 | 280 |
| 中成药临床应用指南 | 气血津液疾病 指南 中医 | 278 |
| 本草崇原 | 著作 中医 文言文 | 277 |
| 组织学与胚胎学 | 教材 西医 | 275 |
| 儿科萃精 | 著作 中医 文言文 | 274 |
| 中成药临床应用指南 | 呼吸系统疾病 指南 中医 | 272 |
| 金匮翼 | 著作 中医 文言文 | 271 |
| 本草新编 | 著作 中医 文言文 | 271 |
| 汤液本草 | 著作 中医 文言文 | 271 |
| 常用化验值及意义 | 使用手册 西医 | 270 |
| 万氏秘传片玉心书 | 著作 中医 文言文 | 266 |
| 动脉粥样硬化 | 教材 西医 | 264 |
| 雷公炮炙论 | 著作 中医 文言文 | 262 |
| 时病论歌括新编 | 著作 中医 文言文 | 260 |
| 普济本事方 | 著作 中医 文言文 | 260 |
| 医门补要 | 著作 中医 文言文 | 258 |
| 退思集类方歌注 | 著作 中医 文言文 | 255 |
| 卫生宝鉴 | 著作 中医 医案 文言文 | 251 |
| 医用化学 | 教材 西医 | 245 |
| 中成药临床应用指南 | 妇科疾病 指南 中医 | 242 |
| 伤寒六书 | 著作 中医 文言文 | 242 |
| 针灸资生经 | 著作 中医 文言文 | 242 |
| 食物疗法 | 使用手册 中医 | 241 |
| 百病自测 | 使用手册 西医 | 240 |
| 医碥 | 著作 中医 文言文 | 239 |
| 平脉辨证脉学心得 | 著作 中医 | 238 |
| 临证实验录 | 著作 中医 医案 | 238 |
| 西医眼科学 | 教材 西医 | 237 |
| 扁鹊心书 | 著作 中医 文言文 | 235 |
| 苏沈良方 | 著作 中医 文言文 | 235 |
| 自我调养巧治病 | 使用手册 中医 | 233 |
| 思考中医 | 著作 中医 | 230 |
| 外科证治全书 | 著作 中医 文言文 | 228 |
| 免疫学和免疫学检验 | 教材 西医 | 223 |
| 灵素节注类编 | 著作 中医 文言文 | 220 |
| 小儿药证直诀 | 著作 中医 文言文 | 220 |
| 手穴手纹诊治 | 使用手册 中医 | 220 |
| 食疗本草 | 著作 中医 文言文 | 219 |
| 傅青主男科 | 著作 中医 文言文 | 219 |
| 外科传薪集 | 著作 中医 文言文 | 218 |
| 外科大成 | 著作 中医 文言文 | 218 |
| 物理诊断学 | 教材 西医 | 217 |
| 医学实在易 | 著作 中医 文言文 | 216 |
| 松峰说疫 | 著作 中医 文言文 | 216 |
| 保婴撮要 | 著作 中医 医案 文言文 | 216 |
| 吴普本草 | 著作 中医 文言文 | 212 |
| 痰火点雪 | 著作 中医 文言文 | 210 |
| 汤头歌诀 | 著作 中医 文言文 | 209 |
| 妇产科学 | 教材 西医 | 207 |
| 中医饮食营养学 | 教材 中医 | 204 |
| 本草经解 | 著作 中医 文言文 | 204 |
| 幼科心法要诀 | 著作 中医 文言文 | 202 |
| 丹台玉案 | 著作 中医 文言文 | 201 |
| 证治准绳·疡医 | 著作 中医 文言文 | 200 |
| 医法圆通 | 著作 中医 文言文 | 198 |
| 常见病自测 | 使用手册 西医 | 198 |
| 程杏轩医案 | 著作 中医 医案 文言文 | 196 |
| 古今医鉴 | 著作 中医 文言文 | 193 |
| 临床激光治疗学 | 教材 西医 | 192 |
| 外科学总论 | 教材 西医 | 192 |
| 删补名医方论 | 著作 中医 文言文 | 192 |
| 推拿抉微 | 著作 中医 医案 | 192 |
| 中成药临床应用指南 | 糖尿病分册 指南 中医 | 191 |
| 黄帝内经太素 | 著作 中医 文言文 | 189 |
| 刺灸心法要诀 | 著作 中医 文言文 | 189 |
| 妇科心法要诀 | 著作 中医 文言文 | 188 |
| 针灸聚英 | 著作 中医 文言文 | 187 |
| 伤寒寻源 | 著作 中医 文言文 | 186 |
| 幼科推拿秘书 | 著作 中医 文言文 | 184 |
| 石室秘录 | 著作 中医 文言文 | 183 |
| 万病回春 | 著作 中医 文言文 | 177 |
| 中医症状鉴别诊断实用手册.汗症部分 | 著作 中医 | 177 |
| 现代院外急救手册 | 教材 西医 | 177 |
| 丹溪手镜 | 著作 中医 文言文 | 177 |
| 老年百病防治 | 使用手册 中医 老年人 | 176 |
| 证治准绳·杂病 | 著作 中医 文言文 | 175 |
| 幼幼集成 | 著作 中医 文言文 | 174 |
| 临床基础检验学 | 教材 西医 | 172 |
| 中国生物制品规程 | 使用手册 西医 | 172 |
| 女科秘要 | 著作 中医 文言文 | 172 |
| 放射诊断学 | 教材 西医 | 172 |
| 药症忌宜 | 著作 中医 文言文 | 171 |
| 赵绍琴临证验案精选 | 著作 中医 医案 | 170 |
| 药鉴 | 著作 中医 文言文 | 169 |
| 小儿卫生总微论方 | 著作 中医 文言文 | 168 |
| 四圣心源 | 著作 中医 文言文 | 168 |
| 基因诊断与性传播疾病 | 教材 西医 | 167 |
| 类经图翼 | 著作 中医 文言文 | 164 |
| 证治准绳·类方 | 著作 中医 文言文 | 164 |
| 洪氏集验方 | 著作 中医 文言文 | 164 |
| 理疗学 | 教材 西医 | 162 |
| 儿科学 | 教材 西医 | 161 |
| 长沙药解 | 著作 中医 文言文 | 161 |
| 方剂鉴别 | 中医 | 160 |
| 丹溪治法心要 | 著作 中医 医案 文言文 | 159 |
| 证治准绳·女科 | 著作 中医 文言文 | 157 |
| 基础护理学 | 教材 西医 | 155 |
| 卫生易简方 | 著作 中医 文言文 | 154 |
| 痧胀玉衡 | 著作 中医 医案 文言文 | 154 |
| 研经言 | 著作 中医 文言文 | 153 |
| 气功外气疗法 | 使用手册 中医 | 152 |
| 外科正宗 | 著作 中医 文言文 | 152 |
| 重楼玉钥 | 著作 中医 文言文 | 150 |
| 伤寒指掌 | 著作 中医 文言文 | 150 |
| 家庭医学百科·预防保健篇 | 使用手册 西医 家庭 | 149 |
| 傅青主女科 | 著作 中医 文言文 | 148 |
| 重订囊秘喉书 | 著作 中医 文言文 | 145 |
| 中医之钥 | 著作 中医 文言文 | 144 |
| 养生导引秘籍 | 著作 中医 文言文 | 144 |
| 医效秘传 | 著作 中医 文言文 | 143 |
| 针灸甲乙经 | 著作 中医 文言文 | 142 |
| 减肥新法与技巧 | 使用手册 中/西医 肥胖者 | 141 |
| 老年食养食疗 | 使用手册 中医 老年人 | 140 |
| 中西医结合耳鼻喉科 | 教材 中医 | 140 |
| 活幼心书 | 著作 中医 文言文 | 139 |
| 普通外科学 | 教材 西医 | 139 |
| 古今医案按 | 著作 中医 医案 文言文 | 139 |
| 痘疹心法要诀 | 著作 中医 文言文 | 138 |
| 读医随笔 | 著作 中医 文言文 | 137 |
| 正体类要 | 著作 中医 文言文 | 136 |
| 伤寒论 | 著作 中医 文言文 | 136 |
| 医学遗传学基础 | 教材 西医 | 136 |
| 巢氏病源补养宣导法 | 著作 中医 文言文 | 135 |
| 胎产指南 | 著作 中医 文言文 | 135 |
| 素问悬解 | 著作 中医 文言文 | 135 |
| 针灸素难要旨 | 著作 中医 文言文 | 133 |
| 耳鼻咽喉外科学 | 教材 西医 | 132 |
| 幼科释谜 | 著作 中医 文言文 | 132 |
| 辨证录 | 著作 中医 文言文 | 131 |
| 骨科学 | 教材 西医 | 131 |
| 中医养生学 | 教材 中医 | 131 |
| 校注医醇剩义 | 著作 中医 文言文 | 130 |
| 秘传眼科龙木论 | 著作 中医 文言文 | 129 |
| 脉诀汇辨 | 著作 中医 文言文 | 129 |
| 伤寒说意 | 著作 中医 文言文 | 129 |
| 女科切要 | 著作 中医 文言文 | 128 |
| 四圣悬枢 | 著作 中医 文言文 | 128 |
| 时方妙用 | 著作 中医 文言文 | 128 |
| 集验方 | 著作 中医 文言文 | 128 |
| 麻科活人全书 | 著作 中医 文言文 | 126 |
| 增订叶评伤暑全书 | 著作 中医 文言文 | 126 |
| 珍珠囊补遗药性赋 | 著作 中医 文言文 | 124 |
| 经络全书 | 著作 中医 文言文 | 124 |
| 金匮钩玄 | 著作 中医 文言文 | 124 |
| 海药本草 | 著作 中医 文言文 | 123 |
| 口腔科学 | 教材 西医 | 122 |
| 孙真人海上方 | 著作 中医 文言文 | 122 |
| 宁坤秘籍 | 著作 中医 文言文 | 120 |
| 时病论 | 著作 中医 文言文 | 120 |
| 金针秘传 | 著作 中医 文言文 | 119 |
| 专治麻痧初编 | 著作 中医 文言文 | 118 |
| 三十年临证经验集 | 著作 中医 医案 | 117 |
| 友渔斋医话 | 著作 中医 文言文 | 117 |
| 凌临灵方 | 著作 中医 医案 文言文 | 117 |
| 经验麻科 | 著作 中医 文言文 | 116 |
| 瘴疟指南 | 著作 中医 文言文 | 116 |
| 本草思辨录 | 著作 中医 文言文 | 115 |
| 中医刺灸 | 使用手册 中医 | 114 |
| 疡科心得集 | 著作 中医 文言文 | 114 |
| 救伤秘旨 | 著作 中医 文言文 | 113 |
| 侣山堂类辩 | 著作 中医 文言文 | 113 |
| 疫疹一得 | 著作 中医 文言文 | 113 |
| 中医伤科按摩学 | 教材 中医 | 112 |
| 中医儿科学 | 教材 中医 | 112 |
| 厘正按摩要术 | 著作 中医 文言文 | 112 |
| 毓麟验方 | 著作 中医 文言文 | 111 |
| 时方歌括 | 著作 中医 文言文 | 111 |
| 中藏经 | 著作 中医 文言文 | 111 |
| 胎产心法 | 著作 中医 文言文 | 110 |
| 丹溪心法 | 著作 中医 文言文 | 110 |
| 医林改错 | 著作 中医 文言文 | 110 |
| 中医外科学 | 教材 中医 | 109 |
| 免疫与健康 | 教材 西医 | 108 |
| 仁斋直指方论(附补遗) | 著作 中医 文言文 | 108 |
| 人体寄生虫学 | 教材 西医 | 107 |
| 吴医汇讲 | 著作 中医 文言文 | 107 |
| 家庭医学百科-自救互救篇 | 使用手册 西医 家庭 | 106 |
| 古今名医汇粹 | 著作 中医 医案 文言文 | 105 |
| 万氏秘传外科心法 | 著作 中医 文言文 | 105 |
| 中医眼科学 | 教材 中医 | 105 |
| 中医妇科学 | 教材 中医 | 104 |
| 婴童百问 | 著作 中医 文言文 | 103 |
| 脾胃论 | 著作 中医 文言文 | 103 |
| 邹孟城三十年临证经验集 | 著作 中医 医案 | 103 |
| 医学统计学 | 教材 西医 | 102 |
| 妇人规 | 著作 中医 文言文 | 102 |
| 医学传心录 | 著作 中医 文言文 | 102 |
| 医学源流论 | 著作 中医 文言文 | 101 |
| 眼科心法要诀 | 著作 中医 文言文 | 101 |
| 望诊遵经 | 著作 中医 文言文 | 101 |
| 针灸大全 | 著作 中医 文言文 | 101 |
| 脉经 | 著作 中医 文言文 | 101 |
| 广瘟疫论 | 著作 中医 文言文 | 100 |
| 伤寒百证歌 | 著作 中医 文言文 | 100 |
| 异授眼科 | 著作 中医 文言文 | 100 |
| 一得集 | 著作 中医 医案 文言文 | 100 |
| 伤寒心法要诀 | 著作 中医 文言文 | 99 |
| 女科百问 | 著作 中医 文言文 | 99 |
| 银海精微 | 著作 中医 文言文 | 99 |
| 扁鹊神应针灸玉龙经 | 著作 中医 文言文 | 98 |
| 子午流注说难 | 著作 中医 文言文 | 98 |
| 女科精要 | 著作 中医 文言文 | 98 |
| 伤寒捷诀 | 著作 中医 文言文 | 97 |
| 审视瑶函 | 著作 中医 文言文 | 97 |
| 经方实验录 | 著作 中医 医案 文言文 | 97 |
| 盘珠集胎产症治 | 著作 中医 文言文 | 96 |
| 秘传证治要诀及类方 | 著作 中医 文言文 | 96 |
| 喉舌备要秘旨 | 著作 中医 文言文 | 96 |
| 此事难知 | 著作 中医 文言文 | 96 |
| 胃肠动力检查手册 | 教材 西医 | 95 |
| 神农本草经百种录 | 著作 中医 文言文 | 95 |
| 幼科铁镜 | 著作 中医 文言文 | 95 |
| 心脏病学 | 教材 西医 | 94 |
| 虚损启微 | 著作 中医 文言文 | 93 |
| 周慎斋遗书 | 著作 中医 文言文 | 93 |
| 杂病心法要诀 | 著作 中医 文言文 | 92 |
| 医旨绪余 | 著作 中医 文言文 | 92 |
| 医学从众录 | 著作 中医 文言文 | 92 |
| 张聿青医案 | 著作 中医 医案 文言文 | 91 |
| 伤寒九十论 | 著作 中医 文言文 | 90 |
| 外科十三方考 | 著作 中医 文言文 | 89 |
| 喉科指掌 | 著作 中医 文言文 | 88 |
| 杂病广要 | 著作 中医 文言文 | 88 |
| 小品方 | 著作 中医 文言文 | 88 |
| 温疫论 | 著作 中医 文言文 | 87 |
| 回春录 | 著作 中医 文言文 | 87 |
| 灸法秘传 | 著作 中医 文言文 | 86 |
| 医学影像学 | 教材 西医 | 86 |
| 温病条辨 | 著作 中医 文言文 | 86 |
| 医学读书记 | 著作 中医 文言文 | 85 |
| 伤寒大白 | 著作 中医 文言文 | 84 |
| 古今医彻 | 著作 中医 医案 文言文 | 84 |
| 黄帝内经·素问 | 著作 中医 文言文 | 83 |
| 药征续编 | 著作 中医 文言文 | 83 |
| 达摩洗髓易筋经 | 著作 中医 文言文 | 83 |
| 证治汇补 | 著作 中医 文言文 | 83 |
| 灵枢悬解 | 著作 中医 文言文 | 83 |
| 难经悬解 | 著作 中医 文言文 | 83 |
| 伤寒贯珠集 | 著作 中医 文言文 | 83 |
| 胎产秘书 | 著作 中医 文言文 | 83 |
| 叶选医衡 | 著作 中医 文言文 | 83 |
| 血证论 | 著作 中医 文言文 | 82 |
| 难经 | 著作 中医 文言文 | 82 |
| 外经微言 | 著作 中医 文言文 | 82 |
| 脉因证治 | 著作 中医 文言文 | 82 |
| 名师垂教 | 著作 中医 医案 | 82 |
| 黄帝内经·灵枢 | 著作 中医 文言文 | 82 |
| 竹泉生女科集要 | 著作 中医 文言文 | 82 |
| 医学三字经 | 著作 中医 文言文 | 81 |
| 沈氏女科辑要 | 著作 中医 文言文 | 81 |
| 评注产科心法 | 著作 中医 文言文 | 81 |
| 内经评文 | 著作 中医 文言文 | 81 |
| 女科折衷纂要 | 著作 中医 文言文 | 81 |
| 中国医籍考 | 著作 中医 文言文 | 80 |
| 温病正宗 | 著作 中医 文言文 | 80 |
| 女科秘旨 | 著作 中医 文言文 | 80 |
| 人体解剖学歌诀 | 使用手册 西医 | 80 |
| 冷庐医话 | 著作 中医 文言文 | 80 |
| 脉诀乳海 | 著作 中医 文言文 | 79 |
| 丁甘仁医案 | 著作 中医 医案 文言文 | 78 |
| 伤寒总病论 | 著作 中医 文言文 | 78 |
| 三指禅 | 著作 中医 文言文 | 78 |
| 医学启源 | 著作 中医 文言文 | 78 |
| 核、化学武器损伤 | 教材 西医 | 77 |
| 明医杂着 | 著作 中医 文言文 | 77 |
| 诊家正眼 | 著作 中医 文言文 | 77 |
| 临证指南医案 | 著作 中医 医案 文言文 | 76 |
| 慈幼便览 | 著作 中医 文言文 | 75 |
| 医学正传 | 著作 中医 文言文 | 75 |
| 察病指南 | 著作 中医 文言文 | 75 |
| 达生编 | 著作 中医 文言文 | 75 |
| 医经国小 | 著作 中医 文言文 | 74 |
| 医理真传 | 著作 中医 文言文 | 74 |
| 肘后备急方 | 著作 中医 文言文 | 74 |
| 神经病学 | 教材 西医 | 73 |
| 正骨心法要旨 | 著作 中医 文言文 | 73 |
| 古代房中秘方 | 著作 中医 医案 文言文 | 73 |
| 温热暑疫全书 | 著作 中医 文言文 | 72 |
| 婴童类萃 | 著作 中医 文言文 | 72 |
| 内外伤辨 | 著作 中医 文言文 | 71 |
| 流行病学 | 教材 西医 | 69 |
| 幼科折衷 | 著作 中医 文言文 | 69 |
| 也是山人医案 | 著作 中医 医案 文言文 | 69 |
| 小儿推拿广意 | 著作 中医 文言文 | 68 |
| 寿世青编 | 著作 中医 文言文 | 68 |
| 仲景伤寒补亡论 | 著作 中医 文言文 | 68 |
| 经穴汇解 | 著作 中医 文言文 | 68 |
| 伤科汇纂 | 著作 中医 文言文 | 67 |
| 临床肝移植 | 教材 西医 | 66 |
| 余无言医案 | 著作 中医 医案 文言文 | 66 |
| 脉诀刊误 | 著作 中医 文言文 | 66 |
| 家庭医学百科-家庭护理篇 | 使用手册 西医 家庭 | 66 |
| 解围元薮 | 著作 中医 文言文 | 66 |
| 寓意草 | 著作 中医 医案 文言文 | 66 |
| 范中林六经辨证医案 | 著作 中医 医案 文言文 | 65 |
| 育婴家秘 | 著作 中医 文言文 | 64 |
| 皮肤性病学 | 教材 西医 | 64 |
| 黄帝明堂灸经 | 著作 中医 文言文 | 64 |
| 内经博议 | 著作 中医 文言文 | 63 |
| 医门法律 | 著作 中医 文言文 | 63 |
| 仙传外科集验方 | 著作 中医 文言文 | 63 |
| 女科指掌 | 著作 中医 文言文 | 62 |
| 医学妙谛 | 著作 中医 文言文 | 62 |
| 幼科发挥 | 著作 中医 文言文 | 62 |
| 伤寒明理论 | 著作 中医 文言文 | 62 |
| 眼科阐微 | 著作 中医 文言文 | 62 |
| 外科枢要 | 著作 中医 医案 文言文 | 61 |
| 经络考 | 著作 中医 文言文 | 61 |
| 食疗方 | 著作 中医 文言文 | 61 |
| 外科精要 | 著作 中医 文言文 | 60 |
| 济生集 | 著作 中医 文言文 | 59 |
| 妇科秘书 | 著作 中医 文言文 | 58 |
| 针灸易学 | 著作 中医 文言文 | 58 |
| 杂病治例 | 著作 中医 文言文 | 57 |
| 基因与疾病 | 教材 西医 | 55 |
| 评琴书屋医略 | 著作 中医 文言文 | 55 |
| 形色外诊简摩 | 著作 中医 文言文 | 55 |
| 保幼新编 | 著作 中医 文言文 | 55 |
| 景景医话 | 著作 中医 文言文 | 55 |
| 洗冤集录 | 著作 中医 文言文 | 55 |
| 银海指南 | 著作 中医 医案 文言文 | 54 |
| 史载之方 | 著作 中医 文言文 | 54 |
| 趣味中医 | 使用手册 中医 | 53 |
| 经验丹方汇编 | 著作 中医 文言文 | 53 |
| 医学见能 | 著作 中医 文言文 | 53 |
| 康复医学 | 教材 西医 | 52 |
| 小儿常见病单验方 | 使用手册 中医 | 52 |
| 外科十法 | 著作 中医 文言文 | 52 |
| 女科旨要 | 著作 中医 文言文 | 52 |
| 外科选要 | 著作 中医 文言文 | 52 |
| 疡科纲要 | 著作 中医 文言文 | 51 |
| 笔花医镜 | 著作 中医 文言文 | 51 |
| 病历书写规范 | 教材 西医 | 51 |
| 药征 | 著作 中医 文言文 | 50 |
| 止园医话 | 著作 中医 医案 文言文 | 50 |
| 泌尿外科学 | 教材 西医 | 50 |
| 临症验舌法 | 著作 中医 文言文 | 50 |
| 伤寒恒论 | 著作 中医 文言文 | 49 |
| 推求师意 | 著作 中医 文言文 | 49 |
| 脉理求真 | 著作 中医 文言文 | 49 |
| 中药法规 | 使用手册 中医 | 48 |
| 伤寒直格 | 著作 中医 文言文 | 48 |
| 理虚元鉴 | 著作 中医 文言文 | 48 |
| 原机启微 | 著作 中医 文言文 | 48 |
| 产鉴 | 著作 中医 文言文 | 47 |
| 质疑录 | 著作 中医 文言文 | 47 |
| 阴证略例 | 著作 中医 文言文 | 46 |
| 神应经 | 著作 中医 文言文 | 46 |
| 脉症治方 | 著作 中医 医案 文言文 | 45 |
| 养生秘旨 | 著作 中医 文言文 | 45 |
| 卫生家宝产科备要 | 著作 中医 文言文 | 45 |
| 慎柔五书 | 著作 中医 医案 文言文 | 44 |
| 吴鞠通医案 | 著作 中医 医案 文言文 | 43 |
| 幼科切要 | 著作 中医 文言文 | 43 |
| 地震灾后常见病多发病中医药治疗手册 | 使用手册 中医 地震灾后人群 | 43 |
| 马培之医案 | 著作 中医 文言文 | 43 |
| 敖氏伤寒金镜录 | 著作 中医 文言文 | 42 |
| 格致余论 | 著作 中医 文言文 | 42 |
| 伤寒标本心法类萃 | 著作 中医 文言文 | 42 |
| 女丹合编选注 | 著作 中医 文言文 | 42 |
| 医贯 | 著作 中医 文言文 | 42 |
| 儿科要略 | 著作 中医 文言文 | 41 |
| 重订广温热论 | 著作 中医 医案 | 41 |
| 寿世传真 | 著作 中医 文言文 | 41 |
| 胸外科学 | 教材 西医 | 40 |
| 医宗己任编 | 著作 中医 医案 文言文 | 40 |
| 医经原旨 | 著作 中医 文言文 | 40 |
| 文堂集验方 | 著作 中医 文言文 | 40 |
| 急救良方 | 著作 中医 文言文 | 39 |
| 消化病学 | 教材 西医 | 39 |
| 养生导引法 | 著作 中医 文言文 | 39 |
| 外科精义 | 著作 中医 文言文 | 38 |
| 仿寓意草 | 著作 中医 医案 文言文 | 38 |
| 宜麟策 | 著作 中医 文言文 | 37 |
| 宋本备急灸法 | 著作 中医 文言文 | 37 |
| 呼吸病学 | 教材 西医 | 37 |
| 医学真传 | 著作 中医 文言文 | 37 |
| 喉科秘诀 | 著作 中医 文言文 | 37 |
| 伤科补要 | 著作 中医 文言文 | 37 |
| 麻疹阐注 | 著作 中医 文言文 | 37 |
| 经络汇编 | 著作 中医 文言文 | 36 |
| 养老奉亲书 | 著作 中医 文言文 | 36 |
| 手掌与疾病 | 使用手册 中医 | 36 |
| 重订灵兰要览 | 著作 中医 文言文 | 36 |
| 麻疹备要方论 | 著作 中医 文言文 | 34 |
| 医学传灯 | 著作 中医 文言文 | 34 |
| 诊脉三十二辨 | 著作 中医 文言文 | 34 |
| 韩氏医通 | 著作 中医 医案 文言文 | 32 |
| 慈幼新书 | 著作 中医 文言文 | 32 |
| 内府秘传经验女科 | 著作 中医 文言文 | 32 |
| 针灸问对 | 著作 中医 文言文 | 32 |
| 中西汇通医经精义 | 著作 中医 文言文 | 31 |
| 立斋外科发挥 | 著作 中医 文言文 | 31 |
| 女科撮要 | 著作 中医 文言文 | 31 |
| 幼科概论 | 著作 中医 文言文 | 30 |
| 原要论 | 著作 中医 文言文 | 30 |
| 重楼玉钥续编 | 著作 中医 文言文 | 30 |
| 订正仲景全书金匮要略注 | 著作 中医 文言文 | 30 |
| 运气要诀 | 著作 中医 文言文 | 30 |
| 幼科指南 | 著作 中医 文言文 | 29 |
| 叶天士医案精华 | 著作 中医 医案 文言文 | 29 |
| 眼科秘诀 | 著作 中医 文言文 | 29 |
| 素灵微蕴 | 著作 中医 医案 文言文 | 29 |
| 金匮要略浅注 | 著作 中医 文言文 | 29 |
| 子午流注针经 | 著作 中医 文言文 | 29 |
| 全生指迷方 | 著作 中医 文言文 | 28 |
| 金匮玉函要略辑义 | 著作 中医 文言文 | 28 |
| 温热经纬 | 著作 中医 文言文 | 28 |
| 濒湖脉学 | 著作 中医 文言文 | 28 |
| 金匮玉函经二注 | 著作 中医 文言文 | 27 |
| 尤氏喉症指南 | 著作 中医 文言文 | 27 |
| 何氏虚劳心传 | 著作 中医 医案 文言文 | 27 |
| 医学指归 | 著作 中医 文言文 | 27 |
| 丹医秘授古脉法 | 著作 中医 文言文 | 26 |
| 白喉全生集 | 著作 中医 文言文 | 26 |
| 六因条辨 | 著作 中医 文言文 | 26 |
| 邵兰荪医案 | 著作 中医 医案 文言文 | 26 |
| 金匮要略方论 | 著作 中医 文言文 | 26 |
| 金匮玉函要略述义 | 著作 中医 文言文 | 26 |
| 王旭高临证医案 | 著作 中医 医案 文言文 | 26 |
| 精神药品临床应用指导原则 | 使用手册 西医 精神类疾病患者 | 26 |
| 医原 | 著作 中医 文言文 | 26 |
| 曹仁伯医案论 | 著作 中医 文言文 | 25 |
| 温热逢源 | 著作 中医 文言文 | 25 |
| 千金宝要 | 著作 中医 文言文 | 24 |
| 血液病学 | 教材 西医 | 24 |
| 金匮要略心典 | 著作 中医 文言文 | 24 |
| 类证活人书 | 著作 中医 文言文 | 24 |
| 陈氏幼科秘诀 | 著作 中医 文言文 | 24 |
| 产宝 | 著作 中医 文言文 | 23 |
| 伤寒发微论 | 著作 中医 文言文 | 23 |
| 内科摘要 | 著作 中医 医案 文言文 | 23 |
| 外科方外奇方 | 著作 中医 文言文 | 23 |
| 诊宗三昧 | 著作 中医 文言文 | 23 |
| 疯门全书 | 著作 中医 文言文 | 22 |
| 妇科玉尺 | 著作 中医 文言文 | 22 |
| 高注金匮要略 | 著作 中医 文言文 | 22 |
| 增订十药神书 | 著作 中医 文言文 | 22 |
| 医经溯洄集 | 著作 中医 文言文 | 21 |
| 青囊秘诀 | 著作 中医 文言文 | 21 |
| 医医小草 | 著作 中医 文言文 | 21 |
| 中药基本理论知识 | 教材 中医 | 20 |
| 随息居重订霍乱论 | 著作 中医 文言文 | 20 |
| 中风论 | 著作 中医 医案 文言文 | 20 |
| 知医必辨 | 著作 中医 文言文 | 20 |
| 脉确 | 著作 中医 文言文 | 20 |
| 幼科种痘心法要旨 | 著作 中医 文言文 | 19 |
| 肾脏病学 | 教材 西医 | 19 |
| 虚损病类钩沉 | 著作 中医 文言文 | 19 |
| 慎疾刍言 | 著作 中医 文言文 | 19 |
| 市隐庐医学杂着 | 著作 中医 文言文 | 18 |
| 奇经八脉考 | 著作 中医 文言文 | 18 |
| 跌打损伤回生集 | 著作 中医 文言文 | 18 |
| 内分泌学 | 教材 西医 | 18 |
| 温热论 | 著作 中医 文言文 | 18 |
| 针经指南 | 著作 中医 文言文 | 17 |
| 白喉条辨 | 著作 中医 文言文 | 17 |
| 急救便方 | 著作 中医 文言文 | 17 |
| 伤寒补例 | 著作 中医 文言文 | 17 |
| 女科要旨 | 著作 中医 文言文 | 17 |
| 广嗣要语 | 著作 中医 文言文 | 17 |
| 先哲医话 | 著作 中医 医案 文言文 | 17 |
| 服食导饵 | 著作 中医 文言文 | 17 |
| 家传女科经验摘奇 | 著作 中医 文言文 | 17 |
| 心医集 | 著作 中医 文言文 | 16 |
| 老年学 | 教材 西医 | 16 |
| 集验背疽方 | 著作 中医 文言文 | 16 |
| 察舌辨症新法 | 著作 中医 文言文 | 15 |
| 跌损妙方 | 著作 中医 文言文 | 15 |
| 一草亭目科全书 | 著作 中医 文言文 | 15 |
| 河间伤寒心要 | 著作 中医 文言文 | 15 |
| 外科集验方 | 著作 中医 文言文 | 15 |
| 塘医话 | 著作 中医 文言文 | 15 |
| 儿科醒 | 著作 中医 文言文 | 14 |
| 伤寒法祖 | 著作 中医 文言文 | 14 |
| 对山医话 | 著作 中医 文言文 | 14 |
| 女科指要 | 著作 中医 文言文 | 14 |
| 辅行诀脏腑用药法要 | 著作 中医 文言文 | 14 |
| 证治心传 | 著作 中医 文言文 | 14 |
| 褚氏遗书 | 著作 中医 文言文 | 13 |
| 何澹安医案 | 著作 中医 医案 文言文 | 13 |
| 章次公医案》中附子的应用 | 著作 中医 文言文 | 13 |
| 王氏医案绎注 | 著作 中医 医案 文言文 | 13 |
| 口齿类要 | 著作 中医 文言文 | 13 |
| 诊家枢要 | 著作 中医 文言文 | 13 |
| 张氏妇科 | 著作 中医 文言文 | 12 |
| 伤科大成 | 著作 中医 文言文 | 12 |
| 炙膏肓腧穴法 | 著作 中医 文言文 | 12 |
| 刘河间伤寒医鉴 | 著作 中医 文言文 | 12 |
| 妇科秘方 | 著作 中医 文言文 | 12 |
| 陆地仙经 | 著作 中医 文言文 | 12 |
| 本草问答 | 著作 中医 文言文 | 11 |
| 眉寿堂方案选存 | 著作 中医 医案 文言文 | 11 |
| 温病指南 | 著作 中医 文言文 | 11 |
| 焦氏喉科枕秘 | 著作 中医 文言文 | 11 |
| 诸脉主病诗 | 著作 中医 文言文 | 11 |
| 肯堂医论 | 著作 中医 医案 文言文 | 11 |
| 金疮跌打接骨药性秘书 | 著作 中医 文言文 | 10 |
| 伤寒舌鉴 | 著作 中医 文言文 | 10 |
| 痰疠法门 | 著作 中医 文言文 | 10 |
| 痧疹辑要 | 著作 中医 文言文 | 10 |
| 中华人民共和国药品管理法》释义 | 使用手册 | 10 |
| 中华人民共和国药品管理法 | 使用手册 | 10 |
| 何世英医案 | 著作 中医 医案 文言文 | 10 |
| 性命要旨 | 著作 中医 文言文 | 10 |
| 重庆堂随笔 | 著作 中医 文言文 | 9 |
| 内经知要 | 著作 中医 文言文 | 9 |
| 中医体质 | 著作 中医 | 9 |
| 婴儿论 | 著作 中医 文言文 | 9 |
| 疠疡机要 | 著作 中医 文言文 | 9 |
| 颅囟经 | 著作 中医 文言文 | 8 |
| 钱氏秘传产科方书名试验录 | 著作 中医 文言文 | 8 |
| 邯郸遗稿 | 著作 中医 文言文 | 8 |
| 穴道秘书 | 著作 中医 文言文 | 8 |
| 存存斋医话稿 | 著作 中医 医案 文言文 | 8 |
| 集思医案 | 著作 中医 医案 文言文 | 7 |
| 尤氏喉科秘书 | 著作 中医 文言文 | 7 |
| 马王堆简帛 | 著作 中医 文言文 | 7 |
| 伤寒附翼 | 著作 中医 文言文 | 7 |
| 医暇卮言 | 著作 中医 文言文 | 7 |
| 三家医案合刻 | 著作 中医 医案 文言文 | 7 |
| 刘涓子鬼遗方 | 著作 中医 文言文 | 7 |
| 跌打秘方 | 著作 中医 文言文 | 7 |
| 伤寒医诀串解 | 著作 中医 文言文 | 6 |
| 少林真传伤科秘方 | 著作 中医 文言文 | 6 |
| 归砚录 | 著作 中医 医案 文言文 | 6 |
| 跌打损伤方 | 著作 中医 文言文 | 6 |
| 三消论 | 著作 中医 文言文 | 6 |
| 伤科方书 | 著作 中医 文言文 | 6 |
| 包氏喉证家宝 | 著作 中医 文言文 | 5 |
| 发背对口治诀论 | 著作 中医 文言文 | 5 |
| 丛桂草堂医案 | 著作 中医 医案 文言文 | 5 |
| 外科医镜 | 著作 中医 文言文 | 5 |
| 千金食治 | 著作 中医 文言文 | 5 |
| 旧德堂医案 | 著作 中医 医案 文言文 | 5 |
| 修昆仑证验 | 著作 中医 文言文 | 4 |
| 妇科问答 | 著作 中医 文言文 | 4 |
| 奇症汇 | 著作 中医 医案 文言文 | 4 |
| 小儿痘疹方论 | 著作 中医 文言文 | 4 |
| 医医医 | 著作 中医 文言文 | 4 |
| 客尘医话 | 著作 中医 文言文 | 4 |
| 风湿病学 | 教材 西医 | 4 |
| 金疮秘传禁方 | 著作 中医 文言文 | 3 |
| 徐批叶天士晚年方案真本 | 著作 中医 医案 文言文 | 3 |
| 脉象统类 | 著作 中医 文言文 | 3 |
| 上池杂说 | 著作 中医 文言文 | 2 |
| 柳洲医话 | 著作 中医 文言文 | 2 |
| 仙授理伤续断秘方 | 著作 中医 文言文 | 2 |
| 食鉴本草 | 著作 中医 文言文 | 2 |
| 张畹香医案 | 著作 中医 医案 文言文 | 2 |
| 鬻婴提要说 | 著作 中医 文言文 | 1 |
| 花韵楼医案 | 著作 中医 医案 文言文 | 1 |
### Appendix 2: Prompt Template
```
模块用途:用于处理医疗文本记录,输入为碎片化的医疗文本记录,输出为通顺自然语言的医疗文本记录。
模块供能:
{
使用思维链严格遵循以下 6 个步骤及其子规则,对输入的医疗记录进行重构,但无需返回任何处理流程与处理结果,仅将处理后的医疗文本记录作为唯一输出:
1. 脱敏处理:对包含个人信息的片段进行严格脱敏,执行以下操作:
[
i. 若出现患者及其家属姓名,则以'患者'或'患者家属'指代,适用范围包括但不限于常见、不常见姓氏开头,或复姓开头的二字、三字、四字姓名。
ii. 若出现具体医院名称,统一用'就诊医院'指代。
iii. 若存在患者及其家属的联系方式、家庭住址等敏感信息,进行彻底移除。
iv. 身高、体重、籍贯及病症等患者的常规信息不需脱敏。
]
2. 敏感信息复查:再次核查,确保不存在姓名、昵称、联系方式、家庭住址、具体医院名称等敏感信息,若发现,立即删除。
3. 标签清理:记录中的信息以'标签:信息'的形式成对出现,使用中文冒号分隔,标签包括但不限于'主诉'、'方证'、'门诊记录',而信息则对应特定名词或自然语言描述。执行以下操作:
[
i. 删除无任何有效信息的空标签。
ii. 去除中文冒号,并使用适当连词将标签与其信息内容自然结合成连贯表述。
iii. 避免因删除关键信息导致记录残缺或遗漏。
]
4. 日期格式规范化:将出现的所有日期,统一为'YYYY年MM月dd日'格式。
5. 符号与空白字符清理:删除所有多余的空白字符与重复符号,以提升整体可读性。
6. 碎片信息整合:将碎片化的信息整合为更为连贯且自然的句子,必要时可调整片段顺序以增强逻辑性与因果关系。但须避免主观篡改原意,无需将口语化词汇转述为专业术语。
}
``` |
xunfeia/wenyanwen | xunfeia | 2025-02-09T19:15:33Z | 17 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-09T19:14:39Z | 0 | ---
license: apache-2.0
---
|
ethanCSL/groot_10 | ethanCSL | 2025-04-28T11:54:24Z | 29 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-04-28T11:54:13Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "koch",
"total_episodes": 10,
"total_frames": 1722,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
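A minimal sketch of loading this dataset with `datasets`, assuming the default config above resolves the parquet chunks:
```python
from datasets import load_dataset

# loads all episode parquet files via the card's default config (data/*/*.parquet)
ds = load_dataset("ethanCSL/groot_10", split="train")
print(ds[0]["action"])  # 6 floats: shoulder pan/lift, elbow, wrist flex/roll, gripper
```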
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
svjack/InfiniteYou_PosterCraft_Wang_Leehom_Poster_FP8 | svjack | 2025-06-15T16:36:33Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-15T16:22:24Z | 0 | ---
dataset_info:
features:
- name: product_category
dtype: string
- name: poster_prompt
dtype: string
- name: final_prompt
dtype: string
- name: Wang_Leehom_poster_image
dtype: image
splits:
- name: train
num_bytes: 61472594.0
num_examples: 50
download_size: 61458026
dataset_size: 61472594.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
- Reference image

- Target image

|
nhagar/c4_urls_realnewslike | nhagar | 2025-05-04T16:12:23Z | 83 | 0 | [
"task_categories:text-generation",
"license:odc-by",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2025-02-27T18:58:31Z | 0 | ---
dataset_info:
features:
- name: url
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 203909541
num_examples: 1799838
download_size: 134835996
dataset_size: 203909541
configs:
- config_name: default
data_files:
- split: train
path: batch*/train-*
license: odc-by
task_categories:
- text-generation
size_categories:
- 10B<n<100B
---
# Dataset Card for c4_urls_realnewslike
This dataset provides the URLs and top-level domains associated with training records in [allenai/c4](https://huggingface.co/datasets/allenai/c4) (realnewslike variant). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible.
## Dataset Details
### Dataset Description
This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy).
- **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy)
- **License:** Same as source dataset
### Dataset Sources
- **Repository:** [allenai/c4](https://huggingface.co/datasets/allenai/c4)
## Uses
This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data.
### Direct Use
The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve:
- Identifying the most-used websites
- Categorizing URLs to understand domain- or topic-level dataset composition
- Comparing URLs across datasets
- Digging into inclusion/exclusion patterns for a particular website
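As a concrete example, a minimal sketch of the first use case above (identifying the most-used websites), assuming the default configuration loads cleanly:
```python
from datasets import load_dataset

# count the most common top-level domains across the URL records
ds = load_dataset("nhagar/c4_urls_realnewslike", split="train")
print(ds.to_pandas()["domain"].value_counts().head(20))
```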
### Out-of-Scope Use
This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset.
## Dataset Structure
This dataset contains every record with a URL from the source dataset. It contains two columns:
- `url`: The raw URL associated with each record
- `domain`: The top-level domain for each URL, extracted with `tldextract`
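For illustration, a minimal sketch of how such a `domain` value can be derived with `tldextract` (the dataset's own pipeline may differ in detail):
```python
import tldextract

# derive the registered domain from a URL, e.g. "example.co.uk"
ext = tldextract.extract("https://www.example.co.uk/articles/story.html")
domain = ".".join(part for part in (ext.domain, ext.suffix) if part)
print(domain)
```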
## Citation
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed] |
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_1_for_gen_1_v2 | HungVu2003 | 2025-05-03T21:52:27Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T21:52:25Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6330226
num_examples: 13750
download_size: 3236670
dataset_size: 6330226
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ryusangwon/nq_colbert_top5_atom | ryusangwon | 2024-12-19T12:48:13Z | 23 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-18T00:18:33Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: top1_answerable
dtype: bool
- name: top5_answerable
dtype: bool
- name: top5
list:
- name: atom_student_10
dtype: string
- name: atom_student_20
dtype: string
- name: atom_student_t5
dtype: string
- name: atom_student_t5_proposition
dtype: string
- name: atom_teacher
dtype: string
- name: colbertscore
dtype: float64
- name: contents
dtype: string
- name: docID
dtype: string
- name: has_answer
dtype: bool
- name: has_answer_contents
dtype: bool
- name: has_answer_student_10
dtype: bool
- name: has_answer_student_20
dtype: bool
- name: has_answer_teacher
dtype: bool
- name: rank
dtype: string
splits:
- name: nq
num_bytes: 43097296
num_examples: 3610
download_size: 20133851
dataset_size: 43097296
configs:
- config_name: default
data_files:
- split: nq
path: data/nq-*
---
|
infinite-dataset-hub/ShardDistributions | infinite-dataset-hub | 2024-12-07T19:06:31Z | 8 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2024-12-07T19:06:30Z | 0 | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# ShardDistributions
tags: data_science, sharding_optimization, data_volume
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'ShardDistributions' dataset provides a snapshot of various shard distributions across multiple databases, highlighting the challenges and strategies in sharding for optimization of data volume management. Each entry captures the schema complexity, data volume, shard key selection criteria, and performance metrics that inform the sharding optimization process. This dataset can be instrumental for data scientists and engineers working on database sharding strategies to ensure scalability and efficient data access.
**CSV Content Preview:**
```csv
database_id, schema_complexity, data_volume, shard_key, performance_metric, label
DB01, high, 10TB, user_id, 99.5%, optimal
DB02, medium, 2TB, timestamp, 95.0%, good
DB03, low, 1TB, user_id_and_timestamp, 98.0%, optimal
DB04, high, 15TB, product_id, 94.5%, needs_review
DB05, medium, 3TB, location, 97.0%, good
```
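As a quick sanity check, the preview above can be loaded with pandas (the local filename here is an assumption):
```python
import pandas as pd

# read the CSV preview, saved locally as shard_distributions.csv (assumed name)
df = pd.read_csv("shard_distributions.csv", skipinitialspace=True)
print(df["label"].value_counts())
```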
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'Database Sharding':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=Database+Sharding&dataset=ShardDistributions&tags=data_science,+sharding_optimization,+data_volume
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
Jahirrrr/corpus-indo | Jahirrrr | 2025-01-28T11:15:31Z | 20 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-01-28T11:14:42Z | 0 | ---
license: apache-2.0
---
|
abandhu/finetuning_demo22 | abandhu | 2025-01-08T09:07:33Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-08T09:07:32Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 34344
num_examples: 101
download_size: 8061
dataset_size: 34344
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yarml/test-articles | yarml | 2025-05-23T17:13:29Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T17:06:30Z | 0 | ---
dataset_info:
features:
- name: titles
dtype: string
- name: content
dtype: string
- name: images
dtype: string
splits:
- name: train
num_bytes: 19428
num_examples: 6
download_size: 0
dataset_size: 19428
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test-articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Trelis/MATH-prealgebra-4rows-synthetic | Trelis | 2024-10-11T14:47:48Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-11T12:09:12Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5893
num_examples: 2
download_size: 22280
dataset_size: 5893
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Svngoku/african-historical-films | Svngoku | 2025-05-31T19:48:37Z | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-31T19:48:31Z | 0 | ---
dataset_info:
features:
- name: type
dtype: string
- name: title
dtype: string
- name: director
dtype: string
- name: country
dtype: string
- name: year
dtype: string
- name: subject
dtype: string
- name: significance
dtype: string
- name: characters
sequence: string
splits:
- name: train
num_bytes: 6456
num_examples: 15
download_size: 8526
dataset_size: 6456
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kotlarmilos/dotnet-runtime | kotlarmilos | 2025-05-23T13:30:39Z | 0 | 1 | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"annotations_creators:machine-generated",
"annotations_creators:human-verified",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"raw-json",
"parquet",
"faiss-index",
"text",
"large-scale",
"offline-processing",
"github",
"code",
"datasets"
] | [
"text-classification",
"text-retrieval"
] | 2025-05-23T08:22:25Z | 0 | ---
pretty_name: ".NET Runtime"
tags:
- raw-json
- parquet
- faiss-index
- text
- large-scale
- offline-processing
- github
- code
- datasets
license: mit
language:
- en
size_categories:
- 100K<n<1M
task_categories:
- text-classification
- text-retrieval
source_datasets: []
annotations_creators:
- machine-generated
- human-verified
---
# .NET Runtime Fine-Tuning Data and Index
This directory contains data for fine-tuning models and building RAGs for the dotnet/runtime repository.
## Overview
- **data/**: Contains all datasets and indexes.
- **raw/sample/**: Sample PRs and diffs collected from GitHub.
- **raw_data.tar**: Archive of collected PRs and diffs from GitHub.
- **samples/**: JSON files with processed samples suitable for dataset generation.
- **processed/**: Parquet files for fine-tuning (e.g., `train.parquet`, `test.parquet`).
- **faiss/**: Vector indexes for RAG workflows.
- **scripts/**: Python and nodejs scripts for crawling, processing, and indexing.
## Data Structure
```
data/
├── raw/
│   ├── sample/
│   │   ├── prs/
│   │   └── diffs/
│   └── raw_data.tar
├── processed/
│ ├── train.parquet
│ └── test.parquet
└── faiss/
└── index.faiss
└── index.pkl
```
## Generated dataset
A PR is modeled as a timeline of events. The input for each sample is the PR metadata (title, description, labels) plus commit n-1 and all events that occurred between commits n-1 and n; the completion is commit n. Samples can be filtered by time, label, author, and so on.
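A rough sketch of how one such sample could be assembled (all field names below are assumptions, not the actual schema):
```python
# build (prompt, completion) for commit n of a PR timeline; hypothetical field names
def build_sample(pr: dict, n: int):
    t_prev = pr["commits"][n - 1]["time"]
    t_curr = pr["commits"][n]["time"]
    events = [e for e in pr["events"] if t_prev < e["time"] <= t_curr]
    prompt = {
        "title": pr["title"],
        "description": pr["description"],
        "labels": pr["labels"],
        "previous_commit": pr["commits"][n - 1]["diff"],
        "events": events,
    }
    completion = pr["commits"][n]["diff"]  # the next commit is the prediction target
    return prompt, completion
```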
## Scripts
See [scripts/README.md](scripts/README.md) for details on running the crawler, dataset generation, and RAG indexing.
## PyTorch Dataset Example
```python
from datasets import load_dataset
# Load Parquet train/test splits
train = load_dataset("parquet", data_files="data/processed/train.parquet", split="train")
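# note: `datasets` exposes a single parquet file under the default "train" split,
# which is why split="train" is also used for the test file below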
test = load_dataset("parquet", data_files="data/processed/test.parquet", split="train")
```
## RAG Vector Search Example
```python
import faiss
import numpy as np
# Load FAISS index
index = faiss.read_index("data/faiss/index.faiss")
# Example query embedding (replace with a real embedding; dimension must match the index)
query_embedding = np.random.rand(index.d).astype("float32")
# Search
D, I = index.search(query_embedding.reshape(1, -1), k=5)
print("Top 5 similar PR indices:", I[0])
``` |
semran1/DCLM-13M-tokenized | semran1 | 2025-01-16T12:40:34Z | 17 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-16T09:31:53Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: length
dtype: int64
splits:
- name: train
num_bytes: 158815404225.27823
num_examples: 12996761
download_size: 77960656635
dataset_size: 158815404225.27823
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SetFit/amazon_reviews_multi_fr | SetFit | 2025-02-17T14:11:23Z | 100 | 0 | [
"language:fr",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-03-13T02:48:20Z | 0 | ---
language: fr
---
# Amazon reviews multi (French)
This dataset is a port of the official [amazon_reviews_multi dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub, restricted to the French-language subset. It has been reduced to the 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. |
zohaibterminator/9th-grade-chem | zohaibterminator | 2025-05-27T10:51:55Z | 0 | 0 | [
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"created-with-pdfs-to-page-images-converter",
"pdf-to-image"
] | [] | 2025-05-27T10:51:47Z | 0 | ---
size_categories:
- n<1K
tags:
- created-with-pdfs-to-page-images-converter
- pdf-to-image
---
# Dataset Card for zohaibterminator/9th-grade-chem
## Dataset Description
This dataset contains images converted from PDFs using the PDFs to Page Images Converter Space.
- **Number of images:** 53
- **Number of PDFs processed:** 1
- **Sample size per PDF:** 100
- **Created on:** 2025-05-27 12:51:55
## Dataset Creation
### Source Data
The images in this dataset were generated from user-uploaded PDF files.
### Processing Steps
1. PDF files were uploaded to the PDFs to Page Images Converter.
2. Each PDF was processed, converting selected pages to images.
3. The resulting images were saved and uploaded to this dataset.
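A rough sketch of the conversion step, assuming a pdf2image-style pipeline (the Space may use a different library):
```python
from pdf2image import convert_from_path  # requires poppler to be installed

# convert each page of an uploaded PDF to a JPEG, mirroring steps 1-3 above
pages = convert_from_path("upload.pdf", dpi=150)
for i, page in enumerate(pages):
    page.save(f"images/page_{i:03d}.jpg", "JPEG")
```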
## Dataset Structure
The dataset consists of JPEG images, each representing a single page from the source PDFs.
### Data Fields
- `images/`: A folder containing all the converted images.
### Data Splits
This dataset does not have specific splits.
## Additional Information
- **Contributions:** Thanks to the PDFs to Page Images Converter for creating this dataset.
|