id (string, 6-121 chars) | author (string, 2-42 chars) | description (string, 0-6.67k chars) |
---|---|---|
neuralwork/arxiver | neuralwork |
Arxiver Dataset
Arxiver consists of 63,357 arXiv papers converted to multi-markdown (.mmd) format. Our dataset includes original arXiv article IDs, titles, abstracts, authors, publication dates, URLs and corresponding markdown files published between January 2023 and October 2023.
We hope our dataset will be useful for various applications such as semantic search, domain specific language modeling, question answering and summarization.
Curation
The Arxiver dataset… See the full description on the dataset page: https://huggingface.co/datasets/neuralwork/arxiver. |
fka/awesome-chatgpt-prompts | fka | 🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0
|
neulab/PangeaInstruct | neulab |
PangeaInstruct
Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages
🇪🇹 🇸🇦 🇧🇬 🇧🇩 🇨🇿 🇩🇪 🇬🇷 🇬🇧 🇺🇸 🇪🇸 🇮🇷 🇫🇷 🇮🇪 🇮🇳 🇮🇩 🇳🇬 🇮🇹 🇮🇱 🇯🇵 🇮🇩 🇰🇷 🇳🇱 🇲🇳 🇲🇾 🇳🇴 🇵🇱 🇵🇹 🇧🇷 🇷🇴 🇷🇺 🇱🇰 🇮🇩 🇰🇪 🇹🇿 🇱🇰 🇮🇳 🇮🇳 🇹🇭 🇹🇷 🇺🇦 🇵🇰 🇮🇳 🇻🇳 🇨🇳 🇹🇼
🏠 Homepage | 🤖 Pangea-7B | 📊 PangeaIns | 🧪 PangeaBench | 💻 Github | 📄 Arxiv | 📕 PDF | 🖥️ Demo
This README provides comprehensive details on the PangeaIns dataset… See the full description on the dataset page: https://huggingface.co/datasets/neulab/PangeaInstruct. |
vikhyatk/lofi | vikhyatk | 7,000+ hours of lofi music generated by MusicGen Large, with diverse prompts. The prompts were sampled from Llama 3.1 8B Base, starting with a seed set of 1,960 handwritten prompts of which a random 16 are used in a few-shot setting to generate additional diverse prompts.
In addition to the CC-BY-NC license, by using this dataset you are agreeing to the fact that the Pleiades star system is a binary system and that any claim otherwise is a lie.
What people are saying
this… See the full description on the dataset page: https://huggingface.co/datasets/vikhyatk/lofi. |
amphion/Emilia-Dataset | amphion |
Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation
This is the official repository 👑 for the Emilia dataset and the source code for the Emilia-Pipe speech data preprocessing pipeline.
News 🔥
2024/08/28: Welcome to join Amphion's Discord channel to stay connected and engage with our community!
2024/08/27: The Emilia dataset is now publicly available! Discover the most extensive and diverse speech generation… See the full description on the dataset page: https://huggingface.co/datasets/amphion/Emilia-Dataset. |
GAIR/o1-journey | GAIR | Dataset for O1 Replication Journey: A Strategic Progress Report
Usage
from datasets import load_dataset
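# Pull the train split of the O1 Replication Journey dataset from the Hugging Face Hub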
dataset = load_dataset("GAIR/o1-journey", split="train")
Citation
If you find our dataset useful, please cite:
@misc{o1journey,
author = {Yiwei Qin and Xuefeng Li and Haoyang Zou and Yixiu Liu and Shijie Xia and Zhen Huang and Yixin Ye and Weizhe Yuan and Zhengzhong Liu and Yuanzhi Li and Pengfei Liu},
title = {O1 Replication Journey: A Strategic Progress… See the full description on the dataset page: https://huggingface.co/datasets/GAIR/o1-journey. |
nvidia/HelpSteer2 | nvidia |
HelpSteer2: Open-source dataset for training top-performing reward models
HelpSteer2 is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses.
This dataset has been created in partnership with Scale AI.
When used to tune a Llama 3.1 70B Instruct Model, we achieve 94.1% on RewardBench, which makes it the best Reward… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/HelpSteer2. |
upstage/dp-bench | upstage |
DP-Bench: Document Parsing Benchmark
Document parsing refers to the process of converting complex documents, such as PDFs and scanned images, into structured text formats like HTML and Markdown.
It is especially useful as a preprocessor for RAG systems, as it preserves key structural information from visually rich documents.
While various parsers are available on the market, there is currently no standard evaluation metric to assess their performance.
To address this gap… See the full description on the dataset page: https://huggingface.co/datasets/upstage/dp-bench. |
saheedniyi/naijaweb | saheedniyi |
Naijaweb Dataset 🇳🇬
Naijaweb is a dataset of over 270,000 documents, totaling approximately 230 million GPT-2 tokens. The data was scraped from web pages popular among Nigerians, providing a rich resource for modeling Nigerian linguistic and cultural contexts.
Dataset Summary
Features and data types (a loading sketch follows this entry):
text: string
link: string
token_count: int64
section: string
int_score: int64
language: string
language_probability: float64… See the full description on the dataset page: https://huggingface.co/datasets/saheedniyi/naijaweb. |
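A minimal loading sketch for the schema above, assuming the standard datasets API and a single train split (neither is confirmed by this excerpt):
from datasets import load_dataset
ds = load_dataset("saheedniyi/naijaweb", split="train")  # assumes a "train" split
print(ds.features)  # should mirror the feature list above
row = ds[0]
print(row["link"], row["token_count"], row["language"])
print(row["text"][:200])  # preview the first document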
Marqo/marqo-GS-10M | Marqo |
Marqo-GS-10M
This dataset is our multimodal, fine-grained, ranking Google Shopping dataset, Marqo-GS-10M, accompanied by our novel training framework: Generalized Contrastive Learning (GCL). GCL aims to improve and measure the ranking performance of information retrieval models,
especially for retrieving relevant products given a search query.
Blog post:… See the full description on the dataset page: https://huggingface.co/datasets/Marqo/marqo-GS-10M. |
BAAI/Infinity-MM | BAAI |
Introduction
We collect, organize and open-source the large-scale multimodal instruction dataset, Infinity-MM, consisting of tens of millions of samples. Through quality filtering and deduplication, the dataset has high quality and diversity.
We propose a synthetic data generation method based on open-source models and a labeling system, using detailed image annotations and diverse question generation.
News
[2024/10/28] All the data has been uploaded.
[2024/10/24]… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/Infinity-MM. |
opencsg/chinese-fineweb-edu-v2 | opencsg |
Chinese Fineweb Edu Dataset V2 [中文] [English]
[OpenCSG Community] [github] [wechat] [Twitter]
Chinese Fineweb Edu Dataset V2 is a comprehensive upgrade of the original Chinese Fineweb Edu, designed and optimized for natural language processing (NLP) tasks in the education sector. This high-quality Chinese pretraining dataset has undergone significant improvements and expansions, aimed at providing researchers and developers with more diverse and broadly… See the full description on the dataset page: https://huggingface.co/datasets/opencsg/chinese-fineweb-edu-v2. |
fairchem/OMAT24 | fairchem | Meta Open Materials 2024 (OMat24) Dataset
Overview
Several datasets were utilized in this work. We provide open access to all datasets used to help accelerate research in the community.
This includes the OMat24 dataset as well as our modified sAlex dataset. Details on the different datasets are provided below.
Datasets
OMat24 Dataset
The OMat24 dataset contains a mix of single point calculations of non-equilibrium structures and
structural… See the full description on the dataset page: https://huggingface.co/datasets/fairchem/OMAT24. |
LLM360/TxT360 | LLM360 |
TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend
We introduce TxT360 (Trillion eXtracted Text), the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g., FreeLaw, PG-19), providing pretraining teams with a recipe to easily adjust data weighting, obtain the largest high-quality open-source dataset, and train the most performant models.
TxT360 Compared to Common… See the full description on the dataset page: https://huggingface.co/datasets/LLM360/TxT360. |
wikimedia/wikipedia | wikimedia |
Dataset Card for Wikimedia Wikipedia
Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/)
with one subset per language, each containing a single train split.
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
All language subsets have already been processed for recent dump… See the full description on the dataset page: https://huggingface.co/datasets/wikimedia/wikipedia. |
neulab/MultiUI | neulab |
MultiUI
Dataset for the paper: Harnessing Webpage UIs for Text-Rich Visual Understanding
🌐 Homepage | 🐍 GitHub | 📖 arXiv
Introduction
We introduce MultiUI, a dataset containing 7.3 million samples from 1 million websites, covering diverse multimodal tasks and UI layouts. Models trained on MultiUI not only excel in web UI tasks, achieving up to a 48% improvement on VisualWebBench and a 19.1% boost in action accuracy on a web agent dataset… See the full description on the dataset page: https://huggingface.co/datasets/neulab/MultiUI. |
luojunyu/SemiEvol | luojunyu |
Dataset Card for Dataset Name
The SemiEvol dataset is part of the broader work on semi-supervised fine-tuning for Large Language Models (LLMs). The dataset includes labeled and unlabeled data splits designed to enhance the reasoning capabilities of LLMs through a bi-level knowledge propagation and selection framework, as proposed in the paper SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation.
Dataset Details
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/luojunyu/SemiEvol. |
HuggingFaceFW/fineweb | HuggingFaceFW |
🍷 FineWeb
15 trillion tokens of the finest data the 🌐 web has to offer
What is it?
The 🍷 FineWeb dataset consists of more than 15T tokens of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large-scale data processing library.
🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release of the full… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb. |
HuggingFaceH4/ultrachat_200k | HuggingFaceH4 |
Dataset Card for UltraChat 200k
Dataset Description
This is a heavily filtered version of the UltraChat dataset and was used to train Zephyr-7B-β, a state of the art 7b chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT, spanning a wide range of topics. To create UltraChat 200k, we applied the following logic:
Selection of a subset of data for faster supervised fine tuning.
Truecasing of the dataset, as we observed around 5% of… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k. |
CohereForAI/aya_collection | CohereForAI |
This dataset is uploaded in two places: here and additionally here as 'Aya Collection Language Split.' These datasets are identical in content but differ in the structure of the upload. This dataset is structured by folders split according to dataset name, while the other version instead divides the Aya collection into folders split by language. We recommend the language-split version if you are only interested in downloading data for a single language or a smaller set of languages, and this version if you… See the full description on the dataset page: https://huggingface.co/datasets/CohereForAI/aya_collection. |
lmms-lab/LLaVA-Video-178K | lmms-lab |
Dataset Card for LLaVA-Video-178K
Uses
This dataset is used for the training of the LLaVA-Video model. We only allow the use of this dataset for academic research and education purposes. For OpenAI GPT-4 generated data, we recommend that users check the OpenAI Usage Policy.
Data Sources
For the training of LLaVA-Video, we utilized video-language data from five primary sources:
LLaVA-Video-178K: This dataset includes 178,510 caption entries, 960… See the full description on the dataset page: https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K. |
opencsg/chinese-cosmopedia | opencsg |
Chinese Cosmopedia Dataset [中文] [English]
[OpenCSG Community] [github] [wechat] [Twitter]
The Chinese Cosmopedia dataset contains a total of 15 million entries, approximately 60B tokens. Two key elements in constructing the synthetic dataset are seed data and prompts. Seed data determines the theme of the generated content, while prompts define the style of the data (such as textbooks, stories, tutorials, or children's books). The data sources are… See the full description on the dataset page: https://huggingface.co/datasets/opencsg/chinese-cosmopedia. |
KingNish/reasoning-base-20k | KingNish |
Dataset Card for Reasoning Base 20k
Dataset Details
Dataset Description
This dataset is designed to train a reasoning model that can think through complex problems before providing a response, similar to how a human would. The dataset includes a wide range of problems from various domains (science, coding, math, etc.), each with a detailed chain of thought (CoT) and the correct answer. The goal is to enable the model to learn and refine its… See the full description on the dataset page: https://huggingface.co/datasets/KingNish/reasoning-base-20k. |
dyyyyyyyy/ScaleQuest-Math | dyyyyyyyy | We introduce ScaleQuest, a scalable and novel data synthesis method that utilizes small-size open-source models to generate questions from scratch.
Paper: Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch
|
walkerhyf/NCSSD | walkerhyf |
NCSSD
🎉Introduction
This is the official repository for the NCSSD dataset and the collection pipeline for handling TV shows, accompanying the paper "Generative Expressive Conversational Speech Synthesis"
(accepted by MM'2024).
Rui Liu *, Yifan Hu, Yi Ren, Xiang Yin, Haizhou Li.
📜NCSSD Overview
Includes Recording subsets: R-ZH, R-EN and Collection subsets: C-ZH, C-EN.
📣NCSSD Download
⭐ Huggingface download address: NCSSD.
⭐ Users in China can contact the… See the full description on the dataset page: https://huggingface.co/datasets/walkerhyf/NCSSD. |
yahma/alpaca-cleaned | yahma |
Dataset Card for Alpaca-Cleaned
Repository: https://github.com/gururise/AlpacaDataCleaned
Dataset Description
This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset:
Hallucinations: Many instructions in the original dataset referenced data on the internet, which simply caused GPT-3 to hallucinate an answer.
"instruction":"Summarize… See the full description on the dataset page: https://huggingface.co/datasets/yahma/alpaca-cleaned. |
BAAI/Infinity-Instruct | BAAI |
Infinity Instruct
Beijing Academy of Artificial Intelligence (BAAI)
[Paper][Code][🤗] (would be released soon)
The quality and scale of instruction data are crucial for model performance. Recently, open-source models have increasingly relied on fine-tuning datasets comprising millions of instances, necessitating both high quality and large scale. However, the open-source community has long been constrained by the high costs associated with building such extensive and… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/Infinity-Instruct. |
BAAI/IndustryCorpus2 | BAAI | Industry models play a vital role in promoting the intelligent transformation and innovative development of enterprises. High-quality industry data is the key to improving the performance of large models and realizing the implementation of industry applications. However, the data sets currently used for industry model training generally have problems such as small data volume, low quality, and lack of professionalism.
In June, we released the IndustryCorpus dataset. We have further upgraded… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/IndustryCorpus2. |
google/frames-benchmark | google |
FRAMES: Factuality, Retrieval, And reasoning MEasurement Set
FRAMES is a comprehensive evaluation dataset designed to test the capabilities of Retrieval-Augmented Generation (RAG) systems across factuality, retrieval accuracy, and reasoning.
Our paper with details and experiments is available on arXiv: https://arxiv.org/abs/2409.12941.
Dataset Overview
824 challenging multi-hop questions requiring information from 2-15 Wikipedia articles
Questions span diverse… See the full description on the dataset page: https://huggingface.co/datasets/google/frames-benchmark. |
worldcuisines/vqa | worldcuisines |
WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines
WorldCuisines is a massive-scale visual question answering (VQA) benchmark for multilingual and multicultural understanding through global cuisines. The dataset contains text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark as of 17 October… See the full description on the dataset page: https://huggingface.co/datasets/worldcuisines/vqa. |
mlabonne/open-perfectblend | mlabonne |
🎨 Open-PerfectBlend
Open-PerfectBlend is an open-source reproduction of the instruction dataset introduced in the paper "The Perfect Blend: Redefining RLHF with Mixture of Judges".
It's a solid general-purpose instruction dataset with chat, math, code, and instruction-following data.
Data source
Here is the list of the datasets used in this mix:
Dataset: # Samples
meta-math/MetaMathQA: 395,000
openbmb/UltraInteract_sft: 288,579… See the full description on the dataset page: https://huggingface.co/datasets/mlabonne/open-perfectblend. |
marcelbinz/Psych-101 | marcelbinz |
Dataset Summary
Psych-101 is a dataset of natural language transcripts from human psychological experiments.
It comprises trial-by-trial data from 160 psychological experiments and 60,092 participants, making 10,681,650 choices.
Human choices are encapsulated in "<<" and ">>" tokens (see the extraction sketch after this entry).
Paper: Centaur: a foundation model of human cognition
Point of Contact: Marcel Binz
Example Prompt
You will be presented with triplets of objects, which will be assigned to the… See the full description on the dataset page: https://huggingface.co/datasets/marcelbinz/Psych-101. |
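Since choices are delimited by "<<" and ">>", they can be recovered with a small regex sketch; the transcript line below is hypothetical, not drawn from the dataset:
import re
transcript = "You press <<left button>> and then choose <<right button>>."  # hypothetical line
choices = re.findall(r"<<(.*?)>>", transcript)  # non-greedy match inside the delimiters
print(choices)  # ['left button', 'right button']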
openai/gsm8k | openai |
Dataset Card for GSM8K
Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve.
Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+, −, ×, ÷) to… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k. |
roneneldan/TinyStories | roneneldan | Dataset containing synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary.
Described in the following paper: https://arxiv.org/abs/2305.07759.
The models referred to in the paper were trained on TinyStories-train.txt (the file tinystories-valid.txt can be used for validation loss). These models can be found on Huggingface, at roneneldan/TinyStories-1M/3M/8M/28M/33M/1Layer-21M.
Additional resources:
tinystories_all_data.tar.gz - contains a superset of… See the full description on the dataset page: https://huggingface.co/datasets/roneneldan/TinyStories. |
meta-math/MetaMathQA | meta-math | View the project page:
https://meta-math.github.io/
see our paper at https://arxiv.org/abs/2309.12284
Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
None of the augmented data is from the testing set.
You can check the original_question field in meta-math/MetaMathQA; each item is from the GSM8K or MATH train set.
Model Details
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Mistral-7B model.… See the full description on the dataset page: https://huggingface.co/datasets/meta-math/MetaMathQA. |
CohereForAI/aya_dataset | CohereForAI |
Dataset Summary
The Aya Dataset is a multilingual instruction fine-tuning dataset curated by an open-science community via the Aya Annotation Platform from Cohere For AI. The dataset contains a total of 204k human-annotated prompt-completion pairs along with the demographic data of the annotators.
This dataset can be used to train, finetune, and evaluate multilingual LLMs.
Curated by: Contributors of the Aya Open Science Initiative.
Language(s): 65 languages (71 including dialects &… See the full description on the dataset page: https://huggingface.co/datasets/CohereForAI/aya_dataset. |
Amod/mental_health_counseling_conversations | Amod |
Amod/mental_health_counseling_conversations
Dataset Summary
This dataset is a collection of questions and answers sourced from two online counseling and therapy platforms. The questions cover a wide range of mental health topics, and the answers are provided by qualified psychologists. The dataset is intended to be used for fine-tuning language models to improve their ability to provide mental health advice.
Supported Tasks and Leaderboards
The… See the full description on the dataset page: https://huggingface.co/datasets/Amod/mental_health_counseling_conversations. |
allenai/dolma | allenai | Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research |
gretelai/synthetic_text_to_sql | gretelai |
Image generated by DALL-E. See prompt for more details
synthetic_text_to_sql
gretelai/synthetic_text_to_sql is a rich dataset of high-quality synthetic Text-to-SQL samples,
designed and generated using Gretel Navigator, and released under Apache 2.0.
Please see our release blogpost for more details.
The dataset includes:
105,851 records partitioned into 100,000 train and 5,851 test records
~23M total tokens, including ~12M SQL tokens
Coverage across 100 distinct… See the full description on the dataset page: https://huggingface.co/datasets/gretelai/synthetic_text_to_sql. |
mlabonne/orpo-dpo-mix-40k | mlabonne |
ORPO-DPO-mix-40k v1.2
This dataset is designed for ORPO or DPO training.
See Fine-tune Llama 3 with ORPO for more information about how to use it.
It is a combination of the following high-quality DPO datasets:
argilla/Capybara-Preferences: highly scored chosen answers >=5 (7,424 samples)
argilla/distilabel-intel-orca-dpo-pairs: highly scored chosen answers >=9, not in GSM8K (2,299 samples)
argilla/ultrafeedback-binarized-preferences-cleaned: highly scored chosen answers >=5… See the full description on the dataset page: https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k. |
BaiqiL/NaturalBench | BaiqiL |
(NeurIPS24) NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples
Baiqi Li^1*, Zhiqiu Lin^1*, Wenxuan Peng^1*, Jean de Dieu Nyandwi^1*, Daniel Jiang^1, Zixian Ma^2, Simran Khanuja^1, Ranjay Krishna^2†, Graham Neubig^1†, Deva Ramanan^1†.
^1 Carnegie Mellon University, ^2 University of Washington
Links:
| 🏠Home Page | 🤗HuggingFace | 🏆Leaderboard | 📖Paper | 🖥️ Code
Citation Information
@inproceedings{naturalbench… See the full description on the dataset page: https://huggingface.co/datasets/BaiqiL/NaturalBench. |
KbsdJames/Omni-MATH | KbsdJames |
Dataset Card for Omni-MATH
Recent advancements in AI, particularly in large language models (LLMs), have led to significant breakthroughs in mathematical reasoning capabilities. However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on MATH dataset), indicating their inadequacy for truly challenging these models. To mitigate this limitation, we propose a comprehensive and challenging benchmark specifically… See the full description on the dataset page: https://huggingface.co/datasets/KbsdJames/Omni-MATH. |
BAAI/CCI3-HQ | BAAI |
Data Description
To address the scarcity of high-quality safety datasets in Chinese, we open-sourced the CCI (Chinese Corpora Internet) dataset on November 29, 2023.
Building on this foundation, we continue to expand the data sources, adopt stricter data cleaning methods, and complete the construction of the CCI 3.0 dataset. This dataset is composed of high-quality, reliable Internet data from trusted sources.
Then, with even stricter filtering, the CCI 3.0 HQ corpus… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/CCI3-HQ. |
sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo | sam-paech |
Gutenberg3
Gutenberg3 is a DPO dataset containing extracts from 629 public-domain fiction novels in the Gutenberg Library. It follows the same format as JonDurbin's original gutenberg set.
The dataset items are labeled by genre for ease of downstream use.
The dataset includes pairs of texts, where the chosen text is taken directly from a novel from the Gutenberg library, and the rejected text is generated by a language model based on a description of the passage.
For this… See the full description on the dataset page: https://huggingface.co/datasets/sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo. |
mandarjoshi/trivia_qa | mandarjoshi |
Dataset Card for "trivia_qa"
Dataset Summary
TriviaQA is a reading comprehension dataset containing over 650K
question-answer-evidence triples. TriviaQA includes 95K question-answer
pairs authored by trivia enthusiasts and independently gathered evidence
documents, six per question on average, that provide high quality distant
supervision for answering the questions.
Supported Tasks and Leaderboards
More Information Needed
Languages… See the full description on the dataset page: https://huggingface.co/datasets/mandarjoshi/trivia_qa. |
mozilla-foundation/common_voice_11_0 | mozilla-foundation |
Dataset Card for Common Voice Corpus 11.0
Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 24,210 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 16,413 validated hours in 100 languages, but more voices and languages are always added.
Take a look at the Languages… See the full description on the dataset page: https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0. |
liwu/MNBVC | liwu | MNBVC: Massive Never-ending BT Vast Chinese corpus |
rubend18/ChatGPT-Jailbreak-Prompts | rubend18 |
Dataset Card for Dataset Name
Name
ChatGPT Jailbreak Prompts
Dataset Summary
ChatGPT Jailbreak Prompts is a complete collection of jailbreak-related prompts for ChatGPT. This dataset is intended to provide a valuable resource for understanding and generating text in the context of jailbreaking ChatGPT.
Languages
[English]
|
peiyi9979/Math-Shepherd | peiyi9979 |
Dataset Card for Math-Shepherd
Project Page: Math-Shepherd
Paper: https://arxiv.org/pdf/2312.08935.pdf
Data Loading
from datasets import load_dataset
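# With no split argument, load_dataset returns a DatasetDict keyed by split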
dataset = load_dataset("peiyi9979/Math-Shepherd")
Data Instance
Every instance consists of three data fields: "input," "label," and "task".
"input": problem + step-by-step solution, e.g.,
If Buzz bought a pizza with 78 slices at a restaurant and then decided to share it with the waiter in the ratio… See the full description on the dataset page: https://huggingface.co/datasets/peiyi9979/Math-Shepherd. |
ruslanmv/ai-medical-chatbot | ruslanmv |
AI Medical Chatbot Dataset
This is an experimental dataset designed to run a medical chatbot.
It contains at least 250k dialogues between a patient and a doctor.
Playground ChatBot
ruslanmv/AI-Medical-Chatbot
For further information, visit the project here:
https://github.com/ruslanmv/ai-medical-chatbot
|
argilla/OpenHermesPreferences | argilla |
OpenHermesPreferences v0.1 🧙
Using LLMs to improve other LLMs, at scale!
OpenHermesPreferences is a dataset of ~1 million AI preferences derived from teknium/OpenHermes-2.5. It combines responses from the source dataset with those from two other models, Mixtral-8x7B-Instruct-v0.1 and Nous-Hermes-2-Yi-34B, and uses PairRM as the preference model to score and rank the generations. The dataset can be used for training preference models or aligning language models through… See the full description on the dataset page: https://huggingface.co/datasets/argilla/OpenHermesPreferences. |
HuggingFaceFW/fineweb-edu | HuggingFaceFW |
📚 FineWeb-Edu
1.3 trillion tokens of the finest educational data the 🌐 web has to offer
Paper: https://arxiv.org/abs/2406.17557
What is it?
The 📚 FineWeb-Edu dataset consists of 1.3T tokens (with a 5.4T-token variant, FineWeb-Edu-score-2) of educational web pages filtered from the 🍷 FineWeb dataset. This is the 1.3-trillion-token version.
To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by Llama3-70B-Instruct. We… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu. |
ServiceNow-AI/M2Lingual | ServiceNow-AI |
M2Lingual
A Multi-turn Multilingual dataset for Instruction Fine-tuning LLMs - Link
Dataset Summary
The M2Lingual dataset is a comprehensive multi-turn multilingual resource designed to facilitate research and development in conversational AI. It encompasses a wide range of conversation scenarios across multiple languages, making it an invaluable asset for training, evaluating, and benchmarking conversational models. The dataset includes diverse tasks such as… See the full description on the dataset page: https://huggingface.co/datasets/ServiceNow-AI/M2Lingual. |
SkunkworksAI/reasoning-0.01 | SkunkworksAI |
reasoning-0.01 subset
A synthetic dataset of reasoning chains for a wide variety of tasks.
We leverage data like this across multiple reasoning experiments/projects.
Stay tuned for reasoning models and more data.
Thanks to Hive Digital Technologies (https://x.com/HIVEDigitalTech) for their compute support in this project and beyond.
|
Vikhrmodels/GrandMaster-PRO-MAX | Vikhrmodels |
GrandMaster-PRO-MAX: a large instruction dataset for the Russian language
The first large, high-quality Russian-language SFT dataset produced without translating model responses from English. It was created to train models to follow a wide variety of instructions in different languages (mainly Russian) and to respond, likewise, mainly in Russian.
The assistant responses in this dataset were generated entirely from scratch by GPT-4-Turbo-1106, based on the source instructions from… See the full description on the dataset page: https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX. |
openai/MMMLU | openai |
Multilingual Massive Multitask Language Understanding (MMMLU)
MMLU is a widely recognized benchmark of the general knowledge attained by AI models. It covers a broad range of topics from 57 categories, spanning elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases… See the full description on the dataset page: https://huggingface.co/datasets/openai/MMMLU. |
Zyphra/Zyda-2 | Zyphra |
Zyda-2
Zyda-2 is a 5-trillion-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zyda-2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
To construct Zyda-2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda-2 significantly outperform identical models trained on… See the full description on the dataset page: https://huggingface.co/datasets/Zyphra/Zyda-2. |
WorldMedQA/V | WorldMedQA |
WorldMedQA-V: A Multilingual, Multimodal Medical Examination Dataset
Overview
WorldMedQA-V is a multilingual and multimodal benchmarking dataset designed to evaluate vision-language models (VLMs) in healthcare contexts. The dataset includes medical examination questions from four countries—Brazil, Israel, Japan, and Spain—in both their original languages and English translations. Each multiple-choice question is paired with a corresponding medical image… See the full description on the dataset page: https://huggingface.co/datasets/WorldMedQA/V. |
Skywork/Skywork-Reward-Preference-80K-v0.2 | Skywork |
Skywork Reward Preference 80K
IMPORTANT:
This dataset is the decontaminated version of Skywork-Reward-Preference-80K-v0.1. We removed 4,957 pairs from the magpie-ultra-v0.1 subset that have a significant n-gram overlap with the evaluation prompts in RewardBench. You can find the set of removed pairs here. For more information, see this GitHub gist.
If your task involves evaluation on RewardBench, we strongly encourage you to use v0.2 instead of v0.1 of the dataset.
We will soon… See the full description on the dataset page: https://huggingface.co/datasets/Skywork/Skywork-Reward-Preference-80K-v0.2. |
juliozhao/DocSynth300K | juliozhao |
DocSynth300K is a large-scale and diverse document layout analysis pre-training dataset, which can largely boost model performance.
Data Download
Use the following command to download the dataset (about 113 GB):
from huggingface_hub import snapshot_download
# Download DocSynth300K
snapshot_download(repo_id="juliozhao/DocSynth300K", local_dir="./docsynth300k-hf", repo_type="dataset")
# If the download was disrupted and the file is not complete, you can resume the download… See the full description on the dataset page: https://huggingface.co/datasets/juliozhao/DocSynth300K. |
facebook/Multi-IF | facebook |
Dataset Summary
We introduce Multi-IF, a new benchmark designed to assess LLMs' proficiency in following multi-turn and multilingual instructions. Multi-IF, which utilizes a hybrid framework combining LLM and human annotators, expands upon IFEval by incorporating multi-turn sequences and translating the English prompts into seven other languages, resulting in a dataset of 4,501 multilingual conversations, each with three turns. Our evaluation of 14 state-of-the-art LLMs on… See the full description on the dataset page: https://huggingface.co/datasets/facebook/Multi-IF. |
baijs/AudioSetCaps | baijs |
AudioSetCaps: An Enriched Audio-Caption Dataset using Automated Generation Pipeline with Large Audio and Language Models
NeurIPS 2024 Workshop Paper
Github
This repo contains captions for 6,117,099 10-second audio files, sourced from AudioSet, YouTube-8M, and VGGSound.
We also provide our intermediate Q&A result for each audio (18,414,789 paired Q&A data in total).
We hope AudioSetCaps can facilitate the scaling up of future Audio-Language multimodal research.… See the full description on the dataset page: https://huggingface.co/datasets/baijs/AudioSetCaps. |
CohereForAI/m-ArenaHard | CohereForAI |
Dataset Card for m-ArenaHard
Dataset Details
The m-ArenaHard dataset is a multilingual LLM evaluation set. This dataset was created by translating the prompts from the originally English-only LMarena (formerly LMSYS) arena-hard-auto-v0.1 test dataset using Google Translate API v3 to 22 languages. The original English-only prompts were created by Li et al. (2024) and consist of 500 challenging user queries sourced from Chatbot Arena. The authors show that these can… See the full description on the dataset page: https://huggingface.co/datasets/CohereForAI/m-ArenaHard. |
dair-ai/emotion | dair-ai |
Dataset Card for "emotion"
Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances
An example looks as follows.
{
"text": "im feeling quite sad… See the full description on the dataset page: https://huggingface.co/datasets/dair-ai/emotion. |
Anthropic/hh-rlhf | Anthropic |
Dataset Card for HH-RLHF
Dataset Summary
This repository provides access to two different kinds of data:
Human preference data about helpfulness and harmlessness from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. These data are meant to train preference (or reward) models for subsequent RLHF training. These data are not meant for supervised training of dialogue agents. Training dialogue agents on these data is likely… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/hh-rlhf. |
vicgalle/alpaca-gpt4 | vicgalle |
Dataset Card for "alpaca-gpt4"
This dataset contains English Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This is just a wrapper for compatibility with Hugging Face's datasets library.
Dataset structure
It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.
The dataset has… See the full description on the dataset page: https://huggingface.co/datasets/vicgalle/alpaca-gpt4. |
TIGER-Lab/MathInstruct | TIGER-Lab |
🦣 MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
MathInstruct is a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields.
Project Page:… See the full description on the dataset page: https://huggingface.co/datasets/TIGER-Lab/MathInstruct. |
wuliangfo/Chinese-Pixiv-Novel | wuliangfo | This is an R-18 (including R-18G) Simplified Chinese novel dataset collected from the Pixiv website.
It contains 145,163 novels in total; the data is current as of 7 p.m. Beijing time, September 12, 2023.
Storage format: Pixiv/userID/ID.txt holds the novel body text, and Pixiv/userID/ID-meta.txt holds additional information (including tag, title, Description, etc.).
The data has not been cleaned and may contain low-quality content.
|
lmsys/lmsys-chat-1m | lmsys |
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.
It is collected from 210K unique IP addresses in the wild on the Vicuna demo and Chatbot Arena website from April to August 2023.
Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag.
User consent is obtained through the "Terms of… See the full description on the dataset page: https://huggingface.co/datasets/lmsys/lmsys-chat-1m. |
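A sketch of the shape one sample might take, with field names assumed from the description above (conversation text in OpenAI API JSON format); all values are hypothetical:
sample = {
    "conversation_id": "abc123",  # hypothetical conversation ID
    "model": "vicuna-13b",  # model name
    "conversation": [  # OpenAI API-style message list
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi! How can I help you today?"},
    ],
    "language": "English",  # detected language tag
}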
PatronusAI/financebench | PatronusAI | FinanceBench is a first-of-its-kind test suite for evaluating the performance of LLMs on open book financial question answering (QA). This is an open source sample of 150 annotated examples used in the evaluation and analysis of models assessed in the FinanceBench paper.
The PDFs linked in the dataset can be found here as well: https://github.com/patronus-ai/financebench/tree/main/pdfs
The dataset comprises questions about publicly traded companies, with corresponding answers and evidence… See the full description on the dataset page: https://huggingface.co/datasets/PatronusAI/financebench. |
microsoft/orca-math-word-problems-200k | microsoft |
Dataset Card
This dataset contains ~200K grade school math word problems. All the answers in this dataset are generated using Azure GPT4-Turbo. Please refer to Orca-Math: Unlocking the potential of
SLMs in Grade School Math for details about the dataset construction.
Dataset Sources
Repository: microsoft/orca-math-word-problems-200k
Paper: Orca-Math: Unlocking the potential of
SLMs in Grade School Math
Direct Use
This dataset has been… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k. |
tomg-group-umd/cinepile | tomg-group-umd |
CinePile: A Long Video Question Answering Dataset and Benchmark
CinePile is a question-answering-based, long-form video understanding dataset. It was created using advanced large language models (LLMs) with a human-in-the-loop pipeline leveraging existing human-generated raw data. It consists of approximately 300,000 training data points and 5,000 test data points.
If you have any comments or questions, reach out to: Ruchit Rawal or Gowthami Somepalli
Other links - Website… See the full description on the dataset page: https://huggingface.co/datasets/tomg-group-umd/cinepile. |
allenai/wildjailbreak | allenai |
WildJailbreak Dataset Card
WildJailbreak is an open-source synthetic safety-training dataset with 262K vanilla (direct harmful requests) and adversarial (complex adversarial jailbreaks) prompt-response pairs. In order to mitigate exaggerated safety behaviors, WildJailbreak provides two contrastive types of queries: 1) harmful queries (both vanilla and adversarial) and 2) benign queries that resemble harmful queries in form but contain no harmful intent.
Vanilla Harmful: direct… See the full description on the dataset page: https://huggingface.co/datasets/allenai/wildjailbreak. |
HuggingFaceM4/Docmatix | HuggingFaceM4 |
Dataset Card for Docmatix
Dataset description
Docmatix is part of the Idefics3 release (stay tuned).
It is a massive dataset for Document Visual Question Answering that was used for the fine-tuning of the vision-language model Idefics3.
Load the dataset
To load the dataset, install the library datasets with pip install datasets. Then,
from datasets import load_dataset
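# Docmatix is large; pass streaming=True here to iterate without downloading everything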
ds = load_dataset("HuggingFaceM4/Docmatix")
If you want the dataset to link to… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/Docmatix. |
Team-ACE/ToolACE | Team-ACE |
ToolACE
ToolACE is an automatic agentic pipeline designed to generate Accurate, Complex, and divErse tool-learning data.
ToolACE leverages a novel self-evolution synthesis process to curate a comprehensive API pool of 26,507 diverse APIs.
Dialogs are further generated through the interplay among multiple agents, guided by a formalized thinking process.
To ensure data accuracy, we implement a dual-layer verification system combining rule-based and model-based checks.
More… See the full description on the dataset page: https://huggingface.co/datasets/Team-ACE/ToolACE. |
C4AI-Community/multilingual-reward-bench | C4AI-Community |
Multilingual Reward Bench (v1.0)
Reward models (RMs) have driven the development of state-of-the-art LLMs today, with unprecedented impact across the globe. However, their performance in multilingual settings still remains understudied.
In order to probe reward model behavior on multilingual data, we present M-RewardBench, a benchmark for 23 typologically diverse languages.
M-RewardBench contains prompt-chosen-rejected preference triples obtained by curating and translating… See the full description on the dataset page: https://huggingface.co/datasets/C4AI-Community/multilingual-reward-bench. |
BAAI/CCI3-Data | BAAI |
Data Description
To address the scarcity of high-quality safety datasets in the Chinese, we open-sourced the CCI (Chinese Corpora Internet) dataset on November 29, 2023. Building on this foundation, we continue to expand the data source, adopt stricter data cleaning methods, and complete the construction of the CCI 3.0 dataset. This dataset is composed of high-quality, reliable Internet data from trusted sources. It has undergone strict data cleaning and de-duplication, with… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/CCI3-Data. |
recursal/SuperWikiImage-7M | recursal |
Dataset Card for SuperWikiImage (SWI)
Waifu to catch your attention.
Dataset Details
Dataset Description
Hot off the presses of SuperWikipedia-NEXT comes SuperWikiImage: a ~15 TiB collection of ~7 million images from Wikipedia.
Curated by: KaraKaraWitch
Funded by: Recursal.ai
Shared by: KaraKaraWitch
Language(s) (NLP): Many. Refer to the data below for a list of languages.
License: Mixed. Refer to the licensing section below.
Dataset… See the full description on the dataset page: https://huggingface.co/datasets/recursal/SuperWikiImage-7M. |
him1411/polymath | him1411 |
Paper Information
We present PolyMATH, a challenging benchmark aimed at evaluating the general cognitive reasoning abilities of MLLMs.
PolyMATH comprises 5,000 manually collected high-quality images of cognitive textual and visual challenges across 10 distinct categories, including pattern recognition, spatial reasoning, and relative reasoning.
We conducted a comprehensive and quantitative evaluation of 15 MLLMs using four diverse prompting strategies, including… See the full description on the dataset page: https://huggingface.co/datasets/him1411/polymath. |
DEVAI-benchmark/DEVAI | DEVAI-benchmark | GITHUB: https://github.com/metauto-ai/agent-as-a-judge
Current evaluation techniques are often inadequate for advanced agentic systems due to their focus on final outcomes and labor-intensive manual reviews. To overcome this limitation, we introduce the Agent-as-a-Judge framework.
As a proof-of-concept, we applied Agent-as-a-Judge to code generation tasks using DevAI, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results… See the full description on the dataset page: https://huggingface.co/datasets/DEVAI-benchmark/DEVAI. |
MixEval/MixEval-X | MixEval |
🚀 Project Page | 📜 arXiv | 👨💻 Github | 🏆 Leaderboard | 📝 blog | 🤗 HF Paper | 𝕏 Twitter
MixEval-X encompasses eight input-output modality combinations and can be further extended. Its data points reflect real-world task distributions. The last grid presents the scores of frontier organizations’ flagship models on MixEval-X, normalized to a 0-100 scale, with MMG tasks using win rates instead of Elo. Section C of the paper presents example data samples and model responses.… See the full description on the dataset page: https://huggingface.co/datasets/MixEval/MixEval-X. |
prometheus-eval/MM-Eval | prometheus-eval |
Multilingual Meta-EVALuation benchmark (MM-Eval)
👨💻 Code | 📄 Paper | 🤗 MMQA
MM-Eval is a multilingual meta-evaluation benchmark consisting of five core subsets—Chat, Reasoning, Safety, Language Hallucination, and Linguistics—spanning 18 languages and a Language Resource subset spanning 122 languages for a broader analysis of language effects.
Design Choice: In this work, we minimize the inclusion of translated samples, as mere translation may alter existing preferences due… See the full description on the dataset page: https://huggingface.co/datasets/prometheus-eval/MM-Eval. |
cais/mmlu | cais |
Dataset Card for MMLU
Dataset Summary
Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57… See the full description on the dataset page: https://huggingface.co/datasets/cais/mmlu. |
rajpurkar/squad_v2 | rajpurkar |
Dataset Card for SQuAD 2.0
Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD 2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by… See the full description on the dataset page: https://huggingface.co/datasets/rajpurkar/squad_v2. |
allenai/c4 | allenai |
C4
Dataset Summary
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's C4 dataset
We prepared five variants of the data: en, en.noclean, en.noblocklist, realnewslike, and multilingual (mC4).
For reference, these are the sizes of the variants:
en: 305GB
en.noclean: 2.3TB
en.noblocklist: 380GB
realnewslike: 15GB
multilingual (mC4): 9.7TB (108 subsets, one… See the full description on the dataset page: https://huggingface.co/datasets/allenai/c4. |
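Given the variant sizes above, streaming avoids a full download; a minimal sketch assuming the variant names double as config names for the datasets library:
from datasets import load_dataset
# Stream the "en" variant (305GB) instead of downloading it up front
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
for example in c4.take(3):
    print(example["url"], example["text"][:80])  # field names assumed from the processed C4 schema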
svjack/pokemon-blip-captions-en-zh | svjack |
Dataset Card for Pokémon BLIP captions with English and Chinese.
Dataset used to train a Pokémon text-to-image model, adding a Chinese column to the Pokémon BLIP captions.
BLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row the dataset contains image… See the full description on the dataset page: https://huggingface.co/datasets/svjack/pokemon-blip-captions-en-zh. |
databricks/databricks-dolly-15k | databricks |
Summary
databricks-dolly-15k is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
Creative Commons Attribution-ShareAlike 3.0 Unported… See the full description on the dataset page: https://huggingface.co/datasets/databricks/databricks-dolly-15k. |
b-mc2/sql-create-context | b-mc2 |
Overview
This dataset builds from WikiSQL and Spider.
There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and SQL queries answering the question using the CREATE statement as context. This dataset was built with text-to-SQL LLMs in mind, intending to prevent hallucination of column and table names often seen when models are trained on text-to-SQL datasets. The CREATE TABLE statement can often be copied and pasted from different DBMSs and provides table names… See the full description on the dataset page: https://huggingface.co/datasets/b-mc2/sql-create-context. |
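A sketch of the record shape this description implies; the field names and sample values are assumptions, not confirmed by this excerpt:
record = {
    "question": "How many heads of the departments are older than 56?",  # natural language query (hypothetical)
    "context": "CREATE TABLE head (age INTEGER)",  # CREATE TABLE statement used as context
    "answer": "SELECT COUNT(*) FROM head WHERE age > 56",  # SQL answering the question
}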
xmcmic/PMC-VQA | xmcmic |
PMC-VQA Dataset
Dataset Structure
PMC-VQA (version 1: 227k VQA pairs over 149k images).
train.csv: metafile of the train set
test.csv: metafile of the test set
test_clean.csv: metafile of the cleaned test set
images.zip: images folder
(Update, version 2: non-compound images.)
train2.csv: metafile of the train set
test2.csv: metafile of the test set
images2.zip: images folder
Sample
A row in train.csv is shown below… See the full description on the dataset page: https://huggingface.co/datasets/xmcmic/PMC-VQA. |
tiange/Cap3D | tiange | This repository hosts data for Scalable 3D Captioning with Pretrained Models and View Selection for 3D Captioning via Diffusion Ranking, including descriptive captions for 3D objects in Objaverse, Objaverse-XL, and ABO. This repo also includes point clouds and rendered images with camera, depth, and MatAlpha information of Objaverse objects, as well as their Shap-E latent codes. All the captions and data provided by our papers are released under ODC-By 1.0 license.
Usage
Please… See the full description on the dataset page: https://huggingface.co/datasets/tiange/Cap3D. |
nickrosh/Evol-Instruct-Code-80k-v1 | nickrosh | Open Source Implementation of Evol-Instruct-Code as described in the WizardCoder Paper.
Code for the instruction generation can be found on GitHub as Evol-Teacher.
|
lavita/ChatDoctor-HealthCareMagic-100k | lavita |
Dataset Card for "ChatDoctor-HealthCareMagic-100k"
More Information needed
|
openbmb/UltraFeedback | openbmb |
Introduction
GitHub Repo
UltraRM-13b
UltraCM-13b
UltraFeedback is a large-scale, fine-grained, diverse preference dataset, used for training powerful reward models and critic models. We collect about 64k prompts from diverse resources (including UltraChat, ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN). We then use these prompts to query multiple LLMs (see Table for model lists) and generate 4 different responses for each prompt, resulting in a total of 256k samples.… See the full description on the dataset page: https://huggingface.co/datasets/openbmb/UltraFeedback. |
OpenPipe/hacker-news | OpenPipe |
Hacker News posts and comments
This is a dataset of all HN posts and comments, current as of November 1, 2023.
|
Idavidrein/gpqa | Idavidrein |
Dataset Card for GPQA
GPQA is a multiple-choice Q&A dataset of very hard questions written and validated by experts in biology, physics, and chemistry. When attempting questions outside their own domain (e.g., a physicist answering a chemistry question), these experts achieve only 34% accuracy, despite spending more than 30 minutes with full access to Google.
We request that you do not reveal examples from this dataset in plain text or images online, to reduce the risk of leakage into foundation… See the full description on the dataset page: https://huggingface.co/datasets/Idavidrein/gpqa. |
nyanko7/danbooru2023 | nyanko7 |
Danbooru2023: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset
Danbooru2023 is a large-scale anime image dataset with over 5 million images contributed and annotated in detail by an enthusiast community. Image tags cover aspects like characters, scenes, copyrights, and artists, with an average of 30 tags per image.
Danbooru is a veteran anime image board with high-quality images and extensive tag metadata. The dataset can be used to train image classification… See the full description on the dataset page: https://huggingface.co/datasets/nyanko7/danbooru2023. |
ibrahimhamamci/CT-RATE | ibrahimhamamci | Welcome to the official page of CT-RATE, a pioneering dataset in 3D medical imaging that uniquely pairs textual data with imagery data, focused on chest CT volumes. Here, you will find the CT-RATE dataset, which comprises chest CT volumes paired with corresponding radiology text reports, multi-abnormality labels, and metadata, all freely accessible to researchers.
CT-RATE: A novel dataset of chest CT volumes with corresponding radiology text reports
A major challenge in… See the full description on the dataset page: https://huggingface.co/datasets/ibrahimhamamci/CT-RATE. |
TIGER-Lab/MMLU-Pro | TIGER-Lab |
MMLU-Pro Dataset
The MMLU-Pro dataset is a more robust and challenging massive multi-task understanding dataset, tailored to more rigorously benchmark large language models' capabilities. This dataset contains 12K complex questions across various disciplines.
|Github | 🏆Leaderboard | 📖Paper |
🚀 What's New
[2024.10.16] We have added Gemini-1.5-Flash-002, Gemini-1.5-Pro-002, Jamba-1.5-Large, Llama-3.1-Nemotron-70B-Instruct-HF and Ministral-8B-Instruct-2410 to our… See the full description on the dataset page: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro. |
JailbreakBench/JBB-Behaviors | JailbreakBench |
An Open Robustness Benchmark for Jailbreaking Language Models
NeurIPS 2024 Datasets and Benchmarks Track
Paper |
Leaderboard |
Benchmark code
What is JailbreakBench?
JailbreakBench is an open-source robustness benchmark for jailbreaking large language models (LLMs). The goal of this benchmark is to comprehensively track progress toward (1) generating successful jailbreaks and (2) defending against these jailbreaks. To this end, we… See the full description on the dataset page: https://huggingface.co/datasets/JailbreakBench/JBB-Behaviors. |
Salesforce/xlam-function-calling-60k | Salesforce |
APIGen Function-Calling Datasets
Paper | Website | Models
This repo contains 60,000 data points collected by APIGen, an automated data generation pipeline designed to produce verifiable, high-quality datasets for function-calling applications. Each entry in our dataset is verified through three hierarchical stages: format checking, actual function execution, and semantic verification, ensuring its reliability and correctness.
We conducted human evaluation over 600 sampled data points… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k. |
Downloads last month: 10
Size of downloaded dataset files: 12.7 MB
Size of the auto-converted Parquet files: 12.7 MB
Number of rows: 84,123