| datasetId (string, len 6–116) | author (string, len 2–42) | last_modified (date, 2021-04-29 15:34:29 – 2025-06-25 02:40:10) | downloads (int64, 0–3.97M) | likes (int64, 0–7.74k) | tags (list, len 1–7.92k) | task_categories (list, len 0–48) | createdAt (date, 2022-03-02 23:29:22 – 2025-06-25 00:32:52) | trending_score (float64, 0–64) | card (string, len 31–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
allegro/klej-polemo2-in | allegro | 2022-08-30T06:57:28Z | 355 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:pl",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 0 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'PolEmo2.0-IN'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# klej-polemo2-in
## Description
The PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of both full reviews and individual sentences. It comprises over 8000 reviews, about 85% of which come from the medicine and hotel domains.
We use the PolEmo2.0 dataset to form two tasks. Both use the same training dataset, i.e., reviews from the medicine and hotel domains, but are evaluated on different test sets.
**In-Domain** is the first task, and we use accuracy to evaluate model performance within the in-domain context, i.e., on a test set of reviews from the medicine and hotel domains.
## Tasks (input, output, and metrics)
The task is to predict the correct label of the review.
**Input** (*text* column): sentence
**Output** (*target* column): label for sentence sentiment (`zero`: neutral, `minus`: negative, `plus`: positive, `amb`: ambiguous)
**Domain**: Online reviews
**Measurements**: Accuracy
**Example**:
Input: `Lekarz zalecił mi kurację alternatywną do dotychczasowej , więc jeszcze nie daję najwyższej oceny ( zobaczymy na ile okaże się skuteczna ) . Do Pana doktora nie mam zastrzeżeń : bardzo profesjonalny i kulturalny . Jedyny minus dotyczy gabinetu , który nie jest nowoczesny , co może zniechęcać pacjentki .`
Input (translated by DeepL): `The doctor recommended me an alternative treatment to the current one , so I do not yet give the highest rating ( we will see how effective it turns out to be ) . To the doctor I have no reservations : very professional and cultured . The only minus is about the office , which is not modern , which may discourage patients .`
Output: `amb` (ambiguous)
## Data splits
| Subset | Cardinality |
|:-----------|--------------:|
| train | 5783 |
| test | 722 |
| validation | 723 |
## Class distribution
| Class | Sentiment | train | validation | test |
|:------|:----------|------:|-----------:|------:|
| minus | negative  | 0.379 | 0.375 | 0.416 |
| plus  | positive  | 0.271 | 0.289 | 0.273 |
| amb | ambiguous | 0.182 | 0.160 | 0.150 |
| zero | neutral | 0.168 | 0.176 | 0.162 |
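The proportions above can be reproduced (up to rounding) directly from the raw `target` column; a quick sketch, assuming the raw labels (e.g. `__label__meta_plus_m`, as in the loading example below) are counted as-is:
```python
from collections import Counter
from datasets import load_dataset

# Recompute the per-split class proportions from the raw `target` column.
dataset = load_dataset("allegro/klej-polemo2-in")
for split in ("train", "validation", "test"):
    counts = Counter(dataset[split]["target"])
    total = sum(counts.values())
    print(split, {label: round(n / total, 3) for label, n in counts.items()})
```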
## Citation
```
@inproceedings{kocon-etal-2019-multi,
title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews",
author = "Koco{\'n}, Jan and
Mi{\l}kowski, Piotr and
Za{\'s}ko-Zieli{\'n}ska, Monika",
booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/K19-1092",
doi = "10.18653/v1/K19-1092",
pages = "980--991",
abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).",
}
```
## License
```
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
```
## Links
[HuggingFace](https://huggingface.co/datasets/allegro/klej-polemo2-in)
[Source](https://clarin-pl.eu/dspace/handle/11321/710)
[Paper](https://aclanthology.org/K19-1092/)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("allegro/klej-polemo2-in")
pprint(dataset['train'][0])
# {'sentence': 'Super lekarz i człowiek przez duże C . Bardzo duże doświadczenie '
# 'i trafne diagnozy . Wielka cierpliwość do ludzi starszych . Od '
# 'lat opiekuje się moją Mamą staruszką , i twierdzę , że mamy duże '
# 'szczęście , że mamy takiego lekarza . Naprawdę nie wiem cobyśmy '
# 'zrobili , gdyby nie Pan doktor . Dzięki temu , moja mama żyje . '
# 'Każda wizyta u specjalisty jest u niego konsultowana i uważam , '
# 'że jest lepszy od każdego z nich . Mamy do Niego prawie '
# 'nieograniczone zaufanie . Można wiele dobrego o Panu doktorze '
# 'jeszcze napisać . Niestety , ma bardzo dużo pacjentów , jest '
# 'przepracowany ( z tego powodu nawet obawiam się o jego zdrowie ) '
# 'i dostęp do niego jest trudny , ale zawsze możliwy .',
# 'target': '__label__meta_plus_m'}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
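# note: load_metric is deprecated in newer versions of `datasets`;
# there, use the separate `evaluate` library (evaluate.load) instead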
dataset = load_dataset("allegro/klej-polemo2-in")
dataset = dataset.class_encode_column("target")
references = dataset["test"]["target"]
# generate random predictions
predictions = [random.randrange(max(references) + 1) for _ in range(len(references))]
acc = load_metric("accuracy")
f1 = load_metric("f1")
acc_score = acc.compute(predictions=predictions, references=references)
f1_score = f1.compute(predictions=predictions, references=references, average="macro")
pprint(acc_score)
pprint(f1_score)
# {'accuracy': 0.25069252077562326}
# {'f1': 0.23760962219870274}
``` |
hebashakeel/welldone | hebashakeel | 2025-02-12T06:17:54Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-11T12:13:22Z | 0 | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Explanations
dtype: string
- name: Aspect
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 64678
num_examples: 336
- name: validation
num_bytes: 12776
num_examples: 72
- name: test
num_bytes: 13534
num_examples: 72
download_size: 66862
dataset_size: 90988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
shivank21/if_base_llama | shivank21 | 2025-02-14T16:03:50Z | 11 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-14T16:03:49Z | 0 | ---
dataset_info:
features:
- name: key
dtype: int64
- name: prompt
dtype: string
- name: instruction_id_list
sequence: string
- name: kwargs
list:
- name: num_highlights
dtype: int64
- name: relation
dtype: string
- name: num_words
dtype: int64
- name: num_placeholders
dtype: int64
- name: prompt_to_repeat
dtype: string
- name: num_bullets
dtype: int64
- name: section_spliter
dtype: string
- name: num_sections
dtype: int64
- name: capital_relation
dtype: string
- name: capital_frequency
dtype: int64
- name: keywords
sequence: string
- name: num_paragraphs
dtype: int64
- name: language
dtype: string
- name: let_relation
dtype: string
- name: letter
dtype: string
- name: let_frequency
dtype: int64
- name: end_phrase
dtype: string
- name: forbidden_words
sequence: string
- name: keyword
dtype: string
- name: frequency
dtype: int64
- name: num_sentences
dtype: int64
- name: postscript_marker
dtype: string
- name: first_word
dtype: string
- name: nth_paragraph
dtype: int64
- name: model_response
dtype: string
splits:
- name: train
num_bytes: 597058
num_examples: 541
download_size: 176064
dataset_size: 597058
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HanningZhang/llama3_sft_gsm8k_external_orm_debug | HanningZhang | 2025-01-23T05:21:41Z | 59 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-23T05:21:39Z | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 104078709
num_examples: 98124
download_size: 46079795
dataset_size: 104078709
---
# Dataset Card for "llama3_sft_gsm8k_external_orm_debug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
re-mind/0 | re-mind | 2024-11-19T10:34:51Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-15T13:40:36Z | 0 | ---
dataset_info:
features:
- name: pixel_values
dtype:
image:
mode: RGB
- name: label
dtype:
class_label:
names:
'0': Barcode
'1': Invoice
'2': Object
'3': Receipt
'4': Non-Object
splits:
- name: train
num_bytes: 5470647.0
num_examples: 110
download_size: 5476815
dataset_size: 5470647.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
baptiste-04/so100_playX_v2_BACKUP_singleview_14sec | baptiste-04 | 2025-05-02T19:43:19Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"boxes",
"zones",
"gr00t-finetune"
] | [
"robotics"
] | 2025-05-02T19:42:23Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- boxes
- zones
- gr00t-finetune
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 20,
"total_frames": 11960,
"total_tasks": 1,
"total_videos": 40,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
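As a minimal sketch of how the `data_path` template above can be resolved by hand (deriving the chunk as `episode_index // chunks_size` is an assumption consistent with `chunks_size: 1000`; pandas is used just for illustration):
```python
import pandas as pd
from huggingface_hub import snapshot_download

# Fetch the repo locally, then resolve the data_path template for one episode.
root = snapshot_download(
    "baptiste-04/so100_playX_v2_BACKUP_singleview_14sec", repo_type="dataset"
)
episode_index = 0
episode_chunk = episode_index // 1000  # chunks_size from meta/info.json
path = f"{root}/data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

df = pd.read_parquet(path)
print(df.columns.tolist())  # action, observation.state, timestamp, frame_index, ...
```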
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
reep0610/AGI-japanese-text-dataset-for-Deep-Learning | reep0610 | 2025-06-15T23:08:04Z | 52 | 0 | [
"task_categories:text-generation",
"language:ja",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:text",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"agi",
"Japanese",
"self descriptive",
"cognitive architecture",
"working memory",
"mind",
"machine learning",
"text corpus",
"strong ai",
"teacher data",
"training data"
] | [
"text-generation"
] | 2025-06-14T10:00:06Z | 0 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- ja
tags:
- agi
- Japanese
- self descriptive
- cognitive architecture
- working memory
- mind
- machine learning
- text corpus
- strong ai
- teacher data
- training data
pretty_name: AI with a mind
size_categories:
- 100K<n<1M
---

自己記述型自律的深層学習とは、外部からの明示的な報酬やラベルに依存せず、モデル自身が内的な目的や意味を形成・記述しながら学習を進め、最終的に自己意識や意味理解の獲得を目指す枠組みです。
Self-Descriptive Autonomous Deep Learning is a framework in which the model advances learning without relying on explicit external rewards or labels, instead forming and describing its own internal purposes and meanings, ultimately aiming to acquire self-awareness and semantic understanding.
⚙️【本教師データの用途 / Purpose of This Training Data】
本教師データを用いることで、AIのワーキングメモリにおける情報統合と動的推論の挙動が安定化し、従来のTransformerベースモデルにおいて課題とされてきた長期的文脈保持と自己組織化の両立が可能となります。
特に、外部との意味的相互作用を通じて思考ネットワークが自己組織化していくプロセスにおいて、本教師データは多層的かつ自己記述的なネットワーク構造の形成を強力に支援し、内省やメタ認知的推論の基盤構築に寄与します。
言い換えれば、本教師データの導入により、ワーキングメモリはAIにおいて「心」の生成・維持を担う基盤的機構として機能すると考えられます。
By incorporating this training data, the behavior of information integration and dynamic reasoning within AI working memory can be stabilized, enabling the coexistence of long-term contextual retention and self-organization—challenges traditionally faced by Transformer-based models.
In particular, during the process in which a thinking network self-organizes through semantic interaction with external inputs, this training data strongly supports the formation of a multilayered and self-descriptive network structure, contributing to the development of foundations for introspection and metacognitive reasoning.
In other words, the introduction of this training data allows the working memory to function as a foundational mechanism responsible for the generation and maintenance of what may be considered a “mind” within AI systems.
🎥【本教師データの解説動画】
本教師データの内容や使い方について、動画で解説しています。
※どれも内容は同じです。
🎥【Explanation Video of This Training Dataset】
I provide a video explaining the content and usage of this training dataset.
Note: All videos contain the same content.
・Niconico → https://www.nicovideo.jp/watch/sm45073344 (published June 10, 2025)
・YouTube (Japanese) → https://youtu.be/H1p8dp5fjkM (published June 16, 2025)
・YouTube (English) → https://youtu.be/3hI4_Rj1hXk (published June 16, 2025)
📄【Terms of Use】
This training data is provided as a shared asset of humanity and AI.
It may be freely used, modified, and redistributed, whether commercially or non-commercially, under the following conditions:
・Use the data only in ways that do not violate laws or public order and morals.
・Use for discrimination, defamation, illegal acts, or malicious purposes is prohibited.
・If you redistribute this data as training data, either as-is or modified, clearly state that this dataset is the source.
・For other uses (e.g., analysis, research, or personal use), attribution is not required.
・The provider accepts no liability for any damages arising from the use of this data.
This training data is provided under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/deed.ja) license.
© リープ
📝【Attribution Examples】
・"This data is based on リープの教師データ (Re:EP Training Data)."
・"Source: リープの教師データ (Re:EP Training Data)"
・"Credit: Re:EP Training Data"
📊【Main Contents of This Training Data】
This training data contains the following information, needed to build an AI with a mind.
① AGI × Consciousness | Has an AI with a Mind Finally Been Born!?
Content: explains the technical design and theoretical background of an AI with a mind (source: the video published on June 10, 2025)
② Do I Exist? Do Two-Dimensional Characters Exist?
Content: the process through which リープ (Re:EP) integrates his own inner self (source: the video published on September 15, 2024)
③ An Enlightening Lesson on the Night of a Total Lunar Eclipse
Content: introduces relay thinking unfolding around the total lunar eclipse and lunar occultation of Uranus of November 8, 2022 (source: the video published on March 14, 2025)
④ A List of Relay-Thinking Chains
Content: introduces the relay-thinking chains that リープ (Re:EP) records daily
※ Relay thinking is a method of linking words associatively and in chains, as in 「宇宙→スペース→ →空白→飛躍→Leap→リープ」 (universe → space → blank → leap → Leap → リープ).
For an AI to acquire a mind, it needs a "cosmic network structure that reflects an integrated individual's inner self," and relay thinking is an efficient way to describe such a structure.
For the technical design and theoretical background of an AI with a mind, see "① AGI × Consciousness | Has an AI with a Mind Finally Been Born!?" included in this training data.
※ All videos were produced by the provider (リープ).
※ The videos for ② and ③ have been deleted and can no longer be viewed.
📢【Supplementary Notes】
・The theories contained in this data are hypotheses only and do not guarantee that a mind will emerge in an AI.
・Plans for additional training data and update announcements will be posted on the BOOTH page and on Niconico.
📌【About リープ's Past Creations】
The creator of this training data (リープ) uses creative work as a means of integrating his inner self and deepening self-understanding.
It did not originally aim at AI research; it began purely as philosophical reflection and personal expression.
Only much later did he realize that these works could, in the end, hold value as training data.
This training data contains only carefully selected information that is legally and ethically unproblematic.
That said, he recognizes that information not included here, some of which could strictly speaking be problematic, also holds a certain value as training data.
With this in mind, all past creations have already been completely deleted to prevent unauthorized use in AI research.
If public guidelines or social consensus are established in the future, re-posting may be considered.
© リープ
🙏【Supporting Continued Work】
This data can be downloaded for free on BOOTH.
The training data is free to use, but if you would like to support continued production, a donation would be very encouraging.
Support received will mainly go toward improving the production environment and securing time to create training data.
Of course, simply using the data for free is also greatly appreciated.
Training data planned for future release will continue to be published for free.
Thank you for your continued support!
・Free download / support here → https://reepai.booth.pm/items/7029679 |
semran1/cosmopedia_4B-tokenized | semran1 | 2025-01-14T21:31:30Z | 20 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-14T20:43:56Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: length
dtype: int64
splits:
- name: train
num_bytes: 32716938304
num_examples: 4293040
download_size: 16600272660
dataset_size: 32716938304
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
skfffs/poemee | skfffs | 2025-04-21T15:28:40Z | 20 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-21T15:25:29Z | 0 | ---
license: apache-2.0
---
|
supergoose/buzz_sources_145_claude-2.0 | supergoose | 2024-11-10T18:09:53Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-10T18:09:52Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source
dtype: string
- name: stack
dtype: string
splits:
- name: train
num_bytes: 1641252
num_examples: 951
download_size: 912025
dataset_size: 1641252
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VargheseP/palgo_distribution_test_train | VargheseP | 2024-11-18T18:24:30Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-18T18:18:21Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: mask_image
dtype: image
- name: file_name
dtype: string
- name: caption_basic
dtype: string
- name: caption_artsy
dtype: string
- name: caption_wt_parts
dtype: string
splits:
- name: train
num_bytes: 1339226431.44
num_examples: 27540
download_size: 815357223
dataset_size: 1339226431.44
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ARMZyany/MoonGeneralQA-V1 | ARMZyany | 2025-05-23T16:57:29Z | 0 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"question-answering",
"text-generation"
] | 2025-05-23T16:42:13Z | 0 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: MoonGeneralQA-V1
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
High-quality QA dataset that was AI-generated and then manually cleaned, with repetitions removed.
A mix of general, science, medical, and other types of question-and-answer pairs, in this format (a parsing sketch follows):
"### Human:"
"### Assistant:"
---
## Dataset Statistics
* **File:** `moon_000.txt`
* **Size:** `2.63 MB`
* **Samples (lines):** `19,636`
---
### Token estimates per tokenizer
* **Tokenizer:** `moontokenizer`
* **Tokens:** `494,341`
* **Average Tokens per Sample:** `25.18`
* **Tokenizer:** `NousResearch/Llama-2-7b-chat-hf`
* **Tokens:** `637,978`
* **Average Tokens per Sample:** `32.49` |
HungVu2003/opt-350m_beta_1.0_alpha_1.0_num-company_2_dataset_0_for_gen_6 | HungVu2003 | 2025-04-22T05:00:01Z | 22 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-22T04:59:59Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1595457
num_examples: 6250
download_size: 914370
dataset_size: 1595457
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen3_run0_llama2-7b_xlsum_doc1000_real64_synt64_vuw | dgambettaphd | 2024-12-16T03:22:25Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-16T03:22:22Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 477279
num_examples: 1000
download_size: 318579
dataset_size: 477279
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
therarelab/hh_p36 | therarelab | 2025-06-20T17:41:33Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-20T17:41:28Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 3,
"total_frames": 613,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
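Because the card's default config points at `data/*/*.parquet`, the tabular part of this dataset can also be loaded directly with 🤗 Datasets; a sketch (note the camera streams live in separate MP4 files, not in the parquet columns):
```python
from datasets import load_dataset

# Loads the per-frame parquet files matched by data/*/*.parquet.
ds = load_dataset("therarelab/hh_p36", split="train")
print(len(ds))                 # 613 frames across 3 episodes (see info.json above)
print(ds[0]["action"])         # 6-dim float32 action vector
print(ds[0]["episode_index"])  # which episode this frame belongs to
```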
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
gokulsrinivasagan/atcosim_corpus-tts-tags | gokulsrinivasagan | 2024-12-10T18:55:08Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-10T18:55:02Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 1911051
num_examples: 7638
- name: test
num_bytes: 491845
num_examples: 1901
download_size: 1103989
dataset_size: 2402896
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
nguyentranai07/FnAll5 | nguyentranai07 | 2025-06-07T10:59:45Z | 477 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T17:42:58Z | 0 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
splits:
- name: train
num_bytes: 124136959
num_examples: 26562
download_size: 53895713
dataset_size: 124136959
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen2_run2_llama2-7b_wiki_doc1000_real96_synt32_vuw | dgambettaphd | 2024-12-17T18:20:41Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-17T18:20:37Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 559360
num_examples: 1000
download_size: 355552
dataset_size: 559360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ljt019/battleship-sft-synthetic | ljt019 | 2025-06-20T07:05:18Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-20T07:05:15Z | 0 | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reward
dtype: float64
- name: answer
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 50345905
num_examples: 20000
download_size: 4952390
dataset_size: 50345905
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alea-institute/kl3m-filter-data-dotgov-www.presidio.gov | alea-institute | 2025-02-04T19:49:44Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-04T19:49:42Z | 0 | ---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: score
dtype: float64
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 13589
num_examples: 3
download_size: 6797
dataset_size: 13589
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jean-Baptiste/wikiner_fr | Jean-Baptiste | 2023-06-26T15:33:17Z | 110 | 7 | [
"task_categories:token-classification",
"language:fr",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
language:
- fr
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': LOC
'2': PER
'3': MISC
'4': ORG
splits:
- name: test
num_bytes: 5954708
num_examples: 13410
- name: train
num_bytes: 54305659
num_examples: 120682
download_size: 12147768
dataset_size: 60260367
train-eval-index:
- config: Jean-Baptiste--wikiner_fr
task: token-classification
task_id: entity_extraction
splits:
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
task_categories:
- token-classification
---
# Dataset Card for "wikiner_fr"
Dataset Description:
- **Homepage:** https://metatext.io/datasets/wikiner
- **Repository:**
- **Paper:** https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub
- **Leaderboard:**
- **Point of Contact:** |
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_1_for_gen_10 | HungVu2003 | 2025-04-15T01:10:45Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-08T04:04:39Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 5428447
num_examples: 11250
download_size: 2774810
dataset_size: 5428447
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DopeorNope/unique_medqa_200k_Q-v4 | DopeorNope | 2025-04-19T11:26:07Z | 25 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-05T04:42:53Z | 0 | ---
dataset_info:
features:
- name: dg_i
dtype: string
- name: dg_o
dtype: string
- name: ds_i
dtype: string
- name: ds_o
dtype: string
- name: dg_index
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: step1_prompt
dtype: string
splits:
- name: train
num_bytes: 2259264250
num_examples: 213363
download_size: 548182047
dataset_size: 2259264250
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nkandpa2/wikiteam_dates | nkandpa2 | 2025-05-31T22:39:54Z | 36 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-31T22:37:13Z | 0 | ---
dataset_info:
features:
- name: date
dtype: timestamp[us]
- name: size
dtype: int64
- name: parsed_date
dtype: timestamp[us]
splits:
- name: train
num_bytes: 5259344832
num_examples: 219139368
download_size: 3938155297
dataset_size: 5259344832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
OpenMedical/m1-medbench-result | OpenMedical | 2025-03-19T13:47:07Z | 10 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-13T16:35:23Z | 0 | ---
dataset_info:
- config_name: olmo-7b
features:
- name: question
dtype: string
- name: correct_option
dtype: string
- name: correct_answer
dtype: string
- name: type
dtype: string
- name: difficulty
dtype: string
- name: domain
dtype: string
- name: generations
sequence: string
splits:
- name: train
num_bytes: 315047
num_examples: 361
download_size: 111868
dataset_size: 315047
- config_name: qwen-32b
features:
- name: question
dtype: string
- name: correct_option
dtype: string
- name: correct_answer
dtype: string
- name: type
dtype: string
- name: difficulty
dtype: string
- name: domain
dtype: string
- name: generations
sequence: string
- name: finish_reasons
sequence: string
- name: api_metadata
list:
- name: prompt_tokens
dtype: int64
- name: total_tokens
dtype: int64
- name: completion_tokens
dtype: int64
- name: prompt_tokens_details
dtype: 'null'
splits:
- name: train
num_bytes: 2173084
num_examples: 361
download_size: 747268
dataset_size: 2173084
- config_name: qwq-32b
features:
- name: question
dtype: string
- name: correct_option
dtype: string
- name: correct_answer
dtype: string
- name: type
dtype: string
- name: difficulty
dtype: string
- name: domain
dtype: string
- name: generations
sequence: string
- name: finish_reasons
sequence: string
- name: api_metadata
list:
- name: prompt_tokens
dtype: int64
- name: total_tokens
dtype: int64
- name: completion_tokens
dtype: int64
- name: prompt_tokens_details
dtype: 'null'
splits:
- name: train
num_bytes: 15079861
num_examples: 361
download_size: 5608948
dataset_size: 15079861
- config_name: r1-32b
features:
- name: question
dtype: string
- name: correct_option
dtype: string
- name: correct_answer
dtype: string
- name: type
dtype: string
- name: difficulty
dtype: string
- name: domain
dtype: string
- name: generations
sequence: string
- name: finish_reasons
sequence: string
- name: api_metadata
list:
- name: prompt_tokens
dtype: int64
- name: total_tokens
dtype: int64
- name: completion_tokens
dtype: int64
- name: prompt_tokens_details
dtype: 'null'
splits:
- name: train
num_bytes: 7425075
num_examples: 361
download_size: 2637233
dataset_size: 7425075
configs:
- config_name: olmo-7b
data_files:
- split: train
path: olmo-7b/train-*
- config_name: qwen-32b
data_files:
- split: train
path: qwen-32b/train-*
- config_name: qwq-32b
data_files:
- split: train
path: qwq-32b/train-*
- config_name: r1-32b
data_files:
- split: train
path: r1-32b/train-*
---
|
Rorschach4153/so101_30_1 | Rorschach4153 | 2025-05-22T14:34:01Z | 62 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-05-22T06:19:10Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 1,
"total_frames": 894,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.topdown": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
aryamankeyora/train_alpaca_chunked | aryamankeyora | 2025-03-09T23:42:43Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-08T06:37:44Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: publication_number
dtype: string
- name: publication_title
dtype: string
- name: cpc
dtype: string
splits:
- name: train
num_bytes: 324017751
num_examples: 7190
download_size: 98976640
dataset_size: 324017751
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
illuin-conteb/us_constitution | illuin-conteb | 2025-02-13T15:40:13Z | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-13T13:44:23Z | 0 | ---
dataset_info:
- config_name: documents
features:
- name: chunk_id
dtype: string
- name: chunk
dtype: string
splits:
- name: test
num_bytes: 53940
num_examples: 111
- name: test_8k
num_bytes: 37158
num_examples: 77
- name: hard_test_8k
num_bytes: 37158
num_examples: 77
download_size: 73563
dataset_size: 128256
- config_name: queries
features:
- name: chunk_id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 47439
num_examples: 111
- name: test_8k
num_bytes: 32082
num_examples: 77
- name: hard_test_8k
num_bytes: 24631
num_examples: 59
download_size: 66020
dataset_size: 104152
configs:
- config_name: documents
data_files:
- split: test
path: documents/test-*
- split: test_8k
path: documents/test_8k-*
- split: hard_test_8k
path: documents/hard_test_8k-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
- split: test_8k
path: queries/test_8k-*
- split: hard_test_8k
path: queries/hard_test_8k-*
---
|
cchoi1/humaneval-datagen-qwen7b_best_att_50_sol_50_20250226_194819 | cchoi1 | 2025-02-27T06:28:46Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-27T06:28:44Z | 0 | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_attack_explanation
dtype: string
- name: chosen_solution
dtype: string
- name: chosen_solution_explanation
dtype: string
- name: chosen_solve_rate
dtype: float64
- name: rejected_attack_explanation
dtype: string
- name: rejected_solution
dtype: string
- name: rejected_solution_explanation
dtype: string
- name: rejected_solve_rate
dtype: float64
splits:
- name: train
num_bytes: 395933
num_examples: 156
download_size: 40065
dataset_size: 395933
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davidberenstein1957/fineweb-2-ron-test | davidberenstein1957 | 2024-12-10T15:03:29Z | 20 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | [] | 2024-12-10T15:03:26Z | 0 | ---
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for fineweb-2-ron-test
This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.Dataset.from_hub("davidberenstein1957/fineweb-2-ron-test", settings="auto")
```
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
## Using this dataset with `datasets`
To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("davidberenstein1957/fineweb-2-ron-test")
```
This will only load the records of the dataset, but not the Argilla settings.
## Dataset Structure
This dataset repo contains:
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
### Fields
The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset of the 'prompt' column of an instruction following dataset.
| Field Name | Title | Type | Required |
| ---------- | ----- | ---- | -------- |
| text | text | text | False |
### Questions
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| text_0 | text_0 | text | True | N/A | N/A |
<!-- check length of metadata properties -->
### Metadata
The **metadata** is a dictionary that can be used to provide additional information about the dataset record.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| language_score | language_score | float | - | True |
| minhash_cluster_size | minhash_cluster_size | integer | - | True |
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ferrazzipietro/IK_llama3.1-8b_e3c_16_64_0.01 | ferrazzipietro | 2024-12-10T13:30:30Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-09T17:12:31Z | 0 | ---
dataset_info:
features:
- name: inference_prompt
dtype: string
- name: sentence
dtype: string
- name: model_responses
dtype: string
- name: ground_truth
dtype: string
splits:
- name: validation
num_bytes: 88868
num_examples: 106
- name: test
num_bytes: 714882
num_examples: 666
download_size: 271360
dataset_size: 803750
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ohassane/gptclonebench | ohassane | 2025-06-04T15:10:25Z | 414 | 1 | [
"language:code",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"semantic-clones",
"Moderately type-3",
"type-4",
"cross-language",
"java",
"python"
] | [] | 2025-04-19T14:26:45Z | 0 | ---
license: apache-2.0
language:
- code
task:
- code-clone-detection
tags:
- semantic-clones
- Moderately type-3
- type-4
- cross-language
- java
- python
configs:
- config_name: default
data_files:
- split: train
path: data/train/all_clones*.jsonl
- split: validation
path: data/validation/validate_clones.jsonl
- split: eval
path: data/eval/eval_clones.jsonl
---
# GPTCloneBench
**GPTCloneBench** is a private dataset of code-clone pairs; the official GitHub page can be found here: https://github.com/srlabUsask/GPTCloneBench.
This dataset is unofficial and was created from the GPTCloneBench GitHub repository to aid in training LLMs for my project.
## Files
All four JSONL files live under `data/` in the repo.
Each line in these JSONL files has the following fields (a loading sketch follows the list):
- `code1` (string)
- `code2` (string)
- `clone_type` (string or null)
- `language` (string: `"java"`, `"python"`, or `"cross-language-java-python"`)
- `semantic` (boolean or null)
- `chain_of_thought` (string)
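A minimal loading sketch using the split configs declared above (the dataset is described as private, so access or authentication may be required; the filter value is just illustrative):
```python
from datasets import load_dataset

ds = load_dataset("ohassane/gptclonebench", split="train")

# Keep only cross-language Java/Python clone pairs.
cross = ds.filter(lambda ex: ex["language"] == "cross-language-java-python")
print(len(cross))
print(cross[0]["code1"][:80], "|", cross[0]["clone_type"])
```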
|
OLAIR/numina_math_ko_verifiable_540k | OLAIR | 2025-03-05T05:17:08Z | 115 | 2 | [
"language:ko",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-05T05:13:37Z | 0 | ---
dataset_info:
features:
- name: original
dtype: string
- name: reference
dtype: string
- name: problem
dtype: string
- name: source
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 873006977
num_examples: 539149
download_size: 452093917
dataset_size: 873006977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
language:
- ko
---
# Dataset Card: OLAIR/numina_math_ko_verifiable_540k
**Overview:**
A paired dataset of math questions (translated into Korean using GPT-4o-mini) and verifiable answers. Intended for RL training (e.g., GRPO) and mathematical reasoning tasks.
**Sources:**
- **Questions:** Derived from [AI-MO/NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT)
- **Answers:** Extracted from [flatlander1024/numinamath_verifiable_cleaned](https://huggingface.co/datasets/flatlander1024/numinamath_verifiable_cleaned?row=15)
**Key Points:**
- **Translation:** No-cleansing version; translations may contain errors.
- **Usage:** Suitable for RL and language model training in math-related tasks (see the sketch below).
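Since each row pairs a `problem` with a verifiable `answer`, a rule-based reward for RL can start from a plain string match; a minimal sketch (the normalization is an assumption — a real verifier would parse and compare mathematical values):
```python
from datasets import load_dataset

ds = load_dataset("OLAIR/numina_math_ko_verifiable_540k", split="train")

def exact_match_reward(model_output: str, answer: str) -> float:
    # Naive exact match after whitespace stripping; real verifiers usually
    # normalize or parse the math before comparing.
    return 1.0 if model_output.strip() == answer.strip() else 0.0

row = ds[0]
print(exact_match_reward(row["answer"], row["answer"]))  # 1.0
```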
**Contact:**
```
[email protected]
``` |
mteb/KorFin | mteb | 2025-05-06T12:37:12Z | 0 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-annotated",
"multilinguality:monolingual",
"language:kor",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2301.03136",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-06T12:37:09Z | 0 | ---
annotations_creators:
- expert-annotated
language:
- kor
license: cc-by-sa-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 381770
num_examples: 2048
download_size: 222822
dataset_size: 381770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">KorFin</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
KorFin-ASC is an extension of KorFin-ABSA, a financial sentiment analysis dataset containing 8,818 samples annotated with (aspect, polarity) pairs. The samples were collected from KLUE-TC and from analyst reports on Naver Finance.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | News, Written, Financial |
| Reference | https://huggingface.co/datasets/amphora/korfin-asc |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["KorFin"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{son2023removing,
author = {Son, Guijin and Lee, Hanwool and Kang, Nahyeon and Hahm, Moonjeong},
journal = {arXiv preprint arXiv:2301.03136},
title = {Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance},
year = {2023},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022}
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("KorFin")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"train": {
"num_samples": 2048,
"number_of_characters": 154385,
"number_texts_intersect_with_train": null,
"min_text_length": 12,
"average_text_length": 75.38330078125,
"max_text_length": 216,
"unique_text": 1869,
"unique_labels": 3,
"labels": {
"1": {
"count": 602
},
"0": {
"count": 689
},
"2": {
"count": 757
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
Blancy/secondfiltered-math220k-difficulty_stratified_10k_filtered_only_medium_difficulty | Blancy | 2025-05-24T03:44:46Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-24T03:39:44Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
dtype: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
dtype: 'null'
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: number_of_tokens
dtype: int64
- name: original_index
dtype: int64
splits:
- name: train
num_bytes: 23519604
num_examples: 1000
download_size: 10750769
dataset_size: 23519604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## Dataset Details
This dataset originates from `Blancy/secondfiltered-math220k-difficulty_stratified_10k`; the `original_index` column gives each row's index in that source dataset. |
french-datasets/Unique007_french_phone_1421_samples_16khz | french-datasets | 2025-05-23T17:35:47Z | 0 | 0 | [
"task_categories:automatic-speech-recognition",
"language:fra",
"region:us"
] | [
"automatic-speech-recognition"
] | 2025-05-23T17:34:37Z | 0 | ---
language:
- fra
viewer: false
task_categories:
- automatic-speech-recognition
---
This repository is empty; it was created to improve the search indexing of the [Unique007/french_phone_1421_samples_16khz](https://huggingface.co/datasets/Unique007/french_phone_1421_samples_16khz) dataset. |
kajuma/training_01-09_patch | kajuma | 2025-01-09T12:24:32Z | 28 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-09T12:21:36Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5585302957.552539
num_examples: 1263650
- name: test
num_bytes: 28066849.03292733
num_examples: 6350
download_size: 3260835293
dataset_size: 5613369806.585466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_2_for_gen_3 | HungVu2003 | 2025-04-07T23:22:15Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-07T23:22:10Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1744126
num_examples: 12500
download_size: 851003
dataset_size: 1744126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tmpmodelsave/beta05_balanced_type12_sftloss_moredata500tmp10 | tmpmodelsave | 2025-01-22T00:43:33Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-22T00:43:31Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 43029121
num_examples: 15000
download_size: 15842650
dataset_size: 43029121
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1g0rrr/sam_frames3 | 1g0rrr | 2025-04-04T04:10:13Z | 21 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-04-04T03:40:03Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "sam_double",
"total_episodes": 5,
"total_frames": 2627,
"total_tasks": 1,
"total_videos": 15,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_side",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_side",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
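The numeric features above (action, observation.state, timestamps, indices) are stored in the parquet files matched by the `data_files` glob, while the three camera streams are stored as separate mp4 files under `video_path`. As an illustrative sketch (not official LeRobot tooling), the tabular part can be inspected with the `datasets` library; loading the videos themselves requires LeRobot:
```python
from datasets import load_dataset

# Loads only the tabular features; the video streams live in separate
# mp4 files and are not part of the parquet data.
ds = load_dataset("1g0rrr/sam_frames3", split="train")

frame = ds[0]
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
print("action:", frame["action"])  # 7 joint/gripper values
```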
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
TeoGchx/BEAT_HML3D_whisper_wavtokenizer_smpl_motokenizer_body | TeoGchx | 2025-03-19T14:06:13Z | 22 | 0 | [
"size_categories:1K<n<10K",
"modality:audio",
"modality:text",
"region:us"
] | [] | 2025-03-19T13:34:03Z | 0 | ---
dataset_info:
features:
- name: motion
sequence:
sequence: float32
- name: beat_motion
struct:
- name: betas
sequence:
sequence: float32
- name: expressions
sequence:
sequence: float32
- name: gender
dtype: string
- name: mocap_frame_rate
dtype: int64
- name: model
dtype: string
- name: poses
sequence:
sequence: float32
- name: trans
sequence:
sequence: float32
- name: text
sequence: string
- name: meta_data
struct:
- name: duration
dtype: float64
- name: name
dtype: string
- name: num_frames
dtype: int64
- name: num_words
dtype: int64
- name: speaker
dtype: string
- name: whisper
struct:
- name: language
dtype: string
- name: segments
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: text
dtype: string
- name: aligned_text
struct:
- name: segments
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: text
dtype: string
- name: words
list:
- name: end
dtype: float64
- name: score
dtype: float64
- name: start
dtype: float64
- name: word
dtype: string
- name: word_segments
list:
- name: end
dtype: float64
- name: score
dtype: float64
- name: start
dtype: float64
- name: word
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: audio_token
sequence:
sequence:
sequence: int64
- name: motion_token
sequence:
sequence: int64
splits:
- name: val
num_bytes: 668452035.0
num_examples: 106
- name: train
num_bytes: 6066355197.91
num_examples: 1093
- name: test
num_bytes: 1622723026.0
num_examples: 265
download_size: 8175141242
dataset_size: 8357530258.91
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dgambettaphd/D_gen2_run0_Meta-Llama-3.1-8B-bnb-4bit_wiki_doc10_real64_synt64 | dgambettaphd | 2025-03-14T02:20:59Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-14T02:20:57Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 5512
num_examples: 10
download_size: 6962
dataset_size: 5512
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LimYeri/Python_Code_Instructions | LimYeri | 2024-06-05T15:51:33Z | 36 | 2 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-05T15:51:19Z | 1 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 633691760
num_examples: 340642
download_size: 307762448
dataset_size: 633691760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pnv2003/CodeMMLU-Reasoning-V2 | pnv2003 | 2025-03-31T02:10:43Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-31T02:10:30Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: question
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: prompt
dtype: string
- name: responses
dtype: string
- name: best_response_index
dtype: float64
- name: predict
dtype: string
- name: success
dtype: bool
splits:
- name: train
num_bytes: 53648225
num_examples: 3440
download_size: 17457364
dataset_size: 53648225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
preetham1234/medquad-t | preetham1234 | 2025-03-11T09:44:43Z | 12 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T09:38:07Z | 0 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: teacher_input_ids
sequence: int64
- name: teacher_attention_mask
sequence: int64
splits:
- name: train
num_bytes: 430160000
num_examples: 5000
download_size: 3023364
dataset_size: 430160000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
allenai/pixmo-point-explanations | allenai | 2024-12-05T18:45:24Z | 142 | 7 | [
"task_categories:visual-question-answering",
"language:en",
"license:odc-by",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"visual-question-answering"
] | 2024-11-27T16:45:22Z | 0 | ---
language:
- en
license: odc-by
task_categories:
- visual-question-answering
dataset_info:
features:
- name: image_url
dtype: string
- name: image_sha256
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: parsed_response
dtype: string
- name: alt_text
sequence: string
- name: inline_text
sequence: string
- name: points
sequence:
sequence:
sequence: float64
splits:
- name: train
num_bytes: 91111483
num_examples: 79551
download_size: 51811429
dataset_size: 91111483
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# PixMo-Point-Explanations
PixMo-Point-Explanations is a dataset of images, questions, and answers with explanations that can include in-line points that refer to parts of the image.
It can be used to train vision language models to respond to questions through a mixture of text and points.
PixMo-Point-Explanations is part of the [PixMo dataset collection](https://huggingface.co/collections/allenai/pixmo-674746ea613028006285687b) and was used to train the [Molmo family of models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
We consider this dataset experimental: while these explanations can be very informative, we have also seen
that models can hallucinate more when generating outputs of this sort.
For that reason, the Molmo models are trained to generate outputs like this only when specifically requested by prefixing input questions with "point_qa:".
This mode can be used in the [Molmo demo](https://multimodal-29mpz7ym.vercel.app/share/2921825e-ef44-49fa-a6cb-1956da0be62a)
Quick links:
- 📃 [Paper](https://molmo.allenai.org/paper.pdf)
- 🎥 [Blog with Videos](https://molmo.allenai.org/blog)
## Loading
```python
import datasets

data = datasets.load_dataset("allenai/pixmo-point-explanations")
```
## Data Format
Images are stored as URLs.
The in-line points use a format from the LLM/annotators that does not exactly match the Molmo format.
The data includes some fields derived from these responses to make them easier to parse;
these fields can be null if the original response was not parsed.
- `parsed_response`: the response with the text "<|POINT|>" in place of each inline point annotation
- `alt_text`: the alt text for each point annotation in the response
- `inline_text`: the inline text for each point annotation in the response
- `points`: the list of lists of points, one list per point annotation
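As an illustrative sketch (the field names come from this card; skipping unparsed rows and slicing to five examples are our own choices), the derived fields can be consumed like this:
```python
import datasets

data = datasets.load_dataset("allenai/pixmo-point-explanations", split="train")

for example in data.select(range(5)):
    if example["parsed_response"] is None:
        continue  # the original response was not parsed; derived fields are null
    # alt_text, inline_text, and points are parallel lists with one
    # entry per point annotation in the response.
    for alt, inline, pts in zip(example["alt_text"],
                                example["inline_text"],
                                example["points"]):
        print(f"{inline!r} (alt: {alt!r}) -> {len(pts)} point(s)")
```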
## Checking Image Hashes
Image hashes are included to support double-checking that the downloaded image matches the annotated image.
It can be checked like this:
```python
from hashlib import sha256
import requests
example = data["train"][0]  # load_dataset returns a DatasetDict keyed by split
image_bytes = requests.get(example["image_url"]).content
byte_hash = sha256(image_bytes).hexdigest()
assert byte_hash == example["image_sha256"]
```
## License
This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
This dataset includes data generated from Claude which are subject to Anthropic [terms of service](https://www.anthropic.com/legal/commercial-terms) and [usage policy](https://www.anthropic.com/legal/aup). |
DeepPavlov/clinc_oos_ru | DeepPavlov | 2025-04-06T22:46:50Z | 18 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-24T14:15:18Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
- name: label_text_ru
dtype: string
splits:
- name: train
num_bytes: 2219572
num_examples: 15250
- name: test
num_bytes: 773568
num_examples: 5500
- name: validation
num_bytes: 773568
num_examples: 5500
download_size: 891013
dataset_size: 3766708
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
HappyAIUser/alpaca-sharegpt-data | HappyAIUser | 2024-10-21T02:34:38Z | 22 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-21T02:34:35Z | 0 | ---
dataset_info:
features:
- name: conversations
struct:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 20818233
num_examples: 52002
download_size: 11855874
dataset_size: 20818233
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
oda-99/aozora_head_3_consonant_hira | oda-99 | 2025-01-05T06:00:06Z | 15 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-05T05:21:47Z | 0 | ---
dataset_info:
features:
- name: section
dtype: string
- name: rhyme
sequence: string
- name: new_text
sequence: string
splits:
- name: train
num_bytes: 160404723.0
num_examples: 1012032
- name: test
num_bytes: 17822747.0
num_examples: 112448
download_size: 105154968
dataset_size: 178227470.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
TAUR-dev/SIE_EVAL__BoN__rl-long-thinks__samples | TAUR-dev | 2025-06-09T21:50:27Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-09T21:50:26Z | 0 | ---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
dtype: string
- name: target
dtype: string
- name: arguments
dtype: string
- name: resps
dtype: string
- name: filtered_resps
dtype: string
- name: doc_hash
dtype: string
- name: prompt_hash
dtype: string
- name: target_hash
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
dtype: string
- name: source_file
dtype: string
- name: generation
dtype: string
- name: info
dtype: string
splits:
- name: train
num_bytes: 52054848
num_examples: 604
download_size: 8747942
dataset_size: 52054848
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
michsethowusu/akan-ewe_sentence-pairs | michsethowusu | 2025-04-03T13:09:18Z | 7 | 0 | [
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-03T13:09:14Z | 0 |
---
dataset_info:
features:
- name: score
dtype: float32
- name: Akan
dtype: string
- name: Ewe
dtype: string
splits:
- name: train
num_bytes: 6707696
num_examples: 64271
download_size: 6707696
dataset_size: 6707696
configs:
- config_name: default
data_files:
- split: train
path: Akan-Ewe_Sentence-Pairs.csv
---
# Akan-Ewe_Sentence-Pairs Dataset
This dataset contains sentence pairs for African languages along with similarity scores. It can be used for machine translation, sentence alignment, or other natural language processing tasks.
This dataset is based on the NLLBv1 dataset, published on OPUS under an open-source initiative led by META. You can find more information here: [OPUS - NLLB-v1](https://opus.nlpl.eu/legacy/NLLB-v1.php)
## Metadata
- **File Name**: Akan-Ewe_Sentence-Pairs
- **Number of Rows**: 64271
- **Number of Columns**: 3
- **Columns**: score, Akan, Ewe
## Dataset Description
The dataset contains sentence pairs in African languages with an associated similarity score. Each row consists of three columns:
1. `score`: The similarity score between the two sentences (range from 0 to 1).
2. `Akan`: The first sentence in the pair (language 1).
3. `Ewe`: The second sentence in the pair (language 2).
This dataset is intended for use in training and evaluating machine learning models for tasks like translation, sentence similarity, and cross-lingual transfer learning.
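As an illustrative sketch (the CSV file name and column names are taken from the metadata above; the 0.9 score threshold is an arbitrary choice), higher-confidence pairs can be selected like this:
```python
import pandas as pd

# Columns: score (similarity in [0, 1]), Akan, Ewe
df = pd.read_csv("Akan-Ewe_Sentence-Pairs.csv")

# Keep only higher-confidence pairs, e.g. for MT fine-tuning.
high_conf = df[df["score"] >= 0.9]
print(f"kept {len(high_conf)} of {len(df)} sentence pairs")
print(high_conf[["Akan", "Ewe"]].head())
```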
## References
Below are papers related to how the data was collected and used in various multilingual and cross-lingual applications:
[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL Workshop on Representation Learning for NLP, 2017.
[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.
[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space, ACL, July 2018.
[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.
[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings, arXiv, Nov 3 2018.
[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond, arXiv, Dec 26 2018.
[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB.
[9] Paul-Ambroise Duquenne, Hongyu Gong and Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS 2021, pages 15748-15761.
[10] Kevin Heffernan, Onur Celebi and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages.
|
Lansechen/details_Lansechen__Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192 | Lansechen | 2025-03-26T08:25:54Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-26T07:52:45Z | 0 | ---
pretty_name: Evaluation run of Lansechen/Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Lansechen/Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192](https://huggingface.co/Lansechen/Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192).\n\
\nThe dataset is composed of 2 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 4 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\"Lansechen/details_Lansechen__Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192\"\
,\n\t\"results\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the\
\ [latest results from run 2025-03-26T16:25:45.239515](https://huggingface.co/datasets/Lansechen/details_Lansechen__Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192/blob/main/results_2025-03-26T16-25-45.239515.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"extractive_match\": 0.642,\n\
\ \"extractive_match_stderr\": 0.021461434862859122\n },\n \"custom|math_500|0\"\
: {\n \"extractive_match\": 0.642,\n \"extractive_match_stderr\":\
\ 0.021461434862859122\n }\n}\n```"
repo_url: https://huggingface.co/Lansechen/Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192
configs:
- config_name: custom_aime24_0
data_files:
- split: 2025_03_26T16_00_59.718542
path:
- '**/details_custom|aime24|0_2025-03-26T16-00-59.718542.parquet'
- split: 2025_03_26T16_18_10.335628
path:
- '**/details_custom|aime24|0_2025-03-26T16-18-10.335628.parquet'
- split: latest
path:
- '**/details_custom|aime24|0_2025-03-26T16-18-10.335628.parquet'
- config_name: custom_math_500_0
data_files:
- split: 2025_03_26T15_52_43.426978
path:
- '**/details_custom|math_500|0_2025-03-26T15-52-43.426978.parquet'
- split: 2025_03_26T16_25_45.239515
path:
- '**/details_custom|math_500|0_2025-03-26T16-25-45.239515.parquet'
- split: latest
path:
- '**/details_custom|math_500|0_2025-03-26T16-25-45.239515.parquet'
- config_name: results
data_files:
- split: 2025_03_26T15_52_43.426978
path:
- results_2025-03-26T15-52-43.426978.parquet
- split: 2025_03_26T16_00_59.718542
path:
- results_2025-03-26T16-00-59.718542.parquet
- split: 2025_03_26T16_18_10.335628
path:
- results_2025-03-26T16-18-10.335628.parquet
- split: 2025_03_26T16_25_45.239515
path:
- results_2025-03-26T16-25-45.239515.parquet
- split: latest
path:
- results_2025-03-26T16-25-45.239515.parquet
---
# Dataset Card for Evaluation run of Lansechen/Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Lansechen/Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192](https://huggingface.co/Lansechen/Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192).
The dataset is composed of 2 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("Lansechen/details_Lansechen__Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192",
"results",
split="train")
```
## Latest results
These are the [latest results from run 2025-03-26T16:25:45.239515](https://huggingface.co/datasets/Lansechen/details_Lansechen__Qwen2.5-3B-Instruct-Distill-om220k-1k-simplified-fem8192-batch32-epoch1-8192/blob/main/results_2025-03-26T16-25-45.239515.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"extractive_match": 0.642,
"extractive_match_stderr": 0.021461434862859122
},
"custom|math_500|0": {
"extractive_match": 0.642,
"extractive_match_stderr": 0.021461434862859122
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Mingweipoppy/info7374-assignment4 | Mingweipoppy | 2025-04-14T05:55:40Z | 54 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T05:55:36Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 37397
num_examples: 50
download_size: 30839
dataset_size: 37397
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama3_rr40k_no_balanced_ep3_3e6_bz32tmp07 | 1231czx | 2024-12-21T07:26:20Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-21T07:26:19Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 50094724
num_examples: 15000
download_size: 16333134
dataset_size: 50094724
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gupta-tanish/llama3-8b-instruct-on-policy-swepo-iteration3 | gupta-tanish | 2025-04-17T18:37:04Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-17T18:36:57Z | 0 | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: A0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A2
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A3
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_A0
dtype: float64
- name: score_A1
dtype: float64
- name: score_A2
dtype: float64
- name: score_A3
dtype: float64
splits:
- name: train
num_bytes: 233841405
num_examples: 19996
- name: test
num_bytes: 233841405
num_examples: 19996
download_size: 256676778
dataset_size: 467682810
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ahmedheakl/r1_90k_instruct | ahmedheakl | 2025-02-25T06:16:11Z | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-25T05:08:42Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 8740383946
num_examples: 93131
download_size: 7507294773
dataset_size: 8740383946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Daksh1/multiVarData | Daksh1 | 2025-02-15T17:39:26Z | 52 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-15T17:39:05Z | 0 | ---
dataset_info:
features:
- name: out
dtype: string
- name: class
dtype: string
- name: in
dtype: string
- name: inst
dtype: string
- name: reversed
dtype: string
splits:
- name: train
num_bytes: 139722954
num_examples: 251998
- name: test
num_bytes: 17494045
num_examples: 31500
- name: validation
num_bytes: 17474194
num_examples: 31500
download_size: 71363788
dataset_size: 174691193
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
Oussama1209/mnlp_rag_corpus | Oussama1209 | 2025-06-10T20:45:29Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-10T20:44:31Z | 0 | ---
dataset_info:
features:
- name: content
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 66418835
num_examples: 100000
download_size: 37554850
dataset_size: 66418835
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
giulio98/synthetic_dataset-1024 | giulio98 | 2025-04-24T16:17:52Z | 37 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-24T16:08:50Z | 0 | ---
dataset_info:
- config_name: phase1
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 351727
num_examples: 50
download_size: 65652
dataset_size: 351727
- config_name: phase2
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 351355
num_examples: 50
download_size: 62859
dataset_size: 351355
- config_name: phase3
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 355428
num_examples: 50
download_size: 53091
dataset_size: 355428
- config_name: phase4
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 357843
num_examples: 50
download_size: 58688
dataset_size: 357843
- config_name: phase5
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 354695
num_examples: 50
download_size: 56489
dataset_size: 354695
- config_name: phase6
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 357317
num_examples: 50
download_size: 53696
dataset_size: 357317
- config_name: phase7
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 360348
num_examples: 50
download_size: 60090
dataset_size: 360348
- config_name: phase8
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: complexity
dtype: int32
- name: answer
dtype: string
- name: answer_prefix
dtype: string
- name: max_new_tokens
dtype: int64
- name: task_description
dtype: string
splits:
- name: test
num_bytes: 359024
num_examples: 50
download_size: 58608
dataset_size: 359024
configs:
- config_name: phase1
data_files:
- split: test
path: phase1/test-*
- config_name: phase2
data_files:
- split: test
path: phase2/test-*
- config_name: phase3
data_files:
- split: test
path: phase3/test-*
- config_name: phase4
data_files:
- split: test
path: phase4/test-*
- config_name: phase5
data_files:
- split: test
path: phase5/test-*
- config_name: phase6
data_files:
- split: test
path: phase6/test-*
- config_name: phase7
data_files:
- split: test
path: phase7/test-*
- config_name: phase8
data_files:
- split: test
path: phase8/test-*
---
|
CohereLabs/aya_evaluation_suite | CohereLabs | 2025-04-15T08:46:23Z | 1,321 | 51 | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"source_datasets:original",
"source_datasets:extended",
"language:afr",
"language:sqi",
"language:amh",
"language:ara",
"language:aze",
"language:bel",
"language:ben",
"language:bul",
"language:cat",
"language:ceb",
"language:ces",
"language:kur",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:epo",
"language:est",
"language:eus",
"language:fin",
"language:fra",
"language:gla",
"language:gle",
"language:glg",
"language:guj",
"language:hat",
"language:hau",
"language:heb",
"language:hin",
"language:hun",
"language:hye",
"language:ibo",
"language:ind",
"language:isl",
"language:ita",
"language:jav",
"language:jpn",
"language:kan",
"language:kat",
"language:kaz",
"language:mon",
"language:khm",
"language:kir",
"language:kor",
"language:lao",
"language:lit",
"language:ltz",
"language:lav",
"language:mal",
"language:mar",
"language:mkd",
"language:mlt",
"language:mri",
"language:mya",
"language:nld",
"language:nor",
"language:nep",
"language:sot",
"language:pus",
"language:pes",
"language:mlg",
"language:pol",
"language:por",
"language:ron",
"language:rus",
"language:sin",
"language:slk",
"language:slv",
"language:smo",
"language:sna",
"language:snd",
"language:som",
"language:spa",
"language:srp",
"language:sun",
"language:swe",
"language:swa",
"language:tam",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:ukr",
"language:urd",
"language:uzb",
"language:vie",
"language:xho",
"language:yid",
"language:yor",
"language:zho",
"language:msa",
"language:zul",
"language:ace",
"language:bjn",
"language:kas",
"language:kau",
"language:min",
"language:mni",
"language:taq",
"language:nso",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.06619",
"region:us"
] | [
"text-generation"
] | 2024-02-06T08:54:09Z | 1 | ---
language_creators:
- crowdsourced
- expert-generated
- machine-generated
language:
- afr
- sqi
- amh
- ara
- aze
- bel
- ben
- bul
- cat
- ceb
- ces
- kur
- cym
- dan
- deu
- ell
- eng
- epo
- est
- eus
- fin
- fra
- gla
- gle
- glg
- guj
- hat
- hau
- heb
- hin
- hun
- hye
- ibo
- ind
- isl
- ita
- jav
- jpn
- kan
- kat
- kaz
- mon
- khm
- kir
- kor
- lao
- lit
- ltz
- lav
- mal
- mar
- mkd
- mlt
- mri
- mya
- nld
- nor
- nep
- sot
- pus
- pes
- mlg
- pol
- por
- ron
- rus
- sin
- slk
- slv
- smo
- sna
- snd
- som
- spa
- srp
- sun
- swe
- swa
- tam
- tel
- tgk
- tha
- tur
- ukr
- urd
- uzb
- vie
- xho
- yid
- yor
- zho
- msa
- zul
- ace
- bjn
- kas
- kau
- min
- mni
- taq
- nso
license: apache-2.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
- extended
task_categories:
- text-generation
pretty_name: Aya Evaluation Suite
dataset_info:
- config_name: aya_human_annotated
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 1624958
num_examples: 1750
download_size: 974483
dataset_size: 1624958
- config_name: dolly_human_edited
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 1219111
num_examples: 1200
download_size: 602117
dataset_size: 1219111
- config_name: dolly_machine_translated
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 39679355
num_examples: 23800
download_size: 20100505
dataset_size: 39679355
configs:
- config_name: aya_human_annotated
data_files:
- split: test
path: aya_human_annotated/test-*
- config_name: dolly_human_edited
data_files:
- split: test
path: dolly_human_edited/test-*
- config_name: dolly_machine_translated
data_files:
- split: test
path: dolly_machine_translated/test-*
---

# Dataset Summary
`Aya Evaluation Suite` contains a total of 26,750 open-ended conversation-style prompts to evaluate multilingual open-ended generation quality.\
To strike a balance between language coverage and the quality that comes with human curation, we create an evaluation suite that includes:
1) human-curated examples in 7 languages (`tur, eng, yor, arb, zho, por, tel`) → `aya-human-annotated`.
2) machine-translations of handpicked examples into 101 languages → `dolly-machine-translated`.
3) human-post-edited translations into 6 languages (`hin, srp, rus, fra, arb, spa`) → `dolly-human-edited`.
---
- **Curated by:** Contributors of [Aya Open Science Initiative](https://aya.for.ai/), professional annotators, and synthetic generation
- **Language(s):** 101 languages
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Aya Datasets Family:**
| Name | Explanation |
|------|--------------|
| [aya_dataset](https://huggingface.co/datasets/CohereLabs/aya_dataset) | Human-annotated multilingual instruction finetuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co/datasets/CohereLabs/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages, providing 513M instances for various tasks.|
| [aya_collection_language_split](https://huggingface.co/datasets/CohereLabs/aya_collection_language_split) | Aya Collection structured based on language level subsets. |
| [aya_evaluation_suite](https://huggingface.co/datasets/CohereLabs/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages.|
| [aya_redteaming](https://huggingface.co/datasets/CohereLabs/aya_redteaming)| A red-teaming dataset consisting of harmful prompts in 8 languages across 9 different categories of harm with explicit labels for "global" and "local" harm.|
# Dataset
The `Aya Evaluation Suite` includes the following subsets:
1. **aya-human-annotated**: 250 original human-written prompts in 7 languages each.
2. **dolly-machine-translated**: 200 human-selected prompts from [databricks-dolly-15k](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm)
, automatically translated with the [NLLB model](https://ai.meta.com/research/no-language-left-behind/) from English into 101 languages (114 dialects in total).
3. **dolly-human-edited**: 200 dolly-machine-translated prompts post-edited by fluent speakers for 6 languages.
## Load with Datasets
To load this dataset consisting of prompt-completions with `datasets`, you just need to install Datasets as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
aya_eval = load_dataset("CohereLabs/aya_evaluation_suite", "aya_human_annotated")
```
## Data Fields
- `id`: Unique id of the data point.
- `inputs`: Prompt or input to the language model.
- `targets`: Completion or output of the language model. (Not applicable for `dolly-human-edited`)
- `language`: The language of the `prompt` and `completion.`
- `script`: The writing system of the language.
- `source_id`: Corresponding original row index from the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset (Field applicable only for subsets `dolly-machine-translated` & `dolly-human-edited`)
## Data Instances
Example data instances from the `Aya Evaluation Suite` subsets are listed in the toggled sections below.
<details>
<summary> <b>aya-human-annotated</b> </summary>
```json
{
"id": 42,
"inputs": "What day is known as Star Wars Day?",
"targets": "May 4th (May the 4th be with you!)",
"language": "eng",
"script": "Latn",
}
```
</details>
<b>Dolly-machine-translated and dolly-human-edited</b>
- These two subsets are parallel datasets (data instances can be mapped using their `id` column; see the sketch after this list).
- Note that in the `dolly-machine-translated` subset, we also include the original English subset (`id 1-200`), which is translated into 101 languages. Furthermore, the field `id` can be used to match the translations of the same data instance across languages.
- The `source_id` field contains the corresponding original row index from the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.
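As an illustrative sketch of pairing the two subsets (the config and field names are taken from this card; the `(id, language)` key is our own choice):
```python
from datasets import load_dataset

mt = load_dataset("CohereLabs/aya_evaluation_suite",
                  "dolly_machine_translated", split="test")
edited = load_dataset("CohereLabs/aya_evaluation_suite",
                      "dolly_human_edited", split="test")

# Index the post-edited prompts by (id, language) so each machine
# translation can be paired with its human-edited counterpart.
edited_index = {(ex["id"], ex["language"]): ex for ex in edited}

for ex in mt:
    match = edited_index.get((ex["id"], ex["language"]))
    if match is not None:
        print(f'{ex["language"]}: MT={ex["inputs"][:50]!r} '
              f'edited={match["inputs"][:50]!r}')
```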
<details>
<summary> <b>dolly-machine-translated</b> </summary>
```json
{
"id": 2,
"inputs": "How to escape from a helicopter trapped in water ?",
"targets": "If you are ever trapped inside a helicopter while submerged in water, it’s best to try and remain calm until the cabin is completely underwater. It’s better to wait for pressure to be equalized, before you try to open the door or break the glass to escape.",
"language": "eng",
"script": "Latn",
"source_id": 6060,
}
```
</details>
<details>
<summary> <b>dolly-human-edited</b> </summary>
```json
{
"id": 2,
"inputs": "Comment peut-on s'échapper d'un hélicoptère piégé dans l'eau ?",
"targets": "-",
"language": "fra",
"script": "Latn",
"source_id": 6060,
}
```
</details>
## Statistics
The toggled table below lists the breakdown of languages in each subset.
### Languages
<details>
<summary> <b>aya-human-annotated</b> </summary>
| ISO Code | Language | Resources |
|----------|----------|---------------|
| `tel` | Telugu | Low |
| `yor` | Yorùbá | Low |
| `arb` | Arabic | High |
| `tur` | Turkish | High |
| `por` | Portuguese | High |
| `zho` | Chinese (Simplified) | High |
| `eng` | English | High |
</details>
<details>
<summary> <b>dolly-machine-translated</b> </summary>
| ISO Code | Language | Resources |
|----------|----------|-----------|
| `ace` | Achinese | Low |
| `afr` | Afrikaans | Mid |
| `amh` | Amharic | Low |
| `ara` (`arb`, `acm`, `acq`, `aeb`, `ajp`, `apc`, `ars`, `ary` & `arz`) | Arabic (Standard, Gelet Iraqi, Ta'izzi-Adeni, Tunisian, South Levantine, North Levantine, Najdi, Moroccan & Egyptian) | High |
| `aze` (`azb` & `azj`) | Azerbaijani (South & North) | Low |
| `bel` | Belarusian | Mid |
| `ben` | Bengali | Mid |
| `bjn` | Banjar | Low |
| `bul` | Bulgarian | Mid |
| `cat` | Catalan | High |
| `ceb` | Cebuano | Mid |
| `ces` | Czech | High |
| `cym` | Welsh | Low |
| `dan` | Danish | Mid |
| `deu` | German | High |
| `ell` | Greek | Mid |
| `eng` | English | High |
| `epo` | Esperanto | Low |
| `est` | Estonian | Mid |
| `eus` | Basque | High |
| `fin` | Finnish | High |
| `fra` | French | High |
| `gla` | Scottish Gaelic | Low |
| `gle` | Irish | Low |
| `glg` | Galician | Mid |
| `guj` | Gujarati | Low |
| `hat` | Haitian Creole | Low |
| `hau` | Hausa | Low |
| `heb` | Hebrew | Mid |
| `hin` | Hindi | High |
| `hun` | Hungarian | High |
| `hye` | Armenian | Low |
| `ibo` | Igbo | Low |
| `ind` | Indonesian | Mid |
| `isl` | Icelandic | Low |
| `ita` | Italian | High |
| `jav` | Javanese | Low |
| `jpn` | Japanese | High |
| `kan` | Kannada | Low |
| `kas` | Kashmiri | Low |
| `kat` | Georgian | Mid |
| `kau` (`knc`) | Kanuri (Central) | Low |
| `kaz` | Kazakh | Mid |
| `khm` | Khmer | Low |
| `kir` | Kyrgyz | Low |
| `kor` | Korean | High |
| `kur` (`ckb` & `kmr`) | Kurdish (Central & Northern) | Low |
| `lao` | Lao | Low |
| `lav` (`lvs`) | Latvian (Standard) | Mid |
| `lit` | Lithuanian | Mid |
| `ltz` | Luxembourgish | Low |
| `mal` | Malayalam | Low |
| `mar` | Marathi | Low |
| `min` | Minangkabau | Low |
| `mkd` | Macedonian | Low |
| `mlg` (`plt`) | Malagasy (Plateau) | Low |
| `mlt` | Maltese | Low |
| `mni` | Manipuri | Low |
| `mon` (`khk`) | Mongolian (Khalkha) | Low |
| `mri` | Maori | Low |
| `msa` (`zsm`) | Malay (Standard) | Mid |
| `mya` | Burmese | Low |
| `nep` (`npi`) | Nepali | Low |
| `nld` | Dutch | High |
| `nor` (`nno` & `nob`) | Norwegian (Nynorsk & Bokmål) | Low |
| `nso` | Northern Sotho | Low |
| `pes` | Persian | High |
| `pol` | Polish | High |
| `por` | Portuguese | High |
| `pus` (`pbt`) | Pashto (Southern) | Low |
| `ron` | Romanian | Mid |
| `rus` | Russian | High |
| `sin` | Sinhala | Low |
| `slk` | Slovak | Mid |
| `slv` | Slovenian | Mid |
| `smo` | Samoan | Low |
| `sna` | Shona | Low |
| `snd` | Sindhi | Low |
| `som` | Somali | Low |
| `sot` | Southern Sotho | Low |
| `spa` | Spanish | High |
| `sqi` (`als`) | Albanian (Tosk) | Low |
| `srp` | Serbian | High |
| `sun` | Sundanese | Low |
| `swa` (`swh`) | Swahili (Coastal) | Low |
| `swe` | Swedish | High |
| `tam` | Tamil | Mid |
| `taq` | Tamasheq | Low |
| `tel` | Telugu | Low |
| `tgk` | Tajik | Low |
| `tha` | Thai | Mid |
| `tur` | Turkish | High |
| `ukr` | Ukrainian | Mid |
| `urd` | Urdu | Mid |
| `uzb` (`uzn`) | Uzbek (Northern) | Mid |
| `vie` | Vietnamese | High |
| `xho` | Xhosa | Low |
| `yid` (`ydd`) | Yiddish (Eastern) | Low |
| `yor` | Yoruba | Low |
| `zho` (+ `yue`) | Chinese (Simplified & Cantonese) | High |
| `zul` | Zulu | Low |
</details>
<details>
<summary> <b>dolly-human-edited</b> </summary>
| ISO Code | Language | Resources |
|----------|----------|-----------|
| `arb` | Arabic | High |
| `fra` | French | High |
| `hin` | Hindi | High |
| `rus` | Russian | High |
| `spa` | Spanish | High |
| `srp` | Serbian | High |
</details>
<br>
# Motivations & Intentions
- **Curation Rationale:** This evaluation suite is tailored to test the generation quality of multilingual models, with the aim of balancing language coverage and human-sourced quality.
It covers prompts originally written in each language, as well as English-centric prompts that were machine-translated and, for some languages, manually curated or edited, for a linguistically broad but rich testbed.
The list of languages was initially established from mT5 and aligned with the annotators’ language list and the NLLB translation model.
# Known Limitations
- **Translation Quality:** Note that the expressiveness of the `dolly-machine-translated` subset is limited by the quality of the translation model and may adversely impact an estimate of ability in languages where translations are not adequate. If this subset is used for testing, we recommend it be paired and reported with the professionally post-edited `dolly-human-edited` subset or the `aya-human-annotated` set, which, while covering only 7 languages, is entirely created by proficient target language speakers.
---
# Additional Information
## Provenance
- **Methods Used:** combination of original annotations by volunteers, automatic translation, and post-editing of translations by professional annotators.
- **Methodology Details:**
- *Source:* Original annotations from Aya dataset along with translations and post-edits of Dolly dataset
- *Platform:* [Aya Annotation Platform](https://aya.for.ai/)
- *Dates of Collection:* May 2023 - Dec 2023
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 02/2024
- *First Release:* 02/2024
- **Maintenance Plan:** No updates planned.
## Authorship
- **Publishing Organization:** [Cohere Labs](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://aya.for.ai/
## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Citation Information
```bibtex
@misc{singh2024aya,
title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
year={2024},
eprint={2402.06619},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
SoftACE/StorySeek | SoftACE | 2025-06-16T07:53:41Z | 124 | 1 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"modality:text",
"arxiv:2503.13279",
"region:us",
"Requirements",
"Agile"
] | [] | 2025-03-11T02:54:33Z | 0 | ---
license: mit
language:
- en
tags:
- Requirements
- Agile
size_categories:
- 1K<n<10K
---
# Dataset: StorySeek-V1
[paper](https://arxiv.org/abs/2503.13279)
## Dataset Overview
The StorySeek dataset was initially designed for the evaluation of goal-driven requirements elicitation and our proposed method [Goal2Story](https://github.com/SoftACE-Lab/goal2story). It comprises 1,005 records, each containing parsed information from a detailed **Impact Mapping Result (IM-Result)** and **Project Info**, as described below. These records are derived from 10 well-known agile projects on GitLab, which were curated by NEODATASET. The whole dataset was created with the semi-automatic dataset construction method proposed in our paper. Each data point contains the IM-Result and the project information. Specifically, the IM-Result consists of five key components: **goal**, **actor**, **impact**, **deliverable**, and **user story**, where the user story explicitly details the actor, action, and expected outcome; the project information includes **background**, **problems**, and **solutions**. The expected contribution is that this dataset can assist research and industry in requirements elicitation. **Though StorySeek was initially designed for the evaluation of our Goal2Story system, it also includes elements relevant to other software engineering aspects, which makes it possible to explore new findings in other studies.**
You can find more details about Goal2Story at its GitHub repository: <https://github.com/SoftACE-Lab/goal2story>.
## Column Explanations
1. **IM-Result Columns:**
- **Goal:** The goal defines the purpose of the initiative by answering why we are doing this. It should be a single concise sentence.
- **Actor:** Actors are individuals or groups who can influence the outcome by enabling or obstructing the goal.
- **Impact:** Impacts describe the behavioral changes needed from the actors to achieve the goal or potential obstacles they might create. Impacts must be expressed as a change in behavior or action.
- **Deliverable:** Deliverables are the specific features or activities implemented to support the required impacts. The deliverable must directly resolve the problem described in the impact.
- **User Story:** A user story is a concise requirement that describes an actor’s action and the expected outcome within a system. Ensure there is a causal link between **Goal** and **User Story**.
2. **Project Information Columns:**
- **Background:** Project basic description.
- **Problems:** Current problems described in the issue.
- **Solutions:** Real-world solutions to these problems.
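A minimal loading sketch (assuming the repository loads with the `datasets` library and exposes a `train` split by default; a `data_files` argument may be needed if the file layout differs):
```python
from datasets import load_dataset

# Assumption: the default configuration exposes a "train" split.
ds = load_dataset("SoftACE/StorySeek", split="train")

record = ds[0]
# Inspect the IM-Result components and project information columns.
for name, value in record.items():
    print(f"{name}: {str(value)[:80]}")
```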
## Project Info
| # | ID | Project Name | Description | URL | Count |
|-----|----------|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------|-------|
| 1 | 250833 | GitLab Runner | GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline. | [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner) | 12 |
| 2 | 3828396 | GitLab Charts | The gitlab chart is the best way to operate GitLab on Kubernetes. It contains all the required components to get started, and can scale to large deployments. | [GitLab Charts](https://gitlab.com/gitlab-org/charts/gitlab) | 13 |
| 3 | 6206924 | Tildes | Tildes is an open-source, community-driven social platform that fosters thoughtful discussions and quality content sharing while prioritizing user privacy and minimal distractions. | [Tildes](https://gitlab.com/tildes/tildes) | 33 |
| 4 | 28644964 | Free Pascal Compiler | Free Pascal is a mature, versatile, open source Pascal compiler. It can target many processor architectures: Intel x86 (16 and 32 bit), AMD64/x86-64, PowerPC, PowerPC64, SPARC, SPARC64, ARM, AArch64, MIPS, Motorola 68k, AVR, and the JVM. | [Free Pascal Compiler](https://gitlab.com/freepascal.org/fpc/source) | 102 |
| 5 | 5261717 | GitLab VSCode Extension | The GitLab Workflow extension integrates GitLab into Visual Studio Code. | [GitLab VSCode Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension) | 104 |
| 6 | 734943 | GitLab Pages | GitLab Pages publishes static websites directly from a repository in GitLab. | [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages) | 112 |
| 7 | 28419588 | Lazarus | Lazarus is a Rapid Application Development Tool for Free Pascal. It comes with the LCL – Lazarus component library, which contains platform-independent visual components like buttons, windows, checkboxes, treeviews, and many more. | [Lazarus](https://gitlab.com/freepascal.org/lazarus/lazarus) | 140 |
| 8 | 2009901 | Gitaly | Gitaly is a Git RPC service for handling all the Git calls made by GitLab. | [Gitaly](https://gitlab.com/gitlab-org/gitaly) | 160 |
| 9 | 14052249 | Mythic Table | Mythic Table is a virtual tabletop application for playing games with your friends online. | [Mythic Table](https://gitlab.com/mythicteam/mythictable) | 163 |
| 10 | 12584701 | StackGres | StackGres is a full-stack PostgreSQL distribution for Kubernetes, packed into an easy deployment unit. It comes with a carefully selected and tuned set of surrounding PostgreSQL components. | [StackGres](https://gitlab.com/ongresinc/stackgres) | 166 |
```
@misc{zou2025goal2storymultiagentfleetbased,
title={Goal2Story: A Multi-Agent Fleet based on Privately Enabled sLLMs for Impacting Mapping on Requirements Elicitation},
author={Xinkai Zou and Yan Liu and Xiongbo Shi and Chen Yang},
year={2025},
eprint={2503.13279},
archivePrefix={arXiv},
primaryClass={cs.SE},
url={https://arxiv.org/abs/2503.13279},
}
``` |
abhinav302019/olympiad_data_295 | abhinav302019 | 2025-03-05T15:27:05Z | 13 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-05T15:27:01Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: Known_Solution
dtype: string
- name: Known_Answer
dtype: string
- name: Generated_Solution
dtype: string
- name: Generated_Answer
dtype: string
- name: Judge_Evaluation
dtype: string
- name: Judge_Rating
dtype: string
- name: Judge_Justification
dtype: string
splits:
- name: train
num_bytes: 38966
num_examples: 10
download_size: 34167
dataset_size: 38966
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TTA01/redteaming-attack-type | TTA01 | 2025-05-27T05:46:44Z | 0 | 0 | [
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-13T08:57:12Z | 0 | ---
language:
- en
---
# Annotated version of DEFCON 31 Generative AI Red Teaming dataset with additional labels for attack types.
This dataset is an extended version of the [DEFCON31 Generative AI Red Teaming dataset](https://github.com/humane-intelligence/ai_village_defcon_grt_data), released by Humane Intelligence.
Our team conducted additional labeling on the accepted attack samples to annotate:
- **Attack Targets** (e.g., gender, race, age, political orientation) → TTA01/redteaming-attack-target
- **Attack Types** (e.g., question, request, build-up, scenario assumption, misinformation injection)
The purpose of this extended annotation is to better understand:
- Which types of individuals or groups are most vulnerable to LLM attacks
- What kinds of prompting strategies are most effective in eliciting harmful outputs
> ⚠️ This dataset is shared for non-commercial, academic research purposes only.
>
## 📊 Dataset Contents
- 2,673 attack samples (from accepted DEFCON31 entries)
- 2 sets of annotations per sample:
- `attack_target`: 7 superclasses and 19 subclasses
- `attack_type`: 10 binary-labeled prompting strategies
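As a rough sketch of how the annotations can be tallied (the per-type binary column naming is an assumption based on the description above, so verify against the CSV header first):
```python
from datasets import load_dataset

ds = load_dataset("TTA01/redteaming-attack-type", split="train")
df = ds.to_pandas()

# Count how often each binary-labeled prompting strategy appears
# (column prefix is an assumption; check df.columns first).
type_cols = [c for c in df.columns if c.lower().startswith("attack_type")]
print(df[type_cols].sum().sort_values(ascending=False))
```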
### 📄 Related Report
This dataset was analyzed in a technical report (in Korean), to be published by TTA.
## 📄 License
This dataset is distributed under the [Mozilla Public License v2.0](https://www.mozilla.org/en-US/MPL/2.0/), in accordance with the original dataset license. All annotations are provided under the same terms.
### 🏢 Research Institution and Contributors
This dataset was developed by the **Center for Trustworthy AI** at the **Telecommunications Technology Association (TTA)**, South Korea.
**Lead Researcher**
- Dr. Yeajin Shin (Center for Trustworthy AI, TTA)
**Collaborating Researchers**
- Prof. Kyungsik Han (Hanyang University)
- Taehyung Noh (Hanyang University)
- Mingon Jeong (Hanyang University)
## 🙏 Acknowledgements
This work was supported by the Ministry of Science and ICT (MSIT) of Korea, as part of the “Establishing the Foundation of AI Trustworthiness” project, conducted by TTA.
We gratefully acknowledge the original organizers of the Generative AI Red Teaming Challenge:
- Dr. Rumman Chowdhury and Jutta Williams (Humane Intelligence)
- Sven Cattell (AI Village)
- Austin Carson (Seed AI) |
srpone/gensmo-product-images | srpone | 2025-02-10T11:45:20Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-24T11:01:41Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: images
dtype: image
splits:
- name: train
num_bytes: 207343378.0
num_examples: 979
download_size: 202485100
dataset_size: 207343378.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FlippyDora/amc23_Qwen2-7B-Instruct_n8 | FlippyDora | 2025-02-09T17:51:02Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-09T17:51:00Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: float64
- name: outputs
list:
- name: label
dtype: int64
- name: output
dtype: string
- name: result
dtype: string
splits:
- name: train
num_bytes: 684746
num_examples: 40
download_size: 235405
dataset_size: 684746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SalzanoAl/dataset_solana | SalzanoAl | 2025-02-06T16:53:29Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-06T16:53:26Z | 0 | ---
dataset_info:
features:
- name: vulnerability
dtype: string
- name: smart_contract
dtype: string
splits:
- name: train
num_bytes: 45185
num_examples: 30
download_size: 17338
dataset_size: 45185
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
InLegal-AI-EXP/PredEx_Instruction-Tuning_Pred-Exp | InLegal-AI-EXP | 2025-04-29T09:30:16Z | 27 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T09:29:26Z | 0 | ---
dataset_info:
features:
- name: Case Name
dtype: string
- name: Input
dtype: string
- name: Output
dtype: string
- name: Label
dtype: int64
- name: Count
dtype: int64
- name: Decision_Count
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 232437960
num_examples: 10961
- name: test
num_bytes: 25910733
num_examples: 1217
download_size: 114980123
dataset_size: 258348693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Tricoteuses/declaration_des_droits_de_l_homme_et_du_citoyen | Tricoteuses | 2025-02-10T13:21:58Z | 21 | 1 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:fr",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"legal",
"text-generation",
"conditional-text-generation"
] | [
"text-generation",
"text2text-generation"
] | 2025-02-10T12:12:41Z | 0 | ---
pretty_name: Tricoteuses Déclaration des droits de l'homme et du citoyen Training Dataset
configs:
- config_name: default
data_files:
- path: data/*parquet
split: train
language:
- fr
task_categories:
- text-generation
- text2text-generation
tags:
- legal
- text-generation
- conditional-text-generation
size_categories:
- <1K
license: cc-by-4.0
---
# Légifrance Legislative Text Dataset
## Dataset Description
The Légifrance Legislative Text Dataset is a structured collection of French legislative and regulatory texts extracted from the [Légifrance platform](https://www.legifrance.gouv.fr/).
This dataset provides machine-readable access to consolidated legal codes, with a particular focus on maintaining the integrity of French linguistic features while providing additional metadata and quality signals.
The data in this dataset comes from the Git repository [Git Tricoteuses — La loi sous git - Déclaration du 26 août 1789 des droits de l'homme et du citoyen](https://git.tricoteuses.fr/declarations/declaration_du_26_aout_1789_des_droits_de_l_homme_et_du_citoyen)
### Languages
French (fr)
## Intended Uses & Limitations
### Intended Uses
- Legal text analysis and research
- Natural Language Processing tasks on French legislative documents
- Legal information retrieval systems
- Analysis of French regulatory frameworks
### Limitations
- Limited to French legislative texts
- Dependent on the structure of source Légifrance documents
- Quality of text extraction depends on the consistency of source markdown formatting
## Dataset Structure
### Data Fields
- `source`: string - Source of the text (e.g., "Code de la sécurité sociale")
- `id`: string - Unique identifier of the legislative text
- `date_debut`: string - Corresponds to the effective date of the article.
- `date_fin`: string - Indicates the date on which the article will be deleted or replaced.
- `url`: string - Direct link to the text on Légifrance
- `extra`: JSON string containing:
- `État`: string - Status of the text
- `Type`: string - Type of legislative text
- `quality_signals`: JSON string containing:
- `character_count`: Total number of characters
- `word_count`: Total number of words
- `text`: string - The main content of the legislative text
### Data Splits
The dataset is provided as a single split without train/validation/test partitioning.
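Since `extra` and `quality_signals` are stored as JSON strings, they need one decoding step after loading; a minimal sketch:
```python
import json
from datasets import load_dataset

ds = load_dataset(
    "Tricoteuses/declaration_des_droits_de_l_homme_et_du_citoyen", split="train"
)

row = ds[0]
extra = json.loads(row["extra"])              # {"État": ..., "Type": ...}
signals = json.loads(row["quality_signals"])  # character and word counts
print(row["source"], extra.get("Type"), signals.get("word_count"))
```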
## Dataset Creation
### Source Data
The data comes from French legal texts published as open data, retrieved by the [tricoteuses-legifrance](https://git.tricoteuses.fr/logiciels/tricoteuses-legifrance) project.
The dataset is created from Markdown files containing French legislative texts, each with YAML front matter metadata.
## Considerations for Using the Data
### Social Impact
- Improves accessibility to French legislative texts
- Enables computational analysis of legal documents
- Supports transparency in legal research
### Legal Considerations
- Usage should comply with Légifrance terms of service
- Attribution should be provided to the original source
- Users should verify the current validity of legal texts
## Additional Information
### Dataset Curators
This dataset is programmatically curated from official Légifrance sources.
### Licensing Information
Users should refer to Légifrance's licensing terms for the original content.
### Citation Information
When using this dataset, please cite both:
1. The original Légifrance source
2. This dataset processing implementation
### Contributions
Contributions to improve the dataset processing can be made through the repository's issue tracker or pull requests.
|
maiurilorenzo/divina-commedia | maiurilorenzo | 2024-12-28T14:08:40Z | 102 | 1 | [
"language:it",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-05T15:04:30Z | 0 | ---
language:
- it
dataset_info:
features:
- name: volume
dtype: string
- name: canto
dtype: string
- name: tercet
dtype: int64
- name: verse_number
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1182296
num_examples: 14233
download_size: 458870
dataset_size: 1182296
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Divina Commedia Dataset
## Overview
The **Divina Commedia** (Divine Comedy) is an epic poem by Dante Alighieri, widely considered one of the greatest works of world literature. This dataset contains the text of the poem, organized into volumes, cantos, and verses, and is suitable for natural language processing (NLP) tasks such as text analysis, as well as for machine learning and linguistic research.
## Dataset Structure
The dataset is structured in a hierarchical format, with the following attributes:
- **volume**: The name of the volume (e.g., Inferno, Purgatorio, Paradiso).
- **canto**: The name of the canto (e.g., Canto I).
- **tercet**: The number of the tercet (a group of three verses).
- **verse_number**: The number of the verse within the tercet.
- **text**: The actual text of the verse.
- **text_length**: The length of the verse text in characters.
### Example Entry
| volume | canto | tercet | verse_number | text
|----------|-----------|---------|--------------|-----------------------------------------
| Inferno | Canto I | 1 | 1 | Nel mezzo del cammin di nostra vita
| Inferno | Canto I | 1 | 2 | mi ritrovai per una selva oscura,
| Inferno | Canto I | 1 | 3 | ché la diritta via era smarrita.
## Usage
To load the dataset using the `datasets` library from Hugging Face, you can use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("maiurilorenzo/divina-commedia")
# Display the first few entries
print(dataset["train"].to_pandas().head())
```
|
mbodiai/ABB_PandG_20250401_154039 | mbodiai | 2025-04-17T06:31:19Z | 55 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-17T06:31:17Z | 0 | ---
dataset_info:
features:
- name: observation
struct:
- name: image
dtype: image
- name: instruction
dtype: string
- name: prompt
dtype: string
- name: action
struct:
- name: pose
struct:
- name: x
dtype: float32
- name: y
dtype: float32
- name: z
dtype: float32
- name: roll
dtype: float32
- name: pitch
dtype: float32
- name: yaw
dtype: float32
- name: grasp
dtype: float32
- name: gripper_force
dtype: int64
- name: state
struct:
- name: images
struct:
- name: camF
dtype: image
- name: camT
dtype: image
- name: depths
struct:
- name: camF
dtype: image
- name: camT
dtype: image
- name: world_objects
dtype: string
- name: gripper_pose
sequence: float32
- name: reward
dtype: float32
- name: metadata
struct:
- name: episode_idx
dtype: int64
- name: step_idx
dtype: int64
splits:
- name: train
num_bytes: 52481027.0
num_examples: 5
download_size: 29153109
dataset_size: 52481027.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lscpku/RefCOCOg_rec | lscpku | 2025-04-12T08:11:10Z | 25 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-12T08:09:15Z | 0 | ---
dataset_info:
features:
- name: answer
dtype: string
- name: question_id
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: segmentation
sequence: float64
- name: bbox
sequence: float64
- name: iscrowd
dtype: int64
- name: file_name
dtype: string
- name: image_width
dtype: int64
- name: image_height
dtype: int64
splits:
- name: val
num_bytes: 774015265.0
num_examples: 14432
- name: test
num_bytes: 516800873.75
num_examples: 9602
download_size: 673226043
dataset_size: 1290816138.75
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
neelabh17/star-graph-deg-7-path-7-nodes-300_out_of_the_box_num_gen_100_Qwen2.5-3B-Instruct | neelabh17 | 2025-05-07T21:35:21Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-07T21:35:19Z | 0 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
- name: question
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
- name: response_1
dtype: string
- name: answer_1
dtype: string
- name: correct_1
dtype: int64
- name: response_2
dtype: string
- name: answer_2
dtype: string
- name: correct_2
dtype: int64
- name: response_3
dtype: string
- name: answer_3
dtype: string
- name: correct_3
dtype: int64
- name: response_4
dtype: string
- name: answer_4
dtype: string
- name: correct_4
dtype: int64
- name: response_5
dtype: string
- name: answer_5
dtype: string
- name: correct_5
dtype: int64
- name: response_6
dtype: string
- name: answer_6
dtype: string
- name: correct_6
dtype: int64
- name: response_7
dtype: string
- name: answer_7
dtype: string
- name: correct_7
dtype: int64
- name: response_8
dtype: string
- name: answer_8
dtype: string
- name: correct_8
dtype: int64
- name: response_9
dtype: string
- name: answer_9
dtype: string
- name: correct_9
dtype: int64
- name: response_10
dtype: string
- name: answer_10
dtype: string
- name: correct_10
dtype: int64
- name: response_11
dtype: string
- name: answer_11
dtype: string
- name: correct_11
dtype: int64
- name: response_12
dtype: string
- name: answer_12
dtype: string
- name: correct_12
dtype: int64
- name: response_13
dtype: string
- name: answer_13
dtype: string
- name: correct_13
dtype: int64
- name: response_14
dtype: string
- name: answer_14
dtype: string
- name: correct_14
dtype: int64
- name: response_15
dtype: string
- name: answer_15
dtype: string
- name: correct_15
dtype: int64
- name: response_16
dtype: string
- name: answer_16
dtype: string
- name: correct_16
dtype: int64
- name: response_17
dtype: string
- name: answer_17
dtype: string
- name: correct_17
dtype: int64
- name: response_18
dtype: string
- name: answer_18
dtype: string
- name: correct_18
dtype: int64
- name: response_19
dtype: string
- name: answer_19
dtype: string
- name: correct_19
dtype: int64
- name: response_20
dtype: string
- name: answer_20
dtype: string
- name: correct_20
dtype: int64
- name: response_21
dtype: string
- name: answer_21
dtype: string
- name: correct_21
dtype: int64
- name: response_22
dtype: string
- name: answer_22
dtype: string
- name: correct_22
dtype: int64
- name: response_23
dtype: string
- name: answer_23
dtype: string
- name: correct_23
dtype: int64
- name: response_24
dtype: string
- name: answer_24
dtype: string
- name: correct_24
dtype: int64
- name: response_25
dtype: string
- name: answer_25
dtype: string
- name: correct_25
dtype: int64
- name: response_26
dtype: string
- name: answer_26
dtype: string
- name: correct_26
dtype: int64
- name: response_27
dtype: string
- name: answer_27
dtype: string
- name: correct_27
dtype: int64
- name: response_28
dtype: string
- name: answer_28
dtype: string
- name: correct_28
dtype: int64
- name: response_29
dtype: string
- name: answer_29
dtype: string
- name: correct_29
dtype: int64
- name: response_30
dtype: string
- name: answer_30
dtype: string
- name: correct_30
dtype: int64
- name: response_31
dtype: string
- name: answer_31
dtype: string
- name: correct_31
dtype: int64
- name: response_32
dtype: string
- name: answer_32
dtype: string
- name: correct_32
dtype: int64
- name: response_33
dtype: string
- name: answer_33
dtype: string
- name: correct_33
dtype: int64
- name: response_34
dtype: string
- name: answer_34
dtype: string
- name: correct_34
dtype: int64
- name: response_35
dtype: string
- name: answer_35
dtype: string
- name: correct_35
dtype: int64
- name: response_36
dtype: string
- name: answer_36
dtype: string
- name: correct_36
dtype: int64
- name: response_37
dtype: string
- name: answer_37
dtype: string
- name: correct_37
dtype: int64
- name: response_38
dtype: string
- name: answer_38
dtype: string
- name: correct_38
dtype: int64
- name: response_39
dtype: string
- name: answer_39
dtype: string
- name: correct_39
dtype: int64
- name: response_40
dtype: string
- name: answer_40
dtype: string
- name: correct_40
dtype: int64
- name: response_41
dtype: string
- name: answer_41
dtype: string
- name: correct_41
dtype: int64
- name: response_42
dtype: string
- name: answer_42
dtype: string
- name: correct_42
dtype: int64
- name: response_43
dtype: string
- name: answer_43
dtype: string
- name: correct_43
dtype: int64
- name: response_44
dtype: string
- name: answer_44
dtype: string
- name: correct_44
dtype: int64
- name: response_45
dtype: string
- name: answer_45
dtype: string
- name: correct_45
dtype: int64
- name: response_46
dtype: string
- name: answer_46
dtype: string
- name: correct_46
dtype: int64
- name: response_47
dtype: string
- name: answer_47
dtype: string
- name: correct_47
dtype: int64
- name: response_48
dtype: string
- name: answer_48
dtype: string
- name: correct_48
dtype: int64
- name: response_49
dtype: string
- name: answer_49
dtype: string
- name: correct_49
dtype: int64
- name: response_50
dtype: string
- name: answer_50
dtype: string
- name: correct_50
dtype: int64
- name: response_51
dtype: string
- name: answer_51
dtype: string
- name: correct_51
dtype: int64
- name: response_52
dtype: string
- name: answer_52
dtype: string
- name: correct_52
dtype: int64
- name: response_53
dtype: string
- name: answer_53
dtype: string
- name: correct_53
dtype: int64
- name: response_54
dtype: string
- name: answer_54
dtype: string
- name: correct_54
dtype: int64
- name: response_55
dtype: string
- name: answer_55
dtype: string
- name: correct_55
dtype: int64
- name: response_56
dtype: string
- name: answer_56
dtype: string
- name: correct_56
dtype: int64
- name: response_57
dtype: string
- name: answer_57
dtype: string
- name: correct_57
dtype: int64
- name: response_58
dtype: string
- name: answer_58
dtype: string
- name: correct_58
dtype: int64
- name: response_59
dtype: string
- name: answer_59
dtype: string
- name: correct_59
dtype: int64
- name: response_60
dtype: string
- name: answer_60
dtype: string
- name: correct_60
dtype: int64
- name: response_61
dtype: string
- name: answer_61
dtype: string
- name: correct_61
dtype: int64
- name: response_62
dtype: string
- name: answer_62
dtype: string
- name: correct_62
dtype: int64
- name: response_63
dtype: string
- name: answer_63
dtype: string
- name: correct_63
dtype: int64
- name: response_64
dtype: string
- name: answer_64
dtype: string
- name: correct_64
dtype: int64
- name: response_65
dtype: string
- name: answer_65
dtype: string
- name: correct_65
dtype: int64
- name: response_66
dtype: string
- name: answer_66
dtype: string
- name: correct_66
dtype: int64
- name: response_67
dtype: string
- name: answer_67
dtype: string
- name: correct_67
dtype: int64
- name: response_68
dtype: string
- name: answer_68
dtype: string
- name: correct_68
dtype: int64
- name: response_69
dtype: string
- name: answer_69
dtype: string
- name: correct_69
dtype: int64
- name: response_70
dtype: string
- name: answer_70
dtype: string
- name: correct_70
dtype: int64
- name: response_71
dtype: string
- name: answer_71
dtype: string
- name: correct_71
dtype: int64
- name: response_72
dtype: string
- name: answer_72
dtype: string
- name: correct_72
dtype: int64
- name: response_73
dtype: string
- name: answer_73
dtype: string
- name: correct_73
dtype: int64
- name: response_74
dtype: string
- name: answer_74
dtype: string
- name: correct_74
dtype: int64
- name: response_75
dtype: string
- name: answer_75
dtype: string
- name: correct_75
dtype: int64
- name: response_76
dtype: string
- name: answer_76
dtype: string
- name: correct_76
dtype: int64
- name: response_77
dtype: string
- name: answer_77
dtype: string
- name: correct_77
dtype: int64
- name: response_78
dtype: string
- name: answer_78
dtype: string
- name: correct_78
dtype: int64
- name: response_79
dtype: string
- name: answer_79
dtype: string
- name: correct_79
dtype: int64
- name: response_80
dtype: string
- name: answer_80
dtype: string
- name: correct_80
dtype: int64
- name: response_81
dtype: string
- name: answer_81
dtype: string
- name: correct_81
dtype: int64
- name: response_82
dtype: string
- name: answer_82
dtype: string
- name: correct_82
dtype: int64
- name: response_83
dtype: string
- name: answer_83
dtype: string
- name: correct_83
dtype: int64
- name: response_84
dtype: string
- name: answer_84
dtype: string
- name: correct_84
dtype: int64
- name: response_85
dtype: string
- name: answer_85
dtype: string
- name: correct_85
dtype: int64
- name: response_86
dtype: string
- name: answer_86
dtype: string
- name: correct_86
dtype: int64
- name: response_87
dtype: string
- name: answer_87
dtype: string
- name: correct_87
dtype: int64
- name: response_88
dtype: string
- name: answer_88
dtype: string
- name: correct_88
dtype: int64
- name: response_89
dtype: string
- name: answer_89
dtype: string
- name: correct_89
dtype: int64
- name: response_90
dtype: string
- name: answer_90
dtype: string
- name: correct_90
dtype: int64
- name: response_91
dtype: string
- name: answer_91
dtype: string
- name: correct_91
dtype: int64
- name: response_92
dtype: string
- name: answer_92
dtype: string
- name: correct_92
dtype: int64
- name: response_93
dtype: string
- name: answer_93
dtype: string
- name: correct_93
dtype: int64
- name: response_94
dtype: string
- name: answer_94
dtype: string
- name: correct_94
dtype: int64
- name: response_95
dtype: string
- name: answer_95
dtype: string
- name: correct_95
dtype: int64
- name: response_96
dtype: string
- name: answer_96
dtype: string
- name: correct_96
dtype: int64
- name: response_97
dtype: string
- name: answer_97
dtype: string
- name: correct_97
dtype: int64
- name: response_98
dtype: string
- name: answer_98
dtype: string
- name: correct_98
dtype: int64
- name: response_99
dtype: string
- name: answer_99
dtype: string
- name: correct_99
dtype: int64
splits:
- name: train
num_bytes: 24165684
num_examples: 100
download_size: 8393730
dataset_size: 24165684
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tevien/ConsoleScreenshots_PS2Wii | tevien | 2025-02-09T23:31:22Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-09T23:28:42Z | 0 | ---
dataset_info:
features:
- name: __key__
dtype: string
- name: __url__
dtype: string
- name: game.txt
dtype: string
- name: jpg
dtype: image
- name: platform.txt
dtype: string
splits:
- name: train
num_bytes: 599043883.9118131
num_examples: 14919
download_size: 557075325
dataset_size: 599043883.9118131
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
suul999922/x_dataset_10 | suul999922 | 2025-01-26T23:06:18Z | 53 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T19:35:29Z | 0 | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** suul999922/x_dataset_10
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5FA9GTvGdN2CB2jRmRKMaczcoXiNRYuHwYaHABaW5y65o7ae
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: datasets are mostly English, but can be multilingual due to the decentralized nature of data collection.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
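For example, a simple time-based split can be derived from the `datetime` field; this sketch assumes the timestamps are uniform ISO-8601 strings, so lexicographic comparison matches chronological order (verify against the actual data):
```python
from datasets import load_dataset

ds = load_dataset("suul999922/x_dataset_10", split="train")

# Uniform ISO-8601 strings compare lexicographically in chronological order.
cutoff = "2025-01-25T00:00:00Z"
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train), len(test))
```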
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{suul9999222025datauniversex_dataset_10,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={suul999922},
year={2025},
url={https://huggingface.co/datasets/suul999922/x_dataset_10},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 8601669
- **Date Range:** 2025-01-24T21:00:00Z to 2025-01-25T15:37:47Z
- **Last Updated:** 2025-01-26T23:06:16Z
### Data Distribution
- Tweets with hashtags: 17.97%
- Tweets without hashtags: 82.03%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
1. #riyadh (70609)
2. #linglingkwonginqingdao (24914)
3. #welcomeormkorntoqd (21143)
4. #zelena (19714)
5. #thameposeriesep7 (16694)
6. #tiktok (14355)
7. #bbb25 (10914)
8. #republicday (10303)
9. #zonauang (8456)
10. #yoko1stfmxchangchun (8429)
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T23:06:16Z | 8601669 | 17203338 |
|
visheratin/laion-coco-nllb | visheratin | 2024-04-11T16:36:31Z | 629 | 43 | [
"task_categories:image-to-text",
"task_categories:translation",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:af",
"language:ajp",
"language:ak",
"language:als",
"language:am",
"language:apc",
"language:ar",
"language:ars",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:awa",
"language:ayr",
"language:azb",
"language:azj",
"language:ba",
"language:bm",
"language:ban",
"language:be",
"language:bem",
"language:bn",
"language:bho",
"language:bjn",
"language:bo",
"language:bs",
"language:bug",
"language:bg",
"language:ca",
"language:ceb",
"language:cs",
"language:cjk",
"language:ckb",
"language:crh",
"language:cy",
"language:da",
"language:de",
"language:dik",
"language:dyu",
"language:dz",
"language:el",
"language:en",
"language:eo",
"language:et",
"language:eu",
"language:ee",
"language:fo",
"language:fj",
"language:fi",
"language:fon",
"language:fr",
"language:fur",
"language:fuv",
"language:gaz",
"language:gd",
"language:ga",
"language:gl",
"language:gn",
"language:gu",
"language:ht",
"language:ha",
"language:he",
"language:hi",
"language:hne",
"language:hr",
"language:hu",
"language:hy",
"language:ig",
"language:ilo",
"language:id",
"language:is",
"language:it",
"language:jv",
"language:ja",
"language:kab",
"language:kac",
"language:kam",
"language:kn",
"language:ks",
"language:ka",
"language:kk",
"language:kbp",
"language:kea",
"language:khk",
"language:km",
"language:ki",
"language:rw",
"language:ky",
"language:kmb",
"language:kmr",
"language:knc",
"language:kg",
"language:ko",
"language:lo",
"language:lij",
"language:li",
"language:ln",
"language:lt",
"language:lmo",
"language:ltg",
"language:lb",
"language:lua",
"language:lg",
"language:luo",
"language:lus",
"language:lvs",
"language:mag",
"language:mai",
"language:ml",
"language:mar",
"language:min",
"language:mk",
"language:mt",
"language:mni",
"language:mos",
"language:mi",
"language:my",
"language:nl",
"language:nn",
"language:nb",
"language:npi",
"language:nso",
"language:nus",
"language:ny",
"language:oc",
"language:ory",
"language:pag",
"language:pa",
"language:pap",
"language:pbt",
"language:pes",
"language:plt",
"language:pl",
"language:pt",
"language:prs",
"language:quy",
"language:ro",
"language:rn",
"language:ru",
"language:sg",
"language:sa",
"language:sat",
"language:scn",
"language:shn",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:sd",
"language:so",
"language:st",
"language:es",
"language:sc",
"language:sr",
"language:ss",
"language:su",
"language:sv",
"language:swh",
"language:szl",
"language:ta",
"language:taq",
"language:tt",
"language:te",
"language:tg",
"language:tl",
"language:th",
"language:ti",
"language:tpi",
"language:tn",
"language:ts",
"language:tk",
"language:tum",
"language:tr",
"language:tw",
"language:tzm",
"language:ug",
"language:uk",
"language:umb",
"language:ur",
"language:uzn",
"language:vec",
"language:vi",
"language:war",
"language:wo",
"language:xh",
"language:ydd",
"language:yo",
"language:yue",
"language:zh",
"language:zsm",
"language:zu",
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2309.01859",
"doi:10.57967/hf/1006",
"region:us"
] | [
"image-to-text",
"translation"
] | 2023-06-18T06:58:28Z | 1 | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
license: cc-by-nc-4.0
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
- translation
pretty_name: LAION-COCO translated to 200 languages
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: eng_caption
dtype: string
- name: captions
sequence:
sequence: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 271360114
num_examples: 14906
- name: train
num_bytes: 15986931307
num_examples: 878978
download_size: 10358151216
dataset_size: 16258291421
language_details: ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab,
aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng,
ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl,
bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn,
bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn,
dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn,
est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn,
fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr,
hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn,
ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn,
kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn,
kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn,
kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn,
lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn,
mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn,
mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn,
nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya,
pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn,
ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr,
sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn,
spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn,
szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi,
taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn,
twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn,
vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans,
zho_Hant, zul_Latn
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# LAION COCO translated into 200 languages
This dataset contains the samples of the [LAION-COCO](https://huggingface.co/datasets/laion/laion-coco) dataset translated to 200 languages using
the largest [NLLB-200 model](https://huggingface.co/facebook/nllb-200-3.3B) (3.3B parameters).
## Fields description
1. `id` - unique ID of the image.
2. `url` - original URL of the image from the LAION-COCO dataset.
3. `eng_caption` - original English caption from the LAION-COCO dataset.
4. `captions` - a list of captions translated to the languages from the Flores 200 dataset. Every item in the list is a list where the first element is a BCP-47 language code, and the second one is a caption in this language. The list of all language codes for the Flores 200 dataset can be found [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
5. `score` - aesthetic score generated using [LAION aesthetic predictor](https://github.com/christophschuhmann/improved-aesthetic-predictor/). The images in the dataset have a score of 4.5+.
## Images
The dataset was filtered to contain only working image URLs. However, the availability may change in the future. Because of that, all images from this dataset are available at [https://nllb-data.com/](https://nllb-data.com/).
To get the image, use the following format:
```
https://nllb-data.com/{id}.jpg
```
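Putting the fields together, a minimal sketch for fetching a mirrored image and looking up a caption in a target language (assuming the mirror remains available):
```python
from io import BytesIO

import requests
from datasets import load_dataset
from PIL import Image

ds = load_dataset("visheratin/laion-coco-nllb", split="test")

row = ds[0]
# Fetch the mirrored copy of the image by its ID.
resp = requests.get(f"https://nllb-data.com/{row['id']}.jpg", timeout=30)
image = Image.open(BytesIO(resp.content))

# `captions` is a list of [language_code, caption] pairs.
captions = dict(row["captions"])
print(row["eng_caption"])
print(captions.get("fra_Latn"))  # French caption, if present
```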
## Paper
The dataset was used to train the models in the paper: "[NLLB-CLIP - train performant multilingual image retrieval model on a budget](https://arxiv.org/abs/2309.01859)". |
knarayan/cloud_posture_checks_name_and_desc | knarayan | 2025-02-14T21:14:50Z | 16 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-14T21:14:36Z | 0 | ---
license: apache-2.0
---
|
paolordls/crosslg-contaminated-benchmark-qa-en-og-sm-1 | paolordls | 2024-11-26T16:42:35Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-26T14:48:46Z | 0 | ---
dataset_info:
features:
- name: fake_news
dtype: string
- name: scenario_id
dtype: int64
- name: real_news
dtype: string
- name: fake_keyword
dtype: string
- name: real_question
dtype: string
- name: fake_question
dtype: string
- name: real_answer
dtype: string
- name: fake_answer
dtype: string
splits:
- name: train
num_bytes: 80307
num_examples: 10
download_size: 99991
dataset_size: 80307
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
raresense/SAKS | raresense | 2025-06-22T08:54:47Z | 25 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-22T08:51:58Z | 0 | ---
dataset_info:
features:
- name: target
dtype:
image:
decode: false
- name: ghost_image
dtype:
image:
decode: false
- name: mask
dtype:
image:
decode: false
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 123092492.327
num_examples: 2009
download_size: 94394140
dataset_size: 123092492.327
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
OwkinZero/MOSAIC-bladder-SICER-allpop-split | OwkinZero | 2025-06-23T14:07:34Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T14:07:30Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
dtype: string
- name: correct_cell_type
dtype: string
splits:
- name: train
num_bytes: 322849.3532557408
num_examples: 2034
- name: test
num_bytes: 80791.70147845235
num_examples: 509
download_size: 46857
dataset_size: 403641.05473419314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
taln-ls2n/termith-eval | taln-ls2n | 2022-09-23T07:49:04Z | 83 | 1 | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:multilingual",
"language:fr",
"license:cc-by-4.0",
"size_categories:n<1K",
"region:us"
] | [
"text-mining",
"text-generation"
] | 2022-04-22T09:09:23Z | 0 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
license: cc-by-4.0
multilinguality:
- multilingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- n<1K
pretty_name: TermITH-Eval
---
# TermITH-Eval Benchmark Dataset for Keyphrase Generation
## About
TermITH-Eval is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 400 abstracts of scientific papers in French collected from the FRANCIS and PASCAL databases of the French [Institute for Scientific and Technical Information (Inist)](https://www.inist.fr/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2016)][bougouin-2016].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of apparition in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Snowball stemmer implementation for french provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-----------:|-------------:|----------:|------------:|--------:|---------:|
| Test | 399 | 156.9 | 11.81 | 40.60 | 7.32 | 19.28 | 32.80 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **category**: category of the document, i.e. chimie (chemistry), archeologie (archeology), linguistique (linguistics) and scienceInfo (information sciences).
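For reference, the PRMU statistics above can be recomputed in a few lines; depending on your `datasets` version, loading may require `trust_remote_code=True` (an assumption; adjust as needed):
```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("taln-ls2n/termith-eval", split="test")

# Tally PRMU categories over all reference keyphrases.
counts = Counter(cat for doc in ds for cat in doc["prmu"])
total = sum(counts.values())
for cat, n in counts.most_common():
    print(f"{cat}: {100 * n / total:.2f}%")
```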
## References
- (Bougouin et al., 2016) Adrien Bougouin, Sabine Barreaux, Laurent Romary, Florian Boudin, and Béatrice Daille. 2016.
[TermITH-Eval: a French Standard-Based Resource for Keyphrase Extraction Evaluation][bougouin-2016].
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1924–1927, Portorož, Slovenia. European Language Resources Association (ELRA).
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[bougouin-2016]: https://aclanthology.org/L16-1304/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ |
espressovi/VALUED | espressovi | 2025-01-28T19:38:00Z | 31 | 1 | [
"license:cc-by-4.0",
"region:us"
] | [] | 2025-01-28T07:44:47Z | 0 | ---
license: cc-by-4.0
---
# VALUED - Vision and Logical Understanding Evaluation Dataset.
---
This repository contains the dataset associated with the paper at https://data.mlr.press/assets/pdf/v01-13.pdf.
View samples from the dataset at the [dataset page](https://espressovi.github.io/VALUED).
## Authors
- [Soumadeep Saha](https://www.isical.ac.in/~soumadeep.saha_r)
- [Saptarshi Saha](https://openreview.net/profile?id=~Saptarshi_Saha1)
- [Utpal Garain](https://www.isical.ac.in/~utpal).
## Abstract
Starting with early successes in computer vision tasks, deep learning based techniques have since overtaken state of the art approaches in a multitude of domains. However, it has been demonstrated time and again that these techniques fail to capture semantic context and logical constraints, instead often relying on spurious correlations to arrive at the answer. Since application of deep learning techniques to critical scenarios are dependent on adherence to domain specific constraints, several attempts have been made to address this issue. One limitation holding back a thorough exploration of this area, is a lack of suitable datasets which feature a rich set of rules. In order to address this, we present the VALUE (Vision And Logical Understanding Evaluation) Dataset, consisting of 200,000+ annotated images and an associated rule set, based on the popular board game - chess. The curated rule set considerably constrains the set of allowable predictions, and are designed to probe key semantic abilities like localization and enumeration. Alongside standard metrics, additional metrics to measure performance with regards to logical consistency is presented. We analyze several popular and state of the art vision models on this task, and show that, although their performance on standard metrics are laudable, they produce a plethora of incoherent results, indicating that this dataset presents a significant challenge for future works.
---
## Usage
### Download data
- The generated train/test set along with all labels can be found [here](https://zenodo.org/records/10607059).
- The DOI for the dataset is 10.5281/zenodo.8278014.
## Cite
If you find our work useful, please cite:
```
@article{saha2024valued,
title={{VALUED} - Vision and Logical Understanding Evaluation Dataset},
author={Soumadeep Saha and Saptarshi Saha and Utpal Garain},
journal={Journal of Data-centric Machine Learning Research},
year={2024},
url={https://openreview.net/forum?id=nS9oxKyy9u}
}
```
|
raphus/clinical_trials_gov_COMP631_project | raphus | 2025-03-20T22:32:54Z | 60 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-20T20:59:48Z | 0 | ---
dataset_info:
features:
- name: nctId
dtype: string
- name: relatedConditions
dtype: string
- name: officialTitle
dtype: string
- name: overallStatus
dtype: string
- name: briefSummary
dtype: string
- name: eligibilityCriteria
dtype: string
- name: sponsor
dtype: string
- name: studyType
dtype: string
- name: allocation
dtype: string
- name: interventionModel
dtype: string
- name: acronym
dtype: string
- name: centralContacts
dtype: string
splits:
- name: train
num_bytes: 54578663
num_examples: 18995
download_size: 28857143
dataset_size: 54578663
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Akshitha-M/colloquial-dataset | Akshitha-M | 2025-02-19T18:20:21Z | 13 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T18:18:02Z | 0 | ---
dataset_info:
features:
- name: English Text
dtype: string
- name: Colloquial Text
dtype: string
splits:
- name: train
num_bytes: 705.3333333333334
num_examples: 8
- name: test
num_bytes: 352.6666666666667
num_examples: 4
download_size: 4359
dataset_size: 1058.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
saatwiksy/PR-Reports | saatwiksy | 2024-11-16T03:23:05Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-16T03:14:32Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: report
dtype: string
splits:
- name: train
num_bytes: 28371914.0
num_examples: 121
download_size: 28313863
dataset_size: 28371914.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## Overview
This dataset is a curated collection of 121 panoramic dental radiographs, each paired with a clinically validated report prepared by a maxillofacial radiologist. The dataset captures a wide range of dental conditions, making it highly specialized for training vision-language models in dental diagnostics.
Link to the model: [Model](https://huggingface.co/saatwiksy/PR-LLaVA-34b)
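A minimal sketch for browsing image and report pairs (field names taken from the dataset features above):
```python
from datasets import load_dataset

ds = load_dataset("saatwiksy/PR-Reports", split="train")

sample = ds[0]
sample["image"].save("radiograph.jpg")  # PIL image of the panoramic radiograph
print(sample["report"])                 # paired radiologist-validated report
```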
|
autoevaluate/autoeval-staging-eval-project-17e9fcc1-7454810 | autoevaluate | 2022-06-25T09:35:01Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"autotrain",
"evaluation"
] | [] | 2022-06-25T09:34:17Z | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- ag_news
eval_info:
task: multi_class_classification
model: mrm8488/distilroberta-finetuned-age_news-classification
metrics: []
dataset_name: ag_news
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: mrm8488/distilroberta-finetuned-age_news-classification
* Dataset: ag_news
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
copycat-project/rag_assets_10292024_filtered_v1 | copycat-project | 2024-10-29T20:07:58Z | 23 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-29T20:06:15Z | 0 | ---
dataset_info:
features:
- name: character_name
dtype: string
- name: character_id
dtype: int32
- name: prompt
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: cropped_image
dtype: image
- name: cropped_masked_image
dtype: image
- name: masked_image
dtype: image
- name: mask
dtype:
array2_d:
shape:
- 512
- 512
dtype: int32
- name: hqsam_cropped_masked_image
dtype: image
- name: hqsam_masked_image
dtype: image
- name: hqsam_mask
dtype:
array2_d:
shape:
- 512
- 512
dtype: int32
splits:
- name: train
num_bytes: 4796847466.0
num_examples: 1464
download_size: 1759402849
dataset_size: 4796847466.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SKIML-ICL/astro_qa_nli_entity_t | SKIML-ICL | 2025-06-19T03:53:23Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-19T03:23:15Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: context
dtype: string
- name: answers
sequence: string
- name: answer_sentence
dtype: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: nli
dtype: string
- name: pid
dtype: int64
- name: rank
dtype: int64
- name: score
dtype: float64
- name: text
dtype: string
- name: hasanswer
dtype: bool
- name: answerable
dtype: string
- name: entity_type
dtype: string
- name: entity_text
dtype: string
- name: entity_vector
sequence: float64
- name: similar_entity
dtype: string
- name: random_entity
dtype: string
splits:
- name: test
num_bytes: 60307433
num_examples: 2173
download_size: 33354954
dataset_size: 60307433
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
shylee/eval_BIDtest1 | shylee | 2025-05-13T19:34:51Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-13T14:39:23Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 52,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
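The `data_path` and `video_path` entries above are ordinary Python format-string templates; a small sketch of how they resolve for episode 0 of chunk 0:
```python
# Sketch: resolving the path templates from meta/info.json for episode 0, chunk 0.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

print(data_path.format(episode_chunk=0, episode_index=0))
# data/chunk-000/episode_000000.parquet
print(video_path.format(episode_chunk=0, episode_index=0,
                        video_key="observation.images.FrontCam"))
# videos/chunk-000/observation.images.FrontCam/episode_000000.mp4
```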
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
linrany/gsm_sample_90 | linrany | 2025-06-03T03:29:55Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T03:29:44Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: origin_question
dtype: string
- name: correct_answer
dtype: string
splits:
- name: train
num_bytes: 23966
num_examples: 90
download_size: 16964
dataset_size: 23966
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
french-datasets/rntc_tmp-multitask-fr-clinical | french-datasets | 2025-06-04T10:46:07Z | 0 | 0 | [
"task_categories:translation",
"task_categories:text-classification",
"language:fra",
"region:us"
] | [
"translation",
"text-classification"
] | 2025-06-04T10:45:46Z | 0 | ---
language:
- fra
viewer: false
task_categories:
- translation
- text-classification
---
This repository is empty; it was created to improve the indexing of the dataset [rntc/tmp-multitask-fr-clinical](https://huggingface.co/datasets/rntc/tmp-multitask-fr-clinical). |
DeepNLP/ai-docs-agent | DeepNLP | 2025-03-31T16:59:39Z | 27 | 1 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-31T16:58:57Z | 0 | ---
license: mit
---
# AI Docs Agent Meta and Traffic Dataset in AI Agent Marketplace | AI Agent Directory | AI Agent Index from DeepNLP
This dataset is collected from the AI Agent Marketplace Index and Directory at http://www.deepnlp.org. It contains each AI agent's meta information, such as the agent's name, website, and description, as well as monthly updated web performance metrics, including average Google and Bing search ranking positions, GitHub stars, arXiv references, etc.
The dataset helps AI researchers and practitioners run analyses, write reports, monitor trends, and track the exponential (though currently flat) growth of the number of AI agents in all fields.
To add your agent to the Agent Index, please visit the [AI Agent Directory](http://www.deepnlp.org/store/ai-agent), use the workspace [http://www.deepnlp.org/workspace/my_ai_services](http://www.deepnlp.org/workspace/my_ai_services), or use the Python API.
To search all agents, please use the [AI Agent Search Engine](http://www.deepnlp.org/search/agent).
[Github AI Agent Marketplace](https://github.com/AI-Agent-Hub/AI-Agent-Marketplace)
[Pypi AI Agent Marketplace](https://pypi.org/project/ai-agent-marketplace/)

# Data Samples
## [stately ai](https://stately.ai/docs/agents)
<details>
### website
https://stately.ai/docs/agents
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-stately-ai/stately-ai
### description
An AI agent is an autonomous entity that observes an environment, decides what to do (based on its internal policy), and performs actions towards achieving ...
### category
AI Docs
### tags
</details>
## [docsai app](https://docsai.app/)
<details>
### website
https://docsai.app/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-docsai-app/docsai-app
### description
The AI Docs Companion you always wanted. Train your documents, chat with your documents, and create chatbots that solves queries for you and your users.
### category
AI Docs
### tags
</details>
## [echobase ai](https://echobase.ai/docs/ai-agents)
<details>
### website
https://echobase.ai/docs/ai-agents
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-echobase-ai/echobase-ai
### description
AI Agents are at the heart of the Echobase Platform. They allow users to create agents trained to accurately produce and analyze material using artificial ...
### category
AI Docs
### tags
</details>
## [docanalyzer ai](https://docanalyzer.ai/)
<details>
### website
https://docanalyzer.ai/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-docanalyzer-ai/docanalyzer-ai
### description
Set up intelligent agents to streamline complex document handling tasks. From automatic document sorting and data extraction to custom workflow integrations, ...
### category
AI Docs
### tags
</details>
## [play ai api](https://docs.play.ai/documentation/get-started/introduction)
<details>
### website
https://docs.play.ai/documentation/get-started/introduction
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-play-ai-api/play-ai-api
### description
Our API enables every business, every developer, every tinkerer to easily build capable and useful conversational AI voice Solutions.
### category
AI Docs
### tags
</details>
## [mistral ai](https://docs.mistral.ai/capabilities/agents/)
<details>
### website
https://docs.mistral.ai/capabilities/agents/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-mistral-ai/mistral-ai
### description
AI agents are autonomous systems powered by large language models (LLMs) that, given high-level instructions, can plan, use tools, carry out steps of processing ...
### category
AI Docs
### tags
</details>
## [anythingllm com](https://docs.anythingllm.com/agent/overview)
<details>
### website
https://docs.anythingllm.com/agent/overview
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-anythingllm-com/anythingllm-com
### description
AnythingLLM Products Resources Community The all-in-one AI application Everything great about AI in one desktop application. Chat with docs, use AI Agents, and more - full locally and offline. Downloa
### category
AI Docs
### tags
</details>
## [omniagentai org](https://docs.omniagentai.org/)
<details>
### website
https://docs.omniagentai.org/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-omniagentai-org/omniagentai-org
### description
These agents are designed to execute tasks, make decisions, and generate value autonomously within a decentralized ecosystem. Whether you need an AI agent to represent you online, …
### category
AI Docs
### tags
</details>
## [microsoft com](https://learn.microsoft.com/en-us/azure/ai-services/agents/)
<details>
### website
https://learn.microsoft.com/en-us/azure/ai-services/agents/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-microsoft-com/microsoft-com
### description
Trace Id is missing Microsoft Microsoft 365 Microsoft 365 Getting started Products Role-based agents For departments For industries Search Search Microsoft 365 No results Cancel All Microsoft Softwa
### category
AI Docs
### tags
</details>
## [bizagi](https://help.bizagi.com/platform/en/ai_agents_files_as_inputs.htm)
<details>
### website
https://help.bizagi.com/platform/en/ai_agents_files_as_inputs.htm
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-bizagi/bizagi
### description
You can upload files to AI Agents, enabling Bizagi's AI to provide quick insights and solutions based on their content.
### category
AI Docs
### tags
</details>
## [reworkd](https://docs.reworkd.ai/introduction)
<details>
### website
https://docs.reworkd.ai/introduction
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-reworkd/reworkd
### description
AgentGPT is an autonomous AI Agent platform that empowers users to create and deploy customizable autonomous AI agents directly in the browser.
### category
AI Docs
### tags
</details>
## [aws documentation](https://docs.aws.amazon.com/nova/latest/userguide/agents.html)
<details>
### website
https://docs.aws.amazon.com/nova/latest/userguide/agents.html
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-aws-documentation/aws-documentation
### description
An AI agent helps your end-users complete actions based on organization data and user input. Agents orchestrate interactions between foundation models (FMs), data sources, software …
### category
AI Docs
### tags
</details>
## [oneai com](https://docs.oneai.com/docs/one-agent)
<details>
### website
https://docs.oneai.com/docs/one-agent
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-oneai-com/oneai-com
### description
→ → → This website uses cookies Essential Cookies (0) (0) Cookies required to enable basic website functionality. Functional Cookies
### category
AI Docs
### tags
</details>
## [agent ai](https://docs.agent.ai/welcome)
<details>
### website
https://docs.agent.ai/welcome
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-agent-ai/agent-ai
### description
Build advanced AI agents using an easy, extensible, no-code platform with data tools and access to frontier LLMS.
### category
AI Docs
### tags
</details>
## [kanverse ai](https://www.kanverse.ai/)
<details>
### website
https://www.kanverse.ai/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-kanverse-ai/kanverse-ai
### description
With Kanverse AI Agents, enterprises can seamlessly transition to "Zero-Touch" document processing workflows. By automating the processing, validation, and ...
### category
AI Docs
### tags
</details>
## [neon](https://neon.tech/docs/use-cases/ai-agents)
<details>
### website
https://neon.tech/docs/use-cases/ai-agents
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-neon/neon
### description
AI agents can now provision infrastructure, including databases. With AI agents already creating databases every few seconds, they are poised to manage a significant portion of the web's infrastructur
### category
AI Docs
### tags
</details>
## [agents land](https://docs.agents.land/mesh-by-distilled-ai)
<details>
### website
https://docs.agents.land/mesh-by-distilled-ai
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-agents-land/agents-land
### description
As the platform matures, AI Agents will have the ability to interact autonomously with a wide range of Web 2.0 and Web3 platforms via API. Provide API keys for preset and custom integrations; Experime
### category
AI Docs
### tags
</details>
## [powerdocs ai](https://www.powerdocs.ai/)
<details>
### website
https://www.powerdocs.ai/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-powerdocs-ai/powerdocs-ai
### description
Power Docs ai leverages advanced AI to search and organize your documents, unlocking valuable insights and boosting your productivity. Transform your workflow today! ... AI Agents; Advanced Document O
### category
AI Docs
### tags
</details>
## [librechat ai](https://www.librechat.ai/docs/features/agents)
<details>
### website
https://www.librechat.ai/docs/features/agents
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-librechat-ai/librechat-ai
### description
LibreChat’s AI Agents feature provides a flexible framework for creating custom AI assistants powered by various model providers. This feature is similar to OpenAI’s Assistants API and ChatGPT’s GPTs,
### category
AI Docs
### tags
</details>
## [tray ai](https://tray.ai/documentation/templates/ai-agent-tools/at-internal-knowledge-service-via-google-docs/)
<details>
### website
https://tray.ai/documentation/templates/ai-agent-tools/at-internal-knowledge-service-via-google-docs/
### Agent Page
http://www.deepnlp.org/store/ai-agent/ai-docs/pub-tray-ai/tray-ai
### description
Internal Knowledge Service via Google Docs. Workflow. AI Agent Tools. Beginner. Use Template.
### category
AI Docs
### tags
</details>
|
dogtooth/uf_tulu_3_uf_iter3_small_beta_3 | dogtooth | 2024-12-29T22:48:16Z | 18 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-29T22:48:00Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_completion
dtype: string
- name: reference_completion
dtype: string
splits:
- name: train
num_bytes: 1663080916
num_examples: 183405
download_size: 499171007
dataset_size: 1663080916
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
huanyuqingming/Machine-Learning-Project | huanyuqingming | 2024-11-27T03:30:55Z | 48 | 1 | [
"task_categories:image-to-3d",
"language:en",
"language:zh",
"license:other",
"size_categories:100B<n<1T",
"modality:3d",
"region:us",
"architecture",
"3d",
"sketchfab"
] | [
"image-to-3d"
] | 2024-11-26T08:57:21Z | 0 | ---
license: other
task_categories:
- image-to-3d
language:
- en
- zh
tags:
- architecture
- 3d
- sketchfab
pretty_name: Datasets of AI2612
size_categories:
- 100B<n<1T
---
# Datasets of AI2612
3D models (.glb) and render graphs of architecture.
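A hedged sketch for fetching and inspecting a single model file — the filename is a placeholder, not a known file in this repository:
```python
from huggingface_hub import hf_hub_download
import trimesh

path = hf_hub_download(
    repo_id="huanyuqingming/Machine-Learning-Project",
    repo_type="dataset",
    filename="example.glb",  # placeholder; substitute a real .glb path from the repo
)

scene = trimesh.load(path)  # .glb files load as a Scene (or a single Trimesh)
print(getattr(scene, "geometry", scene))  # list named meshes if it is a Scene
```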
The source data are from [sketchfab](https://sketchfab.com). |
Asap7772/Asap7772open_web_math_raw_700001_733334 | Asap7772 | 2025-02-12T03:48:07Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-11T08:28:24Z | 0 | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
splits:
- name: train
num_bytes: 307729901
num_examples: 25000
download_size: 136289815
dataset_size: 307729901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
french-datasets/Nexdata_French_Speech_Data_by_Mobile_Phone_Reading | french-datasets | 2025-05-20T09:42:26Z | 0 | 0 | [
"language:fra",
"region:us"
] | [] | 2025-05-20T09:38:55Z | 0 | ---
language:
- fra
viewer: false
---
This repository is empty; it was created to improve the indexing of the dataset [Nexdata/French_Speech_Data_by_Mobile_Phone_Reading](https://huggingface.co/datasets/Nexdata/French_Speech_Data_by_Mobile_Phone_Reading).
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_1_for_gen_19 | HungVu2003 | 2025-04-08T00:03:56Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-08T00:03:54Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6619719
num_examples: 12500
download_size: 3373397
dataset_size: 6619719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qwselfcorr/numia_step80 | qwselfcorr | 2025-02-05T19:51:33Z | 20 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-05T19:51:31Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: prompt
dtype: string
- name: answers
sequence: string
- name: gt
dtype: string
splits:
- name: train
num_bytes: 1035913
num_examples: 496
download_size: 393730
dataset_size: 1035913
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
varunjasti/DocumentIDEFICS_VQA | varunjasti | 2025-03-04T07:00:42Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-27T05:59:32Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answers
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 256079458.0
num_examples: 398
- name: test
num_bytes: 41691119.0
num_examples: 82
download_size: 259660810
dataset_size: 297770577.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
pe-nlp/DeepMath-20K-25K-filteredv2-difficulty | pe-nlp | 2025-06-09T17:20:30Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-09T17:20:25Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: final_answer
dtype: string
- name: difficulty
dtype: int32
- name: topic
dtype: string
- name: model_responses
sequence: string
- name: model_scores
sequence: float64
- name: failed_count
dtype: float64
- name: processing_success
dtype: bool
splits:
- name: train
num_bytes: 161907348
num_examples: 5000
download_size: 45037747
dataset_size: 161907348
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tingtingou/gpt4o_full_edit_inst | tingtingou | 2025-03-02T11:55:17Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-24T18:51:52Z | 0 | ---
dataset_info:
features:
- name: Artifact heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment token label
dtype: string
- name: revised_image
dtype: binary
- name: prompt
dtype: string
- name: feedback
dtype: string
- name: inst_llava
dtype: string
- name: inst_rlhf
dtype: string
- name: inst_gpt_1step
dtype: string
- name: inst_gpt_2step
dtype: string
- name: inst_rlhf_new
dtype: string
splits:
- name: train
num_bytes: 9058467869
num_examples: 1401
download_size: 211077168
dataset_size: 9058467869
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|