lisabdunlap committed on
Commit 78ef2da · verified · 1 Parent(s): 8a79857

Upload readme

Files changed (3)
  1. README.md +189 -3
  2. direct_chat_ui.png +3 -0
  3. vision_arena_questions_fig.png +3 -0
README.md CHANGED
@@ -1,3 +1,189 @@
- ---
- license: mit
- ---
---
size_categories:
- 100K<n<1M
task_categories:
- visual-question-answering
dataset_info:
  features:
  - name: images
    sequence:
      image:
        decode: false
  - name: conversation_id
    dtype: string
  - name: model
    dtype: string
  - name: num_turns
    dtype: int64
  - name: conversation
    list:
      list:
      - name: content
        dtype: string
      - name: role
        dtype: string
  - name: language
    dtype: string
  - name: user_id
    dtype: int64
  - name: tstamp
    dtype: float64
  - name: is_preset
    dtype: bool
  - name: preset_dataset
    dtype: string
  - name: categories
    struct:
    - name: captioning
      dtype: bool
    - name: code
      dtype: bool
    - name: creative_writing
      dtype: bool
    - name: diagram
      dtype: bool
    - name: entity_recognition
      dtype: bool
    - name: homework
      dtype: bool
    - name: humor
      dtype: bool
    - name: is_code
      dtype: bool
    - name: ocr
      dtype: bool
  splits:
  - name: train
    num_bytes: 85442332092
    num_examples: 200000
  download_size: 85131882032
  dataset_size: 85442332092
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

![Vision Arena Questions](vision_arena_questions_fig.png)

# VisionArena-Chat: 200K Real-World Image Conversations

200K single- and multi-turn chats between users and VLMs, collected on [Chatbot Arena](https://lmarena.ai/).

**WARNING:** Images may contain inappropriate content.

![Vision Arena Direct Chat UI](direct_chat_ui.png)

## Dataset Details

* 200K conversations
* 45 VLMs
* 138 languages
* ~43K unique images
* Question category tags (Captioning, OCR, Entity Recognition, Coding, Homework, Diagram, Humor, Creative Writing, Refusal)

### Dataset Description

200,000 conversations in which users interact with VLMs, collected through the open-source platform [Chatbot Arena](https://lmarena.ai/), where users chat with LLMs and VLMs through direct chat, side-by-side, or anonymous side-by-side (battle) modes. In battle mode, users provide preference votes for responses, which are aggregated using the Bradley-Terry model to compute [leaderboard rankings](https://lmarena.ai/?leaderboard). Data from anonymous side-by-side chats can be found [here](https://huggingface.co/datasets/lmarena-ai/VisionArena-Battle).

The dataset covers conversations from February 2024 to September 2024. Users explicitly agree to have their conversations shared before chatting. We apply an [NSFW detector](https://learn.microsoft.com/en-us/azure/ai-services/content-moderator/image-moderation-api), a [CSAM detector](https://www.microsoft.com/en-us/photodna?oneroute=true), a [PII text detector](https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-pii-detection), and face detectors ([1](https://cloud.google.com/vision/docs/detecting-faces), [2](https://github.com/ageitgey/face_recognition)) to remove inappropriate images, personally identifiable images/text, and images with human faces. These detectors are not perfect, so some such images may remain in the dataset.

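To illustrate the Bradley-Terry aggregation mentioned above, here is a toy fit using the standard minorization-maximization updates on hypothetical vote counts. This is only a sketch of the idea, not the leaderboard's actual implementation.

```python
# Toy Bradley-Terry fit via MM updates (illustration only).
# wins[(i, j)] = number of times model i beat model j (hypothetical counts).
wins = {("A", "B"): 8, ("B", "A"): 2,
        ("A", "C"): 6, ("C", "A"): 4,
        ("B", "C"): 5, ("C", "B"): 5}
models = ["A", "B", "C"]
strength = {m: 1.0 for m in models}

for _ in range(200):
    for m in models:
        # Total wins of m, divided by sum over opponents of
        # n_comparisons / (strength[m] + strength[opponent]).
        num_wins = sum(w for (i, _), w in wins.items() if i == m)
        denom = sum(w / (strength[i] + strength[j])
                    for (i, j), w in wins.items() if m in (i, j))
        strength[m] = num_wins / denom
    # Normalize so strengths sum to 1 (scale is arbitrary in BT).
    total = sum(strength.values())
    strength = {m: s / total for m, s in strength.items()}

ranking = sorted(models, key=strength.get, reverse=True)
print(ranking)  # model A, with the best win record, ranks first
```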
### Dataset Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Paper:** https://arxiv.org/abs/2412.08687
- **Chat with the latest VLMs and contribute your vote!** https://lmarena.ai/

Images are stored in byte format; you can decode them with `Image.open(BytesIO(img["bytes"]))`.

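Because the card's config sets `decode: false`, each image arrives as a dict carrying raw `bytes`. A minimal sketch of the decode step, using a synthetically generated PNG in place of a real dataset record:

```python
from io import BytesIO
from PIL import Image

# Build a dummy record shaped like the dataset's image entries
# (in the real dataset, `record` would be sample["images"][0]).
buf = BytesIO()
Image.new("RGB", (32, 32), color="red").save(buf, format="PNG")
record = {"bytes": buf.getvalue(), "path": None}

# Decode the raw bytes back into a usable PIL image.
decoded = Image.open(BytesIO(record["bytes"]))
print(decoded.size)  # (32, 32)
```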
## Dataset Structure

* **model** - model identity
* **images** - the conversation's image (note: these are single-image conversations only)
* **conversation** - the conversation with the model
* **user_id** - hash of the user ID, based on IP address
* **categories** - category labels (note: a question can belong to multiple categories)
* **num_turns** - number of conversation turns
* **tstamp** - timestamp of when the conversation took place
* **is_preset** - whether the image came from the "random image" button
* **preset_dataset** - which dataset the preset image is from: [NewYorker](https://huggingface.co/datasets/jmhessel/newyorker_caption_contest), [WikiArt](https://huggingface.co/datasets/huggan/wikiart), [TextVQA](https://huggingface.co/datasets/facebook/textvqa), [ChartQA](https://huggingface.co/datasets/lmms-lab/ChartQA), [DocQA](https://huggingface.co/datasets/lmms-lab/DocVQA), or [realworldqa](https://x.ai/blog/grok-1.5v)

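The boolean `categories` struct makes it straightforward to filter conversations by task type. A small sketch over hypothetical records shaped like dataset rows:

```python
# Hypothetical records mimicking the dataset's row schema.
samples = [
    {"conversation_id": "a1", "num_turns": 1, "language": "English",
     "categories": {"ocr": True, "code": False, "homework": False}},
    {"conversation_id": "b2", "num_turns": 3, "language": "German",
     "categories": {"ocr": False, "code": True, "homework": True}},
]

# Select single-turn OCR questions (a question may carry several tags).
ocr_single_turn = [s for s in samples
                   if s["categories"]["ocr"] and s["num_turns"] == 1]
print([s["conversation_id"] for s in ocr_single_turn])  # ['a1']
```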
## Download Locally

The script below downloads the dataset into a local `VisionArena-Chat` directory, saving images as PNG files and the conversations as `data.json`:

```python
from datasets import load_dataset
from PIL import Image
from io import BytesIO
import json
import os
from tqdm import tqdm
from multiprocessing import Pool, cpu_count

BASE_DIR = "VisionArena-Chat"
IMAGES_DIR = os.path.join(BASE_DIR, "images")

# Defined at module level so multiprocessing can pickle it.
def process_sample(idx_sample):
    idx, sample = idx_sample
    processed_images = []
    for img_idx, img in enumerate(sample.get("images", [])):
        img_filename = f"image_{idx}_{img_idx}.png"
        img_path = os.path.join(IMAGES_DIR, img_filename)
        if not os.path.exists(img_path):
            Image.open(BytesIO(img["bytes"])).save(img_path)
        processed_images.append(os.path.join("images", img_filename))
    # Replace raw bytes with relative file paths so the sample is JSON-serializable.
    sample["images"] = processed_images
    return sample

def download_dataset(num_workers=None):
    os.makedirs(IMAGES_DIR, exist_ok=True)
    ds = load_dataset("lmarena-ai/VisionArena-Chat", split="train")
    samples = list(ds)
    num_workers = num_workers or min(cpu_count(), 8)
    print(f"Processing samples using {num_workers} workers...")

    with Pool(num_workers) as pool:
        processed_data = list(tqdm(pool.imap(process_sample, enumerate(samples)),
                                   total=len(samples), desc="Processing samples"))
    with open(os.path.join(BASE_DIR, "data.json"), "w", encoding="utf-8") as f:
        json.dump(processed_data, f, ensure_ascii=False, indent=2)
    print(f"Dataset downloaded and processed successfully!\n"
          f"Images saved in: {IMAGES_DIR}\n"
          f"Data saved in: {os.path.join(BASE_DIR, 'data.json')}")

if __name__ == "__main__":
    download_dataset()
```

## Bias, Risks, and Limitations

This dataset contains a large number of STEM-related questions, OCR tasks, and general tasks such as captioning, and comparatively few questions from specialized domains outside of STEM.

**If you find your face or personal information in this dataset and wish to have it removed, or if you find hateful or inappropriate content,** please contact us at [email protected] or [email protected]. See the licensing agreement below for more details.

**BibTeX:**

```bibtex
@article{chou2024visionarena,
  title={VisionArena: 230K Real World User-VLM Conversations with Preference Labels},
  author={Christopher Chou and Lisa Dunlap and Koki Mashita and Krishna Mandal and Trevor Darrell and Ion Stoica and Joseph E. Gonzalez and Wei-Lin Chiang},
  year={2024},
  eprint={2412.08687},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2412.08687},
}
```

## LMArena VisionArena dataset License Agreement
This Agreement contains the terms and conditions that govern your access and use of the LMArena VisionArena dataset (as defined above). You may not use the LMArena VisionArena dataset if you do not accept this Agreement. By clicking to accept, accessing the LMArena VisionArena dataset, or both, you hereby agree to the terms of the Agreement. If you are agreeing to be bound by the Agreement on behalf of your employer or another entity, you represent and warrant that you have full legal authority to bind your employer or such entity to this Agreement. If you do not have the requisite authority, you may not accept the Agreement or access the LMArena VisionArena dataset on behalf of your employer or another entity.

* Safety and Moderation: This dataset contains unsafe conversations that may be perceived as offensive or unsettling. Users should apply appropriate filters and safety measures before utilizing this dataset for training dialogue agents.
* Non-Endorsement: The views and opinions depicted in this dataset do not reflect the perspectives of the researchers or affiliated institutions engaged in the data collection process.
* Legal Compliance: You must use the dataset in adherence to all pertinent laws and regulations.
* Model Specific Terms: When leveraging direct outputs of a specific model, users must adhere to its corresponding terms of use.
* Non-Identification: You must not attempt to identify the identities of individuals or infer any sensitive personal data encompassed in this dataset.
* Prohibited Transfers: You should not distribute, copy, disclose, assign, sublicense, embed, host, or otherwise transfer the dataset to any third party.
* Right to Request Deletion: At any time, we may require you to delete all copies of the conversation dataset (in whole or in part) in your possession and control. You will promptly comply with any and all such requests. Upon our request, you shall provide us with written confirmation of your compliance with such requirement.
* Termination: We may, at any time, for any reason or for no reason, terminate this Agreement, effective immediately upon notice to you. Upon termination, the license granted to you hereunder will immediately terminate, and you will immediately stop using the LMArena VisionArena dataset and destroy all copies of the LMArena VisionArena dataset and related materials in your possession or control.
* Limitation of Liability: IN NO EVENT WILL WE BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, PUNITIVE, SPECIAL, OR INDIRECT DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, BUSINESS INTERRUPTION, OR LOSS OF INFORMATION) ARISING OUT OF OR RELATING TO THIS AGREEMENT OR ITS SUBJECT MATTER, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
* Subject to your compliance with the terms and conditions of this Agreement, we grant to you, a limited, non-exclusive, non-transferable, non-sublicensable license to use the LMArena VisionArena dataset, including the conversation data and annotations, to research, develop, and improve software, algorithms, machine learning models, techniques, and technologies for both research and commercial purposes.
direct_chat_ui.png ADDED

Git LFS Details

  • SHA256: 86c78a4c1710082521889c82d796e3dc9359f1f60beff296a775fd4f764f7e21
  • Pointer size: 131 Bytes
  • Size of remote file: 351 kB
vision_arena_questions_fig.png ADDED

Git LFS Details

  • SHA256: c30e12f124e460ac1a5135356af2f7918192e71d2048b289f634b097a9abd0b6
  • Pointer size: 133 Bytes
  • Size of remote file: 22 MB