---
annotations_creators: []
language: en
size_categories:
- 1K<n<10K
task_categories:
- object-detection
task_ids: []
pretty_name: FiftyOne-GUI-Grounding-Train-with-Synthetic
tags:
- fiftyone
- image
- object-detection
- visual-agents
- gui-grounding
- os-agents
dataset_summary: >




  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4036
  samples.


  ## Installation


  If you haven't already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  from fiftyone.utils.huggingface import load_from_hub


  # Load the dataset

  # Note: other available arguments include 'max_samples', etc

  dataset =
  load_from_hub("Voxel51/FiftyOne-GUI-Grounding-Train-with-Synthetic")


  # Launch the App

  session = fo.launch_app(dataset)

  ```
license: apache-2.0
---

# Dataset Card for FiftyOne GUI Grounding Training Set with Synthetic Augmentation

## Dataset Details

### Dataset Description

This dataset represents a significant expansion of the original FiftyOne GUI Grounding Training Set, growing from 739 real GUI screenshots to 4,036 total samples through systematic synthetic data generation. The dataset combines authentic GUI interactions with carefully crafted synthetic variants designed to improve model robustness, accessibility awareness, and cross-platform performance.

The synthetic samples were generated using the specialized [Synthetic GUI Samples Plugin for FiftyOne](https://github.com/harpreetsahota204/synthetic_gui_samples_plugins), which applies computer vision transformations while preserving annotation accuracy and spatial relationships.

- **Curated by:** Harpreet Sahota
- **Funded by:** Voxel51
- **Shared by:** Harpreet Sahota
- **Language(s):** English (en)
- **License:** Apache-2.0

### Dataset Sources

- **Original Repository:** [GUI Annotation Tool](https://github.com/harpreetsahota204/gui_annotation_tool)
- **COCO4GUI FiftyOne Integration:** [COCO4GUI FiftyOne](https://github.com/harpreetsahota204/coco4gui_fiftyone)
- **Synthetic Generation Plugin:** [Synthetic GUI Samples Plugin](https://github.com/harpreetsahota204/synthetic_gui_samples_plugins)
- **Generation Notebook:** [Using Synthetic GUI Samples Plugin via SDK](https://github.com/harpreetsahota204/visual_agents_workshop/blob/main/session_2/Using_Synthetic_GUI_Samples_Plugin_via_SDK.ipynb)

## Loading into FiftyOne

### Quick Start with Hugging Face Hub

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the augmented dataset directly from Hugging Face Hub
dataset = load_from_hub("Voxel51/FiftyOne-GUI-Grounding-Train-with-Synthetic")

# Launch the FiftyOne App
session = fo.launch_app(dataset)
```

### Loading with COCO4GUI Dataset Type

For enhanced metadata and provenance tracking:

```python
import fiftyone as fo
from coco4gui import COCO4GUIDataset

# Load with full COCO4GUI features including synthetic provenance
dataset = fo.Dataset.from_dir(
    dataset_dir="/path/to/your/augmented_gui_dataset",
    dataset_type=COCO4GUIDataset,
    name="gui_dataset_with_synthetic",
    data_path="data",
    labels_path="annotations_coco.json",
    include_sequence_info=True,
    include_gui_metadata=True,
    extra_attrs=True,
    persistent=True,
)

# Launch FiftyOne app
session = fo.launch_app(dataset)
```

### Analyzing Synthetic vs Real Samples

```python
from fiftyone import ViewField as F

# Separate real and synthetic samples
real_samples = dataset.match(~F("transform_record").exists())
synthetic_samples = dataset.match(F("transform_record").exists())

print(f"Real samples: {len(real_samples)}")
print(f"Synthetic samples: {len(synthetic_samples)}")

# Analyze transformation types
transform_types = synthetic_samples.distinct("transform_record.transforms.name")
print(f"Transformation types: {transform_types}")
```

## Uses

### Direct Use

This augmented dataset is designed for:

- **Robust GUI Element Detection**: Training models that work across diverse visual conditions
- **Accessibility-Aware AI**: Models that understand GUI accessibility challenges (colorblind simulation)
- **Multi-Resolution GUI Understanding**: Training on various screen sizes and device types
- **Visual Robustness Testing**: Models that handle inverted colors, grayscale interfaces, and visual variations
- **Cross-Platform GUI Analysis**: Enhanced diversity for better generalization
- **Multilingual GUI Interaction**: With text augmentation variants for global applications

### Enhanced Use Cases

- **Accessibility Research**: Study GUI perception across different visual conditions using colorblind simulations
- **Robustness Evaluation**: Test model performance on visually challenging interfaces
- **Data Efficiency Studies**: Compare model performance with and without synthetic augmentation
- **Cross-Device Training**: Prepare models for deployment across different screen resolutions

### Out-of-Scope Use

- **Production Deployment Without Validation**: Synthetic data should be validated on real-world scenarios
- **Privacy-Sensitive Applications**: Original privacy considerations still apply
- **Real-Time Systems**: Performance characteristics may differ between real and synthetic samples

## Dataset Structure

### Composition
- **Total Samples**: 4,036
- **Real Samples**: 739 (original dataset)
- **Synthetic Samples**: 3,297 (generated variants)
- **Augmentation Ratio**: ~4.5 synthetic variants per real sample (~5.5x total expansion)

### Synthetic Augmentation Types

Based on the [Synthetic GUI Samples Plugin](https://github.com/harpreetsahota204/synthetic_gui_samples_plugins), the dataset includes:

#### 1. **Visual Accessibility Augmentations**
- **Grayscale Conversion**: 3-channel grayscale variants for testing color-independent recognition
- **Color Inversion**: High-contrast and dark mode interface variants
- **Colorblind Simulation**: Six types of color vision deficiency simulation:
  - Deuteranopia (green-blind)
  - Protanopia (red-blind)
  - Tritanopia (blue-blind)
  - Deuteranomaly (green-weak)
  - Protanomaly (red-weak)
  - Tritanomaly (blue-weak)
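
The first two augmentations are straightforward to reproduce; a minimal Pillow sketch of the idea (illustrative only, not the plugin's actual implementation):

```python
from PIL import Image, ImageOps

def three_channel_grayscale(img):
    """Grayscale that keeps three channels, so model input shapes are unchanged."""
    gray = img.convert("L")
    return Image.merge("RGB", (gray, gray, gray))

def invert_colors(img):
    """Simple high-contrast / dark-mode-style color inversion."""
    return ImageOps.invert(img.convert("RGB"))

# Stand-in for a real screenshot
screenshot = Image.new("RGB", (4, 4), (200, 30, 30))
print(three_channel_grayscale(screenshot).getpixel((0, 0)))
print(invert_colors(screenshot).getpixel((0, 0)))  # (55, 225, 225)
```

Colorblind simulation is more involved (it typically applies per-deficiency transformation matrices in a linearized color space), so it is best left to the plugin itself.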

#### 2. **Resolution Scaling**
- **Multi-Device Variants**: Screenshots scaled to common device resolutions:
  - Mobile/Tablet: 1024×768, 1280×800
  - Laptop/Desktop: 1366×768, 1920×1080, 1440×900
  - High-End: 2560×1440, 3840×2160 (4K)
  - Ultrawide: 2560×1080, 3440×1440
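
As an illustration of how a screenshot might be matched to one of these targets, here is a small helper (hypothetical, not part of the plugin) that snaps an arbitrary input size to the closest resolution by aspect ratio and area:

```python
# Target resolutions from the list above
TARGETS = [
    (1024, 768), (1280, 800),                 # mobile/tablet
    (1366, 768), (1920, 1080), (1440, 900),   # laptop/desktop
    (2560, 1440), (3840, 2160),               # high-end
    (2560, 1080), (3440, 1440),               # ultrawide
]

def nearest_target(width, height):
    """Pick the target whose aspect ratio and relative area are closest to the input."""
    def cost(t):
        tw, th = t
        aspect_diff = abs(tw / th - width / height)
        area_diff = abs((tw * th) / (width * height) - 1)
        return aspect_diff + area_diff
    return min(TARGETS, key=cost)

print(nearest_target(1512, 982))  # a common MacBook screenshot size -> (1440, 900)
```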

#### 3. **Text Augmentation** (if applied)
- **Task Description Rephrasing**: LLM-generated alternative descriptions
- **Multilingual Variants**: Translated task descriptions for global applications

### Annotation Preservation

All synthetic samples maintain:
- **Spatial Accuracy**: Bounding boxes and keypoints scaled proportionally
- **Annotation Completeness**: All original attributes and metadata preserved
- **Provenance Tracking**: Complete transformation history in `transform_record` field
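
For resolution-scaled variants, preserving spatial accuracy amounts to multiplying absolute-pixel coordinates by the per-axis scale factors. A minimal sketch (hypothetical helper, not the plugin API):

```python
def scale_bbox(bbox, src_size, dst_size):
    """Scale an absolute-pixel [x, y, w, h] box from src (w, h) to dst (w, h)."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    x, y, w, h = bbox
    return [x * sx, y * sy, w * sx, h * sy]

# A 100x50 box at (200, 100) on a 1920x1080 screenshot, rescaled to 1280x720
print(scale_bbox([200, 100, 100, 50], (1920, 1080), (1280, 720)))
```

Note that FiftyOne itself stores detections in relative `[0, 1]` coordinates, which are resolution-invariant; explicit scaling like this is only needed for absolute-pixel formats such as COCO.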

### Enhanced Metadata Schema

```python
# Original fields plus synthetic-specific metadata
sample.transform_record = {
    "transforms": [{"name": "grayscale", "params": {}}],
    "source_sample_id": "original_sample_id",
    "timestamp": "2025-01-15T10:30:00Z",
    "plugin": "synthetic_gui_samples_plugins"
}

# Preserved original metadata
sample.application         # "Chrome", "Arc Browser", etc.
sample.platform           # "macOS", "Windows", etc.
sample.date_captured       # Original capture timestamp
sample.sequence_id         # Workflow sequence information
```
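
Because `transform_record` is plain metadata, provenance can be summarized without any FiftyOne-specific tooling. A small sketch using the field names shown above:

```python
def describe_provenance(record):
    """Summarize a transform_record dict (field names as in the schema above)."""
    names = [t["name"] for t in record.get("transforms", [])]
    return f"{record['source_sample_id']}: {' -> '.join(names) or 'original'}"

rec = {
    "transforms": [{"name": "grayscale", "params": {}}],
    "source_sample_id": "abc123",
    "timestamp": "2025-01-15T10:30:00Z",
    "plugin": "synthetic_gui_samples_plugins",
}
print(describe_provenance(rec))  # abc123: grayscale
```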

## Dataset Creation

### Curation Rationale

The synthetic augmentation was designed to address several key limitations in GUI understanding models:

1. **Visual Robustness**: Many GUI models fail on visually challenging interfaces (dark mode, high contrast, etc.)
2. **Accessibility Blindness**: Models often ignore how interfaces appear to users with visual impairments
3. **Resolution Sensitivity**: Training on single-resolution data leads to poor cross-device performance
4. **Data Scarcity**: Manual GUI annotation is expensive and time-consuming

### Synthetic Generation Process

The augmentation process used the [Synthetic GUI Samples Plugin](https://github.com/harpreetsahota204/synthetic_gui_samples_plugins) with the following pipeline:

1. **Source Data**: 739 manually annotated GUI screenshots
2. **Transformation Selection**: Systematic application of visual augmentations
3. **Quality Validation**: Automated verification of annotation accuracy
4. **Provenance Tracking**: Complete transformation history preservation
5. **Dataset Integration**: Seamless combination with original samples

### Source Data

#### Original Data Collection
- **Method**: Real GUI screenshots from various applications
- **Time Period**: July-August 2025
- **Platform**: Primarily macOS with various browsers and applications
- **Annotation Process**: Manual annotation using specialized GUI annotation tool

#### Synthetic Data Generation
- **Tool**: [Synthetic GUI Samples Plugin for FiftyOne](https://github.com/harpreetsahota204/synthetic_gui_samples_plugins)
- **Transformations**: Computer vision and accessibility-focused augmentations
- **Validation**: Automated annotation consistency checks
- **Quality Control**: Systematic verification of spatial relationships

### Annotations

#### Original Annotation Process
- **Tool**: Specialized web-based GUI annotation tool
- **Annotators**: Expert annotation by dataset curator
- **Quality**: Manual verification and consistency checking

#### Synthetic Annotation Handling
- **Preservation**: All original annotations automatically preserved
- **Scaling**: Spatial coordinates proportionally adjusted for resolution changes
- **Validation**: Automated verification of annotation accuracy post-transformation
- **Provenance**: Complete transformation history tracked

## Bias, Risks, and Limitations

### Enhanced Considerations for Synthetic Data

#### Technical Limitations
- **Synthetic Realism**: Generated variants may not capture all real-world visual variations
- **Transformation Artifacts**: Some augmentations may introduce visual artifacts not present in real interfaces
- **Limited Diversity**: Synthetic samples are constrained by the diversity of the original dataset
- **Platform Bias**: Still primarily macOS-based despite augmentation

#### Synthetic-Specific Biases
- **Augmentation Bias**: Over-representation of certain visual transformations
- **Quality Variation**: Synthetic samples may have different quality characteristics than real samples
- **Edge Case Handling**: Synthetic transformations may not handle all annotation edge cases perfectly

#### Risks and Mitigations
- **Overfitting to Synthetic Data**: Models may learn synthetic artifacts rather than real patterns
  - *Mitigation*: Maintain clear real/synthetic sample identification for balanced training
- **False Confidence**: Large dataset size may mask underlying diversity limitations
  - *Mitigation*: Regular validation on held-out real data
- **Annotation Drift**: Repeated transformations may introduce cumulative annotation errors
  - *Mitigation*: Direct transformation from original samples only

### Recommendations

#### For Model Training
- **Balanced Sampling**: Use both real and synthetic samples in training
- **Validation Strategy**: Always validate on real, held-out data
- **Progressive Training**: Start with real data, gradually introduce synthetic variants
- **Transformation Awareness**: Consider transformation type as a training signal
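
The balanced-sampling recommendation can be sketched as a simple batch builder (hypothetical helper; adapt the sampling ratio and IDs to your own training loop):

```python
import random

def balanced_batch(real_ids, synthetic_ids, batch_size, real_fraction=0.5, seed=0):
    """Draw a training batch containing a fixed fraction of real samples."""
    rng = random.Random(seed)
    n_real = int(batch_size * real_fraction)
    batch = rng.sample(real_ids, n_real) + rng.sample(synthetic_ids, batch_size - n_real)
    rng.shuffle(batch)
    return batch

# Stand-in IDs matching this dataset's real/synthetic split
real_ids = [f"real_{i}" for i in range(739)]
synthetic_ids = [f"syn_{i}" for i in range(3297)]
batch = balanced_batch(real_ids, synthetic_ids, batch_size=32)
print(sum(1 for s in batch if s.startswith("real_")))  # 16 of 32 are real
```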

#### For Evaluation
- **Separate Evaluation**: Test on real and synthetic data separately
- **Robustness Testing**: Use synthetic variants to test specific robustness aspects
- **Accessibility Evaluation**: Leverage colorblind simulations for accessibility testing

## Technical Details

### Synthetic Generation Statistics
- **Original Dataset Size**: 739 samples
- **Augmentation Factor**: ~4.5 synthetic variants per real sample
- **Total Synthetic Samples**: 3,297
- **Transformation Types**: 5+ different augmentation categories
- **Quality Validation**: 100% automated annotation verification

### FiftyOne Integration Features
- **Advanced Brain Embeddings**: CLIP and image similarity indices for both real and synthetic samples
- **Provenance Tracking**: Complete transformation history in metadata
- **Filtering Capabilities**: Easy separation of real vs synthetic samples
- **Visualization Support**: UMAP embeddings showing real/synthetic sample distribution

### Performance Characteristics
- **Storage Efficiency**: Optimized image formats and metadata storage
- **Loading Speed**: Efficient batch loading with FiftyOne integration
- **Memory Usage**: Scalable handling of large augmented datasets

## Citation

**BibTeX:**
```bibtex
@dataset{fiftyone_gui_grounding_synthetic_2025,
  title={FiftyOne GUI Grounding Training Set with Synthetic Augmentation},
  author={Sahota, Harpreet},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/Voxel51/FiftyOne-GUI-Grounding-Train-with-Synthetic},
  note={Augmented using Synthetic GUI Samples Plugin for FiftyOne}
}

@software{synthetic_gui_plugin_2025,
  title={Synthetic GUI Samples Plugin for FiftyOne},
  author={Sahota, Harpreet},
  year={2025},
  url={https://github.com/harpreetsahota204/synthetic_gui_samples_plugins},
  license={Apache-2.0}
}
```

**APA:**
Sahota, H. (2025). FiftyOne GUI Grounding Training Set with Synthetic Augmentation [Dataset]. Hugging Face. https://huggingface.co/datasets/Voxel51/FiftyOne-GUI-Grounding-Train-with-Synthetic

## Dataset Card Authors

Harpreet Sahota

## Dataset Card Contact

For questions about this dataset or the synthetic generation process, please contact the dataset author through:
- [Hugging Face dataset repository](https://huggingface.co/datasets/Voxel51/FiftyOne-GUI-Grounding-Train-with-Synthetic)
- [Synthetic GUI Samples Plugin repository](https://github.com/harpreetsahota204/synthetic_gui_samples_plugins)
- [COCO4GUI FiftyOne integration repository](https://github.com/harpreetsahota204/coco4gui_fiftyone)