---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories:
- object-detection
task_ids: []
pretty_name: rico_dataset
tags:
- fiftyone
- image
- visual-agents
- os-agents
- gui-grounding
- object-detection
dataset_summary: '




  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 66261 samples.


  ## Installation


  If you haven''t already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  from fiftyone.utils.huggingface import load_from_hub


  # Load the dataset

  # Note: other available arguments include ''max_samples'', etc

  dataset = load_from_hub("Voxel51/rico")


  # Launch the App

  session = fo.launch_app(dataset)

  ```

  '
---

# Dataset Card for Rico Semantic Dataset

![image/png](rico.gif)

Note: This dataset uses UI screenshots from the original Rico dataset, together with the semantic annotations and embeddings introduced in [Learning Design Semantics for Mobile Apps](https://www.ranjithakumar.net/resources/mobile-semantics.pdf). It **does not** include the segmentation masks introduced in that paper.

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 66261 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Voxel51/rico")

# Launch the App
session = fo.launch_app(dataset)
```
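
Continuing from the snippet above, the detections and the custom attributes documented under Dataset Structure below can be queried with standard FiftyOne view stages. This is a minimal sketch, assuming `clickable` is stored as a boolean attribute as described in this card:

```python
from fiftyone import ViewField as F

# Distribution of UI element types across all detections
print(dataset.count_values("detections.detections.label"))

# View containing only the elements marked as clickable
clickable_view = dataset.filter_labels("detections", F("clickable") == True)
print(clickable_view.count("detections.detections"))
```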


## Dataset Description
**Curated by:** Thomas F. Liu, Mark Craft, Jason Situ, Ersin Yumer, Radomir Mech, and Ranjitha Kumar (University of Illinois at Urbana-Champaign and Adobe Systems Inc.)

**Funded by:** Adobe research donation, Google Faculty Research Award, and NSF Grant IIS-1750563

**Shared by:** The Interaction Mining Group at University of Illinois at Urbana-Champaign

**Language(s):** English (en)

**License:** CC BY 4.0

## Dataset Sources

**Website:** http://interactionmining.org/rico

**Paper:** Liu, T. F., Craft, M., Situ, J., Yumer, E., Mech, R., & Kumar, R. (2018). [Learning Design Semantics for Mobile Apps](https://www.ranjithakumar.net/resources/mobile-semantics.pdf). In The 31st Annual ACM Symposium on User Interface Software and Technology (UIST '18).

**Repo:** https://github.com/datadrivendesign/semantic-icon-classifier (Semantic icon classifier implementation)

**Code to parse dataset into FiftyOne format:** https://github.com/harpreetsahota204/rico_to_fiftyone/

## Uses
### Direct Use
- Training interface design search engines to find similar UI designs
- Developing generative AI models for UI layout creation
- Training models that automatically generate code from UI designs
- Building tools that understand the meaning and function of UI elements
- Analyzing design patterns across different app categories
- Teaching design systems to recognize component types and their usage patterns
- Researching relationships between visual design and interface functionality

### Out-of-Scope Use
- Extracting personal or identifiable user information
- Generating deceptive interfaces that might mislead users
- Making judgments about individual users based on interaction patterns
- Commercial redistribution without appropriate permissions
- Creating applications that violate app store design guidelines or terms of service

## Dataset Structure
The semantic subset contains:
- 66,000+ annotated UI screens from mobile applications
- Semantic screenshots with UI elements color-coded by type (components, buttons, icons)
- Detailed hierarchies where each element includes semantic annotations

Each UI element in the hierarchy includes:
- Component classification (e.g., Icon, Text Button, List Item)
- Functional classification (e.g., "login" for text buttons, "cart" for icons)
- Original properties (bounds, class, resource-id, etc.)

### Rico FiftyOne Dataset Structure

**Core Fields:**
- `metadata`: EmbeddedDocumentField - Image properties (size, dimensions)
- `ui_vector`: ListField(FloatField) - UI embedding representation
- `ui_viz`: ListField(FloatField) - Visualization parameters
- `detections`: EmbeddedDocumentField(Detections) containing multiple Detection objects:
  - `label`: UI element type (Icon, Text, Image, Toolbar, List Item)
  - `bounding_box`: Relative coordinates `[x, y, width, height]` in `[0, 1]` (standard FiftyOne format, top-left origin)
  - `content_or_function`: Text content or function name
  - `clickable`: Boolean indicating interactivity
  - `type`: Android widget type
  - `resource_id`: Android resource identifier

The dataset provides comprehensive annotations of mobile UI elements with detailed information about their appearance, functionality, and interactive properties for machine learning applications.
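
As an illustration of how these fields are laid out, the sketch below (continuing from the Usage snippet above and assuming the field names listed in this card) iterates over one sample's detections and reads the custom attributes:

```python
# Inspect the first sample's UI element annotations
sample = dataset.first()

print(sample.metadata)        # image properties (size, dimensions)
print(len(sample.ui_vector))  # dimensionality of the stored UI embedding

for det in sample.detections.detections:
    print(
        det.label,                   # e.g. Icon, Text, Toolbar, List Item
        det.bounding_box,            # [x, y, width, height], relative to image size
        det["content_or_function"],  # text content or function name
        det["clickable"],            # whether the element is interactive
    )
```

Because `ui_vector` stores a precomputed embedding per sample, it can also, for example, be passed as the `embeddings` argument to FiftyOne Brain methods such as `fiftyone.brain.compute_visualization()` to build an embeddings plot; this is a suggestion, not something shipped with the dataset.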

## Dataset Creation
### Curation Rationale
The dataset was created to expose the semantic meaning of mobile UI elements - what they represent and how they function. While prior datasets captured visual design, this semantic layer enables deeper understanding of interface functionality across applications, supporting more advanced design tools and research.

### Source Data
#### Data Collection and Processing
1. Started with the Rico dataset of 9.3k Android apps spanning 27 categories
2. Created a lexical database through iterative open coding of 73,000+ UI elements and 720 screens
3. Developed code-based patterns to detect different component types
4. Trained a convolutional neural network (94% accuracy) to classify icons (see the illustrative sketch after this list)
5. Implemented anomaly detection to distinguish between icons and general images
6. Applied these techniques to generate semantic annotations for 78% of visible elements in the dataset
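
The authors' icon-classifier implementation is linked above under Dataset Sources, and its actual architecture is described in the paper rather than in this card. Purely as an illustrative sketch of the kind of small image classifier such a pipeline uses (hypothetical layer sizes and input resolution, not the authors' model):

```python
import torch
import torch.nn as nn

class IconClassifier(nn.Module):
    """Hypothetical small CNN over grayscale icon crops.

    Layer sizes and the 32x32 input are illustrative assumptions;
    only the number of classes (97) comes from this card.
    """

    def __init__(self, num_classes=97):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # 32x32 input -> 8x8 feature map

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = IconClassifier()
logits = model(torch.randn(4, 1, 32, 32))  # batch of 4 dummy icon crops
print(logits.shape)  # torch.Size([4, 97])
```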

#### Who are the source data producers?
The original UI designs were created by Android app developers whose applications were available on the Google Play Store. The applications spanned 27 categories and had an average user rating of 4.1.

### Annotations
#### Annotation process
1. Referenced design libraries (e.g., Material Design) to establish initial vocabulary
2. Performed iterative open coding of 720 screens to identify 24 UI component categories
3. Extracted and clustered 20,386 unique button text strings to identify 197 text button concepts
4. Analyzed 73,449 potential icons to determine 97 icon classes
5. Used machine learning and heuristic approaches to scale annotation to the full dataset
6. Generated color-coded semantic screenshots to visualize element types

#### Who are the annotators?
The annotation framework was developed by researchers from University of Illinois at Urbana-Champaign and Adobe Systems Inc. The initial coding involved three researchers using a consensus-driven approach, with subsequent scaling through machine learning techniques.

### Personal and Sensitive Information
The dataset focuses on UI design elements rather than user data. Screenshots were captured during controlled exploration sessions rather than from real user data. While some screens might contain placeholder text mimicking personal information, the dataset does not appear to contain actual personal or sensitive information from real users.

## Bias, Risks, and Limitations
- Limited to Android mobile applications (no iOS or web interfaces)
- Represents design practices from around 2017-2018 (may not reflect current trends)
- Biased toward successful apps (average rating 4.1)
- Incomplete coverage (78% of visible elements receive semantic annotations)
- Icon classifier has lower accuracy for underrepresented classes
- May not adequately represent cultural or regional UI design variations
- Limited to apps with English-language interfaces

## Recommendations
Users should be aware that:
- The dataset represents a specific snapshot in time of mobile design practices
- Not all UI elements receive semantic annotations
- The dataset might exhibit biases toward certain design patterns popular in high-rated apps
- Applications built on this dataset should validate results against contemporary design standards

## Dataset Card Contact
The dataset is maintained by the Interaction Mining Group at the University of Illinois at Urbana-Champaign. Contact information can be found at: http://interactionmining.org/


## Citation

```bibtex
@inproceedings{deka2017rico,
  title     = {Rico: A mobile app dataset for building data-driven design applications},
  author    = {Deka, Biplab and Huang, Zifeng and Franzen, Chad and Hibschman, Joshua and Afergan, Daniel and Li, Yang and Nichols, Jeffrey and Kumar, Ranjitha},
  booktitle = {Proceedings of the 30th annual ACM symposium on user interface software and technology},
  pages     = {845--854},
  year      = {2017}
}
```

```bibtex
@inproceedings{liu2018learning,
  title     = {Learning Design Semantics for Mobile Apps},
  author    = {Liu, Thomas F and Craft, Mark and Situ, Jason and Yumer, Ersin and Mech, Radomir and Kumar, Ranjitha},
  booktitle = {The 31st Annual ACM Symposium on User Interface Software and Technology},
  year      = {2018}
}
```