---
language:
- en
license:
- gpl
size_categories:
- 100M<n<1B
---

# Android Control Episodes

Episodes of Android UI interaction, each pairing a high-level natural-language goal with step-level instructions, the actions taken, and base64-encoded screenshots.

## Data Fields

Each row is one episode. The main columns are:

- `episode_id`: identifier of the episode
- `goal`: natural-language goal for the episode
- `actions`: list of per-step actions; action fields such as `app_name`, `direction`, and `text` may be null
- `step_instructions`: list of natural-language instructions for the individual steps
- `screenshots_b64`: list of base64-encoded screenshots

## Data Splits

- `train`: 12,232 episodes across 275 Parquet shards (1 row group per file)
- `test`: 3,051 episodes across 67 Parquet shards (1 row group per file)

Total compressed size on the Hub is approximately 67.4 GB (train ≈ 54.3 GB, test ≈ 13.1 GB). The `screenshots_b64` column contributes the majority of the size.

Typical per-shard stats (example shard):

- ~45 episodes per shard
- ~6–7 screenshots per episode on average
- ~5–6 actions per episode on average
- ~5–6 step instructions per episode on average

## Usage

### Load with Datasets (streaming to avoid full download)

```python
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files="hf://datasets/smolagents/android-control@~parquet/default/train/*.parquet",
    streaming=True,
)["train"]

for i, ex in enumerate(ds):
    ex.pop("screenshots_b64", None)  # skip large images for lightweight inspection
    print(ex["episode_id"], ex["goal"])
    if i >= 4:
        break
```

### Materialize a small slice without streaming

```python
from datasets import load_dataset

small = load_dataset(
    "parquet",
    data_files="hf://datasets/smolagents/android-control@~parquet/default/train/*.parquet",
    split="train[:1%]",
)
print(len(small))
```

### DuckDB: schema preview and lightweight sampling

```python
import duckdb

# Peek schema of one shard
duckdb.sql("""
    DESCRIBE
    SELECT *
    FROM 'hf://datasets/smolagents/android-control@~parquet/default/train/0000.parquet'
""").show()

# Count rows via metadata only (no full scan)
duckdb.sql("""
    SELECT SUM(row_group_num_rows) AS total_rows
    FROM parquet_metadata('hf://datasets/smolagents/android-control@~parquet/default/train/*.parquet')
""").show()

# Sample a few rows excluding heavy images
duckdb.sql("""
    SELECT episode_id, goal,
           list_length(actions) AS num_actions,
           list_length(step_instructions) AS num_steps
    FROM 'hf://datasets/smolagents/android-control@~parquet/default/train/*.parquet'
    LIMIT 10
""").show()
```

### PyArrow: footer-only metadata or row-group reads

```python
from huggingface_hub import HfFileSystem
import pyarrow.parquet as pq

fs = HfFileSystem()
path = "hf://datasets/smolagents/android-control@~parquet/default/train/0000.parquet"

# Metadata-only: schema & row groups
with fs.open(path, "rb") as f:
    pf = pq.ParquetFile(f)
    print(pf.schema_arrow)
    print(pf.metadata.num_rows, pf.num_row_groups)

# Read a single row group without images
with fs.open(path, "rb") as f:
    pf = pq.ParquetFile(f)
    cols = [c for c in pf.schema_arrow.names if c != "screenshots_b64"]
    tbl = pf.read_row_group(0, columns=cols)
    print(tbl.slice(0, 3).to_pydict())
```

### Dask: predicate/projection pushdown

```python
import dask.dataframe as dd

ddf = dd.read_parquet(
    "hf://datasets/smolagents/android-control@~parquet/default/train/*.parquet",
    columns=["episode_id", "goal", "actions", "step_instructions"],
)
print(ddf.head())
```

## Efficiency Tips

- Prefer streaming or column selection to avoid downloading `screenshots_b64` unless needed.
- Use DuckDB `parquet_metadata(...)` or PyArrow `ParquetFile(...).metadata` to inspect sizes/counts without reading data pages.
- Each file has one row group, so shard-level parallelism is straightforward; see the Parallel Shard Processing sketch below.

## Licensing

[More Information Needed]

## Citation

If you use this dataset in your work, please cite the source dataset/creators as appropriate and this repository. Example placeholder:

```bibtex
@misc{android_control_episodes,
  title = {Android Control Episodes Dataset},
  year = {2025},
  url = {https://huggingface.co/datasets/smolagents/android-control}
}
```
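## Parallel Shard Processing

Because every shard holds exactly one row group, a file is a natural unit of parallel work. The sketch below assumes the same `smolagents/android-control` repo id used in the examples above and counts episodes per shard from Parquet footers only, fanning the shards out over a thread pool; `episode_count` is a hypothetical helper name, and you can swap its body for whatever per-shard processing you need.

```python
# Minimal sketch: footer-only episode counts per shard, one shard per worker.
from concurrent.futures import ThreadPoolExecutor

import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
shards = fs.glob(
    "hf://datasets/smolagents/android-control@~parquet/default/train/*.parquet"
)

def episode_count(path: str) -> int:
    # Reads only the Parquet footer, not the data pages.
    with fs.open(path, "rb") as f:
        return pq.ParquetFile(f).metadata.num_rows

with ThreadPoolExecutor(max_workers=8) as pool:
    counts = list(pool.map(episode_count, shards))

print(f"{sum(counts)} episodes across {len(shards)} shards")
```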
## Limitations and Risks

- Screenshots are stored as base64 strings and can be large; consider storage and memory implications (see the Decoding Screenshots sketch at the end of this card).
- Some action fields (e.g., `app_name`, `direction`, `text`) may be null for many steps.
- Visual UI elements may vary across Android versions and devices.

## Maintainers

[More Information Needed]
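## Decoding Screenshots

The `screenshots_b64` entries are base64 strings rather than ready-to-use images. Below is a minimal sketch, assuming the same repo id as in the usage examples and that each non-empty entry decodes to an image format Pillow can read.

```python
# Minimal sketch: decode the first non-empty screenshot of the first streamed episode.
import base64
import io

from datasets import load_dataset
from PIL import Image

ds = load_dataset(
    "parquet",
    data_files="hf://datasets/smolagents/android-control@~parquet/default/train/*.parquet",
    streaming=True,
)["train"]

episode = next(iter(ds))
b64 = next(s for s in episode["screenshots_b64"] if s)  # skip any empty entries
img = Image.open(io.BytesIO(base64.b64decode(b64)))
print(img.size, img.mode)
```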