Upload README.md with huggingface_hub
README.md CHANGED
````diff
@@ -1,28 +1,22 @@
 ---
-
-
-
-
-
-
-
-
+dataset_info:
+  features: ['tile_id', 'city', 'bbox', 'chip_px', 'split', 'meta_json', 'naip_rgb', 's2_rgb', 's2_pseudo_rgb', 'dem_rgb', 'naip_ndvi', 'labels', 'landfire_family', 'cdl', 's2_NDVI', 's2_scl']
+  splits:
+  - name: train
+    num_bytes: null
+    num_examples: null
+  download_size: null
+  dataset_size: null
+license: apache-2.0
 language: en
+tags:
+- satellite
+- remote-sensing
+- naip
+- sentinel-2
+- dem
 ---
 
-#
-
-Each source shard is processed to a standalone Parquet at `data/train/train-*.parquet` and pushed in batches.
-
-### Load with 🤗 Datasets
-
-```python
-from datasets import load_dataset
-ds = load_dataset(
-    "parquet",
-    data_files={"train": "hf://datasets/gdurkin/naip-16d-city-cubes/data/train/train-*.parquet"},
-)["train"]
-print(ds)
-```
+# gdurkin/naip-16d-city-cubes
 
-
+Incremental uploads of per-shard Parquet files. This README is kept minimal during export and can be enriched later.
````
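The commit title and the new README's closing line describe the export workflow: each processed shard is pushed to the dataset repo as its own Parquet file. Below is a minimal sketch of what that loop could look like with `huggingface_hub`. The repo id and the `data/train/train-*.parquet` layout come from this page; the local `shard_dir` and the one-commit-per-shard policy are assumptions.

```python
from pathlib import Path

from huggingface_hub import HfApi

repo_id = "gdurkin/naip-16d-city-cubes"  # dataset repo named in this commit
shard_dir = Path("data/train")           # assumption: local directory of exported shards

api = HfApi()

# Push each standalone Parquet shard as its own commit, so the dataset
# grows incrementally and a failed upload only has to retry one file.
for shard in sorted(shard_dir.glob("train-*.parquet")):
    api.upload_file(
        path_or_fileobj=shard,
        path_in_repo=f"data/train/{shard.name}",
        repo_id=repo_id,
        repo_type="dataset",
        commit_message=f"Upload {shard.name} with huggingface_hub",
    )
```

One small commit per shard matches the "pushed in batches" note in the earlier README and keeps the export restartable; the `null` size fields in the front matter can presumably be filled in by a final pass once all shards are uploaded.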