klamike committed
Commit e89c621 · verified · 1 parent: 6a697af

Convert dataset to Parquet (part 00004-of-00005) (#5)


- Convert dataset to Parquet (part 00004-of-00005) (95c00c5f4ff2013c57b51594341bcd2d294738a6)
- Delete data file (8428b74bc411eda914cdb299fc2e500c782d1897)
- Delete loading script (12067e54b54a840e2eb74270c4e56f362fe04b06)
- Delete data file (221ca29582835f74d987aa12fb596424fc98aa49)
- Delete data file (002ecb2b204d1292d33c6c507b1a72d7918d2f2d)
- Delete data file (5875475082909245bbdeb85c4c086ee054a49777)
- Delete data file (d7abf090aec42b6c002dceedc483e2d38874234c)
- Delete data file (4f55e35a039d5fd2863070cba16d5eb7f5aa9312)
- Delete data file (f4ecfecc9ffc27ac1a18303ebcba060f0e3ffb5d)
- Delete data file (ece9f9f3cb73bd3859e521fda1d7a2256db9563c)
- Delete data file (e3a9b7f472c7df6c28444096d35ce1893f2979bd)
- Delete data file (cdc1c4b1e16e53b2c63a0221f6e2202c1b3f0b71)
- Delete data file (c952e379e44e7b3feac9f75033c99de9a2b658c6)
- Delete data file (7fbdb89826fc8c5ef1020a155f4191c9744b5f85)
- Delete data file (047a803a555e83d622af5a227431189f4453f5d1)
- Delete data file (9527fc2bb913dbaeee280de323bc9bab16f58c68)
- Delete data file (7048415ad5202ff8ca01437c3ff33899452eed6b)
- Delete data file (9c15dd7c83f330c5ae818611bf40fd7ddec89f54)
- Delete data file (436a46080ceba180c4bb883ae954ac647c8df987)
- Delete data file (3c1a438328631bce0c2f9c7618249a16f459bae2)
- Delete data file (8d91708940b45818496968a808317687e5e52d58)
- Delete data file (c0dbee235c405459a60cb3355c713d0b701b12f8)
- Delete data file (f080343178a2a35e3804b13f7fe7286688b5dec6)
- Delete data file (461af307be72b9e7ad0360ed84c02259ccd3ee7b)
- Delete data file (5f4990e38802df14a9d2c7e5a42308961fced041)
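This commit lands part 00004-of-00005 of the Parquet conversion: the remaining gzipped HDF5/JSON artifacts and the loading script are deleted, four existing LFS pointers are rewritten in place as renamed shards (test-00002 through test-00005), and shards test-00006 through test-00049 are added. A quick way to sanity-check a converted shard after downloading it is to read its Parquet footer with pyarrow; a minimal sketch (the local path is an assumption, the filename comes from the file list below):

```python
import pyarrow.parquet as pq

# Reads only the Parquet footer metadata, not the ~437 MB of row data.
meta = pq.read_metadata("NewYork2030/test-00006-of-00050.parquet")
print(meta.num_rows, meta.num_columns)
```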

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
Files changed (50)
  1. infeasible/ACOPF/meta.h5.gz → NewYork2030/test-00002-of-00050.parquet +2 -2
  2. case.json.gz → NewYork2030/test-00003-of-00050.parquet +2 -2
  3. infeasible/ACOPF/primal.h5.gz → NewYork2030/test-00004-of-00050.parquet +2 -2
  4. infeasible/ACOPF/dual.h5.gz → NewYork2030/test-00005-of-00050.parquet +2 -2
  5. NewYork2030/test-00006-of-00050.parquet +3 -0
  6. NewYork2030/test-00007-of-00050.parquet +3 -0
  7. NewYork2030/test-00008-of-00050.parquet +3 -0
  8. NewYork2030/test-00009-of-00050.parquet +3 -0
  9. NewYork2030/test-00010-of-00050.parquet +3 -0
  10. NewYork2030/test-00011-of-00050.parquet +3 -0
  11. NewYork2030/test-00012-of-00050.parquet +3 -0
  12. NewYork2030/test-00013-of-00050.parquet +3 -0
  13. NewYork2030/test-00014-of-00050.parquet +3 -0
  14. NewYork2030/test-00015-of-00050.parquet +3 -0
  15. NewYork2030/test-00016-of-00050.parquet +3 -0
  16. NewYork2030/test-00017-of-00050.parquet +3 -0
  17. NewYork2030/test-00018-of-00050.parquet +3 -0
  18. NewYork2030/test-00019-of-00050.parquet +3 -0
  19. NewYork2030/test-00020-of-00050.parquet +3 -0
  20. NewYork2030/test-00021-of-00050.parquet +3 -0
  21. NewYork2030/test-00022-of-00050.parquet +3 -0
  22. NewYork2030/test-00023-of-00050.parquet +3 -0
  23. NewYork2030/test-00024-of-00050.parquet +3 -0
  24. NewYork2030/test-00025-of-00050.parquet +3 -0
  25. NewYork2030/test-00026-of-00050.parquet +3 -0
  26. NewYork2030/test-00027-of-00050.parquet +3 -0
  27. NewYork2030/test-00028-of-00050.parquet +3 -0
  28. NewYork2030/test-00029-of-00050.parquet +3 -0
  29. NewYork2030/test-00030-of-00050.parquet +3 -0
  30. NewYork2030/test-00031-of-00050.parquet +3 -0
  31. NewYork2030/test-00032-of-00050.parquet +3 -0
  32. NewYork2030/test-00033-of-00050.parquet +3 -0
  33. NewYork2030/test-00034-of-00050.parquet +3 -0
  34. NewYork2030/test-00035-of-00050.parquet +3 -0
  35. NewYork2030/test-00036-of-00050.parquet +3 -0
  36. NewYork2030/test-00037-of-00050.parquet +3 -0
  37. NewYork2030/test-00038-of-00050.parquet +3 -0
  38. NewYork2030/test-00039-of-00050.parquet +3 -0
  39. NewYork2030/test-00040-of-00050.parquet +3 -0
  40. NewYork2030/test-00041-of-00050.parquet +3 -0
  41. NewYork2030/test-00042-of-00050.parquet +3 -0
  42. NewYork2030/test-00043-of-00050.parquet +3 -0
  43. NewYork2030/test-00044-of-00050.parquet +3 -0
  44. NewYork2030/test-00045-of-00050.parquet +3 -0
  45. NewYork2030/test-00046-of-00050.parquet +3 -0
  46. NewYork2030/test-00047-of-00050.parquet +3 -0
  47. NewYork2030/test-00048-of-00050.parquet +3 -0
  48. NewYork2030/test-00049-of-00050.parquet +3 -0
  49. PGLearn-Medium-NewYork2030.py +0 -427
  50. README.md +9 -1
infeasible/ACOPF/meta.h5.gz → NewYork2030/test-00002-of-00050.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3cdf08c3afbc922a4197343395c60575514249f27b610104a8de2bef9eee4d53
-size 89051
+oid sha256:e0ac052a462f29753a4bc0d94052ca3165685f94830b1a86321e7071a47f0bae
+size 437132897
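Each hunk in this diff edits a Git LFS pointer, not the file payload itself: Git tracks only a three-line stub (spec version, sha256 oid, byte size) while the actual ~437 MB Parquet shard lives in LFS storage. Since the oid is simply the sha256 of the payload, a downloaded shard can be checked against its pointer; a minimal sketch (hypothetical helper, not part of this repo):

```python
import hashlib

def lfs_oid(path: str, chunk: int = 1 << 20) -> str:
    """Return the sha256 hex digest that an LFS pointer would record."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):  # stream in 1 MiB chunks
            h.update(block)
    return h.hexdigest()
```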
case.json.gz → NewYork2030/test-00003-of-00050.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:85d6c19409208d7c432505c3f2835bb3fe11d164c9b06d785162951b059eba49
-size 1321609
+oid sha256:e53297644bc499c00435ab80a618a8e7c0a324ecfb0d5283e5afeb1b323f5fa6
+size 437487581
infeasible/ACOPF/primal.h5.gz → NewYork2030/test-00004-of-00050.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:553326e640b0ddb2a8b6b5d5c2135237a516d46e8410b73b6c750e407246b8da
-size 115067273
+oid sha256:98954e671e51335d113b67a9b0729204131a43c819d43ae88dd64cab674ef3eb
+size 437531820
infeasible/ACOPF/dual.h5.gz → NewYork2030/test-00005-of-00050.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:10351c35b04245ac5de1332d8406bae0a669bd8c656a495bdaa60270602227e3
-size 272619148
+oid sha256:791151eb89706307facbd1bc5a816bd1c1b1225f04af8bc46cb71c367e6d813d
+size 436977872
NewYork2030/test-00006-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64d29e14295995cb277c94572715d0604b303c6f09be1aec9f85c62c552495b5
+size 437054672
NewYork2030/test-00007-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db549375c2513eac5a860d3829289f446ea5bbf686124266b78b6c7e700d9660
+size 437343862
NewYork2030/test-00008-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf2dd6cc3d15112993bfff40b95df527859c247162afd68500beca091a037385
+size 436962873
NewYork2030/test-00009-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e5287546e3ccc8d08dbc524fd24d4fb74474e58c36cc6f90e5da89b1d4a0ab7
+size 436815762
NewYork2030/test-00010-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8fe55e0ebfd4240b5796c7509cb74c5d1e87f2952bfc1580ae70d55042036ec
+size 437400293
NewYork2030/test-00011-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a4677b928a1755d76cd95651e0b0488230b5a60dab57a708dbd41742cfee87a
+size 436831995
NewYork2030/test-00012-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fbfdcb481bc3c43491ba1d45118417dd773029bcb8902ab084a11a1bb94c43b
+size 436919390
NewYork2030/test-00013-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a412e94c52b4419815a28e94b16761f5b72874e0df85dd28a4125510e5cd7be9
+size 436775882
NewYork2030/test-00014-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94c8e3ab469d72863701bea2ce88325aec3dcb68c0d43fdf768d59da7baa6fb6
+size 437104648
NewYork2030/test-00015-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2afcc522e4d40c8f274bcbf9e99d858a38770d19deb04d0118b7d8145bb94015
+size 437590867
NewYork2030/test-00016-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9cee78778c6be8ddf24e0f3b1e13f6b860b1d893a659fb9be53c607bc575a952
+size 437199690
NewYork2030/test-00017-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f1d1d34ec5ae56a4957cc168917dc140f63b8fc33f538f879ae5b658464cdb4
+size 437207438
NewYork2030/test-00018-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c1081289466915c555453a9b1d879b6dbe26ddcaeb046cb71249061cd1aa468
+size 437237891
NewYork2030/test-00019-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d10a9ee5a9c584149e92fbaf86b36b11f80d095623b2ffa69f4df6864416661f
+size 437293259
NewYork2030/test-00020-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9562016acc4e1de296b71351f1af53aaca065f54c80c3497beebb220121de36b
+size 437305698
NewYork2030/test-00021-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29c8e7819576d5bc63d63fac9b03153b8576a8416c3e35db9d1831cbd788b1e9
+size 437231698
NewYork2030/test-00022-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e29bd43c11e0c80a740e6746a94e6762a865baf09e3d92d1278aa890e1d8d22b
+size 437293654
NewYork2030/test-00023-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3559a3d17452a43e8b992e029d791884d924048813849977bc14c79cfc9dd155
+size 437454054
NewYork2030/test-00024-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21f116a59e338ded05552e9fac9b9d4022a78bb5833e5f22c518e61dc1668cb6
+size 437148617
NewYork2030/test-00025-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08ac88fc55fe52b287c0051ff0a8d9b5917aa7d80f272e0b9a565fd8c4484ddd
+size 437023560
NewYork2030/test-00026-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9d0aba99dbf67650786ffe1333ddb65f4a93bbcab06561003dd4f722ed6140d
+size 437069751
NewYork2030/test-00027-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d24f701b24d41371418e287fb3cad59f39ce744ca6a3033b054ef2c3afc11dcd
+size 437294736
NewYork2030/test-00028-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbf004fec4f2d3e0af01de2a56be7461ed6f3814b643b7ff61857a9b17e2ac30
+size 437247761
NewYork2030/test-00029-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6bd5f08ba547cdcb0b6c6d941918d71085611e60310703eba67cffa72af77c56
+size 437333375
NewYork2030/test-00030-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e2074c700d80f31cb3b250db4a1a872c5ef115c4c254a52edcfbd072e99ca13
+size 437312748
NewYork2030/test-00031-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9af79e7e421e3dea654f1918a6cd7b9c24a5a016d1569dc57dcbfdf4575820e5
+size 437108844
NewYork2030/test-00032-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b887fe895d2a319ad3594a37d7a0249b10d50a9a30dc986326b76bc458d3c58
+size 436871598
NewYork2030/test-00033-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:12230735d5a26add23b91b437eab21e9648215de89ed36578c245f2d10ceec79
+size 437438127
NewYork2030/test-00034-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a84f0d6858bf8a28579b059ddeaaba17c84d48b60f6d59951e20e002568bfd8a
+size 437125707
NewYork2030/test-00035-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc621cfbdf0c7ddcabe1a538188f6798fe2e2360f7847948092a40a9d167870e
+size 437623065
NewYork2030/test-00036-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b649fe01330a37efa2be24a860c39755616e44de21fca285c31e8068f0803b71
+size 437062270
NewYork2030/test-00037-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:729af41b9013c694893bc24edbe63516ed750c77b758e8f5c4176565f357e632
+size 437223490
NewYork2030/test-00038-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d208f6bc9903217af29bf8b81a3499841baf24255911e60e3480dcb19b0302fe
+size 437009299
NewYork2030/test-00039-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7337385c1955367aa11c3b3a4861c7424a54513195a92f0c4cebf9879e3fcd93
+size 437187006
NewYork2030/test-00040-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4645ea8997ec2af62809098a9942305edf0582bb65d2c1eae3d96ef85f950590
+size 437359356
NewYork2030/test-00041-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ef507e91d614b169771901ad0cc3e0ab0dea9bb3045fb522d39f2c553fc34f9
+size 437111104
NewYork2030/test-00042-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9923c188f6e719235a810abbc51c1cd2df9e7dbb10bb10654fb0fcc782706735
+size 437129187
NewYork2030/test-00043-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38c93f246a3f3e1e1b32e6325ea356341c515d728d7cf082b4711b63d0f59c1b
+size 436905611
NewYork2030/test-00044-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35a51ae7e3156ba0e63d25eeb6a801751ba2b8c719e0f5d14a3848432214c2e1
+size 437430438
NewYork2030/test-00045-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:145294e84338a60e98be88e4d8e9f774829a40e713a04865446795a0932b1f51
+size 437084249
NewYork2030/test-00046-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:529dfbd5bde8734832b9a49b8839b6e4099b257ba09bc1a9e7c397cf76492350
+size 437313503
NewYork2030/test-00047-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72893cab6d454f00b33d311bafce32e530f37a89880d70eda16d87ab8f0f2bc5
+size 437329202
NewYork2030/test-00048-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:278e67d1f90b1b95fb922cc1278acfd025a42eb895e9322e18c9e309525b5545
+size 437194660
NewYork2030/test-00049-of-00050.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b47a788ea0c947489a1c3b612c98d9e0383d9977d35c69a513d766fdbd2ef24c
+size 437360348
PGLearn-Medium-NewYork2030.py DELETED
@@ -1,427 +0,0 @@
-from __future__ import annotations
-from dataclasses import dataclass
-from pathlib import Path
-import json
-import shutil
-
-import datasets as hfd
-import h5py
-import pgzip as gzip
-import pyarrow as pa
-
-# ┌──────────────┐
-# │   Metadata   │
-# └──────────────┘
-
-@dataclass
-class CaseSizes:
-    n_bus: int
-    n_load: int
-    n_gen: int
-    n_branch: int
-
-CASENAME = "NewYork2030"
-SIZES = CaseSizes(n_bus=1576, n_load=1446, n_gen=323, n_branch=2427)
-NUM_TRAIN = 398000
-NUM_TEST = 99501
-NUM_INFEASIBLE = 2499
-SPLITFILES = {}
-
-URL = "https://huggingface.co/datasets/PGLearn/PGLearn-Medium-NewYork2030"
-DESCRIPTION = """\
-The NewYork2030 PGLearn optimal power flow dataset, part of the PGLearn-Medium collection. \
-"""
-VERSION = hfd.Version("1.0.0")
-DEFAULT_CONFIG_DESCRIPTION="""\
-This configuration contains feasible input, primal solution, and dual solution data \
-for the ACOPF and DCOPF formulations on the {case} system. For case data, \
-download the case.json.gz file from the `script` branch of the repository. \
-https://huggingface.co/datasets/PGLearn/PGLearn-Medium-NewYork2030/blob/script/case.json.gz
-"""
-USE_ML4OPF_WARNING = """
-================================================================================================
-Loading PGLearn-Medium-NewYork2030 through the `datasets.load_dataset` function may be slow.
-
-Consider using ML4OPF to directly convert to `torch.Tensor`; for more info see:
-https://github.com/AI4OPT/ML4OPF?tab=readme-ov-file#manually-loading-data
-
-Or, use `huggingface_hub.snapshot_download` and an HDF5 reader; for more info see:
-https://huggingface.co/datasets/PGLearn/PGLearn-Medium-NewYork2030#downloading-individual-files
-================================================================================================
-"""
-CITATION = """\
-@article{klamkinpglearn,
-  title={{PGLearn - An Open-Source Learning Toolkit for Optimal Power Flow}},
-  author={Klamkin, Michael and Tanneau, Mathieu and Van Hentenryck, Pascal},
-  year={2025},
-}\
-"""
-
-IS_COMPRESSED = True
-
-# ┌──────────────────┐
-# │   Formulations   │
-# └──────────────────┘
-
-def acopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-    features = {}
-    if primal: features.update(acopf_primal_features(sizes))
-    if dual: features.update(acopf_dual_features(sizes))
-    if meta: features.update({f"ACOPF/{k}": v for k, v in META_FEATURES.items()})
-    return features
-
-def dcopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-    features = {}
-    if primal: features.update(dcopf_primal_features(sizes))
-    if dual: features.update(dcopf_dual_features(sizes))
-    if meta: features.update({f"DCOPF/{k}": v for k, v in META_FEATURES.items()})
-    return features
-
-def socopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-    features = {}
-    if primal: features.update(socopf_primal_features(sizes))
-    if dual: features.update(socopf_dual_features(sizes))
-    if meta: features.update({f"SOCOPF/{k}": v for k, v in META_FEATURES.items()})
-    return features
-
-FORMULATIONS_TO_FEATURES = {
-    "ACOPF": acopf_features,
-    "DCOPF": dcopf_features,
-    # "SOCOPF": socopf_features,
-}
-
-# ┌───────────────────┐
-# │   BuilderConfig   │
-# └───────────────────┘
-
-class PGLearnMediumNewYork2030Config(hfd.BuilderConfig):
-    """BuilderConfig for PGLearn-Medium-NewYork2030.
-    By default, primal solution data, metadata, input, casejson, are included for the train and test splits.
-
-    To modify the default configuration, pass attributes of this class to `datasets.load_dataset`:
-
-    Attributes:
-        formulations (list[str]): The formulation(s) to include, e.g. ["ACOPF", "DCOPF"]
-        primal (bool, optional): Include primal solution data. Defaults to True.
-        dual (bool, optional): Include dual solution data. Defaults to False.
-        meta (bool, optional): Include metadata. Defaults to True.
-        input (bool, optional): Include input data. Defaults to True.
-        casejson (bool, optional): Include case.json data. Defaults to True.
-        train (bool, optional): Include training samples. Defaults to True.
-        test (bool, optional): Include testing samples. Defaults to True.
-        infeasible (bool, optional): Include infeasible samples. Defaults to False.
-    """
-    def __init__(self,
-        formulations: list[str],
-        primal: bool=True, dual: bool=False, meta: bool=True, input: bool = True, casejson: bool=True,
-        train: bool=True, test: bool=True, infeasible: bool=False,
-        compressed: bool=IS_COMPRESSED, **kwargs
-    ):
-        super(PGLearnMediumNewYork2030Config, self).__init__(version=VERSION, **kwargs)
-
-        self.case = CASENAME
-        self.formulations = formulations
-
-        self.primal = primal
-        self.dual = dual
-        self.meta = meta
-        self.input = input
-        self.casejson = casejson
-
-        self.train = train
-        self.test = test
-        self.infeasible = infeasible
-
-        self.gz_ext = ".gz" if compressed else ""
-
-    @property
-    def size(self):
-        return SIZES
-
-    @property
-    def features(self):
-        features = {}
-        if self.casejson: features.update(case_features())
-        if self.input: features.update(input_features(SIZES))
-        for formulation in self.formulations:
-            features.update(FORMULATIONS_TO_FEATURES[formulation](SIZES, self.primal, self.dual, self.meta))
-        return hfd.Features(features)
-
-    @property
-    def splits(self):
-        splits: dict[hfd.Split, dict[str, str | int]] = {}
-        if self.train:
-            splits[hfd.Split.TRAIN] = {
-                "name": "train",
-                "num_examples": NUM_TRAIN
-            }
-        if self.test:
-            splits[hfd.Split.TEST] = {
-                "name": "test",
-                "num_examples": NUM_TEST
-            }
-        if self.infeasible:
-            splits[hfd.Split("infeasible")] = {
-                "name": "infeasible",
-                "num_examples": NUM_INFEASIBLE
-            }
-        return splits
-
-    @property
-    def urls(self):
-        urls: dict[str, None | str | list] = {
-            "case": None, "train": [], "test": [], "infeasible": [],
-        }
-
-        if self.casejson:
-            urls["case"] = f"case.json" + self.gz_ext
-        else:
-            urls.pop("case")
-
-        split_names = []
-        if self.train: split_names.append("train")
-        if self.test: split_names.append("test")
-        if self.infeasible: split_names.append("infeasible")
-
-        for split in split_names:
-            if self.input: urls[split].append(f"{split}/input.h5" + self.gz_ext)
-            for formulation in self.formulations:
-                if self.primal:
-                    filename = f"{split}/{formulation}/primal.h5" + self.gz_ext
-                    if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                    else: urls[split].append(filename)
-                if self.dual:
-                    filename = f"{split}/{formulation}/dual.h5" + self.gz_ext
-                    if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                    else: urls[split].append(filename)
-                if self.meta:
-                    filename = f"{split}/{formulation}/meta.h5" + self.gz_ext
-                    if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                    else: urls[split].append(filename)
-        return urls
-
-# ┌────────────────────┐
-# │   DatasetBuilder   │
-# └────────────────────┘
-
-class PGLearnMediumNewYork2030(hfd.ArrowBasedBuilder):
-    """DatasetBuilder for PGLearn-Medium-NewYork2030.
-    The main interface is `datasets.load_dataset` with `trust_remote_code=True`, e.g.
-
-    ```python
-    from datasets import load_dataset
-    ds = load_dataset("PGLearn/PGLearn-Medium-NewYork2030", trust_remote_code=True,
-        # modify the default configuration by passing kwargs
-        formulations=["DCOPF"],
-        dual=False,
-        meta=False,
-    )
-    ```
-    """
-
-    DEFAULT_WRITER_BATCH_SIZE = 10000
-    BUILDER_CONFIG_CLASS = PGLearnMediumNewYork2030Config
-    DEFAULT_CONFIG_NAME=CASENAME
-    BUILDER_CONFIGS = [
-        PGLearnMediumNewYork2030Config(
-            name=CASENAME, description=DEFAULT_CONFIG_DESCRIPTION.format(case=CASENAME),
-            formulations=list(FORMULATIONS_TO_FEATURES.keys()),
-            primal=True, dual=True, meta=True, input=True, casejson=False,
-            train=True, test=True, infeasible=False,
-        )
-    ]
-
-    def _info(self):
-        return hfd.DatasetInfo(
-            features=self.config.features, splits=self.config.splits,
-            description=DESCRIPTION + self.config.description,
-            homepage=URL, citation=CITATION,
-        )
-
-    def _split_generators(self, dl_manager: hfd.DownloadManager):
-        hfd.logging.get_logger().warning(USE_ML4OPF_WARNING)
-
-        filepaths = dl_manager.download_and_extract(self.config.urls)
-
-        splits: list[hfd.SplitGenerator] = []
-        if self.config.train:
-            splits.append(hfd.SplitGenerator(
-                name=hfd.Split.TRAIN,
-                gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["train"]), n_samples=NUM_TRAIN),
-            ))
-        if self.config.test:
-            splits.append(hfd.SplitGenerator(
-                name=hfd.Split.TEST,
-                gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["test"]), n_samples=NUM_TEST),
-            ))
-        if self.config.infeasible:
-            splits.append(hfd.SplitGenerator(
-                name=hfd.Split("infeasible"),
-                gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["infeasible"]), n_samples=NUM_INFEASIBLE),
-            ))
-        return splits
-
-    def _generate_tables(self, case_file: str | None, data_files: tuple[hfd.utils.track.tracked_str | list[hfd.utils.track.tracked_str]], n_samples: int):
-        case_data: str | None = json.dumps(json.load(open_maybe_gzip_cat(case_file))) if case_file is not None else None
-        data: dict[str, h5py.File] = {}
-        for file in data_files:
-            v = h5py.File(open_maybe_gzip_cat(file), "r")
-            if isinstance(file, list):
-                k = "/".join(Path(file[0].get_origin()).parts[-3:-1]).split(".")[0]
-            else:
-                k = "/".join(Path(file.get_origin()).parts[-2:]).split(".")[0]
-            data[k] = v
-        for k in list(data.keys()):
-            if "/input" in k: data[k.split("/", 1)[1]] = data.pop(k)
-
-        batch_size = self._writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
-        for i in range(0, n_samples, batch_size):
-            effective_batch_size = min(batch_size, n_samples - i)
-
-            sample_data = {
-                f"{dk}/{k}":
-                    hfd.features.features.numpy_to_pyarrow_listarray(v[i:i + effective_batch_size, ...])
-                for dk, d in data.items() for k, v in d.items() if f"{dk}/{k}" in self.config.features
-            }
-
-            if case_data is not None:
-                sample_data["case/json"] = pa.array([case_data] * effective_batch_size)
-
-            yield i, pa.Table.from_pydict(sample_data)
-
-        for f in data.values():
-            f.close()
-
-# ┌──────────────┐
-# │   Features   │
-# └──────────────┘
-
-FLOAT_TYPE = "float32"
-INT_TYPE = "int64"
-BOOL_TYPE = "bool"
-STRING_TYPE = "string"
-
-def case_features():
-    # FIXME: better way to share schema of case data -- need to treat jagged arrays
-    return {
-        "case/json": hfd.Value(STRING_TYPE),
-    }
-
-META_FEATURES = {
-    "meta/seed": hfd.Value(dtype=INT_TYPE),
-    "meta/formulation": hfd.Value(dtype=STRING_TYPE),
-    "meta/primal_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/dual_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/primal_status": hfd.Value(dtype=STRING_TYPE),
-    "meta/dual_status": hfd.Value(dtype=STRING_TYPE),
-    "meta/termination_status": hfd.Value(dtype=STRING_TYPE),
-    "meta/build_time": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/extract_time": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/solve_time": hfd.Value(dtype=FLOAT_TYPE),
-}
-
-def input_features(sizes: CaseSizes):
-    return {
-        "input/pd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "input/qd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "input/gen_status": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=BOOL_TYPE)),
-        "input/branch_status": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=BOOL_TYPE)),
-        "input/seed": hfd.Value(dtype=INT_TYPE),
-    }
-
-def acopf_primal_features(sizes: CaseSizes):
-    return {
-        "ACOPF/primal/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-def acopf_dual_features(sizes: CaseSizes):
-    return {
-        "ACOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/sm_fr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/sm_to": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-    }
-def dcopf_primal_features(sizes: CaseSizes):
-    return {
-        "DCOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-def dcopf_dual_features(sizes: CaseSizes):
-    return {
-        "DCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-    }
-def socopf_primal_features(sizes: CaseSizes):
-    return {
-        "SOCOPF/primal/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-def socopf_dual_features(sizes: CaseSizes):
-    return {
-        "SOCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/jabr": hfd.Array2D(shape=(sizes.n_branch, 4), dtype=FLOAT_TYPE),
-        "SOCOPF/dual/sm_fr": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-        "SOCOPF/dual/sm_to": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-        "SOCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-
-# ┌───────────────┐
-# │   Utilities   │
-# └───────────────┘
-
-def open_maybe_gzip_cat(path: str | list):
-    if isinstance(path, list):
-        dest = Path(path[0]).parent.with_suffix(".h5")
-        if not dest.exists():
-            with open(dest, "wb") as dest_f:
-                for piece in path:
-                    with open(piece, "rb") as piece_f:
-                        shutil.copyfileobj(piece_f, dest_f)
-                    shutil.rmtree(Path(piece).parent)
-        path = dest.as_posix()
-    return gzip.open(path, "rb") if path.endswith(".gz") else open(path, "rb")
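Although the script is deleted, its DEFAULT_CONFIG_DESCRIPTION still points at where the raw network data lives: case.json.gz on the repo's `script` branch. A minimal sketch of fetching and parsing it with `huggingface_hub` (the branch and filename come from the description above; treating this as the supported access path is an assumption):

```python
import gzip
import json

from huggingface_hub import hf_hub_download

# case.json.gz lives on the `script` branch per the deleted script's
# DEFAULT_CONFIG_DESCRIPTION; fetch it and parse the gzipped JSON.
path = hf_hub_download(
    repo_id="PGLearn/PGLearn-Medium-NewYork2030",
    repo_type="dataset",
    revision="script",
    filename="case.json.gz",
)
with gzip.open(path, "rt") as f:
    case = json.load(f)
```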
README.md CHANGED
@@ -172,6 +172,14 @@ dataset_info:
   - name: test
     num_bytes: 24721098076
     num_examples: 99501
-  download_size: 86922797611
+  download_size: 109388692790
   dataset_size: 123604496576
+configs:
+- config_name: NewYork2030
+  data_files:
+  - split: train
+    path: NewYork2030/train-*
+  - split: test
+    path: NewYork2030/test-*
+  default: true
 ---
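The `configs` block added above is what lets the Hub serve the dataset from the Parquet shards directly, with no loading script and no `trust_remote_code`. A minimal sketch of the new loading path, assuming the default config above; `streaming=True` is optional and avoids materializing the ~109 GB download:

```python
from datasets import load_dataset

# NewYork2030 is marked `default: true`, so no config name is needed.
ds = load_dataset("PGLearn/PGLearn-Medium-NewYork2030", split="test", streaming=True)
print(next(iter(ds)).keys())
```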