Update README.md
README.md CHANGED
@@ -20,4 +20,49 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+license: mit
+language:
+- en
+pretty_name: NYT Connections Answers
+size_categories:
+- n<1K
 ---
+
+Made with the following script using a local copy of [this website](https://word.tips/todays-nyt-connections-answers/):
+
+```python
+import re
+
+from bs4 import BeautifulSoup
+from datasets import Dataset
+
+with open("Today’s NYT Connections Answers Jan 5, #574 - Daily Updates & Hints - Word Tips.htm", encoding="utf-8") as f:
+    html = f.read()
+
+soup = BeautifulSoup(html, "html.parser")
+texts = re.findall(r'"([^"]*)"', "".join(soup.find_all("script")[9]))
+texts = [" ".join(text.split()).replace(" ,", ", ") for text in texts if ":" in text and (text.startswith("🟡") or text.startswith("🟢") or text.startswith("🔵") or text.startswith("🟣"))]
+levels = {
+    "🟡": 1,
+    "🟢": 2,
+    "🔵": 3,
+    "🟣": 4
+}
+
+def gen():
+    for group in texts:
+        level_id = group[:1]
+        group = group[2:]
+        category, group = group.split(":")
+        entry = {
+            "level": levels[level_id],
+            "level_id": level_id,
+            "category": category,
+            "words": [word.strip() for word in group.split(",")]
+        }
+        #pprint(entry)
+        yield entry
+
+dataset = Dataset.from_generator(gen)
+dataset.push_to_hub("T145/connections")
+```
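
For reference, once the script has pushed the data, the dataset can be read back with the standard `datasets` loading API. A minimal sketch, assuming the single `train` split declared in the front matter above:

```python
from datasets import load_dataset

# Read the published dataset back from the Hub (repo id matches push_to_hub above).
connections = load_dataset("T145/connections", split="train")

# Each row follows the schema built in gen(): level, level_id, category, words.
print(connections[0])
```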