Update README.md
README.md
CHANGED
@@ -37,11 +37,23 @@ If you use the TOMT-KIS dataset, please cite the corresponding paper:

# Use of TOMT-KIS

+You can download our full dataset using the `datasets` library.
+
```
from datasets import load_dataset
tomt_kis = load_dataset("webis/tip-of-my-tongue-known-item-search")
```

+For better handling of RAM usage, we split our dataset into two files:
+- `reddit-tomt-submissions-1.jsonl.gz`, 500,000 records, 503 MB
+- `reddit-tomt-submissions-2.jsonl.gz`, 779,425 records, 786 MB
+
+You can load them individually using:
+```
+tomt_kis1 = load_dataset("webis/tip-of-my-tongue-known-item-search", data_files='reddit-tomt-submissions-1.jsonl.gz')
+tomt_kis2 = load_dataset("webis/tip-of-my-tongue-known-item-search", data_files='reddit-tomt-submissions-2.jsonl.gz')
+```
+

# Dataset Structure

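Loading the two files separately, as in the added README lines, leaves you with two separate `DatasetDict`s. A minimal sketch of recombining them with `concatenate_datasets` from the `datasets` library, plus a streaming variant for tight-RAM setups; the variable names and the `streaming=True` approach are illustrative assumptions, not something the README prescribes:

```
from datasets import load_dataset, concatenate_datasets

# Load each file separately, as in the README; each one lands in the
# "train" split of the returned DatasetDict.
part1 = load_dataset(
    "webis/tip-of-my-tongue-known-item-search",
    data_files="reddit-tomt-submissions-1.jsonl.gz",
)["train"]
part2 = load_dataset(
    "webis/tip-of-my-tongue-known-item-search",
    data_files="reddit-tomt-submissions-2.jsonl.gz",
)["train"]

# Recombine into one Dataset: 500,000 + 779,425 = 1,279,425 records.
full = concatenate_datasets([part1, part2])
print(len(full))

# Assumption: if even one file is too large for RAM, streaming=True
# iterates records lazily instead of materializing them in memory.
stream = load_dataset(
    "webis/tip-of-my-tongue-known-item-search",
    data_files="reddit-tomt-submissions-1.jsonl.gz",
    streaming=True,
)["train"]
first_record = next(iter(stream))  # a single record as a plain dict
```

Note that `concatenate_datasets` requires both parts to share the same features, which should hold here since both files follow the same record schema.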