Update README.md
README.md (CHANGED)
@@ -2,58 +2,36 @@
license: apache-2.0
---

- # TOMT-KIS Dataset
- In the following section, we describe the process of generating our TOMT-KIS dataset.

- ### Source Data

- To create our TOMT-KIS dataset, we crawled all questions and discussions from the [tip-of-my-tongue subreddit](https://www.reddit.com/r/tipofmytongue/?rdt=47529). Note that answers are not explicitly tagged in the subreddit discussions, but the guidelines state that the asker should reply with "Solved!" to the post with the correct answer. However, analyzing different questions from the subreddit led to the observation that askers often do not reply to the correct answer with "Solved!" but with other phrases (e.g., "This was it!"). In addition to such replies by the asker in the comment thread of the question, a further metadata field indicates whether a question is solved.
- ### Answer Identification

- For questions that the moderator metadata indicates as solved, we use four manually created lexical patterns as a rule-based answer-identification method: if a post from the asker in reply to another post contains 'solved', 'thank', 'yes', or 'amazing', we treat that post as the answer. On 50 random questions and 50 questions for which this heuristic identified an answer (none of them used to develop the rules), the achieved precision is 92% at a recall of 78%.
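The four lexical patterns above can be pictured as simple substring checks. The following is a minimal sketch, not the original implementation; the post layout and function names are hypothetical:

```python
# Minimal sketch of the rule-based answer identification described above.
# The four patterns come from the text; the post layout is hypothetical.
PATTERNS = ("solved", "thank", "yes", "amazing")

def is_answer_marker(reply_text):
    """True if an asker's reply matches one of the four lexical patterns."""
    text = reply_text.lower()
    return any(pattern in text for pattern in PATTERNS)

def find_chosen_answer(posts, asker):
    """Return the post that a reply by the asker marks as the answer.

    `posts` is a hypothetical list of dicts with `author`, `text`, and
    `parent` (the dict of the post being replied to, or None).
    """
    for post in posts:
        if post["author"] == asker and post["parent"] is not None:
            if is_answer_marker(post["text"]):
                return post["parent"]
    return None
```

Plain substring matching is deliberately loose (e.g., 'yes' also matches 'eyes'), which is consistent with a precision of 92% rather than 100%.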
- ### Dataset Analysis

- Many questions in the tip-of-my-tongue subreddit have assigned tags that roughly describe an assumed category of the known item. Besides \[tomt\] (which all questions have), a question has between 0 and 14 tags (average: 0.89), some of which directly express uncertainty (e.g., \[2010s?\] vs. \[2010s\]). For further analyses, we manually merged the original tags into larger categories (e.g., combining \[song\], \[music\], etc. into a "Music" category).
- The ten most frequent tags are:

- 1. \[song\]
- 2. \[movie\]
- 3. \[video\]
- 4. \[music\]
- 5. \[book\]
- 6. \[2000s\]
- 7. \[game\]
- 8. \[2010s\]
- 9. \[tv show\]
- 10. \[music video\]
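The manual tag merging described above can be pictured as a simple lookup table. The mapping below is a hypothetical excerpt, not the actual merge table used for the analysis:

```python
# Hypothetical excerpt of the manual tag-to-category merge table;
# the actual table covers far more tags.
TAG_TO_CATEGORY = {
    "song": "Music",
    "music": "Music",
    "music video": "Music",
    "movie": "Movies",
    "book": "Books",
}

def merge_tags(tags):
    """Map raw subreddit tags to merged categories; unknown tags pass through."""
    return [TAG_TO_CATEGORY.get(tag, tag) for tag in tags]
```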
- As a proxy for the "performance" of a known-item question, we consider the time elapsed until the solution was posted.

- Overall, the titles of questions are much shorter (14 words on average across all categories) than the explanations in the content field (81 words on average), while the identified answers are again rather short (12 words on average). There are some notable differences between categories: for instance, titles of movie questions are longer (15 words on average), and answers in the book category are longer (16 words on average).
- ## Use of TOMT-KIS

Our TOMT-KIS dataset is available in JSONL format. For each question, we include all the attributes available in the crawled data and add the chosen answer when our heuristic could extract it.
```jsonl
{
@@ -98,7 +76,7 @@ Our TOMT-KIS dataset is available in JSONL format. For each question, we include
```

Overall, TOMT-KIS includes 128 attributes for each question, such as
@@ -116,18 +94,3 @@ We run our precision-oriented answer identification heuristic on questions tagge
- `chosen_answer` (string): the extracted answer
- `links_on_answer_path` (list of strings): contains all Reddit-external pages that were found in posts between the question and the post with the answer.
- ## Citation Information

- ```
- @InProceedings{froebe:2023c,
-   author = {Maik Fr{\"o}be and Eric Oliver Schmidt and Matthias Hagen},
-   booktitle = {QPP++ 2023: Query Performance Prediction and Its Evaluation in New Tasks},
-   month = apr,
-   publisher = {CEUR-WS.org},
-   series = {CEUR Workshop Proceedings},
-   site = {Dublin, Ireland},
-   title = {{A Large-Scale Dataset for Known-Item Question Performance Prediction}},
-   year = 2023
- }
- ```
license: apache-2.0
---

+ # The TOMT-KIS (tip-of-my-tongue-known-item-search) Dataset
+ Searchers who cannot resolve a known-item information need with a search engine might post a question on a question-answering platform, hoping that the discussion with other people helps to identify the item. TOMT-KIS is a large-scale dataset of 1.28 million known-item questions from the r/tipofmytongue subreddit, introduced in the [QPP++@ECIR'23](https://qpp.dei.unipd.it/) paper [A Large-Scale Dataset for Known-Item Question Performance Prediction](https://downloads.webis.de/publications/papers/froebe_2023c.pdf).
+ # Citation

+ If you use the TOMT-KIS dataset, please cite the corresponding paper:
+ ```
+ @InProceedings{froebe:2023c,
+   author = {Maik Fr{\"o}be and Eric Oliver Schmidt and Matthias Hagen},
+   booktitle = {QPP++ 2023: Query Performance Prediction and Its Evaluation in New Tasks},
+   month = apr,
+   publisher = {CEUR-WS.org},
+   series = {CEUR Workshop Proceedings},
+   site = {Dublin, Ireland},
+   title = {{A Large-Scale Dataset for Known-Item Question Performance Prediction}},
+   year = 2023
+ }
+ ```
+ # Use of TOMT-KIS
+ ToDo: Example with the Hugging Face datasets API.
+ # Dataset Structure

Our TOMT-KIS dataset is available in JSONL format. For each question, we include all the attributes available in the crawled data and add the chosen answer when our heuristic could extract it.

+ ## Data Instances

```jsonl
{
```

+ ## Data Fields

Overall, TOMT-KIS includes 128 attributes for each question, such as
- `chosen_answer` (string): the extracted answer
- `links_on_answer_path` (list of strings): contains all Reddit-external pages that were found in posts between the question and the post with the answer.
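As a small illustration of these two fields, the JSONL dump can be streamed with plain Python. The file name is a placeholder, and the records are assumed to carry only the fields documented above:

```python
import json

def links_from_solved(path):
    """Collect the Reddit-external links that occurred on the answer path
    of every question for which an answer was extracted."""
    links = []
    with open(path) as f:
        for line in f:
            question = json.loads(line)
            if question.get("chosen_answer"):
                links.extend(question.get("links_on_answer_path", []))
    return links
```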