---
license: apache-2.0
---

# TOMT-KIS Dataset

## Description

Searchers who cannot resolve a known-item information need with a search engine (e.g., because they only remember vague details about a movie from some years ago) might instead post their question on a question answering platform, hoping that the discussion with other people helps to identify the item.

TOMT-KIS is a large-scale dataset of 1.28 million known-item questions (47% of which have an identified answer) from the r/tipofmytongue subreddit that we proposed in [A Large-Scale Dataset for Known-Item Question Performance Prediction](https://downloads.webis.de/publications/papers/froebe_2023c.pdf). As the "performance" of a known-item question, we use the time it took the community to solve the question (or the absence of a solution).


## Dataset Creation

In the following, we describe how we created our TOMT-KIS dataset.

### Source Data

To create our TOMT-KIS dataset, we crawled all questions and discussions from the [tip-of-my-tongue subreddit](https://www.reddit.com/r/tipofmytongue/?rdt=47529). Note that answers are not explicitly tagged in the subreddit discussions, but the guidelines state that the asker should reply with "Solved!" to the post with the correct answer. However, analyzing different questions from the subreddit led to the observation that askers often do not reply to the correct answer with "Solved!" but with other phrases (e.g., "This was it!"). In addition to such replies by the author of the question in the comment thread, there is a further metadata field that indicates whether a question is solved.

### Answer Identification

For questions that the moderator metadata indicates as solved, we use four manually created lexical patterns as a rule-based answer identification method: if a reply by the asker to some post contains 'solved', 'thank', 'yes', or 'amazing', we view that post as the answer. On 50 random questions and 50 questions for which this heuristic identified an answer (none of them used to develop the rules), the achieved precision is 92% at a recall of 78%.
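
The following Python sketch illustrates this rule on the nested `comments` structure shown under "Data Instances" below. It is a minimal re-implementation of the idea with our own function names, not the exact code used to build the dataset:

```python
# Minimal sketch of the rule-based answer identification; function and
# variable names are ours, not the exact implementation behind TOMT-KIS.
PATTERNS = ("solved", "thank", "yes", "amazing")


def find_answer(question):
    """Return the comment the asker confirmed as the answer, or None."""
    asker = question["author"]

    def walk(comment):
        # A reply by the asker that contains one of the patterns marks
        # its parent comment as the answer.
        for reply in comment.get("comments", []):
            if reply["author"] == asker and any(
                pattern in reply["body"].lower() for pattern in PATTERNS
            ):
                return comment
            found = walk(reply)
            if found is not None:
                return found
        return None

    for top_level_comment in question.get("comments", []):
        answer = walk(top_level_comment)
        if answer is not None:
            return answer
    return None
```

For the example question under "Data Instances", `find_answer` returns the "Muzzy?" comment because the asker replied to it with "thank you!!!".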


### Dataset Analysis

Many questions in the tip-of-my-tongue subreddit have assigned tags that roughly describe an assumed category of the known item. Besides \[tomt\] (all questions have this tag), a question has between 0 and 14 tags (average: 0.89), some of which directly express uncertainty (e.g., \[2010s?\] vs. \[2010s\]). For further analyses, we manually merged the original tags into larger categories (e.g., combining \[song\], \[music\], etc. into a “Music” category).

The ten most frequent tags are:
1. \[song\]
2. \[movie\]
3. \[video\]
4. \[music\]
5. \[book\]
6. \[2000s\]
7. \[game\]
8. \[2010s\]
9. \[tv show\]
10. \[music video\]

As a proxy for the "performance" of a known-item question, we consider the time elapsed until the solution was posted.

Overall, the titles of questions are much shorter (14 words on average across all categories) than the explanations in the content field (81 words on average), while the identified answers are again rather short (12 words on average). There are some notable differences between categories: for example, the titles of movie questions are longer (15 words on average), and the answers in the book category are longer (16 words on average).


## Use of TOMT-KIS

TOMT-KIS can, for instance, be used to train and evaluate known-item question performance prediction approaches: the time it took the community to solve a question (or the absence of a solution) serves as the performance signal.
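
As a minimal sketch of this use case, the time-to-solve signal can be derived from the `created_utc` and `solved_utc` fields described below (the file name `tomt-kis.jsonl` is our assumption for a local copy, not necessarily the name of the data file in this repository):

```python
import json

# Sketch: derive the time-to-solve signal for all solved questions.
# "tomt-kis.jsonl" is an assumed local file name, not necessarily the
# name of the data file in this repository.
with open("tomt-kis.jsonl", encoding="utf-8") as jsonl_file:
    for line in jsonl_file:
        question = json.loads(line)
        if question.get("answer_detected"):
            hours_to_solve = (
                int(question["solved_utc"]) - int(question["created_utc"])
            ) / 3600
            print(f'{question["id"]}: solved after {hours_to_solve:.1f} hours')
```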

## Dataset Structure

Our TOMT-KIS dataset is available in JSONL format. For each question, we include all the attributes available in the crawled data and add the chosen answer when our heuristic could extract it.

### Data Instances

```json
{
  "id": "2gbnla",
  "author": "alany611",
  "url": "http://www.reddit.com/r/tipofmytongue/comments/2gbnla/tomt_1990s_educational_cartoon_for_kids_to_learn/",
  "permalink": "/r/tipofmytongue/comments/2gbnla/tomt_1990s_educational_cartoon_for_kids_to_learn/",
  "title": "[TOMT] 1990's Educational Cartoon for kids to learn French",
  "content": "Hi all,\n\nWhen I was really young, 3-5, I remember watching a cartoon that I think was supposed to teach kids French. I would guess it was made from 1990-1995, but possibly earlier.\n\nIt was in color and the episodes I remember featured a guy with a long, narrow, and crooked nose and greenish skin teaching kids how to count? There was also a scene that had some character running up a clock tower to change the time.\n\nOverall, it was a pretty gloomy feel, iirc, and I'd love to see it again if possible.",
  "created_utc": "1410647042",
  "link_flair_text": "Solved",
  "comments": [
    {
      "author": "scarpoochi",
      "body": "Muzzy?\n\nhttps://www.youtube.com/watch?v=mD9i39GENWU",
      "created_utc": "1410649099",
      "score": 11,
      "comments": [
        {
          "author": "alany611",
          "body": "thank you!!!",
          "created_utc": "1410666273",
          "score": 1
        }
      ]
    },
    {
      "author": "pepitica",
      "body": "Muzzy! It's been driving me crazy for a while now!",
      "created_utc": "1410649896",
      "score": 6
    }
  ],
  "answer_detected": true,
  "solved_utc": "1410649099",
  "chosen_answer": "Muzzy?\n\nhttps://www.youtube.com/watch?v=mD9i39GENWU",
  "links_on_answer_path": [
    "https://www.youtube.com/watch?v=mD9i39GENWU"
  ]
}
```


### Data Fields

Overall, TOMT-KIS includes 127 attributes for each question, such as:

- `id` (string): the unique Reddit identifier of a question
- `title` (string): the title of a question
- `content` (string): the main text content of a question
- `created_utc` (string): the Unix timestamp at which the question was posted
- `link_flair_text` (string): indicates whether the question is solved; set by moderators
- `comments` (list of objects): the complete tree of the discussion on a question

We run our precision-oriented answer identification heuristic on all questions tagged as solved by a moderator and add four additional attributes whenever the heuristic could identify an answer:

- `answer_detected` (boolean): indicates whether our heuristic could extract an answer
- `solved_utc` (string): the Unix timestamp at which the identified answer was posted
- `chosen_answer` (string): the extracted answer
- `links_on_answer_path` (list of strings): all Reddit-external pages that were found in posts between the question and the post with the answer
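
Since the `comments` field (described above) stores the full discussion as a nested tree, recursive traversal is the natural access pattern. A small illustrative helper (our own, not part of any dataset tooling):

```python
def count_posts(comments):
    """Count all posts in a (possibly nested) list of comment dicts."""
    return sum(1 + count_posts(c.get("comments", [])) for c in comments)


# For the example question above: two top-level comments plus one nested
# reply, so count_posts(question["comments"]) returns 3.
```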


## Citation Information

```bibtex
@InProceedings{froebe:2023c,
  author    = {Maik Fr{\"o}be and Eric Oliver Schmidt and Matthias Hagen},
  booktitle = {QPP++ 2023: Query Performance Prediction and Its Evaluation in New Tasks},
  month     = apr,
  publisher = {CEUR-WS.org},
  series    = {CEUR Workshop Proceedings},
  site      = {Dublin, Ireland},
  title     = {{A Large-Scale Dataset for Known-Item Question Performance Prediction}},
  year      = 2023
}
```