# ai-msgbot GPT-2 M Conversational

A GPT-2 Medium (355M parameter) model for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create a chatbot-like tool.

This model was fine-tuned on a parsed version of [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 10,000 steps, with 20 of the model's 24 transformer layers frozen during fine-tuning.
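For illustration, here is a minimal sketch of freezing 20 of GPT-2 Medium's 24 transformer blocks with `transformers`. Which specific 20 layers were frozen is an assumption (the earliest ones here); see the ai-msgbot repo for the actual training setup.

```
# a sketch, not the exact training code: freeze 20 of the 24 transformer
# blocks in GPT-2 Medium. Assumes the *earliest* 20 blocks were frozen,
# which this README does not specify.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-medium")  # 24 blocks, ~355M params

for block in model.transformer.h[:20]:
    for param in block.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```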

## conversation data

The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are listed below. This is relevant for writing prompts and for filtering/extracting text from responses.

`script_speaker_name` = `person alpha`

`script_responder_name` = `person beta`
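
For illustration, a minimal sketch of how a dialogue could be serialized with these labels before tokenization; the exact separators and whitespace used in training are assumptions:

```
# a sketch of the conversation format, using the speaker names above.
# Assumption: newline-separated "name:\ntext" turns; the actual training
# serialization in ai-msgbot may differ in separators/whitespace.
script_speaker_name = "person alpha"
script_responder_name = "person beta"

def serialize(turns):
    """turns: list of utterance strings, alternating speaker/responder."""
    lines = []
    for i, text in enumerate(turns):
        name = script_speaker_name if i % 2 == 0 else script_responder_name
        lines.append(f"{name}:\n{text}")
    return "\n".join(lines)

print(serialize(["hi! what are your hobbies?", "i like to read."]))
```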

## usage

### in ai-msgbot

```
python ai_single_response.py --model GPT2_conversational_355M_WoW10k --prompt "hi! what are your hobbies?"

... generating...

finished!

'i like to read.'

```

### examples with Inference API
The model's training (and the ai-msgbot scripts) "force" GPT-2 to generate text in a chat-like structure. To get usable outputs from the Inference API, the speaker labels need to be written into the prompt manually:

```
person alpha:
hi! what are your hobbies?
```

The model will then respond, ideally as `person beta:` followed by the response text.

---

- the default Inference API examples should work _okay_
- an ideal test is to explicitly add `person beta` to the **end** of the prompt text. This forces the model to respond to the entered chat prompt instead of first extending the prompt and then responding to that (which can cut off the response text due to Inference API length limits). See the sketch below.
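
A minimal sketch of querying the Hosted Inference API with this prompt structure; `MODEL_ID` and the token are placeholders to replace with this model's actual Hub repo ID and your own API token:

```
# a sketch: query the HF Inference API with the chat-structured prompt.
import requests

MODEL_ID = "your-username/your-model-id"  # placeholder, not the real repo name
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"
headers = {"Authorization": "Bearer hf_your_token_here"}  # placeholder token

# end the prompt with "person beta:" so the model answers the question
# instead of extending it.
prompt = "person alpha:\nhi! what are your hobbies?\nperson beta:"

response = requests.post(API_URL, headers=headers, json={"inputs": prompt})
generated = response.json()[0]["generated_text"]

# keep only the text after the final "person beta:" marker, stopping at the
# next speaker label if the model keeps generating.
reply = generated.split("person beta:")[-1].split("person alpha:")[0].strip()
print(reply)
```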
## citations 
```
@inproceedings{dinan2019wizard,
    title = "{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents",
    author = "Dinan, Emily and Roller, Stephen and Shuster, Kurt and Fan, Angela and Auli, Michael and Weston, Jason",
    booktitle = "Proceedings of the International Conference on Learning Representations (ICLR)",
    year = "2019",
}

@inproceedings{li-etal-2017-dailydialog,
    title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
    author = "Li, Yanran  and
      Su, Hui  and
      Shen, Xiaoyu  and
      Li, Wenjie  and
      Cao, Ziqiang  and
      Niu, Shuzi",
    booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = nov,
    year = "2017",
    address = "Taipei, Taiwan",
    publisher = "Asian Federation of Natural Language Processing",
    url = "https://aclanthology.org/I17-1099",
    pages = "986--995",
    abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```