boris committed
Commit 233d46b · 1 Parent(s): b2ba4d5

New model from https://wandb.ai/wandb/huggingtweets/runs/orhs8gfb
Files changed (5)
  1. README.md +7 -7
  2. merges.txt +1 -1
  3. pytorch_model.bin +1 -1
  4. tokenizer.json +1 -0
  5. training_args.bin +1 -1
README.md CHANGED
@@ -42,20 +42,20 @@ The model was trained on tweets from tommy he/him🤙.
 
 | Data | tommy he/him🤙 |
 | --- | --- |
-| Tweets downloaded | 3243 |
-| Retweets | 152 |
-| Short tweets | 771 |
-| Tweets kept | 2320 |
+| Tweets downloaded | 3242 |
+| Retweets | 154 |
+| Short tweets | 770 |
+| Tweets kept | 2318 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/57i09oih/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/0z94othz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tommyboytwt's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3feqvmrm) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/orhs8gfb) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3feqvmrm/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/orhs8gfb/artifacts) is logged and versioned.
 
 ## How to use
 
merges.txt CHANGED
@@ -1,4 +1,4 @@
-#version: 0.2 - Trained by `huggingface/tokenizers`
+#version: 0.2
 Ġ t
 Ġ a
 h e
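The merges.txt format in the hunk above drives GPT-2-style byte-pair encoding: after the `#version` header, each line is a pair of symbols to merge, and earlier lines take priority. A minimal sketch of how such a file could be consumed — the `load_merges` and `apply_merges` helper names are illustrative, not part of `huggingface/tokenizers`:

```python
def load_merges(lines):
    # Skip the "#version" header; every other non-empty line is a merge pair.
    merges = []
    for line in lines:
        if line.startswith("#version"):
            continue
        parts = line.split()
        if len(parts) == 2:
            merges.append(tuple(parts))
    return merges


def apply_merges(symbols, merges):
    # Greedily merge the adjacent pair with the best (lowest) rank,
    # repeating until no listed pair remains — the GPT-2 BPE loop.
    ranks = {pair: i for i, pair in enumerate(merges)}
    symbols = list(symbols)
    while len(symbols) > 1:
        best_rank, i = min(
            (ranks.get((a, b), float("inf")), j)
            for j, (a, b) in enumerate(zip(symbols, symbols[1:]))
        )
        if best_rank == float("inf"):
            break
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
    return symbols


# The first four lines shown in the diff above.
merges = load_merges(["#version: 0.2", "Ġ t", "Ġ a", "h e"])
```

The commit only drops the trailing `- Trained by ...` comment from the header line; the merge pairs themselves are unchanged, which is why the file shows `+1 -1`.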
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:861c73e6cbfe2d9a7a499db499b94c63b224a0e60f3bf2b01be52efeecda8a8e
+oid sha256:b8dd5f72e48f534c4958468a055e1a413c068c62359d7dd09d93b1e9f427b32c
 size 510398013
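Both `pytorch_model.bin` and `training_args.bin` are stored as Git LFS pointer files like the one above: three key/value lines giving the spec version, the object's sha256 digest, and its size in bytes, while the actual weights live in LFS storage. A sketch of parsing such a pointer — the `parse_lfs_pointer` helper is hypothetical, not a real git or huggingface_hub API:

```python
def parse_lfs_pointer(text):
    # A Git LFS pointer file is a short key/value text file:
    #   version <spec-url>
    #   oid sha256:<hex digest>
    #   size <bytes>
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }


# The new pointer from the diff above.
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:b8dd5f72e48f534c4958468a055e1a413c068c62359d7dd09d93b1e9f427b32c\n"
    "size 510398013\n"
)
```

Note that only the `oid` line changes in this commit: the fine-tuned weights differ, but the serialized checkpoint happens to be exactly the same size.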
tokenizer.json CHANGED
@@ -39,6 +39,7 @@
 "continuing_subword_prefix": "",
 "end_of_word_suffix": "",
 "fuse_unk": false,
+"byte_fallback": false,
 "vocab": {
 "!": 0,
 "\"": 1,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a61b627998a5329145c8cecbd72fb2f9964a3f10a28bd1d2e08198cfe4d54e36
+oid sha256:b93363e62d4e8790f51c2dd26e31cfadeb8f71be7ee7329a05906bb54f398a1a
 size 3579