boris committed
Commit ed80f1f · 1 Parent(s): 42a89f8

New model from https://wandb.ai/wandb/huggingtweets/runs/gbtq63s2

README.md CHANGED
@@ -43,19 +43,19 @@ The model was trained on tweets from tommy he/him🤙.
 | Data | tommy he/him🤙 |
 | --- | --- |
 | Tweets downloaded | 3243 |
-| Retweets | 154 |
+| Retweets | 155 |
 | Short tweets | 773 |
-| Tweets kept | 2316 |
+| Tweets kept | 2315 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/spuro96r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/glllkzat/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tommyboytwt's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7taab49z) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gbtq63s2) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7taab49z/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gbtq63s2/artifacts) is logged and versioned.
 
 ## How to use
 
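The README's "How to use" section is truncated in this diff. As a point of reference, here is a minimal sketch of loading the fine-tuned checkpoint with the transformers text-generation pipeline; the repo id `huggingtweets/tommyboytwt` is inferred from the handle mentioned above and may not match the actual model id.

```python
# Minimal sketch (assumption: the checkpoint is published under the inferred
# repo id "huggingtweets/tommyboytwt"; adjust if the actual id differs).
from transformers import pipeline

generator = pipeline("text-generation", model="huggingtweets/tommyboytwt")
print(generator("My dream is", num_return_sequences=5))
```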
config.json CHANGED
@@ -37,7 +37,7 @@
     }
   },
   "torch_dtype": "float32",
-  "transformers_version": "4.27.4",
+  "transformers_version": "4.28.1",
   "use_cache": true,
   "vocab_size": 50257
 }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 50256,
   "eos_token_id": 50256,
-  "transformers_version": "4.27.4"
+  "transformers_version": "4.28.1"
 }
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bb60c808c550909b81d6184fc9ed72b471132b27baef7ac717d1bc821bd2489a
+oid sha256:7fcb1eaed376c110981dc4a1b23156abe5287e211c30ca790ea7d0f9476b6dc2
 size 510398013
tokenizer_config.json CHANGED
@@ -1,9 +1,9 @@
 {
   "add_prefix_space": false,
   "bos_token": "<|endoftext|>",
+  "clean_up_tokenization_spaces": true,
   "eos_token": "<|endoftext|>",
   "model_max_length": 1024,
-  "special_tokens_map_file": null,
   "tokenizer_class": "GPT2Tokenizer",
   "unk_token": "<|endoftext|>"
 }
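The new `clean_up_tokenization_spaces` key records the decode-time cleanup behaviour. A minimal sketch of what the flag controls, assuming the standard `GPT2Tokenizer.decode` API from transformers:

```python
from transformers import GPT2Tokenizer

# Illustration only: clean_up_tokenization_spaces controls whether decode()
# collapses spaces before punctuation in the decoded string.
tok = GPT2Tokenizer.from_pretrained("gpt2")
ids = tok.encode("Hello , world !")
print(tok.decode(ids, clean_up_tokenization_spaces=True))   # -> "Hello, world!"
print(tok.decode(ids, clean_up_tokenization_spaces=False))  # -> "Hello , world !"
```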
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2eb194502ab43b32ee47ad0c5d20bb1f35c31cac54b5d938db24651bc3ae1e60
+oid sha256:306f4b911fede446fb2a31b6e6709e9b0e441c30332cb1bf0f11a473d4394bcb
 size 3579