EzraAragon committed
Commit 797261c · 1 Parent(s): e73934a

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -18,7 +18,7 @@ widget:
 [DisorBERT](https://aclanthology.org/2023.acl-long.853/)
 is a double-domain adaptation of a BERT language model: it is first adapted to social media language and then to the mental health domain. In both steps, it incorporates a lexical resource to guide the masking process of the language model, helping it pay more attention to words related to mental disorders.
 
- We follow the standard procedure for fine-tuning a masked language model from [Huggingface’s NLP Course](https://huggingface.co/learn/nlp-course/chapter7/3?fw=pt) U+1F917.
+ We follow the standard procedure for fine-tuning a masked language model from [Huggingface’s NLP Course](https://huggingface.co/learn/nlp-course/chapter7/3?fw=pt) 🤗.
 
 We used the models provided by HuggingFace Transformers v4.24.0 and PyTorch v1.13.0.
 In particular, we trained the model with a batch size of 256, the Adam optimizer with a learning rate of 1e<sup>-5</sup>, and cross-entropy as the loss function, for three epochs on an NVIDIA Tesla V100 32GB SXM2 GPU.
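As an illustration of the lexicon-guided masking described in the model card above, the sketch below biases which tokens are masked toward words from a mental-health lexicon. It is an interpretation for illustration only, not the authors' released code: the lexicon entries, the `lexicon_boost` probability, and the `bert-base-uncased` tokenizer are assumptions, and the standard 80/10/10 mask/replace/keep scheme is simplified to plain `[MASK]` replacement.

```python
# Illustrative sketch only (assumptions noted above), not DisorBERT's actual code.
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed base tokenizer

# Placeholder entries; the paper relies on an external lexical resource of
# mental-disorder-related vocabulary.
lexicon = {"anxiety", "depression", "insomnia", "panic", "therapy"}

def guided_masking(text, base_prob=0.15, lexicon_boost=0.5):
    """Mask lexicon words with a higher probability than the usual 15%."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    labels = input_ids.clone()
    tokens = tokenizer.convert_ids_to_tokens(input_ids)
    for i, tok in enumerate(tokens):
        if tok in tokenizer.all_special_tokens:
            labels[i] = -100            # never mask [CLS]/[SEP]; ignore in the loss
            continue
        # Subword pieces are matched naively against the lexicon.
        p = lexicon_boost if tok.lstrip("#") in lexicon else base_prob
        if random.random() < p:
            input_ids[i] = tokenizer.mask_token_id
        else:
            labels[i] = -100            # loss is computed only on masked positions
    return input_ids, labels
```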
 
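The fine-tuning setup itself (batch size 256, Adam, learning rate 1e-5, cross-entropy loss, three epochs) maps onto the standard masked-language-model recipe from the linked NLP Course chapter. The sketch below is a rough reconstruction under assumptions: the training corpus (`corpus.txt`), the base checkpoint, and the sequence length are placeholders, cross-entropy is simply the default loss computed by `AutoModelForMaskedLM`, and `Trainer` uses AdamW by default.

```python
# Rough reconstruction of the reported training setup; file names and the base
# checkpoint are placeholders, not the authors' exact configuration.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "bert-base-uncased"                      # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Placeholder corpus; DisorBERT is adapted on social-media and mental-health text.
raw = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Standard 15% random masking; DisorBERT additionally guides masking with a lexicon.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="disorbert-mlm",
    per_device_train_batch_size=256,  # batch size reported in the model card
    learning_rate=1e-5,               # reported learning rate (Trainer uses AdamW)
    num_train_epochs=3,               # reported number of epochs
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,
)
trainer.train()
```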