EzraAragon committed
Commit 42c39bb · 1 Parent(s): 46ec27e

Update README.md

Files changed (1): README.md (+3 −0)
README.md CHANGED
@@ -24,6 +24,8 @@ We used the models provided by HuggingFace v4.24.0, and Pytorch v1.13.0.
 In particular, for training the model we used a batch size of 256, Adam optimizer, with a learning rate of 1e<sup>-5</sup>, and cross-entropy as a loss function. We trained the model for three epochs using a GPU NVIDIA Tesla V100 32GB SXM2.
 
 # Usage
+
+```
 ## Use a pipeline as a high-level helper from transformers import pipeline
 pipe = pipeline("fill-mask", model="citiusLTL/DisorBERT")
 
@@ -32,6 +34,7 @@ from transformers import AutoTokenizer, AutoModelForMaskedLM
 
 tokenizer = AutoTokenizer.from_pretrained("citiusLTL/DisorBERT")
 model = AutoModelForMaskedLM.from_pretrained("citiusLTL/DisorBERT")
+```
 
 # Paper
 
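For readers unfamiliar with the loss mentioned in the training setup above, here is a minimal pure-Python sketch of per-token cross-entropy as used in masked-language-model training. The vocabulary size and probability values are toy numbers for illustration, not taken from the actual training run:

```python
import math

def cross_entropy(probs, target_index):
    """Cross-entropy for a single masked-token prediction: -log p(target).

    probs: predicted probability distribution over the vocabulary
    target_index: index of the true (masked-out) token
    """
    return -math.log(probs[target_index])

# Toy 4-token "vocabulary": the model puts 0.7 of its mass on the correct token.
predicted = [0.1, 0.7, 0.1, 0.1]
loss = cross_entropy(predicted, target_index=1)
print(loss)  # equals -ln(0.7); the loss shrinks as the correct token's probability grows
```

In training, this quantity is averaged over all masked positions in the batch and minimized with Adam, as described in the README.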