Matttttttt committed on
Commit 114102e · 1 Parent(s): 9f6b9b2

update README

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -7,11 +7,11 @@ datasets:
   - wikipedia
 ---
 
-# Model Card for Japanese BART V2 base
+# Model Card for Japanese BART base
 
 ## Model description
 
-This is a Japanese BART V2 base model pre-trained on Japanese Wikipedia.
+This is a Japanese BART base model pre-trained on Japanese Wikipedia.
 
 ## How to use
 
@@ -19,8 +19,8 @@ You can use this model as follows:
 
 ```python
 from transformers import XLMRobertaTokenizer, MBartForConditionalGeneration
-tokenizer = XLMRobertaTokenizer.from_pretrained('ku-nlp/bart-v2-base-japanese')
-model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-v2-base-japanese')
+tokenizer = XLMRobertaTokenizer.from_pretrained('ku-nlp/bart-base-japanese')
+model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-base-japanese')
 sentence = '京都 大学 で 自然 言語 処理 を 専攻 する 。' # input should be segmented into words by Juman++ in advance
 encoding = tokenizer(sentence, return_tensors='pt')
 ...
```