JPharmaBERT
Our JPharmaBERT (base) is a continually pre-trained version of the BERT model (tohoku-nlp/bert-base-japanese-v3), further trained on pharmaceutical data, the same dataset used for eques/jpharmatron. The example below loads the model and fills in a masked token with the fill-mask pipeline:
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
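# Load the pharmaceutical BERT in bfloat16 together with its tokenizer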
model = AutoModelForMaskedLM.from_pretrained("EQUES/jpharma-bert-base", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("EQUES/jpharma-bert-base")
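# Build a fill-mask pipeline and predict the masked token in the chemical formula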
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("水は化学式で[MASK]2Oです。")
for result in results:
    print(result)
# {'score': 0.49609375, 'token': 55, 'token_str': 'H', 'sequence': '水は化学式でH2Oです。'}
# {'score': 0.11767578125, 'token': 29257, 'token_str': 'Na', 'sequence': '水は化学式でNa2Oです。'}
# {'score': 0.047607421875, 'token': 61, 'token_str': 'N', 'sequence': '水は化学式でN2Oです。'}
# {'score': 0.038330078125, 'token': 16966, 'token_str': 'CH', 'sequence': '水は化学式でCH2Oです。'}
# {'score': 0.0255126953125, 'token': 66, 'token_str': 'S', 'sequence': '水は化学式でS2Oです。'}
We trained JPharmaBERT on the same dataset as eques/jpharmatron. After removing duplicate entries across its sources, the final dataset contains approximately 9 billion tokens.
(For details, please refer to our paper about Jpharmatron: link)
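As a rough illustration of the deduplication step described above, the sketch below removes exact duplicate lines across several text files; the file names and the hashing choice are assumptions made for the example, not the actual preprocessing pipeline used.

# Minimal sketch of exact-match deduplication across text sources.
# NOTE: file names and the hashing strategy are illustrative assumptions,
# not the actual preprocessing used for JPharmaBERT.
import hashlib

def dedup_corpora(corpus_paths):
    """Yield each unique non-empty line exactly once across the given files."""
    seen = set()
    for path in corpus_paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                text = line.strip()
                if not text:
                    continue
                # Hash the text so the "seen" set stays compact in memory.
                key = hashlib.sha256(text.encode("utf-8")).hexdigest()
                if key not in seen:
                    seen.add(key)
                    yield text

# Example usage (hypothetical file names):
# unique_docs = list(dedup_corpora(["corpus_a.txt", "corpus_b.txt"]))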
The model was continually pre-trained with the following settings:
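The settings themselves are not reproduced in this excerpt. Purely as an illustration, a continual masked-language-model pre-training run built on the transformers Trainer might be structured like the sketch below; every hyperparameter value and the toy corpus are placeholders, not the actual configuration.

# Illustrative sketch only: continual MLM pre-training with the transformers Trainer.
# All hyperparameter values and the toy corpus are placeholders, NOT the JPharmaBERT settings.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "tohoku-nlp/bert-base-japanese-v3"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Toy stand-in for the pharmaceutical corpus.
texts = ["アスピリンは解熱鎮痛薬です。", "水は化学式でH2Oです。"]
train_dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# Randomly mask 15% of tokens for the masked-language-model objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="jpharma-bert-base-sketch",
    per_device_train_batch_size=8,  # placeholder
    learning_rate=1e-4,             # placeholder
    num_train_epochs=1,             # placeholder
    bf16=True,                      # matches the bfloat16 usage shown above, if supported
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=collator,
)
trainer.train()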
Created by Takuro Fujii ([email protected])