---
license: cc-by-4.0
language:
- he
---

# DictaBERT-splinter: Splintering Nonconcatenative Languages for Better Tokenization

DictaBERT-splinter is a BERT-style language model for Hebrew, released [here](https://arxiv.org/abs/2503.14433).

This is the base model, pretrained with the masked-language-modeling objective.

Sample usage:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictabert-splinter', trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained('dicta-il/dictabert-splinter')

model.eval()

# "In 1948, Ephraim Kishon completed his [MASK] in metal sculpture and art history,
#  and began publishing humorous articles"
sentence = 'בשנת 1948 השלים אפרים קישון את [MASK] בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים'

output = model(tokenizer.encode(sentence, return_tensors='pt'))
# the [MASK] is at index 7 (counting [CLS] as index 0)
top_2 = torch.topk(output.logits[0, 7, :], 2)[1]
print('\n'.join(tokenizer.batch_decode(top_2)))
# should print התמחותו / לימודיו ("his specialization" / "his studies")
```

## Citation

If you use DictaBERT-splinter in your research, please cite ```Splintering Nonconcatenative Languages for Better Tokenization```

**BibTeX:**

```bibtex
@misc{gazit2025splinteringnonconcatenativelanguagesbetter,
      title={Splintering Nonconcatenative Languages for Better Tokenization},
      author={Bar Gazit and Shaltiel Shmidman and Avi Shmidman and Yuval Pinter},
      year={2025},
      eprint={2503.14433},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.14433},
}
```

## License

Shield: [![CC BY 4.0][cc-by-shield]][cc-by]

This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].

[![CC BY 4.0][cc-by-image]][cc-by]

[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg