---
datasets:
- phonemetransformers/IPA-CHILDES
language:
- zh
- nl
- en
- et
- fr
- de
- id
- sr
- es
- ja
- it
- ko
- pl
- pt
- sv
---
# IPA CHILDES Models: Small
Phoneme-based GPT-2 models trained on the largest 17 sections of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task](https://arxiv.org/abs/2504.03338).
Each model has 800k non-embedding parameters and was trained on 700k tokens of its language. The models were evaluated for phonological knowledge using the *word segmentation* task; see the paper for details. Training and analysis scripts can be found [here](https://github.com/codebyzeb/PhonemeTransformers).
To load a model:
```python
from transformers import AutoModel
swedish_model = AutoModel.from_pretrained('phonemetransformers/ipa-childes-models-small', subfolder='Swedish')
```