---
datasets:
- phonemetransformers/IPA-CHILDES
language:
- en
- eu
- zh
- da
- nl
- hr
- es
- et
- fa
- fr
- de
- hu
- is
- id
- ga
- it
- ja
- ko
- pt
- pl
- qu
- ro
- sr
- sv
- tr
- cy
- 'no'
---
# IPA CHILDES Models: Tiny
Phoneme-based GPT-2 models trained on all 31 sections of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task](https://arxiv.org/abs/2504.03338).

The models have 600k non-embedding parameters and were each trained on 100k tokens of their respective language. They were evaluated for phonological knowledge using the *word segmentation* task; see the paper for details. Training and analysis scripts can be found [here](https://github.com/codebyzeb/PhonemeTransformers).
To load a model:
```python
from transformers import AutoModel

# Each language's model lives in its own subfolder of the repository
farsi_model = AutoModel.from_pretrained('phonemetransformers/ipa-childes-models-tiny', subfolder='Farsi')
```
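
The word segmentation probe rests on the idea that word onsets are hard to predict, so spikes in a model's per-phoneme surprisal tend to mark word boundaries. Below is a minimal sketch of that heuristic in plain Python; it is not the paper's exact procedure, and the surprisal values in the usage example are invented for illustration (in practice they would come from the model's per-phoneme log-probabilities).

```python
# Hypothetical sketch: place a word boundary before each local peak
# in per-phoneme surprisal. Not the paper's exact algorithm.

def segment_by_surprisal_peaks(phonemes, surprisals):
    """Split a phoneme sequence into words at local surprisal peaks."""
    words, current = [], [phonemes[0]]
    for i in range(1, len(phonemes)):
        prev_s = surprisals[i - 1]
        next_s = surprisals[i + 1] if i + 1 < len(surprisals) else float('-inf')
        # A local peak (surprisal rises then falls) suggests a word onset
        if surprisals[i] > prev_s and surprisals[i] > next_s:
            words.append(current)
            current = [phonemes[i]]
        else:
            current.append(phonemes[i])
    words.append(current)
    return words

# Invented surprisal values for the phoneme string "ðəkæt" ("the cat"):
# the peak at 'k' marks the onset of the second word.
print(segment_by_surprisal_peaks(list('ðəkæt'), [3.0, 1.0, 4.0, 1.5, 1.0]))
```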