---
datasets:
- phonemetransformers/IPA-CHILDES
language:
- zh
- nl
- en
- et
- fr
- de
- id
- sr
- es
- ja
---
# IPA CHILDES Models
Phoneme-based GPT-2 models trained on the 11 largest sections of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for our paper [IPA-CHILDES & G2P+: Feature-Rich Resources for Cross-Lingual Phonology and Phonemic Language Modeling](https://arxiv.org/abs/2504.03036).

All models have 5M non-embedding parameters and were trained on 1.8M tokens in their respective languages. The trained models were then probed for phonetic features using the corresponding phoneme inventories in [Phoible](https://phoible.org/). See the paper for more details. Training and analysis scripts can be found [here](https://github.com/codebyzeb/PhonemeTransformers).
To load a model, pass the language name as the `subfolder` argument:

```python
from transformers import AutoModel

dutch_model = AutoModel.from_pretrained('phonemetransformers/ipa-childes-models', subfolder='Dutch')
```