Update README.md
README.md CHANGED
@@ -20,7 +20,7 @@ language:
 ---
 # IPA CHILDES Models: Small
 
-Phoneme-based GPT-2 models trained on the largest 17 sections of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task]().
+Phoneme-based GPT-2 models trained on the largest 17 sections of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task](https://arxiv.org/abs/2504.03338).
 
 The models have 800k non-embedding parameters and were trained on 700k tokens of their language. They were evaluated for phonological knowledge using the *word segmentation* task. Check out the paper for more details. Training and analysis scripts can be found [here](https://github.com/codebyzeb/PhonemeTransformers).
 
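The README quotes an 800k non-embedding parameter count. As a rough sanity check, that figure is consistent with the standard GPT-2 parameter arithmetic; the layer count and hidden size below are illustrative assumptions for a model of this scale, not the paper's actual configuration (see the paper for the real hyperparameters).

```python
def gpt2_nonembedding_params(n_layer: int, d_model: int) -> int:
    """Approximate non-embedding parameter count of a GPT-2 style
    transformer. Per layer: attention contributes ~4*d^2 weights
    (Q, K, V, and output projections) and the MLP ~8*d^2 (the
    d -> 4d and 4d -> d projections). Layer norms and biases add
    only a small linear term, ignored here."""
    return 12 * n_layer * d_model ** 2

# An assumed 4-layer model with hidden size 128 lands near 800k:
print(gpt2_nonembedding_params(4, 128))  # → 786432
```

Embedding parameters are excluded deliberately: for phoneme-level models the vocabulary is tiny, so the embedding table is a negligible share of the total anyway.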