Update README.md
README.md CHANGED
@@ -33,7 +33,7 @@ language:
 
 # IPA CHILDES Models: Tiny
 
-Phoneme-based GPT-2 models trained on all 31 sections of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task]().
+Phoneme-based GPT-2 models trained on all 31 sections of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task](https://arxiv.org/abs/2504.03338).
 
 The models have 600k non-embedding parameters and were trained on 100k tokens of their language. They were evaluated for phonological knowledge using the *word segmentation* task. Check out the paper for more details. Training and analysis scripts can be found [here](https://github.com/codebyzeb/PhonemeTransformers).
 
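For context, a minimal sketch of how one of these tiny models could be loaded and scored with the Hugging Face `transformers` library. The repository id `phonemetransformers/ipa-childes-tiny-english` and the space-separated IPA input shown here are illustrative assumptions, not names confirmed by this README; the actual per-language model ids are listed on the organization page.

```python
# Sketch: load a tiny IPA-CHILDES phoneme LM and compute its loss on a phoneme sequence.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id, used here only for illustration.
repo_id = "phonemetransformers/ipa-childes-tiny-english"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Assumed input format: space-separated IPA phonemes, as in the IPA-CHILDES dataset.
inputs = tokenizer("ð ɪ s ɪ z ə t ɛ s t", return_tensors="pt")

# Passing the input ids as labels yields the language-modeling loss over the sequence.
outputs = model(**inputs, labels=inputs["input_ids"])
print(f"per-token loss: {outputs.loss.item():.3f}")
```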