This fastText model is a pretraining data filter targeting the ARC Easy benchmark. It selects high-quality pretraining data using perplexity correlations, as described in *Improving Pretraining Data Using Perplexity Correlations*. Given a text, the model classifies it as "include" or "exclude" for use in pretraining a language model; it is not itself a pretrained language model.
The filter was built with a method that leverages correlations between LLMs' losses on various texts and those LLMs' downstream benchmark performance. By selecting texts whose losses correlate most strongly with benchmark results, the filter aims to make data selection for LLM pretraining more efficient.
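Below is a minimal sketch of applying the filter to raw text. It assumes details not confirmed by this card: that the repository hosts a standard fastText binary named `model.bin`, and that the labels are `__label__include` / `__label__exclude`. The repo id shown is a placeholder.

```python
import fasttext
from huggingface_hub import hf_hub_download

# Hypothetical repo id and filename; substitute the actual values for this model.
model_path = hf_hub_download(repo_id="<user>/<this-model>", filename="model.bin")
model = fasttext.load_model(model_path)

def keep_for_pretraining(text: str) -> bool:
    # fastText's predict() expects a single line, so strip newlines first.
    labels, _probs = model.predict(text.replace("\n", " "))
    # Assumed label convention: "__label__include" marks text to keep.
    return labels[0] == "__label__include"

print(keep_for_pretraining("The mitochondria is the powerhouse of the cell."))
```

In a pretraining pipeline, a predicate like `keep_for_pretraining` would typically be mapped over documents in a corpus, retaining only those the filter classifies as "include".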
Code: https://github.com/TristanThrush/perplexity-correlations