A Few Questions About the Implementation Details of the finepdfs Project

#24
by yoliax - opened

Thank you very much for open-sourcing such an excellent dataset! I’m really looking forward to more details about your processing pipeline.
I have a few questions:

  1. Why didn’t you use vLLM for the entire process? Was it mainly due to cost considerations, or does your custom pipeline perform better in some cases?

  2. Based on rough estimates, the pipeline retained only about 150M out of 918M, while vLLM kept 324M out of 368M. Why does the pipeline filter out so much more data?

FineData org

Why didn’t you use vLLM for the entire process? Was it mainly due to cost considerations, or does your custom pipeline perform better in some cases?
vLLMs are better but incredibly expensive, so we had to make this decision due to cost.

Based on rough estimates, the pipeline retained only about 150M out of 918M, while vLLM kept 324M out of 368M. Why does the pipeline filter out so much more data?
Where does this estimate come from? I believe we retained far more PDFs with the docling approach.

PS: I just noticed the labels (docling vs rolmOCR) are reversed... We will fix that.

[image attachment]

FineData org

I see, I don't have the docling/rolmOCR numbers at the end of the pipeline (but I assume yours are correct, just reversed due to our labeling mistake).

Why do we remove so much data:
We actually tried to remove as little as possible; the biggest removals came from:

  1. Exact deduplication
  2. MinHash deduplication

Each removed roughly half of the data (a simplified sketch of both stages is below).
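
For illustration only, here is a minimal sketch of what these two stages could look like; this is not the FinePDFs pipeline code (which is not yet released), and the `datasketch`-based approach, the token-level shingling, and the 0.8 similarity threshold are all assumptions.

```python
# Hypothetical sketch of exact + MinHash deduplication (not the actual
# FinePDFs code). Exact dedup hashes whitespace-normalized text;
# near-dedup uses MinHash signatures with LSH from the `datasketch` library.
import hashlib
from datasketch import MinHash, MinHashLSH

def exact_dedup(docs):
    """Keep only the first document for each exact (normalized) text hash."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

def minhash_dedup(docs, threshold=0.8, num_perm=128):
    """Drop documents whose MinHash signature is a near-duplicate of one already kept."""
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for i, doc in enumerate(docs):
        mh = MinHash(num_perm=num_perm)
        for token in doc.lower().split():
            mh.update(token.encode("utf-8"))
        if not lsh.query(mh):  # no kept near-duplicate found so far
            lsh.insert(str(i), mh)
            kept.append(doc)
    return kept

docs = ["the quick brown fox", "the  quick  brown fox", "a completely different text"]
print(len(minhash_dedup(exact_dedup(docs))))  # -> 2
```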

There was other filtering as well (like LID thresholding and model-based filtering), but each of these removed under 10% of the data.
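
As a concrete example of LID thresholding, here is a minimal sketch using fastText's public lid.176 language-ID model; the 0.65 confidence cutoff is an assumed placeholder, not the threshold actually used in FinePDFs, and the model-based quality filtering is not shown.

```python
# Hypothetical sketch of LID (language identification) thresholding,
# not the actual FinePDFs filter. Uses fastText's public lid.176 model.
import fasttext

# Model file: https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
LID_MODEL = fasttext.load_model("lid.176.bin")

def keep_document(text: str, target_lang: str = "en", min_conf: float = 0.65) -> bool:
    """Keep a document only if its predicted language matches `target_lang`
    with at least `min_conf` confidence."""
    labels, probs = LID_MODEL.predict(text.replace("\n", " "))
    lang = labels[0].replace("__label__", "")
    return lang == target_lang and probs[0] >= min_conf

print(keep_document("This is clearly an English sentence about PDFs."))  # -> True
```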

Your work is really impressive and valuable. I’m excited to see the source code released—do you have an idea of when it might be available?
