Deciding on extraction path

#10
by Mdspike

"To determine the extraction path, we first manually annotated 1,350 PDFs and trained an XGBoost model. The model relies on 7 document-level features alongside 120 page-level features sampled from 8 random pages. We applied this classifier to PDFs that were not truncated and routed them accordingly"
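For readers who want to try something similar before the official code lands, here is a minimal sketch of how such a routing classifier could be set up. Everything beyond the quoted numbers is an assumption: the split of the 120 page-level features into 15 per sampled page, the binary text-vs-OCR labels, and the feature contents are all hypothetical, and scikit-learn's `GradientBoostingClassifier` stands in for the XGBoost model described above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost

rng = np.random.default_rng(0)

N_DOC_FEATURES = 7     # document-level features (e.g. page count, file size) -- hypothetical
N_SAMPLED_PAGES = 8    # pages sampled per PDF, per the quoted description
N_PAGE_FEATURES = 15   # per-page features; 8 pages x 15 = 120 -- one possible reading

def featurize(doc_feats, page_feats_per_page):
    """Concatenate 7 doc-level features with features from 8 sampled pages
    into a single 127-dimensional vector (7 + 120)."""
    pages = np.concatenate(page_feats_per_page)  # 8 * 15 = 120 values
    return np.concatenate([doc_feats, pages])

# Synthetic stand-in for the 1,350 manually annotated PDFs.
X = rng.normal(size=(1350, N_DOC_FEATURES + N_SAMPLED_PAGES * N_PAGE_FEATURES))
y = rng.integers(0, 2, size=1350)  # 0 = text extraction, 1 = OCR (hypothetical labels)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Route each non-truncated PDF to an extraction path based on the prediction.
routes = clf.predict(X[:5])
```

With real data, `X` would be built by running `featurize` over each annotated PDF, and the predicted class would decide which extraction pipeline the document is sent to.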

This comment is super interesting. Would you be open to sharing a bit more about the process you used here, or even release the manually annotated data and training process / XGBoost model?

I think this could be really valuable for anyone trying to do similar projects with large volumes of PDF files.

Thank you for providing such a great resource.

FineData org

Hi, we will soon release the full code, including the classifier and feature extractor :)

Hi @hynky, any news on a possible release date for the extraction code? Thanks for all the work.
