Concerns regarding the quality - thought big, done badly.
Although you have synthesized a massive amount of OCR data, the reliability of your language labels is questionable. Specifically, you do not appear to have filtered out or separately handled code-only text, and you seem to take the language label directly from the source rather than running language detection on the text itself.
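As a rough illustration, running an off-the-shelf detector over the samples would already surface mislabeled or code-heavy text. The sketch below uses langdetect; the repo id and field names are placeholders, and the label format ("en", "de", ...) is only an assumption:

```python
# Sketch: spot-check the provided "language" labels with an off-the-shelf detector.
# The repo id and field names are placeholders, and the label format is assumed
# to be ISO codes ("en", "de", ...) - adjust to the actual schema.
from datasets import load_dataset
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

ds = load_dataset("your-org/this-dataset", split="train")

def label_matches(sample):
    try:
        return detect(sample["text"]) == sample["language"]
    except LangDetectException:
        # langdetect cannot classify e.g. code-only or purely numeric strings
        return False

suspicious = ds.filter(lambda s: not label_matches(s))
print(f"{len(suspicious)} of {len(ds)} samples look mislabeled or code-like")
```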
Also, unlike Synthdog's target scenarios, if the goal is to train a more general-purpose VLM that is proficient in OCR, why do you, like Synthdog, omit all newline characters from the ground truth, even setting layout considerations aside?
Hello,
thanks for your interest in this dataset.
We source all text from the Wikipedias of the respective languages and include the "language" field to make filtering/grouping the samples easier (e.g. to only use a subset of X samples per language). In general, the text from Wikipedia is in the intended language, but - as you show - there can be sequences in programming languages, math, or quotes in other languages. We did not specifically filter for those, but such cases should be a small minority of the data.
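If it helps, per-language subsetting with that field can be done along these lines. The repo id is a placeholder and the value format of the "language" field is an assumption:

```python
from collections import defaultdict
from datasets import load_dataset

# Placeholder repo id; "language" is the field described above and its value
# format ("de" vs. "German") is an assumption - adjust to the actual schema.
ds = load_dataset("your-org/this-dataset", split="train")

# Keep only one language ...
german = ds.filter(lambda s: s["language"] == "de")

# ... or cap each language at N samples for a roughly balanced subset
# (single-process filter; with num_proc > 1 the counter would be per worker).
N = 1000
counts = defaultdict(int)

def under_cap(sample):
    counts[sample["language"]] += 1
    return counts[sample["language"]] <= N

balanced = ds.filter(under_cap)
print(len(german), len(balanced))
```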
We used the provided code from donut/synthdog as-is to generate our data. It is probably possible to adjust the code so that the line breaks in the rendered text are reflected in the ground truth, but we did not look into that. As our target use case and for evaluation, we focused on shorter text (labels in plots and figures) rather than extraction of long text from documents, so properly preserving newlines and other formatting was not a priority.
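For anyone who wants to adapt this, the change would essentially be to join the rendered lines with "\n" instead of a space when building the ground-truth label. A minimal sketch of that idea; the names below are hypothetical and not the actual donut/synthdog internals:

```python
# Hypothetical sketch: build the OCR label from per-line strings produced by
# the renderer, preserving line breaks instead of flattening to one line.
# "rendered_lines" stands in for whatever structure the generator exposes;
# this is not the actual donut/synthdog API.
def build_label(rendered_lines, keep_newlines=True):
    sep = "\n" if keep_newlines else " "
    return sep.join(line.strip() for line in rendered_lines)

# Example
lines = ["Quarterly revenue", "rose by 12%", "compared to 2022."]
print(build_label(lines))                       # keeps the original line breaks
print(build_label(lines, keep_newlines=False))  # flattened, as in the released data
```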