ideas for OCR models to test
- docling (works well for modern documents)
- some VLM (vision-language model)
- tesseract
others?
potentially evaluate using an approach similar to:
EasyOCR has Danish support, but I didn't really like its quality when I tested it for an Image Analysis assignment back in my Bachelor's; still, it is probably worth a try
Also, I have found this model https://huggingface.co/diversen/doctr-torch-crnn_vgg16_bn-danish-v1, which is compatible with the docTR framework https://github.com/mindee/doctr
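Something like this might be enough to try it (untested sketch; I am assuming the checkpoint loads via docTR's `from_hub` helper and that it can be plugged into `ocr_predictor` alongside a default detection model):

```python
# Sketch only: plug the Danish CRNN recognition model into docTR.
# Assumes the hub checkpoint is loadable via doctr's from_hub helper.
from doctr.io import DocumentFile
from doctr.models import from_hub, ocr_predictor

reco_model = from_hub("diversen/doctr-torch-crnn_vgg16_bn-danish-v1")  # Danish recogniser
predictor = ocr_predictor(det_arch="db_resnet50", reco_arch=reco_model, pretrained=True)

doc = DocumentFile.from_images(["page.png"])  # hypothetical scanned page
result = predictor(doc)
print(result.render())  # plain-text export of the recognised page
```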
https://github.com/PaddlePaddle/PaddleOCR PaddleOCR also seems to support Danish (according to their docs: https://www.paddleocr.ai/main/en/version3.x/algorithm/PP-OCRv5/PP-OCRv5_multi_languages.html#4-supported-languages-and-abbreviations)
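If we test PaddleOCR, a first try could be as simple as this (a sketch: I am assuming the Danish abbreviation is `da` per their docs, and the call/result format differs a bit between PaddleOCR 2.x and 3.x):

```python
# Sketch only: PaddleOCR with what I assume is the Danish language code ("da").
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="da")    # downloads detection/recognition models on first run
result = ocr.ocr("page.png")  # hypothetical scanned page
print(result)                 # result structure differs between versions; inspect before parsing
```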
Johan suggested:
- think of it in two different ways: Fraktur print (early period) and antiqua types (introduced from 1840 onwards)
- standardisation (1900 onwards, ongoing period)
A simple way to do this would be to run a model on everything; if you look at the errors, you would probably get two distributions. It would probably also be easy to train a classifier separating pre-1800 documents from newer ones.
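A cheap way to sanity-check this before training anything: score each OCR'd page with a proxy error rate (e.g. the share of tokens missing from a Danish wordlist) and look at the histogram. Sketch below; the wordlist path and input texts are placeholders:

```python
# Sketch: proxy error rate per page = share of tokens missing from a Danish wordlist.
# Historical spelling inflates the score, so use it only as a relative signal.
import re
import matplotlib.pyplot as plt

def load_vocab(path: str) -> set[str]:
    with open(path, encoding="utf-8") as f:  # hypothetical wordlist file
        return {w.strip().lower() for w in f}

def oov_rate(text: str, vocab: set[str]) -> float:
    tokens = [t.lower() for t in re.findall(r"[a-zæøåA-ZÆØÅ]+", text)]
    return sum(t not in vocab for t in tokens) / len(tokens) if tokens else 0.0

def plot_error_distribution(ocr_texts: list[str], vocab: set[str]) -> None:
    scores = [oov_rate(t, vocab) for t in ocr_texts]
    plt.hist(scores, bins=50)  # two modes would suggest the fraktur/antiqua split
    plt.xlabel("proxy error rate (OOV share)")
    plt.ylabel("number of pages")
    plt.show()

# Usage (inputs are placeholders):
# plot_error_distribution(ocr_texts, load_vocab("danish_wordlist.txt"))
```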
Likely model that the Royal Library used: ABBYY FineReader (but they have since moved to Tesseract)
Johan's experience with VLMs: not great (on par with Tesseract); Gemma is probably the best
Transkribus: 0.25 credits per page, and a credit is about 0.20-1 krone (100k pages ≈ 25k credits, i.e. roughly 5k-25k kroner, so ~20k at the upper end)
Probably the best option for Fraktur (and we would also get the XML output)
Could we maybe find a Fraktur model for some other Germanic language and fine-tune it for Danish?
I have seen several Fraktur models for German, e.g. in the Calamari-OCR models repo: https://github.com/Calamari-OCR
Johan also suggested: TrOCR (has a training framework and is reasonable to fine-tune)
Johan could provide ground truth data
Considering that in 1700-1800 there was a strong German influence, Danish and German were probably also a lot closer then than they are now (correct me if I am wrong)
Does TrOCR have to be fine-tuned or not? I cannot find a list of supported languages for it.
I see that the original seems to be trained on English, but there are quite a lot of fine-tuned versions, so it is probably easy to do
For example a Swedish one and even a Sámi one:
https://huggingface.co/Riksarkivet/trocr-base-handwritten-hist-swe-2
https://huggingface.co/Sprakbanken/trocr_smi
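For reference, running one of these fine-tuned TrOCR checkpoints is straightforward with `transformers`; note that TrOCR works on cropped text lines, not full pages. Sketch below using the Swedish model id, which we would presumably swap for a Danish fine-tune (I am assuming the checkpoint ships a TrOCR processor config):

```python
# Sketch: line-level inference with a fine-tuned TrOCR checkpoint.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "Riksarkivet/trocr-base-handwritten-hist-swe-2"
processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

line = Image.open("line.png").convert("RGB")  # hypothetical cropped text line
pixel_values = processor(images=line, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```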
OK, so the current list to try is:
Classical OCR models:
- EasyOCR
- TrOCR
- PaddleOCR
- DocTR
VLMs:
- Gemma
- Llama? (I mean, they have pirated a lot of books, surely it knows some Danish)
- SmolDocling (a small VLM developed by the Docling team)
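If we serve one of these open VLMs behind an OpenAI-compatible endpoint (e.g. via vLLM or Ollama), a page-transcription call could look roughly like this (the endpoint, model name and prompt are all assumptions, not a tested setup):

```python
# Sketch: prompt a locally served VLM to transcribe a page image.
# Assumes an OpenAI-compatible server (e.g. vLLM/Ollama) running on localhost:8000.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

with open("page.png", "rb") as f:  # hypothetical scanned page
    b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="google/gemma-3-27b-it",  # whatever model the server actually exposes
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe this page exactly as printed. "
                                     "Keep the original spelling and line breaks."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
    temperature=0.0,
)
print(response.choices[0].message.content)
```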
I have read https://generic-account.github.io/OCR-Accuracy-Without-Ground-Truth-Data.html; it seems like an LLM-as-a-judge approach might be more relevant today (since we have more powerful models than GPT-2)
Also, I have found that LLM post-processing might help (not the greatest surprise)
I don't know about the cost of running the OpenAI API, though; it would be great if Gemini 2.5 Flash also worked nicely, since it is quite cheap
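A minimal sketch of what that post-correction step could look like with the Gemini API (the model name and prompt wording are my assumptions, not a tested setup):

```python
# Sketch: post-OCR correction with Gemini (google-genai SDK, GEMINI_API_KEY set in the environment).
from google import genai

client = genai.Client()

PROMPT = (
    "Below is OCR output from a historical Danish book. Fix obvious OCR errors "
    "(broken characters, merged or split words), but do NOT modernise the historical "
    "spelling and do NOT rephrase anything. Return only the corrected text.\n\n{ocr}"
)

def correct_page(ocr_text: str) -> str:
    response = client.models.generate_content(
        model="gemini-2.5-flash",  # assumed model name
        contents=PROMPT.format(ocr=ocr_text),
    )
    return response.text

# Usage: corrected = correct_page(raw_ocr_text)
```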
I have also found this OCR model from Mistral: https://mistral.ai/news/mistral-ocr. I think it might be quite cheap ($0.001 per page, which is cheaper).
Which model exactly did you use, @rasgaard? Regular GPT-5 or the thinking variant?
Just the default model on a free-tier account https://chatgpt.com/share/68c01a58-8368-8013-8cb4-67a65d27f4d9
> - Llama? (I mean, they have pirated a lot of books, surely it knows some Danish)

It is not performant. I have tried Llama 70b and Gemma3 27b head to head quite a lot, and Gemma is better, but still not super good at fraktur. General opinion is that among LLMs, Claude is best. For local LLMs, I do not know exactly what has the edge. I suspect you would need very serious hardware to bring you substantially beyond Tesseract, with an out-of-the-box solution. One issue you need to consider is that LLM-based solutions will get worse the denser the page, the smaller the print and the more words it has. That is very logical if you think about how it works. So it might work very well for some publications and not for others. And it might be hard to gauge where it fails, because its failures will look "correct".
@V4ldeLund, Johan also suggested LLM post-processing, so I definitely think that is an option.
Adding to this: my experience is that Gemini is best for doing this. It tends not to alter correctly transcribed elements, whereas the others (ChatGPT especially) tend to also alter correctly transcribed elements with idiosyncratic spellings. Thus, they introduce new errors.
> I have also found this OCR model from Mistral: https://mistral.ai/news/mistral-ocr. I think it might be quite cheap ($0.001 per page, which is cheaper).

It is another good option for the antiqua print part. However, I am not sure whether it would perform better than Tesseract + post-OCR correction (which, depending on the setup, might be even cheaper).
Unfortunately, Mistral is not super good with fraktur.
> I have read https://generic-account.github.io/OCR-Accuracy-Without-Ground-Truth-Data.html; it seems like an LLM-as-a-judge approach might be more relevant today (since we have more powerful models than GPT-2)

Yes, but that is expensive for an evaluation tool. A lightweight evaluator would be better. Besides, you need to account for idiosyncratic spelling: if you ask most LLMs to evaluate the quality of a correctly transcribed text from before 1800, they often think there are errors on account of orthographic variation.
It might be cool to generate some training data using a frontier VLM and then fine-tune some open model on it
I would advise against synthetic training data. You are looking at an issue where you are aiming for something very precise, which means that very small margins of error can have a big effect, relatively speaking. And you might run into the problem that the synthetic data varies in quality according to the print/scan/resolution/paper deterioration; that will then affect your model if these elements introduce errors. Human transcriptions will have less systematic errors.
Hey guys, I stumbled upon your discussion here and tried out ChatGPT, which seems to do a pretty good job. It might not scale super well, but it could serve as a nice 'quick-and-dirty' baseline :)
Please note that while the output text looks correct, it is not. And some of the errors are quite hallucinatory, in that it actually injects something that looks like real words (Øren becomes Lyst, Didhen becomes the name Didon, etc.).
Thank you for such detailed comments, @JohanHeinsen.
I wanted to ask: won't the performance of more traditional OCR (like LSTM-based Tesseract) be affected by the changes in page structure across the books, or is it generally consistent?
My intuition is that VLMs may be slightly better at generalisation due to their massive pre-training corpora, but I don't know if this is true.
Also, @KennethEnevoldsen, if we assume that OCR + post-processing is better, then maybe it makes sense to fine-tune an open LLM for the post-processing (e.g. so it preserves idiosyncratic spellings, since we already have some text data for that). I am not sure about the project funding, and I don't know whether it is feasible to run all of the OCR text through Gemini, which is quite expensive.
Oh wow, what a nice discussion we got started here. Lots of valuable input.
A reasonable first step is to get a decent basis for comparison (e.g., a few annotated pages) from which we can benchmark methods.
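Once we have a handful of annotated pages, the comparison itself is simple, e.g. CER/WER with `jiwer` (the file layout below, one ground-truth and one prediction text file per page, is just a placeholder):

```python
# Sketch: benchmark OCR outputs against annotated ground truth using CER/WER.
from pathlib import Path
import jiwer

def score_model(gt_dir: str, pred_dir: str) -> dict[str, float]:
    refs, hyps = [], []
    for gt_file in sorted(Path(gt_dir).glob("*.txt")):
        pred_file = Path(pred_dir) / gt_file.name  # prediction with the same filename
        refs.append(gt_file.read_text(encoding="utf-8"))
        hyps.append(pred_file.read_text(encoding="utf-8"))
    return {"cer": jiwer.cer(refs, hyps), "wer": jiwer.wer(refs, hyps)}

# Usage (directory names are placeholders):
# print(score_model("ground_truth/", "tesseract_output/"))
```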
> Thank you for such detailed comments, @JohanHeinsen.
> I wanted to ask: won't the performance of more traditional OCR (like LSTM-based Tesseract) be affected by the changes in page structure across the books, or is it generally consistent?
> My intuition is that VLMs may be slightly better at generalisation due to their massive pre-training corpora, but I don't know if this is true.

My guess is that it is consistent enough for the running prose to be mostly unaffected. Initials (meaning the big letters at the start of chapters) will be botched and some marginalia might suffer (and might be represented "in-line" with adjacent prose). But there is probably only a small proportion of texts with columns, which would cause major trouble. Tables tend to be an issue though. You are right about generalisation, especially layout-wise, but it comes with a cost in terms of predictability. And layouts can also become too complicated even for something like Mistral (which otherwise tends to do the best job on that account); as you can see from the example, ChatGPT actually just leaves out elements.
Also, just to clarify: I only think Tesseract is a meaningful option if you want to spend no money at all, and only for the antiqua type (so only the last part of the corpus). I think there are better solutions, but they will also be more expensive (one way or another), and any Tesseract workflow should have a post-OCR pipeline. For the books in fraktur, the Tesseract models are just not good enough.
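For reference, the free Tesseract baseline is only a few lines with `pytesseract` (assuming the Tesseract binary and the Danish traineddata `dan` are installed; Fraktur-specific traineddata packs also exist, but as noted above they are not good enough for the fraktur part):

```python
# Sketch: Tesseract baseline for the antiqua part of the corpus.
# Assumes the tesseract binary and the Danish traineddata ("dan") are installed.
from PIL import Image
import pytesseract

def tesseract_page(path: str) -> str:
    return pytesseract.image_to_string(Image.open(path), lang="dan")

# Usage (path is a placeholder); output should still go through post-OCR correction:
# print(tesseract_page("page.png"))
```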
> Oh wow, what a nice discussion we got started here. Lots of valuable input.
> A reasonable first step is to get a decent basis for comparison (e.g., a few annotated pages) from which we can benchmark methods.

I can perhaps start on that and find pages with decent text from the already existing OCR, although I would need help with annotation :)
@V4ldeLund, you can start by creating a setup using a sample (e.g. 100-500 pages) from:
https://huggingface.co/datasets/aarhus-city-archives/historical-danish-handwriting
Once we have a working setup, we can integrate other sources.
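A quick way to pull such a sample with `datasets` (the split and column names below are assumptions; check them against the dataset card):

```python
# Sketch: sample pages from the handwriting dataset to build the evaluation framework.
# The split name and column names are assumptions -- verify on the dataset card.
from datasets import load_dataset

ds = load_dataset("aarhus-city-archives/historical-danish-handwriting", split="train")
sample = ds.shuffle(seed=42).select(range(100))  # 100 random pages

for example in sample.select(range(3)):
    print(example.keys())  # inspect which columns hold the image and the transcription
```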
Sorry for the stupid question, @KennethEnevoldsen, but I think that dataset is handwriting, and the books we are OCRing are not. Won't it hurt if we evaluate models on handwritten text and then use them on printed text?
It is for building the framework; the actual evaluation will likely use multiple data sources
If you pull 20-30 random pages from before 1850 and send me the images, I can create a baseline for comparison on the fraktur part. And test our Transkribus model in the same go.
It might be easier to discard pages with more than one text region, otherwise the order in which a given model reads them might need to be factored in to do the comparison.