---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: Language
    dtype: string
  - name: Corpus
    dtype: string
  - name: Script
    dtype: string
  - name: Century
    dtype: string
  - name: Image_name
    dtype: string
  - name: NER_ann
    dtype: string
  splits:
  - name: train
    num_bytes: 30374609181
    num_examples: 177744
  - name: validation
    num_bytes: 1689908739
    num_examples: 9829
  - name: test
    num_bytes: 1278986029
    num_examples: 9827
  download_size: 33333506316
  dataset_size: 33343503949
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
tags:
- handwritten-text-recognition
- image-to-text
- image-text-to-text
pipeline_tag: image-text-to-text
license: mit
task_categories:
- image-to-text
language:
- fr
- es
- la
- de
- nl
pretty_name: Tridis
size_categories:
- 100K<n<1M
---


This is the first version of the dataset derived from the corpora used for **TRIDIS** (*Tria Digita Scribunt*). 

TRIDIS encompasses a series of Handwriting Text Recognition (HTR) models trained using semi-diplomatic transcriptions of medieval and early modern manuscripts.

The semi-diplomatic transcription approach resolves the abbreviations found in the original manuscripts and normalizes punctuation and allographs: for example, an abbreviated form such as "dñi" is expanded to "domini", and variant letter forms (e.g., long s) are reduced to a single modern form.

The dataset contains approximately 4,000 pages of manuscripts and is particularly suitable for working with documentary sources – manuscripts originating from legal, administrative, and memorial practices. Examples include registers, feudal books, charters, proceedings, and accounting records, primarily dating from the Late Middle Ages (13th century onwards).

The dataset covers Western European regions (mainly Spain, France, and Germany) and spans the 12th to the 17th centuries.
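
Each record pairs an image with its transcription plus the metadata columns `Language`, `Corpus`, `Script`, `Century`, `Image_name`, and `NER_ann`. To take a first look without downloading the full ~33 GB, you can stream a single sample (a minimal sketch using the `datasets` streaming API):

```python
from datasets import load_dataset

# Stream the dataset to inspect records without downloading the full ~33 GB
ds = load_dataset("magistermilitum/Tridis", split="train", streaming=True)

sample = next(iter(ds))
print(sample["text"])                       # semi-diplomatic transcription
print(sample["Language"], sample["Corpus"],
      sample["Script"], sample["Century"])  # metadata columns
sample["image"]                             # PIL.Image of the manuscript snippet
```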


#### Corpora
The original ground-truth corpora are available under CC BY licenses on online repositories:


- The Alcar-HOME database (HOME): https://zenodo.org/record/5600884
- The e-NDP corpus (E-NDP): https://zenodo.org/record/7575693
- The Himanis project (HIMANIS): https://zenodo.org/record/5535306
- Königsfelden Abbey corpus (Konigsfelden): https://zenodo.org/record/5179361
- 6000 ground truth of VOC and notarial deeds (VOC): https://zenodo.org/records/4159268
- Bullinger, Rudolf Gwalther: https://zenodo.org/records/4780947
- CODEA: https://corpuscodea.es/
- Monumenta Luxemburgensia (MLH): www.tridis.me
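
Because every record carries a `Corpus` column, you can also restrict experiments to a single sub-corpus. A minimal sketch (the exact label strings, e.g. "HIMANIS", are assumptions; inspect the column values first):

```python
from datasets import load_dataset

dataset = load_dataset("magistermilitum/Tridis")

# Inspect the available corpus labels first (exact strings are assumptions to verify)
print(sorted(set(dataset["validation"]["Corpus"])))

# Keep only samples from one sub-corpus, e.g. the Himanis project
himanis_train = dataset["train"].filter(lambda ex: ex["Corpus"] == "HIMANIS")
print(f"HIMANIS training samples: {len(himanis_train)}")
```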

## Citation

There is a pre-print presenting this corpus:

```bibtex
@article{aguilar2025tridis,
  title={TRIDIS: A Comprehensive Medieval and Early Modern Corpus for HTR and NER},
  author={Aguilar, Sergio Torres},
  journal={arXiv preprint arXiv:2503.22714},
  year={2025}
}
```

### How to Get Started with this Dataset
Use the following Python code to fine-tune a TrOCR model on the TRIDIS dataset:

```python
# Tested with transformers==4.43.0
# Note: data augmentation is omitted here but strongly recommended.

import torch
from PIL import Image

from torch.utils.data import Dataset
from datasets import load_dataset
from transformers import (
    TrOCRProcessor,
    VisionEncoderDecoderModel,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    default_data_collator
)
from evaluate import load

# --- Dataset loading and preprocessing ---

# Load the dataset from Hugging Face
dataset = load_dataset("magistermilitum/Tridis")
print("Dataset loaded.")

# Initialize the processor
# Use the specific processor associated with the TrOCR model
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")  # or the large version for better performance
print("Processor loaded.")

# --- Custom Dataset Modified for Deferred Loading (No Augmentation) ---
class CustomDataset(Dataset):
    def __init__(self, hf_dataset, processor, max_target_length=160):
        """
        Args:
            hf_dataset: The dataset loaded by Hugging Face (datasets.Dataset).
            processor: The TrOCR processor.
            max_target_length: Maximum length for the target labels.
        """
        self.hf_dataset = hf_dataset
        self.processor = processor
        self.max_target_length = max_target_length

        # --- EFFICIENT FILTERING ---
        # Filter here to know the actual length and avoid processing invalid samples in __getitem__
        # Use indices to maintain the efficiency of accessing the original dataset
        self.valid_indices = [
            i for i, text in enumerate(self.hf_dataset["text"])
            if isinstance(text, str) and 3 < len(text) < 257 # Filter based on text length
        ]
        print(f"Dataset filtered. Valid samples: {len(self.valid_indices)} / {len(self.hf_dataset)}")

    def __len__(self):
        # The length is the number of valid indices after filtering
        return len(self.valid_indices)

    def __getitem__(self, idx):
        # Get the original index in the Hugging Face dataset
        original_idx = self.valid_indices[idx]

        # Load the specific sample from the Hugging Face dataset
        item = self.hf_dataset[original_idx]
        image = item["image"]
        text = item["text"]

        # Ensure the image is PIL and RGB
        if not isinstance(image, Image.Image):
            # If not PIL (rare with load_dataset, but for safety)
            # Assume it can be loaded by PIL or is a numpy array
            try:
                image = Image.fromarray(image).convert("RGB")
            except Exception:
                # Fallback or error handling if conversion fails
                print(f"Error converting image at original index {original_idx}. Using placeholder.")
                # Returning a placeholder might be better handled by the collator or skipping.
                # For now, repeating the first valid sample as a placeholder (not ideal).
                item = self.hf_dataset[self.valid_indices[0]]
                image = item["image"].convert("RGB")
                text = item["text"]
        else:
            image = image.convert("RGB")

        # Process image using the TrOCR processor
        try:
            # The processor handles resizing and normalization
            pixel_values = self.processor(images=image, return_tensors="pt").pixel_values
        except Exception as e:
            print(f"Error processing image at original index {original_idx}: {e}. Using placeholder.")
            # Create a black placeholder tensor if processing fails
            # Ensure the size matches the expected input size for the model
            img_size = self.processor.image_processor.size  # image_processor is the current name for feature_extractor
            # The size may be defined as an int, a dict, or a tuple
            if isinstance(img_size, int):
                h = w = img_size
            elif isinstance(img_size, dict) and 'height' in img_size and 'width' in img_size:
                h = img_size['height']
                w = img_size['width']
            elif isinstance(img_size, (tuple, list)) and len(img_size) == 2:
                h, w = img_size
            else:  # Default fallback size if uncertain
                h, w = 384, 384  # Common TrOCR input size, adjust if needed
            pixel_values = torch.zeros((3, h, w))


        # Tokenize the text
        labels = self.processor.tokenizer(
            text,
            padding="max_length",
            max_length=self.max_target_length,
            truncation=True # Important to add truncation just in case
        ).input_ids

        # Replace pad tokens with -100 to ignore in the loss function
        labels = [label if label != self.processor.tokenizer.pad_token_id else -100
                  for label in labels]

        encoding = {
            # .squeeze() removes dimensions of size 1, necessary as we process one image at a time
            "pixel_values": pixel_values.squeeze(),
            "labels": torch.tensor(labels)
        }
        return encoding

# --- Create Instances of the Modified Dataset ---
# Pass the Hugging Face dataset directly
train_dataset = CustomDataset(dataset["train"], processor)
eval_dataset = CustomDataset(dataset["validation"], processor)

print(f"\nNumber of training examples (valid and filtered): {len(train_dataset)}")
print(f"Number of validation examples (valid and filtered): {len(eval_dataset)}")

# --- End of dataset setup ---


# Load pretrained model
print("\nLoading pre-trained model...")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
model.to(device)
print(f"Model loaded on: {device}")

# Configure the model for fine-tuning
print("Configuring model...")
model.config.decoder.is_decoder = True # Explicitly set decoder flag
model.config.decoder.add_cross_attention = True # Ensure decoder attends to encoder outputs
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id # Start generation with CLS token
model.config.pad_token_id = processor.tokenizer.pad_token_id # Set pad token ID
model.config.vocab_size = model.config.decoder.vocab_size # Set vocabulary size
model.config.eos_token_id = processor.tokenizer.sep_token_id # Set end-of-sequence token ID

# Generation configuration (influences evaluation and inference)
model.config.max_length = 160 # Max generated sequence length
model.config.early_stopping = True # Stop generation early if EOS is reached
model.config.no_repeat_ngram_size = 3 # Prevent repetitive n-grams
model.config.length_penalty = 2.0 # Encourage longer sequences slightly
model.config.num_beams = 3 # Use beam search for better quality generation

# Metrics
print("Loading metrics...")
cer_metric = load("cer")
wer_metric = load("wer")

def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions

    # Replace -100 with pad_token_id for correct decoding
    labels_ids[labels_ids == -100] = processor.tokenizer.pad_token_id

    # Decode predictions and labels
    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.batch_decode(labels_ids, skip_special_tokens=True)

    # Calculate CER and WER
    cer = cer_metric.compute(predictions=pred_str, references=label_str)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)

    print(f"\nEvaluation Step Metrics - CER: {cer:.4f}, WER: {wer:.4f}") # Print metrics

    return {"cer": cer, "wer": wer} # Return metrics required by Trainer


# Training configuration
batch_size_train = 32 # Adjust based on GPU memory; 32 fits in ~48 GB of VRAM
batch_size_eval = 32  # Adjust based on GPU memory
epochs = 10 # Number of training epochs (15 recommended)

print("\nConfiguring training arguments...")
training_args = Seq2SeqTrainingArguments(
    predict_with_generate=True,       # Use generate for evaluation (needed for CER/WER)
    per_device_train_batch_size=batch_size_train,
    per_device_eval_batch_size=batch_size_eval,
    fp16=(device.type == "cuda"),     # Enable mixed precision training on GPU
    output_dir="./trocr-model-tridis", # Directory to save model checkpoints
    logging_strategy="steps",
    logging_steps=10,                 # Log training loss every 10 steps
    evaluation_strategy='steps',      # Evaluate every N steps
    eval_steps=5000,                  # Adjust based on dataset size
    save_strategy='steps',            # Save checkpoint every N steps
    save_steps=5000,                  # Match eval steps
    num_train_epochs=epochs,
    save_total_limit=3,               # Keep only the last 3 checkpoints
    learning_rate=7e-5,               # Learning rate for the optimizer
    weight_decay=0.01,                # Weight decay for regularization
    warmup_ratio=0.05,                # Percentage of training steps for learning rate warmup
    lr_scheduler_type="cosine",       # Learning rate scheduler type (better than linear)
    dataloader_num_workers=8,         # Use multiple workers for data loading (adjust based on CPU cores)
    # report_to="tensorboard",        # Uncomment to enable TensorBoard logging
)

# Initialize the Trainer
trainer = Seq2SeqTrainer(
    model=model,
    tokenizer=processor.image_processor,   # Pass the image processor so it is saved with checkpoints
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=default_data_collator, # Default collator handles padding inputs/labels
)

# Start Training
print("\n--- Starting Training ---")
try:
    trainer.train()
    print("\n--- Training Completed ---")
except Exception as e:
    error_message = f"Error during training: {e}"
    print(error_message)
    # Consider saving a checkpoint on error if needed
    # trainer.save_model("./trocr-model-tridis-interrupted")

# Save the final model and processor
print("Saving final model and processor...")
# Ensure the final directory name is consistent
final_save_path = "./trocr-model-tridis-final"
trainer.save_model(final_save_path)
processor.save_pretrained(final_save_path) # Save the processor alongside the model
print(f"Model and processor saved to {final_save_path}")

# Clean up CUDA cache if GPU was used
if device.type == "cuda":
    torch.cuda.empty_cache()
    print("CUDA cache cleared.")
```
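
Once training has finished, the saved model can be used for inference on new images. A minimal sketch, reusing the output directory from the script above (`page_line.jpg` is a hypothetical input image):

```python
import torch
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the fine-tuned model and processor saved by the training script
processor = TrOCRProcessor.from_pretrained("./trocr-model-tridis-final")
model = VisionEncoderDecoderModel.from_pretrained("./trocr-model-tridis-final").to(device)
model.eval()

# "page_line.jpg" is a placeholder for a cropped image of handwritten text
image = Image.open("page_line.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device)

# Generate with the same settings used during evaluation
with torch.no_grad():
    generated_ids = model.generate(pixel_values, max_length=160, num_beams=3)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
```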