---
license: cc-by-4.0
task_categories:
- image-to-text
language:
- fr
tags:
- ocr
- htr
- handwriting-text-recognition
- table-recognition
pretty_name: >-
  POPP Datasets : Datasets for handwriting recognition from French population
  census
size_categories:
- 1K<n<10K
---

# POPP datasets

This repository contains 3 datasets created by the LITIS lab (University of Rouen Normandie) within the [POPP project (Project for the Oceration of the Paris Population Census)](https://popp.hypotheses.org/) for the task of handwriting text recognition.
These datasets were published in [Recognition and information extraction in historical handwritten tables: toward understanding early 20th century Paris census](https://hal.science/hal-03675614/) at DAS 2022 by T. Constum et al. and are also available on [Zenodo](https://zenodo.org/records/6581158).

The 3 datasets are called “Generic dataset”, “Belleville”, and “Chaussée d’Antin” and contain lines made from the extracted rows of census tables from 1926. Each table in the Paris census contains 30 rows, thus each page in these datasets corresponds to 30 lines.

The structure of each dataset is as follows:

- double-pages: images of the double pages
- pages:
  - images: images of the pages
  - xml: METS and ALTO files of each page containing the coordinates of the bounding boxes of each line
- lines: contains the labels in the file `labels.json` and the line images, split into the folders `train`, `valid`, and `test`.

The double pages were scanned at a resolution of 200dpi and saved as PNG images with 256 gray levels. The line and page images are shared in the TIFF format, also with 256 gray levels.
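For instance, the transcriptions for one split can be loaded from `labels.json` with a few lines of Python. This is only a sketch: the exact JSON layout is not documented in this card, and the code assumes `labels.json` maps each split name (`train`, `valid`, `test`) to a `{line image filename: transcription}` dict — inspect the file to confirm before relying on this.

```python
import json

def load_split_labels(labels_path, split="train"):
    """Return the {line image filename: transcription} dict for one split.

    Assumption: labels.json maps the split names ("train", "valid", "test")
    to per-line transcription dicts -- this layout is a guess, not documented
    in the dataset card.
    """
    with open(labels_path, encoding="utf-8") as f:
        labels = json.load(f)
    return labels[split]
```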

Since the lines are extracted from table rows, we defined 4 special characters to describe the structure of the text:

- ¤: indicates an empty cell
- /: indicates the separation into columns
- ?: indicates that the content of the cell following this symbol is written above the regular baseline
- !: indicates that the content of the cell following this symbol is written below the regular baseline

We provide a script `format_dataset.py` to choose which of these special characters to keep in the ground truth.
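As an illustration, the cell markup above can be decoded with a short Python helper. This is a sketch, not part of the dataset's tooling: `parse_row` is a hypothetical function that applies the four special characters exactly as defined above.

```python
def parse_row(transcription):
    """Split a POPP line transcription into its table cells.

    '/' separates columns, '¤' marks an empty cell, and a leading '?' or '!'
    flags text written above or below the regular baseline.
    """
    cells = []
    for raw in transcription.split("/"):
        raw = raw.strip()
        position = "baseline"
        if raw.startswith("?"):
            position, raw = "above", raw[1:].strip()
        elif raw.startswith("!"):
            position, raw = "below", raw[1:].strip()
        text = "" if raw == "¤" else raw
        cells.append({"text": text, "position": position})
    return cells

# An empty cell, a regular cell, and one written above the baseline:
row = parse_row("¤/Durand/?Marie")
```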

The splits for the Generic and Belleville datasets were made at the double-page level, so that each writer appears in only one subset among train, validation, and test. The following table summarizes the splits and the number of writers for each dataset:
| Dataset          | Train - # of lines | Validation - # of lines | Test - # of lines | # of writers |
|------------------|--------------------|-------------------------|-------------------|--------------|
| Generic          | 3840 (128 pages)   | 480 (16 pages)          | 480 (16 pages)    | 80           |
| Belleville       | 1140 (38 pages)    | 150 (5 pages)           | 180 (6 pages)     | 1            |
| Chaussée d’Antin | 625                | 78                      | 77                | 10           |

## Generic dataset (or POPP dataset)

This dataset is made of 4800 annotated lines extracted from 80 double pages of the 1926 Paris census.

- There is one double page for each of the 80 districts of Paris.
- There is one writer per double page, so the dataset contains 80 different writers.

## Belleville dataset

This dataset is a mono-writer dataset made of 1470 lines (49 pages) from the Belleville district census of 1926.

## Chaussée d’Antin dataset

This dataset is a multi-writer dataset made of 780 lines (26 pages) from the Chaussée d’Antin district census of 1926 and written by 10 different writers.

## Error reporting

Errors may remain in the ground truth, and suggestions for corrections are welcome. To submit one, please open a pull request on the GitHub repository and include the correction in both `labels.json` and the affected XML file.

## Citation Request

If you publish material based on this database, we request that you include a reference to the following paper:

T. Constum, N. Kempf, T. Paquet, P. Tranouez, C. Chatelain, S. Brée, and F. Merveille, Recognition and information extraction in historical handwritten tables: toward understanding early 20th century Paris census, Document Analysis Systems (DAS), pp. 143-157, La Rochelle, 2022.