The *Modern Danish Handwriting* dataset currently consists of handwritten samples of text from the ePAROLE dataset. The samples were created by volunteers at the Danish National Archives and guests at the festival *Historiske Dage* in 2025.

The ePAROLE dataset is a 2015 version of the PAROLE corpus, which consists primarily of publicly available newspaper and magazine articles from the early 1990s.
|
43 |
+
|
44 |
+
The *Modern Danish Handwriting* dataset consists of 223 transcribed images, containing a total of 977 lines, 5632 words, and 33026 characters (including spaces).
|
45 |
|
46 |
- **Curated by:** [Joen Rommedahl](mailto:[email protected])
- **Language(s):** Danish
- **License:** [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)

## Uses

<!-- This section describes suitable use cases for the dataset. -->

The dataset has been created as training data for Handwritten Text Recognition (HTR) of modern Danish handwriting, but it can also be used as training data for baseline detection and polygon extraction models, as these annotations have been carefully created and quality-checked as well.

### Out-of-Scope Use

The dataset is not particularly suited as training data for text region detection models, as all text in all samples is contained within a single text region and is thus not representative of how documents look 'in the wild'.

The line segmentations of the *Modern Danish Handwriting* dataset are not subdivided into individual words but operate on text lines as the lowest level of granularity, so the dataset cannot be used for training word detection and isolation models.

## Dataset Structure

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

The source data consists of written samples of the ePAROLE dataset. The original text is primarily news articles from the 1990s, and the handwritten samples were created in 2025.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

The original physical samples were set up by the Data Science department at the Danish National Archives. The samples were selected randomly from the ePAROLE dataset, curated by The Society for Danish Language and Literature, Ole Norling-Christensen, Britt-Katrin Keson, Jørg Asmussen, and others. See the ePAROLE [documentation](https://korpus.dsl.dk/resources/details/parole.html) for further information on how it was created.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

The digitized and transcribed sentences were originally written by the journalists of various newspaper outlets and magazines in Denmark and are publicly available.

Text samples of varying length were printed out and handed to volunteers at the Danish National Archives and guests at the festival *Historiske Dage* in 2025, with the instruction to write the text by hand underneath the printed sample while keeping line breaks and formatting exactly as in the printed text.

After re-collecting the samples, the papers were scanned, and the original ePAROLE sample id was used to infer what text the volunteer had written. It has not been verified that the volunteers actually wrote the text and preserved line breaks as instructed.

Layout and line segmentation was performed on each page with the [Loghi/Laypa](https://github.com/knaw-huc/loghi) toolkit, developed by Rutger van Koert and Stefan Klut at KNAW Humanities Cluster, and afterwards manually corrected by the Data Science department at the Danish National Archives using the [Transkribus](https://www.transkribus.org/) interface.
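The id-based matching step described above can be sketched as a simple lookup. Note that the filename scheme (`scan_<sampleid>.jpg`) and the in-memory text table below are hypothetical illustrations, not the Archives' actual pipeline or naming convention.

```python
import re

def match_scans_to_text(scan_names, sample_texts):
    """Pair scan filenames like 'scan_<sampleid>.jpg' with the source
    text for that ePAROLE sample id; unmatched scans are skipped."""
    matched = {}
    for name in scan_names:
        m = re.search(r"scan_(\w+)\.jpg$", name)
        if m and m.group(1) in sample_texts:
            matched[name] = sample_texts[m.group(1)]
    return matched

# Hypothetical sample-id-to-text table:
texts = {"ep0042": "En prøvesætning fra ePAROLE."}
print(match_scans_to_text(["scan_ep0042.jpg", "scan_unknown.jpg"], texts))
# {'scan_ep0042.jpg': 'En prøvesætning fra ePAROLE.'}
```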

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

The ePAROLE dataset is publicly available through The Society for Danish Language and Literature. The Data Science department at the Danish National Archives does not take responsibility for any data therein that might be considered personal, sensitive, or private. No efforts have been made to anonymize the data.

## Bias, Risks, and Limitations

Due to its limited size, we do not expect any model trained only on this dataset to achieve great HTR results. As the text corpus is primarily based on newspaper and magazine resources, the language represented by the dataset is defined thereby; there is probably little language from specialized fields or from informal conversation.

## More Information

Thank you to the guests at *Historiske Dage* who contributed to the dataset!

Also thank you to all the people involved with creating and maintaining the ePAROLE dataset at The Society for Danish Language and Literature, one of the few publicly available, license-free resources for the Danish language!

## Dataset Card Contact