Files changed (1)
  1. README.md +162 -112
README.md CHANGED

task_ids:
  - language-modeling
---

# Dataset Card for Europeana Newspapers

## Dataset Overview

This dataset contains historic newspapers from [Europeana](https://pro.europeana.eu/page/iiif#download), processed and converted to a format more suitable for machine learning and digital humanities research. In total, the collection contains approximately 32 billion tokens across multiple European languages, spanning from the 18th to the early 20th century.

Created by the BigLAM initiative, this unofficial version extracts the text content from the original ALTO XML and converts it to parquet, making the collection more accessible for ML/AI work and for large-scale digital humanities and history research.

### Key Features

- **Massive historical corpus**: One of the largest collections of historical text data available in a machine-learning-friendly format
- **Cross-lingual coverage**: Includes 12 European languages with varying degrees of representation
- **OCR quality metrics**: Contains confidence scores that allow filtering based on text quality
- **Rich metadata**: Preserves publication information, dates, and links to original materials
- **Structured format**: Organized by language and decade for efficient access to specific subsets
- **Illustration data**: Includes bounding box coordinates for visual elements on newspaper pages
- **IIIF integration**: Direct links to high-quality images of the original documents

## Dataset Details

### Dataset Description

- **Curated by:** BigLAM initiative
- **Language(s):** German (de), French (fr), Greek (el), Estonian (et), Finnish (fi), Croatian (hr), Yiddish (ji), Polish (pl), Russian (ru), Serbian (sr), Swedish (sv), Ukrainian (uk)
- **License:** [More Information Needed]

### Dataset Structure

Each record in the dataset contains the following fields:

| Field | Type | Description |
|-------|------|-------------|
| **text** | string | Extracted text content of the newspaper page |
| **mean_ocr** | float | Mean OCR confidence score (0-1 scale; higher values indicate greater confidence) |
| **std_ocr** | float | Standard deviation of the OCR confidence scores (indicates consistency of recognition quality) |
| **bounding_boxes** | list of lists | Coordinates of illustrations on the page, in the format [HEIGHT, WIDTH, VPOS, HPOS] |
| **title** | string | Newspaper title |
| **date** | string | Publication date in ISO format (YYYY-MM-DD) |
| **language** | list | Language codes of the content (supports multi-language detection) |
| **item_iiif_url** | string | IIIF URL for accessing the original digitized image |
| **multi_language** | boolean | Flag indicating whether the page contains multiple languages |
| **issue_uri** | string | Persistent URI for the newspaper issue in Europeana |
| **id** | string | Unique identifier combining the issue URI and page number |
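
The `bounding_boxes` field uses the [HEIGHT, WIDTH, VPOS, HPOS] order described above, mirroring ALTO's attribute naming. If your tooling expects (x, y, width, height) boxes, here is a minimal conversion sketch; the function name is illustrative, and it assumes VPOS/HPOS give the top-left corner in the page's coordinate system:

```python
from typing import List, Tuple

def to_xywh(bounding_boxes: List[List[float]]) -> List[Tuple[float, float, float, float]]:
    """Convert [HEIGHT, WIDTH, VPOS, HPOS] boxes to (x, y, width, height) tuples."""
    return [(hpos, vpos, width, height) for height, width, vpos, hpos in bounding_boxes]

# Example with a single illustration box
print(to_xywh([[200.0, 350.0, 120.0, 40.0]]))  # -> [(40.0, 120.0, 350.0, 200.0)]
```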

### Data Splits

The dataset is organized into files by:

- Language (e.g., 'fr' for French, 'de' for German)
- Decade (e.g., '1770' for newspapers from the 1770s)

This organization allows researchers to easily access specific subsets of the data relevant to their research questions.
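
If you want to see exactly which language/decade files exist before downloading anything, a small sketch that lists the repository contents with `huggingface_hub` (the helper function shown under Direct Use below builds on the same file layout):

```python
from huggingface_hub import list_repo_files

files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
parquet_files = [f for f in files if f.endswith(".parquet")]

# Print a few file names to see the language/decade organization
print(parquet_files[:10])
```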

## Uses

### Direct Use

To download the full dataset using the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("biglam/europeana_newspapers")
```
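
The full collection is large (roughly 32 billion tokens), so you may not want to download all of it just to explore. A minimal sketch that streams the dataset instead and inspects the fields described under Dataset Structure on a single record; it assumes the default configuration and a `train` split:

```python
from datasets import load_dataset

# Stream records instead of downloading the full collection up front
ds_stream = load_dataset("biglam/europeana_newspapers", split="train", streaming=True)

record = next(iter(ds_stream))
print(list(record))                           # field names, as listed in the table above
print(record["title"], record["date"])        # publication metadata
print(record["mean_ocr"], record["std_ocr"])  # OCR confidence summary for the page
print(record["text"][:200])                   # start of the page text
```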

You can also access a subset based on language or year ranges using the following function:

```python
from typing import List, Optional, Literal
from huggingface_hub import hf_hub_url, list_repo_files

LanguageOption = Literal[
    "et",  # Estonian
    "pl",  # Polish
    "sr",  # Serbian
    "ru",  # Russian
    "sv",  # Swedish
    "no_language_found",
    "ji",  # Yiddish
    "hr",  # Croatian
    "el",  # Greek
    "uk",  # Ukrainian
    "fr",  # French
    "fi",  # Finnish
    "de",  # German
    "multi_language",
]


def get_files_for_lang_and_years(
    languages: Optional[List[LanguageOption]] = None,
    min_year: Optional[int] = None,
    max_year: Optional[int] = None,
):
    """
    Get dataset file URLs filtered by language and/or year range.

    Args:
        languages: List of language codes to include
        min_year: Minimum year to include (inclusive)
        max_year: Maximum year to include (inclusive)

    Returns:
        List of file URLs that can be passed to load_dataset
    """
    # List all parquet files in the dataset repository
    files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
    parquet_files = [f for f in files if f.endswith(".parquet")]

    # Filter by language if specified
    if languages:
        parquet_files = [
            f for f in parquet_files if any(lang in f for lang in languages)
        ]

    # Filter by year range if specified
    if min_year is not None or max_year is not None:
        filtered_files = []
        for f in parquet_files:
            parts = f.split("-")
            if len(parts) > 1:
                year_part = parts[1].split(".")[0]
                if year_part.isdigit():
                    year = int(year_part)
                    if (min_year is None or min_year <= year) and (max_year is None or year <= max_year):
                        filtered_files.append(f)
        parquet_files = filtered_files

    # Convert repository paths to full download URLs
    return [
        hf_hub_url("biglam/europeana_newspapers", f, repo_type="dataset")
        for f in parquet_files
    ]
```

You can use this function to get the URLs of the files you want to download from the Hub:

```python
# Example 1: Load French newspaper data
french_files = get_files_for_lang_and_years(["fr"])
ds_french = load_dataset("parquet", data_files=french_files, num_proc=4)

# Example 2: Load Ukrainian and French newspapers published between 1900 and 1950
historical_files = get_files_for_lang_and_years(
    languages=["uk", "fr"],
    min_year=1900,
    max_year=1950,
)
ds_historical = load_dataset("parquet", data_files=historical_files, num_proc=4)

# Example 3: Load all German newspapers from the 19th century
german_19th_century = get_files_for_lang_and_years(
    languages=["de"],
    min_year=1800,
    max_year=1899,
)
ds_german_historical = load_dataset("parquet", data_files=german_19th_century, num_proc=4)
```

### Use Cases

This dataset is particularly valuable for:

#### Machine Learning Applications

- Training large language models on historical texts
- Fine-tuning models for historical language understanding
- Developing OCR post-correction models using the confidence scores
- Training layout analysis models using the bounding box information

#### Digital Humanities Research

- Cross-lingual analysis of historical newspapers
- Studying information spread across European regions
- Tracking cultural and political developments over time
- Analyzing language evolution and shifts in terminology
- Topic modeling of historical discourse
- Named entity recognition in historical contexts

#### Historical Research

- Comparative analysis of news reporting across different countries
- Studying historical events from multiple contemporary perspectives
- Tracking the evolution of public discourse on specific topics
- Analyzing changes in journalistic style and content over centuries

#### OCR Development

- Using the `mean_ocr` and `std_ocr` fields to assess OCR quality
- Filtering content based on quality thresholds for specific applications
- Benchmarking OCR improvement techniques against historical materials

#### Institutional Uses

- Enabling libraries and archives to provide computational access to their collections
- Supporting searchable interfaces for digital historical collections
- Creating teaching resources for historical linguistics and discourse analysis

## Dataset Creation

### Source Data

The dataset is derived from the Europeana Newspapers collection, which contains digitized historical newspapers from various European countries. The original data is in ALTO XML format, which includes OCR text along with layout and metadata information.

#### Data Collection and Processing

The BigLAM initiative developed a processing pipeline to convert the Europeana newspaper collections from their original ALTO XML format into a structured dataset suitable for machine learning and digital humanities research:

1. **ALTO XML Parsing**: Custom parsers handle the various ALTO schema versions (1-5 and the BnF dialect) to ensure compatibility across the entire collection.

2. **Text Extraction**: The pipeline extracts full-text content while preserving reading order and handling special cases such as hyphenated words.

3. **OCR Quality Assessment**: For each page, the pipeline calculates:
   - `mean_ocr`: the average confidence score of the OCR engine
   - `std_ocr`: the standard deviation of the confidence scores, indicating consistency

4. **Visual Element Extraction**: The pipeline captures bounding box coordinates for illustrations and other visual elements, stored in the `bounding_boxes` field.

5. **Metadata Integration**: Each page is enriched with metadata from separate XML files:
   - Publication title and date
   - Language identification (including multi-language detection)
   - IIIF URLs for accessing the original digitized images
   - Persistent identifiers linking back to the source material

6. **Parallel Processing**: The pipeline uses multiprocessing to handle the massive collection (approximately 32 billion tokens) efficiently.

7. **Dataset Creation**: The processed data is converted to Hugging Face's `Dataset` format and saved as parquet files, organized by language and decade for easier access.

This approach preserves the structure and metadata of the original collection while making it significantly more accessible for computational analysis and machine learning applications.
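
As a rough illustration of what steps 1-3 involve (a simplified sketch, not the BigLAM pipeline itself), the snippet below assumes an ALTO file whose `String` elements carry `CONTENT` and `WC` (word confidence) attributes, which is how ALTO typically encodes OCR tokens and their confidences; exact namespaces and attribute coverage vary across ALTO versions.

```python
import statistics
import xml.etree.ElementTree as ET


def extract_text_and_confidence(alto_path: str) -> dict:
    """Pull page text plus mean/std OCR confidence from a single ALTO XML file."""
    tree = ET.parse(alto_path)
    words, confidences = [], []
    for elem in tree.iter():
        # Match String elements regardless of the ALTO namespace/version
        if elem.tag.endswith("String"):
            content = elem.get("CONTENT")
            if content:
                words.append(content)
            wc = elem.get("WC")
            if wc is not None:
                confidences.append(float(wc))
    return {
        "text": " ".join(words),
        "mean_ocr": statistics.mean(confidences) if confidences else None,
        "std_ocr": statistics.stdev(confidences) if len(confidences) > 1 else None,
    }
```

The production pipeline additionally handles reading order, hyphenation, illustrations, and the issue-level metadata described in steps 4-5.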
 
 
 
 
 
 

## Bias, Risks, and Limitations

- **OCR Quality**: The dataset is based on OCR'd historical documents, which may contain errors, especially for older newspapers or those printed in non-standard fonts.
- **Historical Bias**: Historical newspapers reflect the biases, prejudices, and perspectives of their time, which may include content that would be considered offensive by modern standards.
- **Temporal and Geographic Coverage**: The coverage across languages, time periods, and geographic regions may be uneven.
- **Data Completeness**: Some newspaper issues or pages may be missing or incomplete in the original Europeana collection.

### Recommendations

- Users should take the OCR confidence scores (`mean_ocr` and `std_ocr`) into account when working with this data, possibly filtering out low-quality content depending on their use case (see the sketch below).
- Researchers studying historical social trends should be aware of the potential biases in the source material and interpret findings accordingly.
- For applications requiring high text accuracy, additional validation or correction may be necessary.
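
As a concrete example of such filtering, the sketch below keeps only pages whose mean OCR confidence exceeds a threshold; the 0.8 cut-off is arbitrary and should be tuned to your use case, and `get_files_for_lang_and_years` is the helper defined under Direct Use above.

```python
from datasets import load_dataset

# Load a subset first (see Direct Use), then drop low-confidence pages
ds = load_dataset("parquet", data_files=get_files_for_lang_and_years(["fr"]), num_proc=4)

high_quality = ds.filter(
    lambda page: page["mean_ocr"] is not None and page["mean_ocr"] > 0.8
)
```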
 
 
 
 
 
 
 
 

## More Information

For more information about the original data source, visit [Europeana Newspapers](https://pro.europeana.eu/page/iiif#download).

## Dataset Card Contact

Daniel van Strien (daniel [at] hf [dot] co)

For questions about this processed version of the Europeana Newspapers dataset, please contact the BigLAM initiative representative above.