The ArGiMi Ardian datasets: Text only
The ArGiMi project is committed to open-source principles and data sharing. Thanks to our generous partners, we are releasing several valuable datasets to the public.
Dataset description
This dataset comprises 11,000 financial annual reports, written in English and meticulously extracted from their original PDF format, providing a valuable resource for researchers and developers in financial analysis and natural language processing (NLP). The reports were published from the late 1990s to 2023.
This dataset provides extracted text only. A heavier, more complete dataset that also includes an image of each document page is available at artefactory/Argimi-Ardian-Finance-10k-text-imaage.
You can load the dataset with:

```python
from datasets import load_dataset

ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text", split="train")

# Or stream the dataset to save memory:
ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text", split="train", streaming=True)
```
Dataset composition:
Each PDF was divided into individual pages to facilitate granular analysis. For each page, the following data points were extracted:

- Raw Text: The complete textual content of the page, capturing all textual information present.
- Cells: Each cell within tables was identified and represented as a `Cell` object in the `docling` framework. Each `Cell` object encapsulates:
  - `id`: A unique identifier assigned to each cell, ensuring unambiguous referencing.
  - `text`: The textual content contained within the cell.
  - `bbox`: The precise bounding box coordinates of the cell, defining its location and dimensions on the page.
  - When OCR is employed, cells are instead represented as `OcrCell` objects, which include an additional `confidence` attribute quantifying the confidence of the OCR process in accurately recognizing the cell's textual content.
- Segments: Beyond individual cells, `docling` segments each page into distinct content units, each represented as a `Segment` object. These segments provide a structured representation of the document's layout and content, encompassing elements such as tables, headers, paragraphs, and other structural components. Each `Segment` object contains:
  - `text`: The textual content of the segment.
  - `bbox`: The bounding box coordinates, specifying the segment's position and size on the page.
  - `label`: A categorical label indicating the type of content the segment represents (e.g., "table", "header", "paragraph").
To guarantee unique identification, each document is assigned a distinct identifier derived from the hash of its content.
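As an illustration, the per-page structure described above can be traversed with ordinary dictionary access. The sketch below uses a hypothetical record whose field names are assumptions based on this description, not a guaranteed schema (inspect `ds.features` on the loaded dataset to confirm); the SHA-256 choice for the content hash is likewise an assumption, since the card does not name the hash function.

```python
import hashlib

# Hypothetical page record mirroring the structure described above;
# real field names may differ -- check ds.features on the loaded dataset.
page = {
    "text": "Annual Report 2023\nRevenue grew 12% year over year.",
    "cells": [
        {"id": 0, "text": "Revenue", "bbox": [50.0, 120.0, 150.0, 140.0]},
        # OcrCell-style entry: carries an extra OCR confidence score.
        {"id": 1, "text": "12%", "bbox": [160.0, 120.0, 210.0, 140.0], "confidence": 0.98},
    ],
    "segments": [
        {"text": "Annual Report 2023", "bbox": [50.0, 40.0, 400.0, 70.0], "label": "header"},
        {"text": "Revenue grew 12% year over year.", "bbox": [50.0, 90.0, 400.0, 150.0], "label": "paragraph"},
    ],
}

# Filter segments by their layout label.
headers = [s["text"] for s in page["segments"] if s["label"] == "header"]

# OCR-derived cells are distinguishable by the presence of "confidence".
ocr_cells = [c for c in page["cells"] if "confidence" in c]

# A stable document identifier from the page content (hash choice is an assumption).
doc_id = hashlib.sha256(page["text"].encode("utf-8")).hexdigest()
```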
Parsing description:
The dataset's creation involved a systematic process using the `docling` library (Documentation).

- PDFs were processed with the `DocumentConverter` class, employing the `PyPdfiumDocumentBackend` to handle the PDF format.
- To ensure high-quality extraction, the following `PdfPipelineOptions` were configured:

```python
pipeline_options = PdfPipelineOptions(ocr_options=EasyOcrOptions(use_gpu=True))
pipeline_options.images_scale = 2.0  # Scale image resolution by a factor of 2
pipeline_options.generate_page_images = True  # Generate page images
pipeline_options.do_ocr = True  # Perform OCR
pipeline_options.do_table_structure = True  # Extract table structure
pipeline_options.table_structure_options.do_cell_matching = True  # Perform cell matching in tables
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE  # Use accurate mode for table structure extraction
```
- These options collectively enable:
  - GPU-accelerated Optical Character Recognition (OCR) via `EasyOcr`.
  - Upscaling of image resolution by a factor of 2, enhancing the clarity of visual elements.
  - Generation of page images, providing a visual representation of each page within the document.
  - Comprehensive table structure extraction, including cell matching, to accurately capture tabular data within the reports.
  - The "accurate" mode for table structure extraction, prioritizing precision in identifying and delineating tables.
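Putting the pieces together, the conversion step likely resembled the following sketch. The import paths and the `PdfFormatOption` wiring follow current `docling` releases and may differ from the version used for this dataset; `report.pdf` is a placeholder path. This is a configuration sketch, not the exact pipeline that produced the data.

```python
from docling.backend.pypdfium2_backend import PyPdfiumDocumentBackend
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
    EasyOcrOptions,
    PdfPipelineOptions,
    TableFormerMode,
)
from docling.document_converter import DocumentConverter, PdfFormatOption

# Options as listed above.
pipeline_options = PdfPipelineOptions(ocr_options=EasyOcrOptions(use_gpu=True))
pipeline_options.images_scale = 2.0
pipeline_options.generate_page_images = True
pipeline_options.do_ocr = True
pipeline_options.do_table_structure = True
pipeline_options.table_structure_options.do_cell_matching = True
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE

# Wire the options and PDF backend into a converter (API shape is an assumption).
converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(
            pipeline_options=pipeline_options,
            backend=PyPdfiumDocumentBackend,
        )
    }
)
result = converter.convert("report.pdf")  # placeholder input path
```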
Disclaimer:
This dataset, made available for experimental purposes as part of the ArGiMi research project, is provided "as is" for informational purposes only. The original publicly available data was provided by Ardian. Artefact has processed this dataset and now publicly releases it through Ardian, with Ardian's agreement. None of ArGiMi, Artefact, or Ardian make any representations or warranties of any kind (express or implied) regarding the completeness, accuracy, reliability, suitability, or availability of the dataset or its contents. Any reliance you place on such information is strictly at your own risk. In no event shall ArGiMi, Artefact, or Ardian be liable for any loss or damage, including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this dataset. This disclaimer includes, but is not limited to, claims relating to intellectual property infringement, negligence, breach of contract, and defamation.
Acknowledgement:
The ArGiMi consortium gratefully acknowledges Ardian for their invaluable contribution in gathering the documents that comprise this dataset. Their effort and collaboration were essential in enabling the creation and release of this dataset for public use. The ArGiMi project is a collaborative project with Giskard, Mistral, INA and BnF, and is sponsored by the France 2030 program of the French Government.
Citation:
If you find our datasets useful for your research, consider citing us in your works:
```bibtex
@misc{argimi2024Datasets,
  title={The ArGiMi datasets},
  author={Hicham Randrianarivo and Charles Moslonka and Arthur Garnier and Emmanuel Malherbe},
  year={2024},
}
```
