Source data: https://www.ekhtebar.ir/

Subsets and entry counts:

| Subset | Entries |
| --- | --- |
| ara_vahdat_ravie | 215 |
| ghavanin_asli | 7 |
| nazariyehaye_mashverati | 264 |
| ravie_ghazaei | 270 |
| ray_heyat_omoomi_divan_edalat | 1203 |
# Dataset: QomSSLab/legal_laws_lite_chunk_v_1 (Processed by Script v1)
## Description
This dataset comprises chunked segments of Persian legal documents, processed from Markdown source files. The pipeline aims to generate structured and manageable text units suitable for various Natural Language Processing applications, particularly in the legal domain.
Source documents were processed from the directory `data\ekhtebar`.
**Processing Pipeline Overview:**
- Read Markdown files (UTF-8 encoded).
- Extract YAML front matter if present (stored in document metadata, including `tags`).
- Chunking: the `marker_count_overlap` strategy splits by major marker count, targeting 5 markers per chunk with a 1-marker overlap (see the sketch after this list). Resulting chunks whose `text` field (hierarchy + markdown) exceeds 10,000 characters are further split (hard character cut if necessary).
- Link handling: Markdown links are preserved as-is. All URLs are kept if part of preserved links/text.
- Text normalization (Persian characters, Hazm if available): enabled.
- Plain-text generation from Markdown: enabled.
- HTML tag removal from plain text: enabled.
- Footnote definition removal (from lines used for chunk content): enabled.
- Minimum chunk character length (normalized Markdown): 30.
- Line overlap between chunks (for some strategies, applied to original document lines): 0 lines.
- Regex-based metadata extraction from content (dates, authority, type, etc.): enabled.
- Document-level metadata JSONL file generation: enabled (`legal_laws_lite_chunk_v1_documents.jsonl`).
- Cross-reference extraction within chunks: disabled.
- Document `tags` from YAML are propagated to each chunk (`document_tags` field).
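To make the `marker_count_overlap` idea concrete, here is a minimal sketch of such a strategy. The marker pattern, function name, and overlap mechanics are illustrative assumptions, not the actual script:

```python
import re

# Hypothetical "major marker" pattern; the real script's pattern is unknown.
# ماده ("Article") and تبصره ("Note") are typical markers in Persian statutes.
MARKER_RE = re.compile(r"^(ماده|تبصره)\b")

def chunk_by_markers(lines, markers_per_chunk=5, overlap=1):
    # Group lines into segments, each beginning at a major marker.
    segments, current = [], []
    for line in lines:
        if MARKER_RE.match(line) and current:
            segments.append(current)
            current = []
        current.append(line)
    if current:
        segments.append(current)
    if not segments:
        return []
    # Slide a window of `markers_per_chunk` segments, sharing `overlap`
    # segments between consecutive chunks.
    step = markers_per_chunk - overlap
    chunks = []
    for i in range(0, len(segments), step):
        window = segments[i : i + markers_per_chunk]
        chunks.append("\n".join(line for seg in window for line in seg))
        if i + markers_per_chunk >= len(segments):
            break
    return chunks
```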
Token counts (`token_count` field) are estimated using the `HooshvareLab/bert-fa-base-uncased` tokenizer.
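A count consistent with this description can be reproduced as follows (whether the script includes special tokens is an assumption; this sketch excludes them):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased")

def estimate_token_count(text: str) -> int:
    # Count content tokens only; [CLS]/[SEP] are excluded here by assumption.
    return len(tokenizer.encode(text, add_special_tokens=False))
```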
## Dataset Structure

### Splits

Splitting was disabled in the configuration (`apply_splitting` was false), so all data resides in a single `train` split.
### Chunk Features

Each record (chunk) in the dataset includes the following features:

- `chunk_id` (`string`)
- `document_id` (`string`)
- `prev_chunk_id` (`string`)
- `next_chunk_id` (`string`)
- `markdown_content` (`string`)
- `text` (`string`)
- `start_line` (`int32`)
- `end_line` (`int32`)
- `section_hierarchy` (`Sequence<string>`)
- `is_continuation` (`bool`)
- `char_length_markdown` (`int32`)
- `total_estimated_char_length` (`int32`)
- `token_count` (`int32`)
- `document_tags` (`Sequence<string>`)
- `plain_text` (`string`)
- `char_length_plain_text` (`int32`)
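A minimal sketch of loading the chunks and inspecting these fields (the repository id is taken from this card; the printed values are illustrative):

```python
from datasets import load_dataset

ds = load_dataset("QomSSLab/legal_laws_lite_chunk_v_1", split="train")

chunk = ds[0]
print(chunk["chunk_id"], "from document", chunk["document_id"])
print(" > ".join(chunk["section_hierarchy"]))  # section path above this chunk
print(chunk["token_count"], "tokens,", chunk["char_length_markdown"], "chars")
```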
## Document-Level Metadata (`legal_laws_lite_chunk_v1_documents.jsonl`)

A companion JSONL file, `legal_laws_lite_chunk_v1_documents.jsonl`, provides metadata for each source document processed. Its fields typically include:

- `document_id`: A unique identifier for the document.
- `original_filepath`: The path to the source Markdown file.
- `document_title`: The title of the document, extracted from the YAML front matter or the first H1 heading.
- `tags`: A list of tags extracted from the document's YAML front matter.
- Date fields (`approval_date`, `notification_date`, `effective_date`): Extracted if found by regex.
- `document_number`: Extracted if found by regex.
- `authority`: The legislative or issuing authority, if found.
- `document_type`: The type of the document (e.g., قانون "law", آیین‌نامه "regulation"), if found.
- `related_entities`: A list of related entities or organizations mentioned.
- `yaml_metadata`: The raw YAML front matter from the source file, as a dictionary.
- `total_lines`: The total number of lines in the original source file.
- `content_start_line`: The line number where the main content (after any front matter) begins.
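A minimal sketch of joining this document-level metadata against chunk records (it assumes the JSONL file has been downloaded alongside the dataset under the name above):

```python
import json

# Index document metadata by id for joining against chunk records.
with open("legal_laws_lite_chunk_v1_documents.jsonl", encoding="utf-8") as f:
    docs_by_id = {doc["document_id"]: doc for doc in map(json.loads, f)}

def document_for(chunk: dict) -> dict:
    # Look up the parent document's metadata for a given chunk.
    return docs_by_id[chunk["document_id"]]
```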
## Intended Use
This dataset is suitable for tasks such as:
- Legal text analysis and understanding.
- Information retrieval and search systems for legal documents.
- Fine-tuning language models for Retrieval Augmented Generation (RAG) in the legal sphere.
- Development of legal domain-specific language models.
- Filtering or grouping chunks based on their parent document's `document_tags` (see the sketch below).
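For instance, a tag-based filter might look like this (assuming `ds` is the `train` split loaded above; the tag value is a hypothetical example, not a confirmed value in the data):

```python
# Keep only chunks whose parent document carries a given tag.
tagged = ds.filter(lambda ex: "قانون" in (ex["document_tags"] or []))
print(len(tagged), "chunks from documents carrying that tag")
```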
Users should be mindful of the `marker_count_overlap` chunking strategy employed and its implications for text segmentation (e.g., coherence, length variation). The quality of the extracted metadata is contingent on the consistency of the source documents' formatting and the regex patterns defined.
## Known Limitations & Considerations
- The efficacy of chunking and metadata extraction relies heavily on the structural consistency of the input Markdown files and the precision of the regex patterns.
- For the `marker_count_overlap` strategy, if `max_chars_for_marker_count_overlap_chunk` is used, chunks whose `text` field exceeds this limit are split. This split attempts to preserve newlines first, but will perform hard character cuts on overly long segments if necessary, which might break words or Markdown syntax mid-stream (see the sketch after this list).
- Cross-references, if extracted, are identified via regex and may not be exhaustive or entirely accurate.
- Licensing: The license specified (e.g., `mit`) pertains to this dataset's structure and the processing script. The license of the original legal texts must be independently verified and adhered to.
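A minimal sketch of that fallback splitting behavior, as described above (the actual script's implementation is an assumption):

```python
def split_long_text(text: str, max_chars: int = 10_000) -> list[str]:
    # Prefer newline boundaries; fall back to hard character cuts on
    # single lines that exceed the limit (which may break words).
    pieces, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            pieces.append(current)
            current = ""
        while len(line) > max_chars:
            pieces.append(line[:max_chars])  # hard cut
            line = line[max_chars:]
        current += line
    if current:
        pieces.append(current)
    return pieces
```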
## Citation
[TODO: Provide citation details if this dataset or its generation method is published or documented elsewhere.]
## Licensing Information
The script used for generating this dataset is typically licensed under MIT. However, the underlying legal documents possess their own original licensing terms, which must be respected. Users are responsible for ensuring compliance with the source documents' licenses.