# 🧠 DBpediaOntoTrain: A Quality-Segmented Ontology Dataset for LLM Pretraining
## 📝 Overview
DBpediaOntoTrain is a dataset of 1,766 OWL ontologies in Turtle format, extracted from DBpedia Archivo and prepared for continual pretraining of Large Language Models (LLMs) in ontology generation and completion tasks.
Each ontology is analyzed using a set of semantic quality metrics, tokenized using the LLaMA 3.2 tokenizer, and sorted by Quality Score (QS). The dataset includes cumulative token counts and percentages, allowing precise and reproducible slicing for quality-aware training.
## 📦 Dataset Contents

`data.json`: a JSON file where each entry contains:

- `File Name`: name of the ontology file (`.ttl`)
- `plain_text`: raw ontology content in Turtle syntax
- `PD`: Property Density per class
- `NTR`: Non-Taxonomic Relations per class
- `SC`: Subclasses per class
- `PD_norm`, `NTR_norm`, `SC_norm`: min-max normalized versions of the above metrics
- `QS`: Quality Score (`PD_norm + NTR_norm + SC_norm`)
- `Token Count`: number of tokens computed using the LLaMA 3.2 tokenizer
- `Token Count Accumulation`: cumulative token count (sorted by descending QS)
- `Percentage of Token Count Accumulation`: running percentage of total tokens across all ontologies

The dataset is sorted in descending order by Quality Score (`QS`), enabling easy extraction of quality-based subsets (e.g., Q1; Q1,2; etc.).
## ⚠️ Loading the Dataset

The standard `datasets.load_dataset()` function from the Hugging Face `datasets` library does not work with this dataset, likely due to format or hosting issues. However, you can easily load it using Python's built-in `json` module:
```python
import json

with open('path/to/data.json', 'r', encoding='utf-8') as f:
    data = json.load(f)
```
This will give you a list of dictionary entries, each representing one ontology and its associated quality metrics, ready for filtering or slicing based on your training needs.
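Because entries are pre-sorted by descending QS, a quality-based subset is a simple prefix slice over the cumulative percentage field. A minimal sketch, using the field names from the schema above but synthetic entries (`a.ttl`, `b.ttl`, `c.ttl` are placeholders, not files from the dataset):

```python
# Synthetic entries following the dataset schema, already sorted by descending QS.
sample = [
    {"File Name": "a.ttl", "QS": 2.7, "Percentage of Token Count Accumulation": 10.0},
    {"File Name": "b.ttl", "QS": 2.1, "Percentage of Token Count Accumulation": 40.0},
    {"File Name": "c.ttl", "QS": 0.9, "Percentage of Token Count Accumulation": 100.0},
]

def top_token_share(entries, pct):
    """Keep the highest-QS prefix covering at most `pct`% of all tokens."""
    return [e for e in entries if e["Percentage of Token Count Accumulation"] <= pct]

# The top 50% of tokens (by QS order) keeps the first two ontologies.
subset = top_token_share(sample, 50.0)
```

The same one-liner applied to the real `data` list extracts, e.g., the top 25% token subset without any re-sorting.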
## 📊 Quality Metrics

Each ontology is scored with:

| Metric | Description |
|---|---|
| PD | Property Density – properties per class |
| NTR | Non-Taxonomic Relations – domain-specific relations per class |
| SC | Subclass Count – hierarchical depth |
| QS | Sum of normalized PD, NTR, SC |
These metrics reflect semantic modeling richness rather than raw size.
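The QS computation can be sketched in a few lines: min-max normalize each metric across the corpus, then sum the normalized values per ontology. The numbers below are synthetic illustrations, not values from the dataset:

```python
def min_max(values):
    """Min-max normalize a list of metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Per-ontology raw metrics (synthetic example with three ontologies).
pd  = [1.0, 3.0, 5.0]   # Property Density per class
ntr = [0.5, 2.0, 1.0]   # Non-Taxonomic Relations per class
sc  = [2.0, 2.0, 4.0]   # Subclasses per class

# QS = PD_norm + NTR_norm + SC_norm, so each score lies in [0, 3].
qs = [p + n + s for p, n, s in zip(min_max(pd), min_max(ntr), min_max(sc))]
```

Because each normalized term is bounded by [0, 1], QS ranges from 0 to 3, making scores comparable across ontologies of very different sizes.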
## 🧪 Intended Use
- Continual pretraining of LLMs on semantic data
- Research in ontology learning, alignment, and enrichment
- Studying the effect of data quality on model generalization and reasoning
This dataset supports the research study:

> *Enhancing LLM Ontology Generation: The Role of Quality Semantic Data*
> Miquel Canal-Esteve, Yoan Gutiérrez, José Abreu-Salas (submitted to ICT Express, 2025)
## 🛠️ Tokenization
- Tokenized using LLaMA 3.2-1B tokenizer
- Total tokens: 1.25 billion
- Cumulative token fields allow extracting top-N% token subsets based on QS
- Token overlap and LLM input chunking are described in the accompanying paper
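The cumulative token fields can be recomputed from `QS` and `Token Count` alone. A minimal sketch with synthetic entries (field names match the dataset schema):

```python
# Synthetic entries; only QS and Token Count are needed as inputs.
entries = [
    {"QS": 2.3, "Token Count": 400},
    {"QS": 1.1, "Token Count": 100},
    {"QS": 2.9, "Token Count": 500},
]

# Sort by descending QS, then accumulate token counts in that order.
entries.sort(key=lambda e: e["QS"], reverse=True)
total = sum(e["Token Count"] for e in entries)

running = 0
for e in entries:
    running += e["Token Count"]
    e["Token Count Accumulation"] = running
    e["Percentage of Token Count Accumulation"] = 100.0 * running / total
```

After this pass, slicing at any percentage threshold reproduces the quality-ordered token subsets described above.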
## 💡 Reproducibility
The repository includes:

- Metric calculation scripts using `rdflib`
- Tokenization scripts with Hugging Face libraries
- Pretraining configs and logs

Repository: 🔗 https://github.com/miquelcanalesteve/LLM4Onto/
## 📚 Citation
```bibtex
@misc{canal2025dbpediaontotrain,
  author = {Miquel Canal-Esteve and Yoan Gutiérrez and José Abreu-Salas},
  title  = {DBpediaOntoTrain: A Quality-Segmented Ontology Dataset for LLM Pretraining},
  year   = {2025},
  url    = {https://github.com/miquelcanalesteve/LLM4Onto/}
}
```