RKEFino1: A Regulation Knowledge-Enhanced Large Language Model
Abstract
RKEFino1, a regulation-aware LLM fine-tuned with financial domain knowledge, effectively handles compliance-critical tasks including QA and numerical NER.
Recent advances in large language models (LLMs) hold great promise for financial applications but introduce critical accuracy and compliance challenges in Digital Regulatory Reporting (DRR). To address these issues, we propose RKEFino1, a regulation knowledge-enhanced financial reasoning model built upon Fino1 and fine-tuned with domain knowledge from XBRL, CDM, and MOF. We formulate two QA tasks, knowledge-based and mathematical reasoning, and introduce a novel Numerical NER task covering financial entities in both sentences and tables. Experimental results demonstrate the effectiveness and generalization capacity of RKEFino1 on compliance-critical financial tasks. We have released our model on Hugging Face.
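To make the Numerical NER task concrete, the sketch below shows one plausible way such a task could be formatted: tokens from a financial sentence are assigned BIO-style labels, with numeric tokens tagged as numerical entities. The label name `B-NUM` and the rule-based tagger are illustrative assumptions only, not RKEFino1's actual label set or method.

```python
import re

# Matches tokens like "$4,200.5", "12%", or "1,000" — a toy pattern
# for illustration, not the paper's entity definition.
NUM_RE = re.compile(r"^\$?\d[\d,]*(\.\d+)?%?$")

def tag_numeric_tokens(tokens):
    """Assign a toy BIO label to each token: B-NUM for numeric
    tokens, O for everything else."""
    return [(tok, "B-NUM" if NUM_RE.match(tok) else "O") for tok in tokens]

sentence = "Revenue rose to $4,200.5 million , up 12% year-over-year .".split()
print(tag_numeric_tokens(sentence))
```

In a real DRR setting the label inventory would distinguish entity types (e.g. monetary amounts vs. percentages) and a trained model, not a regex, would produce the tags; this sketch only fixes the input/output shape of a token-level numerical NER task.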
Community
This is a regulation knowledge-enhanced large language model
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- FinLoRA: Benchmarking LoRA Methods for Fine-Tuning LLMs on Financial Datasets (2025)
- Harnessing Generative LLMs for Enhanced Financial Event Entity Extraction Performance (2025)
- SynLexLM: Scaling Legal LLMs with Synthetic Data and Curriculum Learning (2025)
- KFinEval-Pilot: A Comprehensive Benchmark Suite for Korean Financial Language Understanding (2025)
- Stabilizing Reasoning in Medical LLMs with Continued Pretraining and Reasoning Preference Optimization (2025)
- m-KAILIN: Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training (2025)
- Meta-aware Learning in text-to-SQL Large Language Model (2025)
Models citing this paper: 1
Datasets citing this paper: 0
Spaces citing this paper: 0