# Financial Language Models for Reducing Hallucinations
## Introduction
Welcome to our project repository, where we address fact-conflict hallucinations in Large Language Models (LLMs), with a focus on the financial domain. Our approach combines a Multi-Agent System (MAS) debate framework with Retrieval-Augmented Generation (RAG) to improve the factuality of LLM outputs.
## Project Overview
- **Hallucination Mitigation**: Tackling financial fact-conflict hallucinations with our framework.
- **MAS Debates**: A debate framework within the MAS in which agents critique one another to improve reasoning and accuracy.
- **RAG**: Leveraging up-to-date external knowledge to inform and refine the language model's responses.
- **Financial Expertise**: Fine-tuning our models on rich financial datasets for domain-specific expertise.
## Dataset
We utilize diverse financial datasets, including FiQA and WealthAlpaca, to equip our models with robust financial knowledge.
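A minimal sketch of how such corpora might be loaded, assuming the Hugging Face `datasets` library; the dataset identifier and file path below are illustrative placeholders, not the project's pinned sources:

```python
# Minimal sketch (assumed setup): loading financial corpora with Hugging Face `datasets`.
# The dataset identifier and file path are illustrative placeholders.
from datasets import load_dataset

# WealthAlpaca-style instruction data (hypothetical hub id).
wealth_alpaca = load_dataset("gbharti/finance-alpaca", split="train")

# FiQA question-answer pairs exported locally as JSON lines (hypothetical path).
fiqa = load_dataset("json", data_files="data/fiqa_qa.jsonl", split="train")

print(wealth_alpaca[0])  # e.g. {'instruction': ..., 'input': ..., 'output': ...}
```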
## Methodology
1. **Instruction-Tuning**: Fine-tuning our models on financial instruction data to strengthen their financial acumen (see the fine-tuning sketch after this list).
2. **MAS Integration**: Orchestrating debates among agents that critique and refine one another's responses (see the debate sketch below).
3. **RAG Workflow**: Supplementing model responses with external knowledge through a custom retrieval engine (see the retrieval sketch below).
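A minimal instruction-tuning sketch, assuming a LLaMA-style base model, the Hugging Face `transformers` Trainer, and Alpaca-format records with `instruction`, `input`, and `output` fields; the model name, dataset id, and hyperparameters are illustrative, not the project's actual configuration:

```python
# Minimal instruction-tuning sketch. Model name, dataset id, and hyperparameters
# are illustrative assumptions, not this repository's actual configuration.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

def format_example(example):
    # Fold instruction, optional context, and answer into one training string.
    prompt = f"### Instruction:\n{example['instruction']}\n"
    if example["input"]:
        prompt += f"### Input:\n{example['input']}\n"
    prompt += f"### Response:\n{example['output']}"
    return tokenizer(prompt, truncation=True, max_length=1024)

dataset = load_dataset("gbharti/finance-alpaca", split="train")  # hypothetical source
tokenized = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finllm-sft", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-5, bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```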
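A minimal sketch of the debate loop, assuming a generic `generate(prompt) -> str` callable that stands in for whatever chat-completion backend the agents use (a hypothetical placeholder, not the repository's actual agent interface). Each round exposes every agent to its peers' answers so factual errors can be challenged before a final judge prompt aggregates a consensus:

```python
# Minimal multi-agent debate sketch. `generate` is a hypothetical placeholder for
# any chat-completion call, not the repository's actual agent interface.
from typing import Callable, List

def debate(question: str, generate: Callable[[str], str],
           n_agents: int = 3, n_rounds: int = 2) -> str:
    """Agents answer independently, read each other's answers, critique, and revise."""
    answers: List[str] = [
        generate(f"You are financial analyst #{i}. Answer factually:\n{question}")
        for i in range(n_agents)
    ]
    for _ in range(n_rounds):
        peers = "\n\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
        answers = [
            generate(
                f"Question: {question}\n\nOther agents answered:\n{peers}\n\n"
                f"Critique their reasoning, point out factual errors, and give your "
                f"revised answer as agent {i}."
            )
            for i in range(n_agents)
        ]
    # A final judge prompt aggregates the debate into one consensus answer.
    return generate(
        f"Question: {question}\n\nFinal answers:\n" + "\n\n".join(answers)
        + "\n\nSummarise the points of agreement into a single, factually grounded answer."
    )
```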
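A minimal retrieval-augmented generation sketch, assuming `sentence-transformers` embeddings and an in-memory cosine-similarity index; the embedding model and the `generate` callable are assumptions for illustration, not the custom retrieval engine itself:

```python
# Minimal RAG sketch: embed a small passage store, retrieve the most relevant
# passages, and prepend them to the prompt. The embedding model and `generate`
# callable are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical embedding model

def build_index(passages):
    # Normalised embeddings so a dot product equals cosine similarity.
    emb = encoder.encode(passages, normalize_embeddings=True)
    return passages, np.asarray(emb)

def retrieve(query, index, k=3):
    passages, emb = index
    scores = emb @ encoder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

def rag_answer(question, index, generate):
    # Ground the model's answer in retrieved financial context.
    context = "\n\n".join(retrieve(question, index))
    prompt = (f"Use only the context below to answer the financial question.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return generate(prompt)
```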