Qwen2.5-1.5B-Instruct-Conversation-Maker
Overview
A specialized variant of Qwen2.5-1.5B-Instruct fine-tuned for generating interview-style dialogues between a person and an expert. The model produces structured conversations in XML format for educational and AI applications.
Key Features:
- Structured Output: Generates <conversation> blocks with <person> and <expert> roles.
- Training Data: 9,996 conversations derived from FineWebEdu, chunked at ~1,950 Llama 3 tokens.
- Methodology: Used agentlans/Llama3.1-LexiHermes-SuperStorm and cognitivecomputations/Dolphin3.0-Llama3.2-3B for synthetic dialogue generation.
Usage
Input Format
Conversation:
{{YOUR_TEXT_HERE}}
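A minimal generation sketch with Hugging Face transformers is shown below; the repo id, chat-template usage, and sampling settings are assumptions for illustration, not settings published with this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual model path on the Hub.
model_id = "agentlans/Qwen2.5-1.5B-Instruct-Conversation-Maker"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

source_text = "Climate change is driven by greenhouse gas emissions..."

# The card's input format: the source text prefixed with "Conversation:".
messages = [{"role": "user", "content": f"Conversation:\n{source_text}"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```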
Output Example
<conversation>
<person>What causes climate change?</person>
<expert>Human activities like burning fossil fuels release greenhouse gases...</expert>
...
</conversation>
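Because the output is a single <conversation> block, the turns can be pulled out with a small parser. The sketch below is one possible approach: it assumes the block is well-formed XML and returns an empty list otherwise (see Limitations).

```python
import re
import xml.etree.ElementTree as ET

def parse_conversation(generated_text: str) -> list[tuple[str, str]]:
    """Extract (role, utterance) pairs from a <conversation> block."""
    match = re.search(r"<conversation>.*?</conversation>", generated_text, re.DOTALL)
    if not match:
        return []
    try:
        root = ET.fromstring(match.group(0))
    except ET.ParseError:
        return []  # malformed XML; see the Limitations section
    return [(turn.tag, (turn.text or "").strip()) for turn in root]

turns = parse_conversation(
    "<conversation><person>What causes climate change?</person>"
    "<expert>Human activities...</expert></conversation>"
)
print(turns)  # [('person', 'What causes climate change?'), ('expert', 'Human activities...')]
```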
Training Details
- Framework: LLaMA Factory
- Parameters: LoRA rank 16, alpha 32, rSLoRA, NEFTune (δ=5), dropout 0.2
- Epochs: 3
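The full LLaMA Factory config is not published here; as a rough equivalent, the hyperparameters above map onto a peft/transformers setup like the sketch below. Parameter names and target modules are assumptions, not the author's exact configuration.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Approximate peft equivalent of the listed LoRA settings.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.2,
    use_rslora=True,        # rSLoRA scaling
    task_type="CAUSAL_LM",
)

# NEFTune noise is exposed via TrainingArguments in recent transformers versions.
training_args = TrainingArguments(
    output_dir="qwen2.5-conversation-maker",
    num_train_epochs=3,
    neftune_noise_alpha=5,  # the card's NEFTune setting
)
```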
Limitations
- Context Gaps: May refer to entities not present in the conversation, such as a table or figure from the source text.
- Repetition: Occasional dull or redundant responses.
- Role Reversals: Expert/person labels may flip or be in the wrong order.
- Varying Quality: Output quality depends on the length and formatting of the input text.