---
license: apache-2.0
base_model: openai/gpt-oss-20b
tags:
- merge-conflicts
- git-automation
- developer-tools
- code-generation
- version-control
- devops
language:
- en
pipeline_tag: text-generation
library_name: transformers
datasets:
- SoarAILabs/merge-conflict-dataset
metrics:
- exact_match
- f1
- bleu
- rouge
model-index:
- name: KiteResolve-20B
  results:
  - task:
      type: text-generation
      name: Merge Conflict Resolution
    dataset:
      name: SoarAILabs/merge-conflict-dataset
      type: SoarAILabs/merge-conflict-dataset
    metrics:
    - name: Exact Match
      type: exact_match
      value: 22.0
    - name: Token F1
      type: f1
      value: 0.617
    - name: BLEU
      type: bleu
      value: 50.82
    - name: ROUGE-L
      type: rouge
      value: 58.64
    - name: Levenshtein Similarity
      type: similarity
      value: 0.549
    - name: Character Similarity
      type: similarity
      value: 0.765
---
# 🪁 KiteResolve-20B: AI-Powered Merge Conflict Resolution
*Developed by [Soar AI Labs](https://huggingface.co/SoarAILabs)*
<div align="center">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License">
<img src="https://img.shields.io/badge/Model-20B%20Parameters-red.svg" alt="Parameters">
<img src="https://img.shields.io/badge/Task-Code%20Generation-green.svg" alt="Task">
  <img src="https://img.shields.io/badge/BLEU-50.82-orange.svg" alt="BLEU Score">
</div>
## 🚀 Model Description
**KiteResolve-20B** is a fine-tuned version of GPT-OSS-20B specifically engineered for **automated Git merge conflict resolution**. This model transforms the tedious process of manually resolving merge conflicts into an intelligent, automated workflow that understands code semantics across multiple programming languages.
### ✨ Key Features
- 🎯 **22% Exact Match Accuracy** on real-world merge conflicts
- 📈 **12% Relative Token-F1 Improvement** over the base model (0.617 vs. 0.549)
- 🌐 **Multi-Language Support**: Java, JavaScript, Python, C#, TypeScript, and more
- ⚡ **Fast Inference**: Optimized for CLI and webhook integrations
- 🔧 **Production Ready**: Designed for enterprise Git workflows
## 📊 Performance Metrics
| Model | Exact Match | Token F1 | BLEU | ROUGE-L | Char Sim |
| ------------------- | ----------- | --------- | --------- | --------- | --------- |
| **codellama:13b**   | 0.00        | 0.193     | 13.28     | 0.208     | 0.710     |
| **llama3.1:8b**     | 0.04        | 0.583     | 50.59     | **0.610** | **0.818** |
| **gpt-oss:20b**     | **0.24**    | 0.549     | 47.19     | 0.572     | 0.736     |
| **KiteResolve-20B** | 0.22        | **0.617** | **50.82** | 0.586     | 0.765     |
*Evaluated on 50 held-out samples from real-world merge conflicts.*
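For reference, the simpler metrics above can be approximated as follows. This is a sketch of one plausible scoring scheme (whitespace-token F1, `difflib` ratio for character similarity); the exact evaluation code is not published here:

```python
import difflib

def exact_match(pred: str, ref: str) -> float:
    # 1.0 when the prediction matches the reference exactly (after trimming)
    return float(pred.strip() == ref.strip())

def char_similarity(pred: str, ref: str) -> float:
    # Character-level similarity via difflib's SequenceMatcher ratio (0.0-1.0)
    return difflib.SequenceMatcher(None, pred, ref).ratio()

def token_f1(pred: str, ref: str) -> float:
    # F1 over whitespace-separated tokens, counting duplicates once each
    pred_tokens, ref_tokens = pred.split(), ref.split()
    ref_counts: dict[str, int] = {}
    for t in ref_tokens:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred_tokens:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```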
## 🛠️ Usage
### Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from unsloth.chat_templates import get_chat_template

# Load the model and tokenizer (device_map="auto" requires accelerate)
model = AutoModelForCausalLM.from_pretrained(
    "SoarAILabs/KiteResolve-20B",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("SoarAILabs/KiteResolve-20B")
tokenizer = get_chat_template(tokenizer, chat_template="gpt-oss")
# Resolve a merge conflict
conflict = """
<<<<<<< ours
function calculateTotal(items) {
return items.reduce((sum, item) => sum + item.price, 0);
}
=======
function calculateTotal(items) {
return items.map(item => item.price).reduce((a, b) => a + b, 0);
}
>>>>>>> theirs
"""
messages = [{"role": "user", "content": f"Resolve this merge conflict:\n```{conflict}```"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Move inputs to the model's device and decode only the newly generated tokens
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
resolution = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(resolution)
```
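The raw generation may wrap the resolved code in a fenced block or, in rare failure cases, leave conflict markers in place. A minimal post-processing sketch (the helper names here are illustrative, not part of the model's API):

```python
import re

# Git conflict markers are seven repeated <, =, or > at the start of a line
CONFLICT_MARKER = re.compile(r"^(<{7}|={7}|>{7})", re.MULTILINE)

def is_clean_resolution(text: str) -> bool:
    # Reject outputs that still contain Git conflict markers
    return CONFLICT_MARKER.search(text) is None

def extract_code_block(response: str) -> str:
    # Prefer the contents of a fenced code block if the model emitted one
    m = re.search(r"```(?:\w+)?\n(.*?)```", response, re.DOTALL)
    return m.group(1).strip() if m else response.strip()
```

Guarding on `is_clean_resolution` before writing the result back to the working tree keeps a bad generation from silently reintroducing conflict markers.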
### Ollama 🦙️
```bash
ollama run hf.co/SoarAILabs/KiteResolve-20B:Q4_K_M
```
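For CLI integration, one possible setup is a custom Git merge driver that shells out to the model. The `kiteresolve-merge` script below is hypothetical (it is not shipped with this model); the `%O`/`%A`/`%B` placeholders are standard Git merge-driver substitutions for the ancestor, ours (also the output file), and theirs:

```shell
# Hypothetical wiring: register a merge driver that calls a local script
# which prompts KiteResolve-20B and writes the resolution back to %A.
git config merge.kiteresolve.name "KiteResolve merge driver"
git config merge.kiteresolve.driver "kiteresolve-merge %O %A %B"

# Then opt files in via .gitattributes, e.g.:
#   *.js merge=kiteresolve
```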