---
license: apache-2.0
base_model: openai/gpt-oss-20b
tags:
  - merge-conflicts
  - git-automation
  - developer-tools
  - code-generation
  - version-control
  - devops
language:
  - en
pipeline_tag: text-generation
library_name: transformers
datasets:
  - SoarAILabs/merge-conflict-dataset
metrics:
  - name: exact_match
    type: exact_match
    value: 20
  - name: bleu
    type: bleu
    value: 54.83
  - name: rouge-l
    type: rouge
    value: 67.1
model-index:
  - name: KiteResolve-20B
    results:
      - task:
          type: text-generation
          name: Merge Conflict Resolution
        metrics:
          - type: exact_match
            value: 20
            name: Exact Match
          - type: bleu
            value: 54.83
            name: BLEU
          - type: rouge-l
            value: 67.1
            name: ROUGE-L
---

# 🪁 KiteResolve-20B: AI-Powered Merge Conflict Resolution

Developed by Soar AI Labs

## 🚀 Model Description

KiteResolve-20B is a fine-tuned version of GPT-OSS-20B engineered specifically for automated Git merge conflict resolution. It turns the tedious process of manually resolving merge conflicts into an automated workflow that understands code semantics across multiple programming languages.

## ✨ Key Features

- 🎯 **20% Exact Match Accuracy** on real-world merge conflicts
- 📈 **43.6% BLEU Score Improvement** over the base model
- 🌐 **Multi-Language Support**: Java, JavaScript, Python, C#, TypeScript, and more
- ⚡ **Fast Inference**: Optimized for CLI and webhook integrations
- 🔧 **Production Ready**: Designed for enterprise Git workflows
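
For CLI or webhook use, the first step is pulling the conflicted hunks out of a file so each one can be sent to the model. A minimal sketch, assuming standard Git conflict markers (the `extract_conflicts` helper is illustrative, not part of this repo):

```python
import re

def extract_conflicts(text):
    """Return (ours, theirs) pairs for every conflict block in `text`.

    Scans for the standard Git markers `<<<<<<<`, `=======`, `>>>>>>>`;
    diff3-style blocks with a common-ancestor section are not handled here.
    """
    pattern = re.compile(
        r"<<<<<<<[^\n]*\n(.*?)\n=======\n(.*?)\n>>>>>>>[^\n]*",
        re.DOTALL,
    )
    return pattern.findall(text)

sample = "<<<<<<< ours\na = 1\n=======\na = 2\n>>>>>>> theirs"
print(extract_conflicts(sample))  # [('a = 1', 'a = 2')]
```

Each extracted pair can then be formatted into the prompt shown in the Usage section below.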

## 📊 Performance Metrics

| Metric      | Score | Improvement |
|-------------|-------|-------------|
| Exact Match | 20.0  | –           |
| BLEU Score  | 54.83 | ↗️ +43.6%   |
| ROUGE-L     | 67.10 | ↗️ +33.7%   |

Evaluated on 20 held-out samples from real-world merge conflicts.
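
For reference, exact match is the percentage of generated resolutions that reproduce the reference resolution verbatim. A minimal sketch of how such a metric is typically computed — the whitespace normalisation is an assumption here, since the exact evaluation script is not published:

```python
def exact_match(predictions, references):
    """Percentage of predictions identical to their reference after
    stripping surrounding whitespace (a common normalisation)."""
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

print(exact_match(["a = 1\n", "b = 2"], ["a = 1", "b = 3"]))  # 50.0
```

With 20 samples, each resolved conflict moves this score by 5 percentage points, so the figures above carry wide error bars.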

## 🛠️ Usage

### Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from unsloth.chat_templates import get_chat_template

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("SoarAILabs/KiteResolve-20B")
tokenizer = AutoTokenizer.from_pretrained("SoarAILabs/KiteResolve-20B")
tokenizer = get_chat_template(tokenizer, chat_template="gpt-oss")

# Resolve a merge conflict
conflict = """
<<<<<<< ours
function calculateTotal(items) {
    return items.reduce((sum, item) => sum + item.price, 0);
}
=======
function calculateTotal(items) {
    return items.map(item => item.price).reduce((a, b) => a + b, 0);
}
>>>>>>> theirs
"""

messages = [{"role": "user", "content": f"Resolve this merge conflict:\n```{conflict}```"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Move inputs to the same device as the model before generating
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)

# Decode only the newly generated tokens, not the echoed prompt
resolution = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(resolution)
```
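
Once a resolution has been generated, it still has to be spliced back into the working file. A minimal sketch that replaces the first conflict block with the model's output (`apply_resolution` is a hypothetical helper, not shipped with the model):

```python
import re

# Matches one Git conflict block, including its marker lines
CONFLICT_RE = re.compile(
    r"<<<<<<<[^\n]*\n.*?\n=======\n.*?\n>>>>>>>[^\n]*",
    re.DOTALL,
)

def apply_resolution(text, resolution):
    """Replace the first conflict block in `text` with `resolution`.

    Uses a callable replacement so backslashes in the model output
    are not interpreted as regex group references.
    """
    return CONFLICT_RE.sub(lambda _: resolution.rstrip("\n"), text, count=1)

before = "x = 0\n<<<<<<< ours\na = 1\n=======\na = 2\n>>>>>>> theirs\ny = 9\n"
print(apply_resolution(before, "a = 2\n"))
```

In practice you should diff or review the spliced result before committing; a 20% exact-match rate means most resolutions still need a human in the loop.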

### Ollama 🦙

```bash
ollama run hf.co/SoarAILabs/KiteResolve-20B/model-q4_k_m.gguf
```