---
license: apache-2.0
language:
- en
- zh
base_model:
- prithivMLmods/Viper-Coder-v0.1
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- trl
- coder
model-index:
- name: Viper-Coder-v1.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 44.32
      name: averaged accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v1.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 49.27
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v1.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 54.61
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v1.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 20.13
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v1.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 26.21
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v1.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 47.02
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v1.1
      name: Open LLM Leaderboard
---
# Viper-Coder-v1.1
Viper-Coder-v1.1 is built on the Qwen 2.5 14B architecture and is designed for coding and reasoning tasks. It has been fine-tuned on synthetic datasets combining recent coding and chain-of-thought (CoT) data, further strengthening its CoT reasoning and logical problem-solving abilities. The model shows notable gains in context understanding, structured data processing, and long-context comprehension, making it well suited for complex coding tasks, instruction following, and text generation.
## Key Improvements
- Best-in-Class Coding Proficiency: Enhanced understanding of programming languages, debugging, and code generation.
- Fine-Tuned Instruction Following: Optimized for precise responses, structured outputs (e.g., JSON, YAML), and extended text generation (8K+ tokens).
- Advanced Logical & Mathematical Reasoning: Improved multi-step problem-solving and theorem proving.
- Long-Context Mastery: Handles up to 128K input tokens and can generate up to 8K tokens per response (see the configuration note after this list).
- Multilingual Code Support: Excels in Python, JavaScript, C++, Java, SQL, and other major programming languages, with documentation in 29+ languages.
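For Qwen 2.5 base checkpoints, context windows beyond 32K tokens are typically enabled through YaRN rope scaling; whether this fine-tune already ships with that setting is not documented here, so treat the snippet below as a sketch following the upstream Qwen 2.5 guidance rather than a confirmed requirement for this model.

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "prithivMLmods/Viper-Coder-v1.1"

# Enable YaRN scaling for long contexts. These values follow the upstream
# Qwen 2.5 documentation and are an assumption for this fine-tune; check
# the model's config.json before relying on them.
config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,  # 4 x 32K native window = 128K tokens
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```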
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Viper-Coder-v1.1"

# Load the model and tokenizer; device_map="auto" places the weights on the
# available GPU(s), and torch_dtype="auto" uses the checkpoint's native dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to merge two sorted lists."
messages = [
    {"role": "system", "content": "You are an advanced AI assistant with expert-level coding and reasoning abilities."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
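The quickstart above uses greedy decoding by default. For coding tasks, low-temperature sampling is a common alternative; the values below are illustrative assumptions, not settings published for this model.

```python
# Sampling settings are illustrative; tune them for your workload.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.2,        # low temperature keeps code output focused
    top_p=0.9,
    repetition_penalty=1.05,
)
```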
## Intended Use
- Elite Coding & Debugging: Best-in-class model for writing, analyzing, and optimizing code.
- Complex Algorithmic Reasoning: Solves intricate logic problems and algorithm-based challenges.
- Scientific & Mathematical Computation: Advanced support for formulas, equations, and theorem verification.
- Structured Data Processing: Seamlessly handles JSON, XML, SQL, and data pipeline automation (see the sketch after this list).
- Multilingual Programming Support: Proficient in Python, JavaScript, C++, Java, Go, and more.
- Extended Technical Content Generation: Ideal for writing documentation, research papers, and technical blogs.
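As a concrete structured-output example, the sketch below reuses the `model` and `tokenizer` from the quickstart to request JSON and parse it. The prompt wording and the assumption that the model returns bare JSON are illustrative; production code should guard against extra prose around the payload.

```python
import json

# Reuses `model` and `tokenizer` from the quickstart above.
messages = [
    {"role": "system", "content": "Respond with valid JSON only, no extra text."},
    {"role": "user", "content": "Extract name and year from: 'Python was created by Guido van Rossum in 1991.'"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
reply = tokenizer.batch_decode(
    [output_ids[0][inputs.input_ids.shape[1]:]], skip_special_tokens=True
)[0]

# json.loads raises ValueError if the model wrapped the JSON in prose.
record = json.loads(reply)
print(record)
```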
## Limitations
- High Computational Demand: The 14B parameters require powerful GPUs/TPUs for smooth inference (a quantized-loading sketch follows this list).
- Language-Specific Variability: Performance may vary across different programming languages.
- Possible Error Propagation: Extended text outputs might introduce logical inconsistencies.
- Limited Real-World Awareness: The model does not have access to real-time internet updates.
- Prompt Sensitivity: Performance depends on how well the prompt is structured.
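One common way to reduce the memory footprint is 4-bit quantization via bitsandbytes. The sketch below is a general transformers pattern, not a configuration validated for this model, and quantization can degrade output quality.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Viper-Coder-v1.1"

# NF4 4-bit quantization roughly quarters the weight memory of the 14B
# model; expect some quality loss relative to bf16 inference.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```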
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v1.1); the table below summarizes them.
| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 40.26     |
| IFEval (0-Shot)     | 44.32     |
| BBH (3-Shot)        | 49.27     |
| MATH Lvl 5 (4-Shot) | 54.61     |
| GPQA (0-shot)       | 20.13     |
| MuSR (0-shot)       | 26.21     |
| MMLU-PRO (5-shot)   | 47.02     |