---
license: apache-2.0
base_model: qwen3-0.6B
tags:
- code-generation
- svg
- fine-tuned
- fp16
- vllm
- merged
language:
- en
pipeline_tag: text-generation
library_name: transformers
model_type: qwen
inference: true
torch_dtype: float16
widget:
- example_title: "Simple Circle"
  text: "Create a red circle"
- example_title: "Rectangle with Border"
  text: "Draw a blue rectangle with black border"
- example_title: "Complex Shape"
  text: "Generate a star with 5 points in yellow"
---

# SVG Code Generator

This is a fine-tuned model that generates SVG code from natural language descriptions. The fine-tuned weights have been merged into the base model (no adapter required at load time) and are stored in fp16.

## Model Details

- **Model Name**: model_v15
- **Base Model**: qwen3-0.6B
- **Training Method**: Fine-tuning with merged weights
- **Task**: Text-to-SVG code generation
- **Model Type**: Merged Qwen model
- **Precision**: fp16
- **Library**: Transformers, vLLM compatible
- **Format**: Merged model (not adapter-based)

## Usage

### With Transformers

Load the model directly using the transformers library:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged model and tokenizer (fp16, matching the stored weights)
tokenizer = AutoTokenizer.from_pretrained("vinoku89/svg-code-generator")
model = AutoModelForCausalLM.from_pretrained(
    "vinoku89/svg-code-generator",
    torch_dtype=torch.float16,
)

# Prepare the prompt
prompt = "Create a blue circle with radius 50"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate SVG code (max_new_tokens bounds only the generated part,
# unlike the deprecated max_length, which also counted the prompt)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode and strip the prompt so only the generated SVG remains
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
svg_code = generated_text[len(prompt):].strip()

print("Generated SVG:")
print(svg_code)
```

### With vLLM

This model supports vLLM for high-performance inference in fp16 format.
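A minimal offline-inference sketch using vLLM's `LLM` and `SamplingParams` API (sampling values mirror the transformers example above and are illustrative, not tuned):

```python
from vllm import LLM, SamplingParams

# Load the merged model in fp16 (dtype matches the stored weights)
llm = LLM(model="vinoku89/svg-code-generator", dtype="float16")

params = SamplingParams(temperature=0.7, max_tokens=200)
outputs = llm.generate(["Create a blue circle with radius 50"], params)

# Each result carries the prompt and its generated completions
print(outputs[0].outputs[0].text)
```

Because the weights are merged rather than adapter-based, no extra LoRA flags are needed; vLLM can serve the repository directly.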

## Training Data

The model was trained on pairs of natural language descriptions and their corresponding SVG code.

## Intended Use

This model is designed to generate SVG code from text descriptions for educational and creative purposes.

## Limitations

- Generated SVG may require validation
- Performance depends on prompt clarity
- Limited to SVG syntax and features seen during training
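Since generated SVG may be malformed, a quick well-formedness check with Python's standard library can filter obviously broken output before rendering (the helper name `is_valid_svg` and the sample string are illustrative):

```python
import xml.etree.ElementTree as ET

def is_valid_svg(svg_code: str) -> bool:
    """Return True if the string is well-formed XML with an <svg> root element."""
    try:
        root = ET.fromstring(svg_code)
    except ET.ParseError:
        return False
    # The root tag may be namespaced, e.g. "{http://www.w3.org/2000/svg}svg"
    return root.tag.split("}")[-1] == "svg"

sample = '<svg xmlns="http://www.w3.org/2000/svg"><circle r="50" fill="blue"/></svg>'
print(is_valid_svg(sample))        # True
print(is_valid_svg("<svg><circle"))  # False: unterminated tag
```

This only checks XML structure; it does not verify that attributes or shapes are valid SVG, so a renderer-level check may still be needed.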

## Model Performance

The model has been fine-tuned specifically for SVG generation. Because the fine-tuned weights are merged into the base model, inference runs without adapter overhead.

## Technical Details

- **Precision**: fp16 for memory efficiency
- **Compatibility**: vLLM supported for high-throughput inference
- **Architecture**: Merged fine-tuned weights (no adapters required)