Update README.md
README.md CHANGED
@@ -14,6 +14,9 @@ base_model:
 - Qwen/Qwen3-1.7B
 pipeline_tag: text-generation
 ---
+
+
+
 # **Leporis-Qwen3-Radiation-1.7B**
 
 > **Leporis-Qwen3-Radiation-1.7B** is a reasoning-focused model fine-tuned on **Qwen** for **Abliterated Reasoning** and **polished token probabilities**, enhancing balanced **multilingual generation** across mathematics and general-purpose reasoning.
@@ -104,4 +107,4 @@ print(response)
 * Focused on reasoning and mathematics—less suited for creative writing
 * Smaller size compared to large-scale LLMs may limit performance on complex, multi-hop reasoning tasks
 * Prioritizes structured reasoning and probabilistic accuracy over conversational or emotional tone
-* May produce inconsistent outputs when dealing with **very long contexts** or cross-domain multi-document inputs
+* May produce inconsistent outputs when dealing with **very long contexts** or cross-domain multi-document inputs
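For context, the second hunk is anchored on the `print(response)` line of the README's usage snippet, which is not itself shown in this diff. Below is a minimal sketch of that kind of snippet, assuming the standard `transformers` chat-template API implied by `pipeline_tag: text-generation`; the repository id is a placeholder, since the full namespace does not appear in the hunks above.

```python
# Hedged sketch only: the README's actual example is not included in this diff.
# The repo id below is a placeholder; the real namespace is not visible here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Leporis-Qwen3-Radiation-1.7B"  # assumed placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # select bf16/fp16 automatically where available
    device_map="auto",    # place weights on GPU(s) if present
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Explain why 0.1 + 0.2 != 0.3 in floating point."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, then print the reply.
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```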