tags:
  - tool_use
---

# Tucan-2.6B-v1.0-GGUF

## Bulgarian Language Models for Function Calling

> **Full methodology, dataset details, and evaluation results coming in the upcoming paper**

## Overview

TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.

These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications.

Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:

| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) |
| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) |

*GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
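A rough way to choose between the quantizations above is to estimate each file's size as parameters × effective bits per weight. A back-of-the-envelope sketch for the 2.6B model (the bits-per-weight figures are approximations, not published file sizes — llama.cpp K-quants store per-block scales, so effective bits exceed the nominal width):

```python
def quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB: parameters x bits / 8."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate effective bits per weight for the listed quants (assumed values)
BITS = {"q4_0": 4.5, "q4_k_m": 4.8, "q5_k_m": 5.5, "q6_k": 6.6, "q8_0": 8.5}

for name, bits in BITS.items():
    print(f"{name}: ~{quant_size_gb(2.6e9, bits):.1f} GB")
```

Lower-bit quants trade some quality for memory; q4_k_m is a common default, q8_0 is closest to the full model.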

```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load model
model_name = "s-emanuilov/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # common loading options; adjust for your hardware
    device_map="auto",
)
```
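The snippet above stops at loading; the function-calling loop itself serializes tool schemas into the prompt and parses the model's JSON reply back into a real call. A minimal, model-free sketch of that plumbing (the tool name, schema, and reply format here are illustrative, not the Tucan prompt format):

```python
import json

# Illustrative tool schema, serialized into the prompt (names are hypothetical)
weather_tool = {
    "name": "get_weather",
    "description": "Returns the current weather for a given city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def dispatch(tool_call_json: str, tools: dict) -> str:
    """Parse a model-emitted JSON tool call and invoke the matching function."""
    call = json.loads(tool_call_json)
    return tools[call["name"]](**call["arguments"])

# Stand-in implementation; a real agent would hit a weather API here
tools = {"get_weather": lambda city: f"18°C in {city}"}

# Example of the kind of JSON a function-calling model is trained to emit
reply = '{"name": "get_weather", "arguments": {"city": "Sofia"}}'
print(dispatch(reply, tools))  # 18°C in Sofia
```

The same pattern scales to several tools: keep a registry keyed by tool name and validate the parsed arguments against the schema before dispatching.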

For questions, collaboration, or feedback: **Connect on LinkedIn**.
Built on top of [BgGPT series](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe).

## License

This work is licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).