s-emanuilov committed · verified
Commit 4ca206a · 1 Parent(s): 0e8f426

Update README.md

Files changed (1):
  1. README.md +8 -6
README.md CHANGED
@@ -10,13 +10,15 @@ tags:
 - tool_use
 ---
 
-# LLMBG-ToolUse: Bulgarian Language Models for Function Calling 🇧🇬
+# Tucan-27B-v1.0-GGUF
+
+## Bulgarian Language Models for Function Calling 🇧🇬
 
 > 📄 **Full methodology, dataset details, and evaluation results coming in the upcoming paper**
 
 ## Overview 🚀
 
-LLMBG-ToolUse is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
+TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
 
 These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications.
 
@@ -33,9 +35,9 @@ Available in three sizes with full models, LoRA adapters, and quantized GGUF var
 
 | Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
 |------------|------------|--------------|------------------|
-| **2.6B** | [LLMBG-ToolUse-2.6B-v1.0](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-2.6B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-2.6B-v1.0-GGUF) |
-| **9B** | [LLMBG-ToolUse-9B-v1.0](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-9B-v1.0-GGUF) |
-| **27B** | [LLMBG-ToolUse-27B-v1.0](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/LLMBG-ToolUse-27B-v1.0-GGUF) |
+| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) |
+| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
+| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) |
 
 *GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
 
@@ -88,7 +90,7 @@ import json
 from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
 
 # Load model
-model_name = "s-emanuilov/LLMBG-ToolUse-2.6B-v1.0"
+model_name = "s-emanuilov/Tucan-2.6B-v1.0"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
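The README's quick-start snippet in the last hunk is cut off at the `from_pretrained(` call, and the diff does not show how a tool definition reaches the model. As a minimal, hedged sketch of the surrounding step, here is one way a function-calling prompt could be assembled before tokenization. The prompt layout, the `build_tool_prompt` helper, and the `get_weather` schema are illustrative assumptions; the model's actual chat/tool template is defined in its model card and tokenizer config, not here.

```python
import json


def build_tool_prompt(tools, user_query):
    # Illustrative only: serializes OpenAI-style tool schemas into a plain
    # text prompt. Tucan's real template may differ; consult the model card.
    tool_block = json.dumps(tools, ensure_ascii=False, indent=2)
    return (
        "You can call the following tools. To invoke one, reply with a JSON "
        'object of the form {"name": ..., "arguments": {...}}.\n'
        f"Tools:\n{tool_block}\n\n"
        f"User: {user_query}"
    )


# Hypothetical tool schema (not shipped with the model)
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

# "What is the weather in Sofia?"
prompt = build_tool_prompt(tools, "Какво е времето в София?")
```

The resulting string would then go through `tokenizer` and `model.generate` as in the README snippet above, with the model expected to emit a JSON tool call that the caller parses and executes.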