Update README.md
---
license: apache-2.0
language:
- uz
- en
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: transformers
tags:
- text-generation-inference
- summarization
- translation
- question-answering
datasets:
- tahrirchi/uz-crawl
- allenai/c4
# … (entries 16–20 not shown in this excerpt)
metrics:
- accuracy
---

## Overview

The **Mistral-7B-Instruct-Uz** model is built on the Mistral-7B architecture, comprising approximately 7 billion parameters. It has undergone continued pre-training and instruction tuning on a combination of publicly available and curated Uzbek and English datasets. This approach preserves the model's original knowledge while enhancing its proficiency in Uzbek.

### Model Description

The Mistral-7B-Instruct-Uz model has been continually pre-trained and instruction-tuned on a mix of publicly available and self-constructed Uzbek and English data to preserve its original knowledge while enhancing its capabilities. It is designed to support a range of natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, with robust performance across these applications.

For more details on the model's construction and performance metrics, see [this post](#).
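Mistral-7B-Instruct models wrap each user turn in `[INST] … [/INST]` markers. Purely for illustration, here is a minimal prompt-formatting sketch; in practice `tokenizer.apply_chat_template` produces this for you, and the helper name below is hypothetical:

```python
def build_mistral_prompt(user_message: str) -> str:
    """Wrap a single-turn user message in Mistral's instruction format.

    Mirrors what tokenizer.apply_chat_template emits for one user turn
    (the tokenizer itself prepends the BOS token <s>).
    """
    return f"[INST] {user_message} [/INST]"


# An Uzbek translation request, one of the tasks the model targets:
prompt = build_mistral_prompt(
    "Quyidagi gapni ingliz tiliga tarjima qiling: Bugun havo juda yaxshi."
)
print(prompt)
```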

**Developed by:**
- [Eldor Fozilov](https://www.linkedin.com/in/eldor-fozilov/)
- [Azimjon Urinov](https://azimjonn.github.io/)
- [Khurshid Juraev](https://kjuraev.com/)

## Installation
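
This section is empty in the excerpt. As a minimal sketch, the model can be installed and loaded through `transformers` as below; the repo id is a placeholder (not stated in this README), and the bfloat16/`device_map` settings are assumptions for GPU use:

```python
# pip install -U transformers torch
#
# NOTE: "your-org/Mistral-7B-Instruct-Uz" is a placeholder repo id, not
# confirmed by this README -- replace it with the published model path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/Mistral-7B-Instruct-Uz"  # placeholder


def load_model(repo_id: str = MODEL_ID):
    """Load the tokenizer and model (bfloat16 keeps the 7B weights at ~14 GB)."""
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    return tokenizer, model


def generate_reply(tokenizer, model, user_message: str,
                   max_new_tokens: int = 256) -> str:
    """Single-turn chat: apply the chat template, generate, decode the reply."""
    messages = [{"role": "user", "content": user_message}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Strip the prompt tokens and keep only the newly generated reply.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `load_model()` downloads the full weights; on CPU-only machines, drop `device_map="auto"` and use `torch_dtype=torch.float32`.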