# OpenThaiLLM-Prebuilt-7B: Thai, Chinese & English Large Language Model
**OpenThaiLLM-Prebuilt-7B** is a Thai 🇹🇭, Chinese 🇨🇳 & English 🇬🇧 large language model with 7 billion parameters, continually pretrained from Qwen2.5-7B.
It demonstrates competitive performance with llama-3-typhoon-v1.5-8b and is optimized for application use cases, Retrieval-Augmented Generation (RAG), constrained generation, and reasoning tasks.

For release notes, please see our blog on Medium (`@superkingbasskb`).

**We do not recommend using base language models for conversations.** Instead, apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model first.
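Because this is a base (pretrained) model, it continues text rather than following chat turns, so prompting works best as plain completion. A minimal sketch with 🤗 Transformers follows; the repo id `nectec/OpenThaiLLM-Prebuilt-7B` is an assumption here, so check the model card for the exact name:

```python
# Minimal completion sketch for a base model (no chat template).
# NOTE: the model id below is an assumption -- verify it on the model card.

def complete(prompt: str, model_name: str = "nectec/OpenThaiLLM-Prebuilt-7B") -> str:
    """Continue `prompt` with greedy decoding and return only the new text."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Slice off the prompt tokens so only the completion is returned.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `complete("ประเทศไทยมีจังหวัดทั้งหมด")` asks the model to continue a Thai sentence; the same call works for Chinese or English prompts.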
## How to use
The Colab notebook for getting started with fine-tuning OpenThaiLLM using LoRA is available [here](https://colab.research.google.com/drive/1JMncfoG7RsVVyLekjFd5quMeOenEYfSA?usp=sharing).
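The LoRA setup from the notebook can be sketched with `peft` roughly as follows. The hyperparameters and target module names below are illustrative assumptions (Qwen2-style attention projections), not values taken from the notebook, and the repo id is likewise assumed:

```python
# Hypothetical LoRA fine-tuning setup for OpenThaiLLM-Prebuilt-7B using peft.
# Hyperparameters and the model id are illustrative assumptions.

# Illustrative LoRA hyperparameters (not from the official notebook).
LORA_CONFIG = {
    "r": 16,               # rank of the low-rank update matrices
    "lora_alpha": 32,      # scaling factor applied to the LoRA update
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

def build_lora_model(model_name: str = "nectec/OpenThaiLLM-Prebuilt-7B"):
    """Load the base model and wrap it with trainable LoRA adapters."""
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
    peft_cfg = LoraConfig(task_type="CAUSAL_LM", **LORA_CONFIG)
    model = get_peft_model(model, peft_cfg)
    model.print_trainable_parameters()  # only the adapter weights are trainable
    return model
```

The returned model can then be passed to a standard `transformers` `Trainer`; only the small adapter matrices are updated, which is what makes 7B-scale fine-tuning feasible on a single Colab GPU.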
## **Datasets Ratio**