🖖 Let me introduce the work I've done over the past three months: 𝗟𝗹𝗮𝗺𝗮-𝟯.𝟮-𝗧𝗮𝗶𝘄𝗮𝗻-𝟯𝗕 and 𝗟𝗹𝗮𝗺𝗮-𝟯.𝟮-𝗧𝗮𝗶𝘄𝗮𝗻-𝟯𝗕-𝗜𝗻𝘀𝘁𝗿𝘂𝗰𝘁, now open-sourced on 🤗 Hugging Face.
𝗹𝗶𝗮𝗻𝗴𝗵𝘀𝘂𝗻/𝗟𝗹𝗮𝗺𝗮-𝟯.𝟮-𝗧𝗮𝗶𝘄𝗮𝗻-𝟯𝗕: This model is built on top of 𝗺𝗲𝘁𝗮-𝗹𝗹𝗮𝗺𝗮/𝗟𝗹𝗮𝗺𝗮-𝟯.𝟮-𝟯𝗕 via continual pretraining. The training corpus mixes Traditional Chinese and multilingual text in fixed proportions, including 20B tokens of Traditional Chinese.
𝗹𝗶𝗮𝗻𝗴𝗵𝘀𝘂𝗻/𝗟𝗹𝗮𝗺𝗮-𝟯.𝟮-𝗧𝗮𝗶𝘄𝗮𝗻-𝟯𝗕-𝗜𝗻𝘀𝘁𝗿𝘂𝗰𝘁: This is a conversational model fine-tuned from the foundation model (see the usage sketch below).
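If you want to try the instruct model right away, here is a minimal chat sketch using 🤗 Transformers. It assumes the repo ships a Llama-3.2-style chat template; the prompt and sampling settings are illustrative, not recommended values from this project.

```python
# Minimal chat sketch for lianghsun/Llama-3.2-Taiwan-3B-Instruct.
# Assumes the tokenizer includes a chat template (standard for Llama-3.2 fine-tunes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lianghsun/Llama-3.2-Taiwan-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "請用繁體中文介紹台灣的夜市文化。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```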
This Llama-3.2-Taiwan open-source project is currently a one-person effort (yes, I handled everything myself, starting from text preparation, and it was exhausting!). If you're interested, feel free to join the Discord server for discussions.
🅱🅴🅽🅲🅷🅼🅰🆁🅺🅸🅽🅶
The evaluation was conducted with ikala/tmmluplus, though the README does not yet reflect the latest results. Performance is close to that of previous versions, which suggests further improvements may require adding more specialized knowledge to the training datasets. A rough scoring sketch follows below.
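For reference, this is roughly how one subject of the benchmark can be loaded and formatted for scoring. The subject config, split name, and column names are assumptions based on the dataset card; verify them before running.

```python
# Rough scoring sketch for ikala/tmmluplus (one subject).
# "geography_of_taiwan", the "test" split, and the question/A/B/C/D/answer
# columns are assumed from the dataset card; adjust if they differ.
from datasets import load_dataset

ds = load_dataset("ikala/tmmluplus", "geography_of_taiwan", split="test")

def format_prompt(row):
    # Multiple-choice prompt: the question followed by the four options.
    return (
        f"{row['question']}\n"
        f"A. {row['A']}\nB. {row['B']}\nC. {row['C']}\nD. {row['D']}\n"
        "答案："
    )

# Score by comparing the model's predicted letter against row["answer"].
print(format_prompt(ds[0]))
```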
🅰 🅲🅰🅻🅻 🅵🅾🆁 🆂🆄🅿🅿🅾🆁🆃
If anyone is willing to provide compute resources, I would greatly appreciate it; it would help this project continue and grow. 💪
---
🏔️ Foundation model: lianghsun/Llama-3.2-Taiwan-3B
🤖 Instruction model: lianghsun/Llama-3.2-Taiwan-3B-Instruct
⚡ GGUF: lianghsun/Llama-3.2-Taiwan-3B-Instruct-GGUF
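To run the GGUF build locally, something like this llama-cpp-python snippet should work. The quantization filename is an assumption; browse the repo's files to pick an actual .gguf.

```python
# Hedged sketch: run the GGUF build with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lianghsun/Llama-3.2-Taiwan-3B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; replace with a real filename from the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "請用繁體中文自我介紹。"}]
)
print(out["choices"][0]["message"]["content"])
```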