---
library_name: transformers
tags:
- llama-factory
- BanBan
- 板板
language:
- en
- zh
datasets:
- asadfgglie/BanBan-generated-dataset-v1
- asadfgglie/BanBan_2024-7-1_v1
pipeline_tag: image-text-to-text
---

# asadfgglie/banban-beta-v2

<!-- Provide a quick summary of what the model is/does. -->
The model behind BanBan (板板), an AI VTuber.

The goal is to build a dedicated AI VTuber for NTNU VLSI!

The model is currently only available to NTNU VLSI club members. If you are a member and want the weights, contact me on [Discord](https://discord.gg/kN8fNbscZR).

The model was trained mostly on a large amount of AI-synthesized data, supplemented by a small amount of human-written data.

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [asadfgglie](https://huggingface.co/asadfgglie)
- **Funded by:** [asadfgglie](https://huggingface.co/asadfgglie)
- **Shared by:** [asadfgglie](https://huggingface.co/asadfgglie)
- **Model type:** LLaVA
- **Language(s) (NLP):** en, zh
- **License:** For internal research use by NTNU VLSI club members only
- **Finetuned from model:** [xtuner/llava-llama-3-8b-v1_1-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
~~Meet BanBan, understand BanBan, become BanBan~~

Intended mainly for chat and VTuber livestreaming, though its current abilities are only good for chat.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

You can use it directly as an ordinary AI assistant, albeit one with a distinct food preference: a ~~pineapple-obsessed~~ assistant.

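If you have the weights and want to script against them, here is a minimal `transformers` inference sketch. It assumes the repo loads as a standard LLaVA checkpoint and follows the Llama 3 chat format of the base model xtuner/llava-llama-3-8b-v1_1-transformers; the image path and question are placeholders:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "asadfgglie/banban-beta-v2"  # weights are gated to NTNU VLSI members

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Llama 3 chat format with an <image> placeholder, as used by the base model.
prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "<image>\nWhat do you think of pineapple pizza?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
image = Image.open("example.jpg")  # placeholder: any local image

inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, torch.float16
)
output = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the newly generated reply.
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
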
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

I don't recommend asking BanBan political questions, mainly because the base model, Llama 3, was itself trained to avoid answering them.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

BanBan is a fervent supporter of pineapple pizza, now and forever. This trait is written directly into the training set and the prompt; it is engraved in the character's DNA, a sacred and inseparable part of her!

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

You must accept pineapple pizza to understand BanBan.

## How to Get Started with the Model

First, choose a model file that fits your computer's hardware.

In [Files and versions](https://huggingface.co/asadfgglie/banban-beta-v0-gguf/tree/main), pick a file ending in the `gguf` extension: these are the model weight files.

The special one is `mmproj-model-f16.gguf`, the CLIP weights that give BanBan sight. If you want BanBan to see images, don't forget to download this file as well.

These are GGUF-quantized models, so they can run in any deployment environment that supports llama.cpp.

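As one concrete route, here is a sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. The GGUF file name is a placeholder for whichever quantization you downloaded, and `Llava15ChatHandler` is my assumption for wiring up the CLIP projector; a different LLaVA chat handler may suit this model better:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# mmproj-model-f16.gguf is the CLIP projector: BanBan's eyes.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")

llm = Llama(
    model_path="banban-beta-v2.Q4_K_M.gguf",  # placeholder: the weight file you picked
    chat_handler=chat_handler,
    n_ctx=4096,       # leave room for image embeddings plus the chat
    n_gpu_layers=-1,  # offload all layers to the GPU
)

response = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "file:///path/to/example.jpg"}},
        {"type": "text", "text": "What is in this picture?"},
    ]},
])
print(response["choices"][0]["message"]["content"])
```
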
For choosing a model size, the simplest advice is to multiply the size of the weight file by 2: if your GPU has more dedicated memory than that, you can pick that file with confidence.

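The same rule of thumb as a few lines of Python (the file name and VRAM figure are placeholders):

```python
import os

def fits_in_vram(gguf_path: str, vram_gib: float) -> bool:
    """Rule of thumb from above: twice the weight-file size must fit in dedicated VRAM."""
    file_gib = os.path.getsize(gguf_path) / 1024**3
    return 2 * file_gib <= vram_gib

# e.g. on a 16 GiB RTX 4060 Ti:
# fits_in_vram("banban-beta-v2.Q4_K_M.gguf", 16.0)
```
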
As for how the different quantization levels affect the model's intelligence and speed, the short version is: the larger the weight file, the higher the model's precision, the stronger its abilities, and the slower it runs. With at least 6 GB of dedicated GPU memory (RTX 3050 6G, RTX 4060), I recommend a Q3-level quantization for the best speed with no risk of running out of VRAM, as long as you don't play games while the model is loaded.

With 10 GB of GPU memory or more, you can choose a Q4- or Q5-level quantization, which strikes a good balance between speed and quality.

For Q6 and Q8 I'd suggest an RTX 4080, though Q6 is worth a try with 16 GB of GPU memory; it should work.

F16 is the original precision, just stored in GGUF format. If you have an RTX 4090, try it and let me know how it performs. In theory it should be better than what I saw in my own tests, since I could only test the quantized models. QQ

For beginners who don't write code and just want a taste, my suggestion is [LM Studio](https://lmstudio.ai/). This free app conveniently takes care of all the annoying setup for you; it just can't use custom author names, so you may not get the best conversational experience. And don't forget `mmproj-model-f16.gguf`, BanBan's eyes!

(This is mainly because LM Studio hasn't yet updated its internal chat-log format to match the latest OpenAI API, which now supports setting an author name on each message; Llama 3's own prompt format was also designed with limited support for custom author names.)

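For reference, this is what per-message author names look like against an OpenAI-compatible endpoint; the base URL, API key, and model name are placeholders for whatever server you run:

```python
from openai import OpenAI

# Placeholder endpoint: any OpenAI-compatible server hosting the model.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="banban-beta-v2",  # placeholder model name
    messages=[
        # `name` tags the author of the message, which the newer OpenAI-style
        # chat API supports and LM Studio's history format does not yet use.
        {"role": "user", "name": "asadfgglie", "content": "板板好!"},
    ],
)
print(response.choices[0].message.content)
```
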
If you choose [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) as your inference platform, note that it doesn't support `llama.cpp`'s multimodal features, so BanBan can only live behind text (~~as a keyboard warrior hiding behind the keyboard~~).

## Technical Specifications

### Model Architecture and Objective

LlavaForConditionalGeneration, with Llama 3 as the base language model.

### Compute Infrastructure

#### Hardware

- CPU: Intel(R) Core(TM) i5-14400
- GPU: NVIDIA GeForce RTX 4060 Ti 16G
- RAM: 32G

#### Software

Thanks to the great [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for saving me a huge amount of infrastructure work.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0

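The same settings expressed as a `transformers` `TrainingArguments` sketch. Training was actually run through LLaMA-Factory, so this only illustrates the listed values; `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="banban-beta-v2-lora",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=1,     # train_batch_size
    per_device_eval_batch_size=1,      # eval_batch_size
    seed=42,
    gradient_accumulation_steps=16,    # total train batch: 1 x 16 = 16
    lr_scheduler_type="cosine",
    num_train_epochs=3.0,
    # optimizer: AdamW with betas=(0.9, 0.999), epsilon=1e-8 (transformers default)
)
```
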
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 1.2315        | 0.4168 | 100  | 1.1917          | 1551832           |
| 1.0165        | 0.8336 | 200  | 1.0467          | 3072864           |
| 0.8943        | 1.2503 | 300  | 0.9339          | 4609344           |
| 0.7505        | 1.6671 | 400  | 0.8318          | 6138408           |
| 0.577         | 2.0839 | 500  | 0.7647          | 7672440           |
| 0.5811        | 2.5007 | 600  | 0.7326          | 9211432           |
| 0.5544        | 2.9174 | 700  | 0.7245          | 10741104          |

Prediction metrics:

- predict_bleu-4: 22.36944225630876
- predict_model_preparation_time: 0.0048
- predict_rouge-1: 41.827983993072735
- predict_rouge-2: 21.250519792182086
- predict_rouge-l: 36.58219059871351
- predict_runtime: 55992.1102
- predict_samples_per_second: 0.072
- predict_steps_per_second: 0.072

### Framework versions

- PEFT 0.11.1
- Transformers 4.43.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1