Update README.md
README.md CHANGED

@@ -25,19 +25,14 @@ base_model:
 This model was created using the script below. It is compatible with:
 
 * Llama 3.1 8B & 70B
-
-
-## Setup
-
-```bash
-pip install torch transformers
-```
+
+Respectively
 
+* Llama Vision 3.2 11B & 90B
+
 ## Merge Script
 
 ```python
-import os
-import torch
 from transformers import MllamaForConditionalGeneration, MllamaProcessor, AutoModelForCausalLM
 
 # NOTE: You need sufficient DRAM to load both models at once (otherwise, need to process layer by layer which is not shown here)
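The NOTE in the hunk above says the merge script loads both models into memory at once, and that a machine with less DRAM would instead have to process the weights layer by layer, which the README does not show. As a rough, hypothetical sketch of that layer-by-layer pattern (plain dicts stand in for checkpoint state dicts, and the key names are invented for illustration — real checkpoints would be memory-mapped safetensors shards):

```python
# Hypothetical sketch of layer-by-layer merging: move each source tensor
# into the destination state dict one at a time, popping it from the source
# so no more than one extra tensor is resident beyond the destination.
# None of these key names come from the README; they are placeholders.

def merge_layer_by_layer(dst_state, src_state, prefix="language_model."):
    """Copy each src entry into dst under `prefix`, freeing src as we go."""
    for key in list(src_state.keys()):
        # pop() moves the value instead of duplicating it, keeping peak
        # memory close to the size of the destination checkpoint alone
        dst_state[prefix + key] = src_state.pop(key)
    return dst_state

# Toy "state dicts": small lists stand in for weight tensors.
vision_state = {"vision_model.patch_embed.weight": [0.1, 0.2]}
text_state = {
    "layers.0.self_attn.q_proj.weight": [0.3],
    "layers.0.mlp.up_proj.weight": [0.4],
}

merged = merge_layer_by_layer(vision_state, text_state)
```

After the loop, `merged` holds the original vision keys plus the text-model keys under the `language_model.` prefix, and the source dict is empty, mirroring how a real script would release each shard once its tensors are written out.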