It is _**the largest open-source vision/vision-language foundation model (14B)**_
## How to Run?
Please refer to this [README](https://github.com/OpenGVLab/InternVL/tree/main/llava#internvl-for-multimodal-dialogue-using-llava) to run this model.
Note: We have retained the original documentation of LLaVA 1.5 as a more detailed manual. In most cases, you will only need to refer to the new documentation that we have added.
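Since the linked instructions follow the retained LLaVA 1.5 documentation, launching inference presumably uses the standard LLaVA CLI entry point. A minimal sketch, assuming a LLaVA-style environment is installed; the checkpoint path below is a placeholder, not this repository's actual model name:

```shell
# Hypothetical invocation following the LLaVA 1.5 CLI conventions; replace the
# --model-path placeholder with the actual downloaded InternVL chat checkpoint.
python -m llava.serve.cli \
    --model-path /path/to/internvl-chat-checkpoint \
    --image-file "https://llava-vl.github.io/static/images/view.jpg"
```

For the exact setup steps and supported options, defer to the README linked above.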
## Model details
**Model type:**