Update README.md
README.md (changed)
@@ -13,7 +13,7 @@ tags:
 
 ONNX/RKNN2 deployment of the Florence-2 vision multimodal large model!
 
-- Inference speed (RKNN2): inferring one 768x768 image on an RK3588, using the `<MORE_DETAILED_CAPTION>` instruction, takes ~4 in total
+- Inference speed (RKNN2): inferring one 768x768 image on an RK3588, using the `<MORE_DETAILED_CAPTION>` instruction, takes ~4 seconds in total
 - Memory usage (RKNN2): about 2GB
 
 ## Usage
@@ -23,11 +23,9 @@ ONNX/RKNN2 deployment of the Florence-2 vision multimodal large model!
 2. Install dependencies
 
 ```bash
-pip install transformers onnxruntime pillow numpy<2
+pip install transformers onnxruntime pillow numpy<2 rknn-toolkit-lite2
 ```
 
-If you want to run inference with rknn, you also need to install rknn-toolkit2-lite2.
-
 3. Modify project paths
 The tokenizer and preprocessing configuration still need the files from the original project. Change the corresponding paths in onnx/onnxrun.py or onnx/rknnrun.py to the path where that project is located.
 ```python
@@ -44,7 +42,7 @@ python onnx/onnxrun.py # or python onnx/rknnrun.py
 
 ## RKNN model conversion
 
-You need to install rknn-toolkit2 v2 in advance.
+You need to install rknn-toolkit2 v2.3.2 or later in advance.
 
 ```bash
 cd onnx
@@ -71,7 +69,7 @@ python convert.py all
 
 ONNX/RKNN2 deployment for Florence-2 visual-language multimodal large model!
 
-- Inference speed (RKNN2): RK3588 inference with a 768x768 image, using the `<MORE_DETAILED_CAPTION>` instruction, takes ~4
+- Inference speed (RKNN2): RK3588 inference with a 768x768 image, using the `<MORE_DETAILED_CAPTION>` instruction, takes ~4 seconds in total.
 - Memory usage (RKNN2): Approximately 2GB
 
 ## Usage
@@ -81,11 +79,9 @@ ONNX/RKNN2 deployment for Florence-2 visual-language multimodal large model!
 2. Install dependencies
 
 ```bash
-pip install transformers onnxruntime pillow numpy<2
+pip install transformers onnxruntime pillow numpy<2 rknn-toolkit-lite2
 ```
 
-If you need to use rknn for inference, you also need to install rknn-toolkit2-lite2.
-
 3. Modify project paths
 The tokenizer and preprocessing configurations still need to use files from the original project. Modify the corresponding paths in onnx/onnxrun.py or onnx/rknnrun.py to the project's location.
 ```python
@@ -102,7 +98,7 @@ python onnx/onnxrun.py # or python onnx/rknnrun.py
 
 ## RKNN Model Conversion
 
-You need to install rknn-toolkit2 v2.
+You need to install rknn-toolkit2 v2.3.2 or higher in advance.
 
 ```bash
 cd onnx
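The updated install command in both language sections bundles the on-device RKNN runtime (rknn-toolkit-lite2) with the ONNX dependencies, replacing the separate note about installing it. One practical caveat: in most shells `numpy<2` needs quotes, otherwise `<2` is parsed as input redirection rather than as a pip version specifier. A minimal sketch of the command as it would typically be run:

```bash
# Dependencies from the updated README; the numpy specifier is quoted so the
# shell passes "numpy<2" to pip instead of treating "<2" as a redirect.
pip install transformers onnxruntime pillow "numpy<2" rknn-toolkit-lite2
```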
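The "Modify project paths" step, left unchanged as context in the install hunks, points onnx/onnxrun.py and onnx/rknnrun.py at the tokenizer and preprocessing files of the original Florence-2 project. A minimal sketch of what that load typically looks like with `transformers`, assuming the scripts use an `AutoProcessor` from a local checkout; the directory and image names below are illustrative, not taken from this repository:

```python
from PIL import Image
from transformers import AutoProcessor

# Hypothetical path to a local copy of the original Florence-2 project, which
# provides the tokenizer and preprocessor configuration files.
FLORENCE2_DIR = "/path/to/Florence-2-base"

# Florence-2 ships custom processing code, so trust_remote_code is required.
processor = AutoProcessor.from_pretrained(FLORENCE2_DIR, trust_remote_code=True)

# Tokenize a task prompt and preprocess one image into numpy arrays, the form
# an onnxruntime or RKNN session consumes.
image = Image.open("test.jpg").convert("RGB")
inputs = processor(text="<MORE_DETAILED_CAPTION>", images=image, return_tensors="np")
print(inputs["input_ids"].shape, inputs["pixel_values"].shape)
```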
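The "RKNN model conversion" sections now ask for rknn-toolkit2 v2.3.2 or later on the conversion host; the rknn-toolkit-lite2 package from the install step is only the on-device runtime for boards such as the RK3588. A hedged sketch of the conversion setup, assuming the toolkit is installed from PyPI and reusing the convert.py invocation shown in the hunk headers:

```bash
# Conversion-host setup: rknn-toolkit2 performs the ONNX -> RKNN conversion and
# is a separate package from the on-device rknn-toolkit-lite2 runtime.
pip install "rknn-toolkit2>=2.3.2"
cd onnx
python convert.py all  # conversion script referenced in the diff's hunk headers
```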