helenai committed on
Commit f646063 · verified · 1 Parent(s): 95c8adf

Update README.md

Files changed (1): README.md (+2 -3)
README.md CHANGED
@@ -6,9 +6,9 @@ base_model:
 This is the [OpenGVLab/InternVL2-1B](https://huggingface.co/OpenGVLab/InternVL2-1B) model, converted to OpenVINO
 with INT4 compressed weights for the language model, INT8 weights for the other models.
 
-Use OpenVINO GenAI to run inference on this model:
+Use OpenVINO GenAI 2025.1 or later to run inference on this model:
 
-- `pip install openvino-genai pillow`
+- `pip install --upgrade openvino-genai pillow`
 - Download a test image, for example: `curl -O "https://storage.openvinotoolkit.org/test_data/images/dog.jpg"`
 - Run inference:
 
@@ -20,7 +20,6 @@ from PIL import Image
 
 # Choose GPU instead of CPU in the line below to run the model on Intel integrated or discrete GPU
 pipe = openvino_genai.VLMPipeline("./InternVL2-1B-ov", "CPU")
-pipe.start_chat()
 
 image = Image.open("dog.jpg")
 image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.uint8)
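The last context line of the diff packs the image preprocessing into one expression: PIL's `Image.getdata()` yields a flat list of RGB pixels, which is reshaped into a `(1, height, width, 3)` NHWC tensor with a leading batch dimension. A minimal sketch of that reshape using a synthetic pixel array in place of a decoded image (no model or image download needed; the 4×3 size is an arbitrary assumption for illustration):

```python
import numpy as np

# Stand-in for Image.open("dog.jpg") + getdata(): a flat run of RGB bytes
# for a hypothetical 4x3 image (height=4, width=3, 3 channels).
height, width = 4, 3
flat_pixels = np.arange(height * width * 3, dtype=np.uint8)

# Same reshape as in the README snippet: batch-first NHWC layout.
image_data = flat_pixels.reshape(1, height, width, 3)

print(image_data.shape)  # (1, 4, 3, 3)
print(image_data.dtype)  # uint8
```

Note that the reshape uses `image.size[1]` (height) before `image.size[0]` (width), since PIL reports size as `(width, height)` while the tensor layout is rows-first.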