jxu124 committed
Commit 44c892d · 1 Parent(s): 2b54ad7

Update README.md

Files changed (1)
  1. README.md +6 -10
README.md CHANGED
@@ -11,26 +11,22 @@ TiO is an Interactive Visual Grounding Model for Disambiguation. (WIP)
 
 ## Online / Offline Demo
 
+<img src="https://cdn-uploads.huggingface.co/production/uploads/63ea2c4ed235e6788af30eaa/NrF9poZzUVz1oTZ0uQrun.png" width="300px" />
+
 - [Colab Online Demo](https://colab.research.google.com/drive/195eDITKi6dahnVz8Cum91sNUCF_lFle8?usp=sharing) - Free T4 is available on Google Colab.
 - Gradio Offline Demo:
 
 ```python
-import os; os.system("pip3 install transformers accelerate bitsandbytes gradio fire")
+import os; os.system("pip3 install transformers gradio fire accelerate bitsandbytes > /dev/null")
 from transformers import AutoModel, AutoTokenizer, AutoImageProcessor
+import torch
 
 model_id = "jxu124/TiO"
-model = AutoModel.from_pretrained(
-    model_id,
-    trust_remote_code=True,
-    torch_dtype=torch.float16,
-    device_map='cuda',
-    # load_in_4bit=True,
-    # bnb_4bit_compute_dtype=torch.float16,
-)
+model = AutoModel.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16).cuda()
 tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
 image_processor = AutoImageProcessor.from_pretrained(model_id)
 
-# ---- setup gradio demo ----
+# ---- gradio demo ----
 model.get_gradio_demo(tokenizer, image_processor).queue(max_size=20).launch(server_name="0.0.0.0", server_port=7860)
 ```
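The removed snippet kept `load_in_4bit` / `bnb_4bit_compute_dtype` only as commented-out flags; for GPUs that cannot hold the model in float16, a quantized load is still an option. Below is a minimal sketch, not part of this commit, assuming TiO's custom model code (loaded with `trust_remote_code=True`) works with standard `transformers` + `bitsandbytes` quantization; the `BitsAndBytesConfig` values simply mirror the flags the old snippet left commented out.

```python
# Sketch only: 4-bit load of TiO for lower-VRAM GPUs.
# Assumes the model's custom code supports bitsandbytes quantization;
# the config mirrors the commented-out flags from the previous README.
import torch
from transformers import AutoModel, AutoTokenizer, AutoImageProcessor, BitsAndBytesConfig

model_id = "jxu124/TiO"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    quantization_config=quant_config,
    device_map="cuda",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
image_processor = AutoImageProcessor.from_pretrained(model_id)

# Same Gradio entry point as the README snippet
model.get_gradio_demo(tokenizer, image_processor).queue(max_size=20).launch(server_name="0.0.0.0", server_port=7860)
```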