caizhi1 zheyishine committed
Commit 967583c · verified · 1 Parent(s): 63f4d3f

Update README.md (#4)

- Update README.md (03bfb25f6f4e6d22ff255bbe129defb8102911d3)


Co-authored-by: Yao Zhao <[email protected]>

Files changed (1)
  1. README.md +39 -33
README.md CHANGED
@@ -3,12 +3,9 @@ license: mit
 language:
 - en
 base_model:
-- inclusionAI/Ring-flash-linear-2.0
+- inclusionAI/Ring-mini-linear-2.0
 pipeline_tag: text-generation
 ---
-
-
-
 # Quantized Ring-Linear-2.0

 ## Introduction
@@ -34,14 +31,22 @@ To enable deployment of [Ring-Linear-2.0](https://github.com/inclusionAI/Ring-V2

 #### Environment Preparation

-Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
+Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below.
+
+First, create a Conda environment with Python 3.10 and CUDA 12.8:
 ```shell
-pip install torch==2.7.0 torchvision==0.22.0
+conda create -n vllm python=3.10
+conda activate vllm
 ```

-Then you should install our vLLM wheel package:
+Next, install our vLLM wheel package:
 ```shell
-pip install https://media.githubusercontent.com/media/inclusionAI/Ring-V2/refs/heads/main/hybrid_linear/whls/vllm-0.8.5%2Bcuda12_8_gcc10_2_1-cp310-cp310-linux_x86_64.whl --no-deps --force-reinstall
+pip install https://media.githubusercontent.com/media/zheyishine/vllm_whl/refs/heads/main/vllm-0.8.5.post2.dev28%2Bgd327eed71.cu128-cp310-cp310-linux_x86_64.whl --force-reinstall
+```
+
+Finally, install compatible versions of transformers after vLLM is installed:
+```shell
+pip install transformers==4.51.1
 ```

 #### Offline Inference
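After the environment steps above, a quick sanity check is to print the installed versions from the active `vllm` environment before running inference. This is a minimal sketch; it assumes the wheel installed cleanly and pulled in a CUDA-enabled torch build as a dependency.

```python
# Sanity check for the freshly prepared environment (assumes the vLLM wheel
# above installed cleanly along with its torch dependency).
import torch
import transformers
import vllm

print("vllm:", vllm.__version__)
print("transformers:", transformers.__version__)  # pinned to 4.51.1 above
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```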
@@ -50,35 +55,39 @@ pip install https://media.githubusercontent.com/media/inclusionAI/Ring-V2/refs/h
 from transformers import AutoTokenizer
 from vllm import LLM, SamplingParams

-tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-mini-linear-2.0-GPTQ-int4")
-
-sampling_params = SamplingParams(temperature=0.6, top_p=1.0, max_tokens=16384)
-
-
-llm = LLM(model="inclusionAI/Ring-flash-linear-2.0-GPTQ-int4", dtype='auto', enable_prefix_caching=False, max_num_seqs=128)
-
-
-prompt = "Give me a short introduction to large language models."
-messages = [
-    {"role": "user", "content": prompt}
-]
-
-text = tokenizer.apply_chat_template(
-    messages,
-    tokenize=False,
-    add_generation_prompt=True
-)
-outputs = llm.generate([text], sampling_params)
+if __name__ == '__main__':
+    tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-flash-linear-2.0-GPTQ-int4")
+
+    sampling_params = SamplingParams(temperature=0.6, top_p=1.0, max_tokens=16384)
+
+    # use `max_num_seqs=1` if there is no concurrency
+    llm = LLM(model="inclusionAI/Ring-flash-linear-2.0-GPTQ-int4", dtype='auto', enable_prefix_caching=False, max_num_seqs=128)
+
+
+    prompt = "Give me a short introduction to large language models."
+    messages = [
+        {"role": "user", "content": prompt}
+    ]
+
+    text = tokenizer.apply_chat_template(
+        messages,
+        tokenize=False,
+        add_generation_prompt=True
+    )
+    outputs = llm.generate([text], sampling_params)
+    for output in outputs:
+        print(output.outputs[0].text)
 ```

 #### Online Inference
 ```shell
-vllm serve inclusionAI/Ring-mini-linear-2.0-GPTQ-int4 \
+vllm serve inclusionAI/Ring-flash-linear-2.0-GPTQ-int4 \
   --tensor-parallel-size 2 \
   --pipeline-parallel-size 1 \
   --gpu-memory-utilization 0.90 \
-  --max-num-seqs 512 \
-  --no-enable-prefix-caching
+  --max-num-seqs 128 \
+  --no-enable-prefix-caching \
+  --api-key your-api-key
 ```


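Once the server above is running, it exposes an OpenAI-compatible API. The sketch below queries it with the official `openai` Python client, assuming the default address `http://localhost:8000/v1` and the placeholder API key from the serve command:

```python
# Minimal client for the OpenAI-compatible endpoint started by `vllm serve`.
# Assumes the default host/port (http://localhost:8000/v1) and the
# placeholder "your-api-key" used in the command above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-api-key")

response = client.chat.completions.create(
    model="inclusionAI/Ring-flash-linear-2.0-GPTQ-int4",  # served model name
    messages=[
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    temperature=0.6,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```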
@@ -114,7 +123,4 @@ This code repository is licensed under [the MIT License](https://github.com/incl

 ## Citation

-If you find our work helpful, feel free to give us a cite.
-
-
-
+If you find our work helpful, feel free to give us a cite.
 