---
license: apache-2.0
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
base_model_relation: quantized
pipeline_tag: text-generation
---

# Elastic model: Mistral-7B-Instruct-v0.3. Fastest and most flexible models for self-hosting.

Elastic models are the models produced by TheStage AI ANNA, the Automated Neural Networks Accelerator. ANNA allows you to control model size, latency, and quality with a simple slider movement. For each model, ANNA produces a series of optimized versions:

* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.

* __L__: Near-lossless model, with less than 1% degradation on the corresponding benchmarks.

* __M__: Faster model, with accuracy degradation of less than 1.5%.

* __S__: The fastest model, with accuracy degradation of less than 2%.


__Goals of elastic models:__

* Provide flexibility in the cost-vs-quality trade-off for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT
* Provide the best models and service for self-hosting

> Note that the exact quality degradation varies from model to model. For instance, an S model can also show as little as 0.5% degradation.


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6487003ecd55eec571d14f96/BV-6-DVgNW-aqcmY1Zr20.png)

-----

## Inference

To run inference with our models, simply replace the `transformers` import with `elastic_models.transformers`:

```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM

# Currently we require your HF token, since we use the original
# weights for part of the layers, as well as the model configuration
model_name = "mistralai/Mistral-7B-Instruct-v0.3"
hf_token = ''
device = torch.device("cuda")

# Create model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='s'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

# Strip the prompt tokens and decode only the generated part
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]

# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
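
The `mode` argument selects the point on the size/quality trade-off described above. Here is a minimal sketch of sweeping the whole series, assuming `mode` accepts the lowercase series names (only `'s'` appears in the original example; the other values are our assumption):

```python
# Hypothetical sweep over the elastic series, reusing model_name,
# hf_token, and device from the example above.
for mode in ("xl", "l", "m", "s"):
    elastic_model = AutoModelForCausalLM.from_pretrained(
        model_name,
        token=hf_token,
        torch_dtype=torch.bfloat16,
        attn_implementation="sdpa",
        mode=mode,  # assumed values: 'xl', 'l', 'm', 's'
    ).to(device)
    # ... run your own quality/latency checks here ...
```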

__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12


To work with our models, just run these lines in your terminal:

```shell
pip install thestage
pip install elastic_models==0.0.4 \
  --index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple \
  --extra-index-url https://pypi.nvidia.com \
  --extra-index-url https://pypi.org/simple

pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```

Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token on your profile page. Set the API token as follows:

```shell
thestage config set --api-token <YOUR_API_TOKEN>
```

Congrats, now you can use accelerated models!

----

## Benchmarks

Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers, using the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, because ANNA knows how to improve quantization quality on sensitive layers!
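
To make the baseline concrete, here is a toy illustration of what W8A8 means for a single linear layer: both the weights and the activations are mapped to int8 with per-tensor scales, multiplied in integer arithmetic, and rescaled back. This is a simplified sketch for intuition only, not TheStage AI's actual quantization scheme:

```python
import torch

def w8a8_linear(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Toy per-tensor W8A8 linear layer (illustration only)."""
    # Symmetric per-tensor scales mapping the max magnitude to 127
    w_scale = weight.abs().max() / 127.0
    x_scale = x.abs().max() / 127.0
    w_q = (weight / w_scale).round().clamp(-127, 127).to(torch.int8)
    x_q = (x / x_scale).round().clamp(-127, 127).to(torch.int8)
    # int8 x int8 matmul accumulated in int32, then dequantized
    acc = x_q.to(torch.int32) @ w_q.t().to(torch.int32)
    return acc.to(torch.float32) * (w_scale * x_scale)

x, w = torch.randn(4, 64), torch.randn(128, 64)
print((w8a8_linear(x, w) - x @ w.t()).abs().max())  # quantization error
```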

### Quality benchmarks

<!-- For quality evaluation we have used: #TODO link to github -->

| Metric/Model  | S    | M    | L    | XL   | Original | W8A8, int8 |
|---------------|------|------|------|------|----------|------------|
| MMLU          | 59.7 | 60.1 | 60.8 | 61.4 | 61.4     | 28         |
| PIQA          | 80.8 | 82   | 81.7 | 81.5 | 81.5     | 65.3       |
| Arc Challenge | 56.6 | 55.1 | 56.8 | 57.4 | 57.4     | 33.2       |
| Winogrande    | 73.2 | 72.3 | 73.2 | 74.1 | 74.1     | 57         |


* **MMLU**: Evaluates general knowledge across 57 subjects, including science, humanities, engineering, and more. Shows the model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows the model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school-level multiple-choice questions requiring reasoning. Shows the model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence-completion tasks. Shows the model's capability to understand context and resolve ambiguity.

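The card does not yet link its evaluation code; as a stand-in, all four benchmarks are available in EleutherAI's lm-evaluation-harness, so results of this shape can be reproduced roughly as follows (our assumption, not the card's actual harness):

```python
# Hypothetical reproduction sketch using lm-evaluation-harness
# (pip install lm_eval); task names are the harness's own.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mistralai/Mistral-7B-Instruct-v0.3,dtype=bfloat16",
    tasks=["mmlu", "piqa", "arc_challenge", "winogrande"],
    batch_size=8,
)
print(results["results"])
```
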
### Latency benchmarks

__100 input / 300 output tokens; throughput in tok/s:__

| GPU/Model | S   | M   | L   | XL  | Original | W8A8, int8 |
|-----------|-----|-----|-----|-----|----------|------------|
| H100      | 189 | 166 | 148 | 134 | 49       | 192        |
| L40s      | 79  | 68  | 59  | 47  | 38       | 82         |
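
A minimal sketch of how such a number can be measured, reusing the model and tokenizer from the Inference section (the exact benchmark harness behind this table is not published, so treat this as an approximation):

```python
import time
import torch

# Assumes `model`, `tokenizer`, and `device` from the Inference section;
# random token ids stand in for a real 100-token prompt.
prompt_ids = torch.randint(0, tokenizer.vocab_size, (1, 100), device=device)
out_tokens = 300

with torch.inference_mode():
    model.generate(input_ids=prompt_ids, max_new_tokens=8)  # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    model.generate(
        input_ids=prompt_ids,
        min_new_tokens=out_tokens,
        max_new_tokens=out_tokens,
    )
    torch.cuda.synchronize()

print(f"{out_tokens / (time.perf_counter() - start):.1f} tok/s")
```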

## Links

* __Platform__: [app.thestage.ai](https://app.thestage.ai)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: [email protected]