psynote123 committed
Commit 80bf1bd · verified · 1 parent: 5423911

Create README.md

Files changed (1): README.md (+155, -0)
---
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-1B-Instruct
base_model_relation: quantized
pipeline_tag: text2text-generation
---

# Elastic model: Llama-3.2-1B-Instruct. Fastest and most flexible models for self-serving.

Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized variants:

* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near-lossless model, with less than 1% degradation on the corresponding benchmarks.
* __M__: Faster model, with accuracy degradation of less than 1.5%.
* __S__: The fastest model, with accuracy degradation of less than 2%.

__Goals of elastic models:__

* Provide flexibility in the cost-vs-quality trade-off at inference time.
* Provide clear quality and latency benchmarks.
* Provide the familiar interface of the HF libraries transformers and diffusers with a single line of code.
* Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT.
* Provide the best models and service for self-hosting.

> Note that the actual quality degradation varies from model to model; an S model, for instance, may show as little as 0.5% degradation.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6487003ecd55eec571d14f96/BV-6-DVgNW-aqcmY1Zr20.png)

-----

## Inference

To run inference with our models, simply replace the `transformers` import with `elastic_models.transformers`:

```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM

# Currently we require your HF token, as we use the original
# weights for part of the layers and the model configuration.
model_name = "meta-llama/Llama-3.2-1B-Instruct"
hf_token = ''
device = torch.device("cuda")

# Create the tokenizer and model; `mode` selects the S/M/L/XL variant
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='s'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

# Strip the prompt tokens from the output before decoding
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]

# Validate the answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
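
Since this is an instruction-tuned model, answers are usually better when the prompt is wrapped in the tokenizer's chat template. Here is a minimal sketch that reuses the `tokenizer`, `model`, and `device` objects from the example above; it assumes the elastic model accepts the same `generate` arguments as a standard `transformers` causal LM:

```python
# Build a chat-formatted prompt for the instruct model
messages = [{"role": "user", "content": "Describe basics of DNNs quantization."}]
chat_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

with torch.inference_mode():
    out_ids = model.generate(chat_ids, max_new_tokens=300)

# Decode only the newly generated tokens
print(tokenizer.decode(out_ids[0, chat_ids.shape[1]:], skip_special_tokens=True))
```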

__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12

<!-- TODO: UPDATE VERSION (0.0.4?) -->
To work with our models, just run these lines in your terminal:

```shell
pip install thestage
pip install elastic_models==0.0.4 \
  --index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple \
  --extra-index-url https://pypi.nvidia.com \
  --extra-index-url https://pypi.org/simple

pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```

Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token on your profile page. Set up the API token as follows:

```shell
thestage config set --api-token <YOUR_API_TOKEN>
```

Congrats, now you can use accelerated models!
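
To sanity-check the setup before loading a model, you can run a quick smoke test; this is a hypothetical check of ours, not an official verification tool:

```python
# Hypothetical smoke test: the import should succeed and a supported GPU should be visible
import torch
import elastic_models.transformers  # noqa: F401

assert torch.cuda.is_available(), "A CUDA GPU (H100 or L40s) is required"
print(torch.cuda.get_device_name(0))
```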

----

## Benchmarks

Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models accelerated with our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers, using the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, because ANNA knows how to improve quantization quality on sensitive layers!
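
For readers unfamiliar with the baseline, the sketch below illustrates what W8A8 (8-bit weights, 8-bit activations) quantization of a linear layer means: weights and activations are each mapped to int8 with a per-tensor scale, the matrix product is computed on the quantized values, and the result is rescaled back to float. This is an illustrative sketch only, not TheStage AI's implementation:

```python
import torch

def quantize_int8(x: torch.Tensor):
    # Symmetric per-tensor quantization: map max |x| to 127
    scale = x.abs().max() / 127.0
    q = torch.clamp((x / scale).round(), -128, 127).to(torch.int8)
    return q, scale

w = torch.randn(256, 128)  # linear layer weight (out_features x in_features)
a = torch.randn(4, 128)    # a batch of activations

qw, sw = quantize_int8(w)  # W8: int8 weights
qa, sa = quantize_int8(a)  # A8: int8 activations

# Quantized matmul (emulated in float here; real kernels accumulate in int32),
# then rescaled back to the original range
y_q = (qa.float() @ qw.float().T) * (sa * sw)
y_ref = a @ w.T
print(f"max abs quantization error: {(y_q - y_ref).abs().max():.4f}")
```

Applying this naively to every linear layer is exactly the `W8A8, int8` baseline; ANNA instead decides per layer how to treat the sensitive ones, which is where the quality gap in the tables below comes from.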

### Quality benchmarks

<!-- For quality evaluation we have used: #TODO link to github -->

| Metric/Model  | S    | M    | L    | XL   | Original | W8A8, int8 |
|---------------|------|------|------|------|----------|------------|
| MMLU          | 45.5 | 45.9 | 45.9 | 46.2 | 46.2     | 24.0       |
| PIQA          | 73.1 | 73.7 | 74.2 | 74.3 | 74.3     | 55.8       |
| Arc Challenge | 34.5 | 35.9 | 36.0 | 35.8 | 35.8     | 20.3       |
| Winogrande    | 60.4 | 59.7 | 60.8 | 59.5 | 59.5     | 50.3       |

* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows the model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows the model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows the model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows the model's capability to understand context and resolve ambiguity.
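
If you want to reproduce numbers like these, one option is EleutherAI's lm-evaluation-harness; this is an assumption on our side, as the exact harness and settings behind the table above are not specified here:

```python
# Hypothetical reproduction sketch with lm-evaluation-harness (pip install lm-eval);
# official settings may differ, so expect small deviations from the table.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.2-1B-Instruct,dtype=bfloat16",
    tasks=["mmlu", "piqa", "arc_challenge", "winogrande"],
)
print(results["results"])
```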

### Latency benchmarks

<!-- TODO: UPLOAD BENCHS -->
__100 input / 300 output tokens; throughput in tok/s:__

| GPU/Model | S   | M   | L   | XL  | Original | W8A8, int8 |
|-----------|-----|-----|-----|-----|----------|------------|
| H100      | 189 | 166 | 148 | 134 | 49       | 192        |
| L40s      | 79  | 68  | 59  | 47  | 38       | 82         |
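
To get a rough idea of these numbers on your own hardware, you can time generation yourself; the snippet below is a hypothetical harness reusing the `model`, `tokenizer`, and `device` objects from the inference section, not the official benchmark:

```python
import time
import torch

# Build an input of roughly 100 tokens (hypothetical: repeat a word, then trim)
ids = tokenizer(" word" * 200, return_tensors="pt")["input_ids"][:, :100].to(device)

# Warm-up run, then time generation of exactly 300 new tokens
with torch.inference_mode():
    model.generate(ids, max_new_tokens=8)
torch.cuda.synchronize()
start = time.perf_counter()
with torch.inference_mode():
    model.generate(ids, min_new_tokens=300, max_new_tokens=300)
torch.cuda.synchronize()
print(f"{300 / (time.perf_counter() - start):.1f} tok/s")
```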

## Links

* __Platform__: [app.thestage.ai](https://app.thestage.ai)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: [email protected]