Commit ab2f482 (verified) by kcz358 · parent: 2ea8a95

Create README.md

---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
---
# Model Card for Aero-1-Audio-1.5B

<!-- Provide a quick summary of what the model is/does. -->

This model card was generated from [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

We encourage you to install transformers with

```bash
python3 -m pip install transformers@git+https://github.com/huggingface/[email protected]
```

as this is the transformers version that was used when building this model.

```python
from transformers import AutoProcessor, AutoModelForCausalLM

import torch
import librosa


def load_audio():
    return librosa.load(librosa.ex("libri1"), sr=16000)[0]


processor = AutoProcessor.from_pretrained("lmms-lab/Aero-1-Audio-1.5B", trust_remote_code=True)
# We encourage using Flash Attention 2 for better performance.
# Install it with `pip install --no-build-isolation flash-attn`.
# If you do not want flash-attn, set attn_implementation to "sdpa" or "eager" instead.
model = AutoModelForCausalLM.from_pretrained(
    "lmms-lab/Aero-1-Audio-1.5B",
    device_map="cuda",
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
model.eval()

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "audio_url",
                "audio": "placeholder",
            },
            {
                "type": "text",
                "text": "Please transcribe the audio",
            },
        ],
    }
]

audios = [load_audio()]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, audios=audios, sampling_rate=16000, return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
outputs = model.generate(**inputs, eos_token_id=151645, max_new_tokens=4096)

# Keep only the newly generated tokens (drop the prompt prefix).
cont = outputs[:, inputs["input_ids"].shape[-1]:]

print(processor.batch_decode(cont, skip_special_tokens=True)[0])
```
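
The slicing step `outputs[:, inputs["input_ids"].shape[-1]:]` works because `generate` returns the prompt tokens followed by the newly generated tokens, so dropping the first `input_ids` positions leaves only the continuation. The same idea with plain Python lists (the token ids below are made up for illustration):

```python
# Hypothetical token ids: generate() returns the prompt followed by the continuation.
prompt_ids = [101, 2023, 2003]
full_output = prompt_ids + [7592, 2088, 102]

# Drop the prompt prefix to keep only the newly generated tokens.
continuation = full_output[len(prompt_ids):]
print(continuation)  # [7592, 2088, 102]
```

`batch_decode` with `skip_special_tokens=True` then turns those remaining ids back into plain text.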

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
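
The calculator linked above estimates emissions roughly as energy consumed (hardware power times hours, optionally scaled by data-center overhead) multiplied by the carbon intensity of the local grid. A minimal sketch of that arithmetic (every number in the example is a hypothetical placeholder, not a measurement for this model):

```python
def estimate_co2_kg(gpu_power_kw, hours, carbon_intensity_kg_per_kwh, pue=1.0):
    """Estimate emissions as energy used (kWh) times grid carbon intensity,
    optionally scaled by the data center's power usage effectiveness (PUE)."""
    energy_kwh = gpu_power_kw * hours * pue
    return energy_kwh * carbon_intensity_kg_per_kwh

# Hypothetical example: one 0.3 kW GPU for 100 hours on a 0.4 kgCO2eq/kWh grid.
print(round(estimate_co2_kg(0.3, 100, 0.4), 2))  # 12.0
```

Filling in the bullet points above (hardware type, hours, region) gives the inputs this estimate needs.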

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]