mjbuehler committed on
Commit 2c8b2a3 · verified · 1 Parent(s): d55de2c

Update README.md

Files changed (1):
  1. README.md +271 -195

README.md CHANGED
@@ -1,199 +1,275 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
  ---
  library_name: transformers
+ tags:
+ - nlp
+ - code
+ - vision
+ - chemistry
+ - engineering
+ - biology
+ - bio-inspired
+ - text-generation-inference
+ - materials science
+ - AI4Science
+ - Materiomics
+ - Biomateriomics
+ base_model:
+ - meta-llama/Meta-Llama-3.1-8B-Instruct
+ - NousResearch/Hermes-3-Llama-3.1-8B
+ datasets:
+ - mlabonne/orpo-dpo-mix-40k
+ - lamm-mit/magpie-ultra-v0.1-DPO
+ - HuggingFaceH4/deita-10k-v0-sft
+ - lamm-mit/bio-silk-mech-data-integrated
  ---

+ # lamm-mit/Bioinspired-SmolLM-1.7B-Instruct
+
+ This model was constructed from the SmolLM-1.7B base model using a combination of continued pre-training (CPT), supervised fine-tuning (SFT), and direct preference optimization (DPO).
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/prfiePwzbYVqarvhnVYEt.png)
+
+ The model was trained on a mix of publicly available datasets and a corpus of around 5,000 scientific papers in the bio-inspired materials field. During the CPT phase, the raw text of all papers is used. During SFT and ORPO, the model is shown a high-quality mix of question-answer pairs and question-answer-rejected triples, respectively, along with other datasets that train the model for instruction following and chat interactions.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/2D3Jm0goTW_mvNRK2gKKU.png)
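The SFT pairs and preference triples mentioned above can be pictured as simple records. A minimal sketch, with hypothetical field names and contents (not taken from the actual datasets) following the common chosen/rejected convention:

```python
# Hypothetical record layouts for SFT pairs and preference (DPO/ORPO) triples;
# field names and contents are illustrative only.
sft_record = {
    "prompt": "What makes spider silk so tough?",
    "response": "Spider silk combines stiff beta-sheet crystals with extensible amorphous chains.",
}
preference_record = {
    "prompt": "What makes spider silk so tough?",
    "chosen": "Spider silk combines stiff beta-sheet crystals with extensible amorphous chains.",
    "rejected": "Spider silk is tough because it is made of metal.",
}
print(sorted(preference_record))  # → ['chosen', 'prompt', 'rejected']
```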
+
+ ## Inference
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ def generate_response(model, tokenizer, text_input="Biology offers amazing materials. Tell me more!",
+                       system_prompt='You are a materials scientist.',
+                       num_return_sequences=1, temperature=0.3, max_new_tokens=256, do_sample=True,
+                       num_beams=1, eos_token_id=[2],
+                       device='cuda', top_k=50, top_p=0.9, repetition_penalty=1.1, messages=None):
+
+     # Start a fresh conversation or append to an existing history
+     if messages is None:
+         if system_prompt:
+             messages = [{"role": "user", "content": system_prompt + text_input}]
+         else:
+             messages = [{"role": "user", "content": text_input}]
+     else:
+         messages.append({"role": "user", "content": text_input})
+
+     text_input = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True
+     )
+
+     inputs = tokenizer([text_input], add_special_tokens=False, return_tensors='pt').to(device)
+
+     with torch.no_grad():
+         outputs = model.generate(
+             **inputs,
+             max_new_tokens=max_new_tokens,
+             temperature=temperature,
+             num_beams=num_beams,
+             top_k=top_k,
+             do_sample=do_sample,
+             top_p=top_p,
+             eos_token_id=eos_token_id,
+             num_return_sequences=num_return_sequences,
+             repetition_penalty=repetition_penalty,
+         )
+
+     # Drop the prompt tokens so only newly generated text is decoded
+     outputs = outputs[:, inputs["input_ids"].shape[1]:]
+
+     return tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True), messages
+
+ def load_model(model_name, chat_template=None, compile_mode=None,
+                attn_implementation="flash_attention_2", quant=False):
+
+     if quant:
+         # 4-bit NF4 quantization (requires bitsandbytes)
+         bnb_config4bit = BitsAndBytesConfig(
+             load_in_4bit=True,
+             bnb_4bit_quant_type="nf4",
+             bnb_4bit_compute_dtype=torch.bfloat16,
+             bnb_4bit_use_double_quant=True,
+         )
+         model = AutoModelForCausalLM.from_pretrained(
+             model_name,
+             trust_remote_code=True,
+             use_cache=False,
+             quantization_config=bnb_config4bit,
+             device_map="auto",
+             torch_dtype=torch.bfloat16,
+             attn_implementation=attn_implementation,
+         )
+     else:
+         model = AutoModelForCausalLM.from_pretrained(
+             model_name,
+             trust_remote_code=True,
+             use_cache=False,
+             device_map="auto",
+             torch_dtype=torch.bfloat16,
+             attn_implementation=attn_implementation,
+         )
+
+     if compile_mode:
+         model.generation_config.cache_implementation = "static"
+         model.forward = torch.compile(model.forward, mode=compile_mode, fullgraph=True)
+
+     tokenizer = AutoTokenizer.from_pretrained(
+         model_name,
+         trust_remote_code=True,
+         add_bos_token=False
+     )
+
+     if chat_template:
+         tokenizer.chat_template = chat_template
+
+     return model, tokenizer
+ ```
+
+ ### Chat template
+
+ The model uses the ChatML template, as follows (with `<|im_end|>` as the end-of-turn token):
+ ```python
+ messages = [
+     {"role": "system", "content": "You are a biomaterials scientist."},
+     {"role": "user", "content": "What is the role of collagen in bone?"},
+     {"role": "assistant", "content": "Collagen is a key structural protein."},
+ ]
+ prompt = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ print(prompt)
+ ```
+ The output:
+ ```text
+ <|begin_of_text|><|im_start|>system
+ You are a biomaterials scientist.<|im_end|>
+ <|im_start|>user
+ What is the role of collagen in bone?<|im_end|>
+ <|im_start|>assistant
+ Collagen is a key structural protein.<|im_end|>
+ <|im_start|>assistant
+ ```
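The ChatML layout shown above is simple enough to mirror in plain Python. This dependency-free sketch is purely illustrative (the real formatting is done by `tokenizer.apply_chat_template`; `apply_chatml` is a hypothetical helper):

```python
def apply_chatml(messages, add_generation_prompt=True, bos="<|begin_of_text|>"):
    # Mirror the layout above: each turn is <|im_start|>{role}\n{content}<|im_end|>\n
    out = bos
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here
        out += "<|im_start|>assistant\n"
    return out

messages = [
    {"role": "system", "content": "You are a biomaterials scientist."},
    {"role": "user", "content": "What is the role of collagen in bone?"},
]
print(apply_chatml(messages))
```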
+
+ ## Example use case
+
+ Load the model:
+ ```python
+ model, tokenizer = load_model(model_name='lamm-mit/Bioinspired-SmolLM-1.7B-Instruct')
+ ```
+
+ ### Collagen and the Glass Bead Game
+ ```python
+ messages = []
+ result, messages = generate_response(model, tokenizer,
+                                      text_input="Collagen and the Glass Bead Game, discuss their relationship.",
+                                      num_return_sequences=1, temperature=0.3, max_new_tokens=1024,
+                                      messages=[], system_prompt="You are a materials scientist. ")
+
+ for res in result:
+     print(res, "\n")
+
+ messages.append({"role": "assistant", "content": result[0]})
+ ```
+
+ <pre style="white-space: pre-wrap;">
+ What a fascinating topic! As a materials scientist, I've always been intrigued by the intersection of collagen, a ubiquitous protein in biological systems, and the ancient, strategy-based game, the Glass Bead Game.
+
+ At first glance, collagen and the Glass Bead Game may seem like vastly different entities. Collagen is a structural protein found in connective tissues, providing strength, flexibility, and support to various biological structures. In contrast, the Glass Bead Game is a strategy board game originating from ancient Mesopotamia, where players navigate a complex, interconnected network of beads and cards to achieve victory.
+
+ Upon closer inspection, I discovered a surprising connection between these two seemingly disparate entities. The Glass Bead Game, with its intricate, interconnected network of beads and cards, shares a striking similarity with the hierarchical structure of collagen. Both are composed of smaller, interconnected units that form a larger, more complex system.
+
+ In collagen, the repeating unit of amino acids, glycine, proline, and hydroxyproline, creates a hierarchical structure that contributes to its mechanical properties. Similarly, the Glass Bead Game's beads and cards are arranged in a hierarchical manner, with each component influencing the overall gameplay and strategy. The Glass Bead Game's "beads" represent the individual components, while the "cards" symbolize the interactions and relationships between these components.
+
+ Furthermore, both collagen and the Glass Bead Game exhibit self-assembly and hierarchical organization. Collagen molecules self-assemble into fibrils, which then form larger structures, such as fibers and tissues. The Glass Bead Game, too, relies on the strategic arrangement of beads and cards to create a cohesive, interconnected system. The game's rules and strategies are designed to facilitate the assembly and interaction of its components, much like the self-assembly of collagen molecules.
+
+ Moreover, the concept of "interconnectedness" is a crucial aspect of both collagen and the Glass Bead Game. In collagen, the interconnected fibrils provide structural integrity and enable the transfer of mechanical loads. In the Glass Bead Game, the interconnected network of beads and cards creates a dynamic, ever-changing system that requires players to adapt and respond to the shifting relationships between components.
+
+ In conclusion, the relationship between collagen and the Glass Bead Game is one of shared principles and concepts. Both entities exhibit hierarchical structure, self-assembly, and interconnectedness, which are essential for their respective functions. By recognizing these parallels, we can gain a deeper understanding of the fundamental principles that govern the behavior of biological systems and the strategic complexities of games like the Glass Bead Game.
+ </pre>
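Because `generate_response` returns the updated `messages` list, the full conversation history can be carried into further turns. A dependency-free sketch of that bookkeeping, with a stubbed generator standing in for the model (`fake_generate` and `chat_turn` are hypothetical helpers):

```python
def fake_generate(messages):
    # Stand-in for the model: echoes the number of turns seen so far
    return f"(reply to turn {len(messages)})"

def chat_turn(messages, user_text):
    # Append the user turn, generate a reply, and record it in the history
    messages.append({"role": "user", "content": user_text})
    reply = fake_generate(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply, messages

messages = []
chat_turn(messages, "Collagen and the Glass Bead Game, discuss.")
chat_turn(messages, "Summarize the key points in JSON format.")
print(len(messages))  # → 4 (two user turns and two assistant replies)
```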
+
+ You can ask a follow-up question like so:
+ ```python
+ result, messages = generate_response(model, tokenizer, text_input="Summarize the key points in JSON format.",
+                                      num_return_sequences=1, temperature=0.3, max_new_tokens=1024,
+                                      messages=messages)
+
+ for res in result:
+     print(res, "\n")
+
+ messages.append({"role": "assistant", "content": result[0]})
+ ```
+
+ <pre style="white-space: pre-wrap;">
+
+ Here's a summary of collagen and the Glass Bead Game in JSON format:
+
+ ```json
+ {
+   "collagen": {
+     "structure": "proteinous fibrous network",
+     "composition": "glycoprotein",
+     "properties": "high tensile strength, low stiffness",
+     "functions": "structural support, cell signaling, wound healing",
+     "types": ["alpha", "beta", "gamma", "delta"]
+   },
+   "glass bead game": {
+     "rules": "a strategy game where players collect and trade glass beads, simulating the process of collagen synthesis and mineralization",
+     "components": ["glass beads", "mineral deposits", "cell signaling molecules", "growth factors"],
+     "goal": "collect and trade the most beads to achieve mineralization and growth",
+     "rules of play": "players must collect beads, deposit them in cells, and stimulate mineralization through signaling molecules and growth factors",
+     "game mechanics": "players earn points for successful mineralization, trade beads for resources, and manage their cells' mineralization and growth rates"
+   },
+ }
+ ```
+
+ In this JSON representation, collagen is a proteinous fibrous network with a high tensile strength and low stiffness, while the Glass Bead Game is a strategy game where players collect and trade glass beads to simulate the process of collagen synthesis and mineralization. The Glass Bead Game components include glass beads, mineral deposits, cell signaling molecules, and growth factors, with the goal of collecting and trading the most beads to achieve mineralization and growth. The rules of play involve collecting beads, depositing them in cells, and stimulating mineralization through signaling molecules and growth factors.
+ </pre>
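When requesting JSON output as above, the fenced block can be pulled out of the reply and parsed. A minimal sketch (`extract_json` is a hypothetical helper; note that model output is not guaranteed to be valid JSON, e.g. trailing commas, so production code should handle `json.JSONDecodeError`):

```python
import json
import re

def extract_json(response: str):
    # Prefer a fenced ```json ... ``` block; otherwise try the whole string
    m = re.search(r"```json\s*(.*?)\s*```", response, re.DOTALL)
    payload = m.group(1) if m else response
    return json.loads(payload)

response = "Here is a summary:\n```json\n{\"collagen\": {\"type\": \"structural protein\"}}\n```"
print(extract_json(response))  # → {'collagen': {'type': 'structural protein'}}
```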
+
+
+ ## Benchmark
+
+ Performance on the bioinspired benchmark (domain questions that assess performance relative to the training corpus on biological materials and spider silk):
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/E2zJnRjtNHlZgy7uZElrP.png)
+
+ ## Citation
+
+ Please cite as:
+
+ ```bibtex
+ @article{Buehler2024DomainLLMs,
+ title={Experimental analyses of fine-tuning large language models for domain adaptation for emergent capabilities},
+ author={Markus J. Buehler and others},
+ journal={arXiv preprint arXiv:XXX.YYYYY},
+ year={2024}
+ }
+
+ @article{Buehler2024DomainFocusedLM,
+ title={Fine-tuned domain-focused language modeling for applications in scientific analysis, ideation and discovery},
+ author={Markus J. Buehler and others},
+ journal={arXiv preprint arXiv:XXX.YYYYY},
+ year={2024}
+ }
+
+ @article{LuuBuehler2023bioinspiredLLM,
+ title={BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials},
+ author={Rachel K. Luu and Markus J. Buehler},
+ year={2023},
+ journal={Advanced Science},
+ doi={10.1002/advs.202306724},
+ eprint={2309.08788},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG},
+ url={https://arxiv.org/abs/2309.08788}
+ }
+
+ @article{Buehler2024XLoRA,
+ title={X-LoRA: Mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and molecular design},
+ author={Eric L. Buehler and Markus J. Buehler},
+ year={2024},
+ journal={APL Machine Learning},
+ volume={2},
+ number={2},
+ pages={026119},
+ doi={10.1063/5.0203126},
+ note={\url{https://doi.org/10.1063/5.0203126}}
+ }
+ ```