Commit b729aea (verified) by nielsr (HF Staff) · Parent(s): c7adc64

Add `library_name` and `pipeline_tag` to model card

This PR improves the discoverability and usability of the ProLLaMA model by adding `library_name: transformers` and `pipeline_tag: text-generation` to the model card metadata.

- The `library_name` tag ensures the model is correctly recognized as a Transformers-compatible model, enabling an automated usage code snippet on the Hub model page (see the sketch after this list).
- The `pipeline_tag` helps users find this model when searching for text-generation models, here in the context of protein language processing.
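
For reference, a minimal sketch of the kind of snippet these tags enable, using the `transformers` pipeline API. The repo id `GreatCaptainNemo/ProLLaMA` and the instruction-style prompt are illustrative assumptions, not part of this commit:

```python
# Minimal sketch of the auto-generated usage snippet that the
# `library_name: transformers` tag enables on the Hub page.
# ASSUMPTIONS: the repo id and the instruction-style prompt below are
# illustrative placeholders, not taken from this commit.
from transformers import pipeline

generator = pipeline("text-generation", model="GreatCaptainNemo/ProLLaMA")

prompt = "[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily>"
result = generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```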

The existing content of the model card remains unchanged, in line with the majority consensus among colleagues.

Files changed (1): README.md (+7 −2)

README.md CHANGED
````diff
@@ -1,6 +1,9 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 # ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing
 
 [Paper on arxiv](https://arxiv.org/abs/2402.16445) for more information
@@ -106,7 +109,8 @@ if __name__ == '__main__':
                 s = generation_output[0]
                 output = tokenizer.decode(s,skip_special_tokens=True)
                 print("Output:",output)
-                print("\n")
+                print("
+")
         else:
             outputs=[]
             with open(args.input_file, 'r') as f:
@@ -126,7 +130,8 @@ if __name__ == '__main__':
                     output = tokenizer.decode(s,skip_special_tokens=True)
                     outputs.append(output)
             with open(args.output_file,'w') as f:
-                f.write("\n".join(outputs))
+                f.write("
+".join(outputs))
             print("All the outputs have been saved in",args.output_file)
 ```
````
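
For anyone who wants to verify the merged metadata, a minimal sketch using the `huggingface_hub` client; the repo id is again an illustrative assumption:

```python
# Minimal sketch for checking that the new metadata fields are present
# after the merge. ASSUMPTION: the repo id is an illustrative placeholder.
from huggingface_hub import ModelCard

card = ModelCard.load("GreatCaptainNemo/ProLLaMA")
print(card.data.library_name)  # expected: transformers
print(card.data.pipeline_tag)  # expected: text-generation
```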