pan-li committed · Commit 3963346 · verified · 1 Parent(s): f01abf5

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ md5_to_str.fasta filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,71 @@
- ---
- license: apache-2.0
- ---
+ # Dataset Card for Secondary Structure Prediction (Q3) Dataset for RAGProtein
+
+ ### Dataset Summary
+
+ The study of a protein's secondary structure (Sec. Struc. P.) is a cornerstone of understanding its biological function. Secondary structure, comprising helices, strands, and various turns, gives a protein the specific three-dimensional configuration that is critical for the formation of its tertiary structure. In this work, each residue of a given protein sequence is classified into one of three categories, each representing a different structural element: H - Helix (alpha-helix, 3-10 helix, and pi helix), E - Strand (beta-strand and beta-bridge), and C - Coil (turns, bends, and random coils).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance contains a string with the protein sequence, a sequence of per-residue structural labels, the MSA of the protein, and a per-residue structure embedding. See the [Secondary structure prediction dataset viewer](https://huggingface.co/datasets/Bo1015/ssp_q8/viewer/default/test) to explore more examples.
+
+ ```
+ {'seq': 'MRGSHHHHHHGSVKVKFVSSGEEKEVDTSKIKKVWRNLTKYGTIVQFTYDDNGKTGRGYVRELDAPKELLDMLARAEGKLN',
+  'label': [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2],
+  'msa': 'MRGSHHHHHHGSVKVKFVSSGEEKEVDTSKIKKVWRNLTKYGTIVQFTYDDNGKTGRGYVRELDAPKELLDMLARAEGKLN|MRGSHHHHHHGSVKVKFVSSGEEKEVDTSKIKKVWRNLTKYGTIVQFTYDDNGKTGRGYVRELDAPKELLDMLARAEGKLN...',
+  'str_emb': [seq_len, 384]
+ }
+ ```
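Below is a minimal, hedged sketch of loading and inspecting one instance with the `datasets` library. The repository id is a placeholder (use the repository that hosts this card), and `trust_remote_code=True` is assumed to be needed because examples are assembled on the fly by the `ssp_q3-rag.py` loading script included in this commit.

```python
from datasets import load_dataset

REPO_ID = "<namespace>/<dataset-name>"  # placeholder: the Hugging Face repo hosting this dataset

ds = load_dataset(REPO_ID, trust_remote_code=True)

example = ds["test"][0]
print(example["seq"][:40])             # protein sequence (string)
print(example["label"][:10])           # per-residue Q3 labels (integers)
print(len(example["msa"].split("|")))  # number of sequences in the MSA
print(len(example["str_emb"]), len(example["str_emb"][0]))  # (seq_len, 384) structure embedding
```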
+
+ The mean length of `seq` and the mean per-residue count of each `label` class are provided below:
+
+ | Feature | Mean Count |
+ | ---------- | ---------------- |
+ | seq | 256 |
+ | label (0) | 109 |
+ | label (1) | 54 |
+ | label (2) | 92 |
+
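If desired, the statistics above can be re-derived from the loaded dataset. A rough sketch, assuming `ds` from the loading example above (note that iterating the full train split is slow because MSAs and structure embeddings are assembled on the fly):

```python
import numpy as np

split = ds["train"]
# Mean sequence length and mean count of each label class over the split.
print("mean seq length:", np.mean([len(ex["seq"]) for ex in split]))
for cls in (0, 1, 2):
    print(f"mean count of label {cls}:",
          np.mean([sum(1 for l in ex["label"] if l == cls) for ex in split]))
```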
+ ### Data Fields
+
+ - `seq`: a string containing the protein sequence
+ - `label`: a sequence containing the structural label of each residue
+ - `msa`: MSA sequences separated by "|", with the query sequence first
+ - `str_emb`: per-residue structure embeddings generated by AIDO.StructureTokenizer from AlphaFold2-predicted structures
+
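As a quick illustration of how these fields relate to each other, here is a sketch assuming `example` is a record obtained as in the loading example above:

```python
import numpy as np

msa_rows = example["msa"].split("|")      # individual MSA sequences; the first row is the query
str_emb = np.asarray(example["str_emb"])  # per-residue structure embedding, shape (seq_len, 384)

# Labels and structure embeddings are per residue, so their lengths match the sequence length.
assert len(example["label"]) == len(example["seq"]) == str_emb.shape[0]
print(len(msa_rows), str_emb.shape)
```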
+ ### Data Splits
+
+ The secondary structure prediction dataset has 2 splits: _train_ and _test_. Below are the statistics of the dataset.
+
+ | Dataset Split | Number of Instances in Split |
+ | ------------- | ---------------------------- |
+ | Train | 10,848 |
+ | Test | 667 |
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The datasets applied in this study were originally published by [NetSurfP-2.0](https://pubmed.ncbi.nlm.nih.gov/30785653/).
+
+ ### Licensing Information
+
+ The dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
+
+ ### Processed data collection
+
+ Single sequence data are collected from this paper:
+
+ ```
+ @misc{chen2024xtrimopglm,
+   title={xTrimoPGLM: unified 100B-scale pre-trained transformer for deciphering the language of protein},
+   author={Chen, Bo and Cheng, Xingyi and Li, Pan and Geng, Yangli-ao and Gong, Jing and Li, Shen and Bei, Zhilei and Tan, Xu and Wang, Boyan and Zeng, Xin and others},
+   year={2024},
+   eprint={2401.06199},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   note={arXiv preprint arXiv:2401.06199}
+ }
+ ```
codebook.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52139fb587368235751a0464fd3be7a6beb0fff2e96a0164012f858702a9bcf8
+ size 787617
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:97643cdebce956bc6ffe703e9c9acfd868843d8ceb14089510c4675f3bca44b1
+ size 188886
data/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c601a52c53f0eefa668cb64cd5cd0f959a09d848afa8c2c2a1a820178be6d169
+ size 3421754
md5_to_str.fasta ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:980965c3de72520f0e2aa75c941bbf51016d02ddca13ecd4da6a633346f29e87
+ size 11193929
msa.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03e6b1168d15a4859a9f4e6aa8e2ff94bfbf6787a296974ec34bb360c3d4ac69
+ size 2006722560
ssp_q3-rag.py ADDED
@@ -0,0 +1,256 @@
+ # -*- coding: utf-8 -*-
+
+ import os
+ import re
+ import gzip
+ import torch
+ import numpy as np
+ import datasets
+ from os.path import join
+ from typing import List
+
+ def get_md5(aa_str):
+     """
+     Calculate the MD5 value of a protein sequence
+     """
+     import hashlib
+     assert isinstance(aa_str, str), aa_str
+
+     aa_str = aa_str.upper()
+     return hashlib.md5(aa_str.encode('utf-8')).hexdigest()
+
+ def load_fasta(seqFn, rem_tVersion=False, load_annotation=False, full_line_as_id=False):
+     """
+     seqFn            -- Fasta file or input handle (with readline implementation)
+     rem_tVersion     -- Remove version information. ENST000000022311.2 => ENST000000022311
+     load_annotation  -- Load sequence annotation
+     full_line_as_id  -- Use the full header line (starting with >) as sequence ID. Cannot be combined with load_annotation
+
+     Return:
+         {tid1: seq1, ...}                        if load_annotation==False
+         {tid1: seq1, ...}, {tid1: annot1, ...}   if load_annotation==True
+     """
+     if load_annotation and full_line_as_id:
+         raise RuntimeError("Error: load_annotation and full_line_as_id cannot be specified simultaneously")
+     if rem_tVersion and full_line_as_id:
+         raise RuntimeError("Error: rem_tVersion and full_line_as_id cannot be specified simultaneously")
+
+     fasta = {}
+     annotation = {}
+     cur_tid = ''
+     cur_seq = ''
+
+     if isinstance(seqFn, str):
+         IN = open(seqFn)
+     elif hasattr(seqFn, 'readline'):
+         IN = seqFn
+     else:
+         raise RuntimeError(f"Expected seqFn: {type(seqFn)}")
+     for line in IN:
+         if line[0] == '>':
+             if cur_tid != '':
+                 fasta[cur_tid] = re.sub(r"\s", "", cur_seq)
+                 cur_seq = ''
+             data = line[1:-1].split(None, 1)
+             cur_tid = line[1:-1] if full_line_as_id else data[0]
+             annotation[cur_tid] = data[1] if len(data) == 2 else ""
+             if rem_tVersion and '.' in cur_tid:
+                 cur_tid = ".".join(cur_tid.split(".")[:-1])
+         elif cur_tid != '':
+             cur_seq += line.rstrip()
+
+     if isinstance(seqFn, str):
+         IN.close()
+
+     if cur_seq != '':
+         fasta[cur_tid] = re.sub(r"\s", "", cur_seq)
+
+     if load_annotation:
+         return fasta, annotation
+     else:
+         return fasta
+
+ def load_msa_txt(file_or_stream, load_id=False, load_annot=False, sort=False):
+     """
+     Read an MSA txt file
+
+     Parameters
+     --------------
+     file_or_stream: file or stream to read (with read method)
+     load_id: read identity values and return them
+
+     Return
+     --------------
+     msa: list of MSA sequences; the first sequence in msa is the query sequence
+     id_arr: identity of MSA sequences
+     annotations: annotations of MSA sequences
+     """
+     msa = []
+     id_arr = []
+     annotations = []
+
+     if hasattr(file_or_stream, 'read'):
+         lines = file_or_stream.read().strip().split('\n')
+     elif file_or_stream.endswith('.gz'):
+         with gzip.open(file_or_stream) as IN:
+             lines = IN.read().decode().strip().split('\n')
+     else:
+         with open(file_or_stream) as IN:
+             lines = IN.read().strip().split('\n')
+
+     for idx, line in enumerate(lines):
+         data = line.strip().split()
+         if idx == 0:
+             assert len(data) == 1, f"Expect 1 element for the 1st line, but got {data} in {file_or_stream}"
+             q_seq = data[0]
+         else:
+             if len(data) >= 2:
+                 id_arr.append(float(data[1]))
+             else:
+                 # No identity column: compute identity against the query sequence.
+                 assert len(q_seq) == len(data[0])
+                 id_ = round(np.mean([r1 == r2 for r1, r2 in zip(q_seq, data[0])]), 3)
+                 id_arr.append(id_)
+             msa.append(data[0])
+             if len(data) >= 3:
+                 annot = " ".join(data[2:])
+                 annotations.append(annot)
+             else:
+                 annotations.append(None)
+
+     id_arr = np.array(id_arr, dtype=np.float64)
+     if sort:
+         id_order = np.argsort(id_arr)[::-1]
+         msa = [msa[i] for i in id_order]
+         id_arr = id_arr[id_order]
+         annotations = [annotations[i] for i in id_order]
+     msa = [q_seq] + msa
+
+     outputs = [msa]
+     if load_id:
+         outputs.append(id_arr)
+     if load_annot:
+         outputs.append(annotations)
+     if len(outputs) == 1:
+         return outputs[0]
+     return outputs
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """
+ """
+
+ _HOMEPAGE = "xxxxx"
+
+ _LICENSE = "xxxxx"
+
+ class DownStreamConfig(datasets.BuilderConfig):
+     """BuilderConfig for the downstream tasks dataset."""
+
+     def __init__(self, *args, **kwargs):
+         """BuilderConfig for the downstream tasks dataset.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(*args, name="downstream", **kwargs)
+
+ class DownStreamTasks(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.1.0")
+     BUILDER_CONFIG_CLASS = DownStreamConfig
+     BUILDER_CONFIGS = [DownStreamConfig()]
+     DEFAULT_CONFIG_NAME = None
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "seq": datasets.Value("string"),
+                 "label": datasets.Sequence(datasets.Value("int32")),
+                 "msa": datasets.Value("string"),
+                 "str_emb": datasets.Array2D(shape=(None, 384), dtype='float32'),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(
+         self, dl_manager: datasets.DownloadManager
+     ) -> List[datasets.SplitGenerator]:
+         train_parquet_file = dl_manager.download("data/train-00000-of-00001.parquet")
+         test_parquet_file = dl_manager.download("data/test-00000-of-00001.parquet")
+         msa_path = dl_manager.download_and_extract("msa.tar")
+         str_file = dl_manager.download("md5_to_str.fasta")
+         codebook_file = dl_manager.download("codebook.pt")
+
+         assert os.path.exists(join(msa_path, 'msa'))
+         msa_path = join(msa_path, 'msa')
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "parquet_file": train_parquet_file,
+                     "msa_path": msa_path,
+                     "str_file": str_file,
+                     "codebook_file": codebook_file
+                 }
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "parquet_file": test_parquet_file,
+                     "msa_path": msa_path,
+                     "str_file": str_file,
+                     "codebook_file": codebook_file
+                 }
+             ),
+         ]
+
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, parquet_file, msa_path, str_file, codebook_file):
+
+         dataset = datasets.Dataset.from_parquet(parquet_file)
+         md5_to_str = load_fasta(str_file)
+         codebook = torch.load(codebook_file, 'cpu', weights_only=True).numpy()
+
+         for key, item in enumerate(dataset):
+             seq = item['seq']
+             label = item['label']
+             md5_val = get_md5(seq)
+             # Structure tokens are keyed by the MD5 of the sequence; fall back to zeros when missing.
+             if md5_val not in md5_to_str or md5_to_str[md5_val] == "":
+                 str_emb = np.zeros([len(seq), 384], dtype=np.float32)
+             else:
+                 str_toks = np.array([int(x) for x in md5_to_str[md5_val].split('-')])
+                 str_emb = codebook[str_toks]
+
+             msa = load_msa_txt(join(msa_path, md5_val + '.txt.gz'))
+             assert len(msa[0]) == len(seq), f"Error: {len(msa[0])} != {len(seq)}"
+             assert len(msa[0]) == str_emb.shape[0], f"Error: {len(msa[0])} != {str_emb.shape[0]}"
+             assert isinstance(label, list) and isinstance(label[0], int), f"label={label}"
+             yield key, {
+                 "seq": seq,
+                 "label": label,
+                 "msa": "|".join(msa),
+                 "str_emb": str_emb
+             }
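For readers who want to use the helper functions above outside the `datasets` builder, here is a hedged sketch (not part of the commit) of reconstructing the structure embedding and MSA for a single sequence. It mirrors the logic of `_generate_examples` and assumes the repository files (`md5_to_str.fasta`, `codebook.pt`, and the `msa/` directory extracted from `msa.tar`) are available in the working directory; the query sequence is taken from the dataset card's Data Instances example.

```python
import numpy as np
import torch
from os.path import join

# Example query sequence from the dataset card.
seq = "MRGSHHHHHHGSVKVKFVSSGEEKEVDTSKIKKVWRNLTKYGTIVQFTYDDNGKTGRGYVRELDAPKELLDMLARAEGKLN"
md5_val = get_md5(seq)

md5_to_str = load_fasta("md5_to_str.fasta")                             # MD5 -> '-'-separated structure tokens
codebook = torch.load("codebook.pt", "cpu", weights_only=True).numpy()  # token id -> 384-d embedding

if md5_to_str.get(md5_val, "") != "":
    tokens = np.array([int(x) for x in md5_to_str[md5_val].split("-")])
    str_emb = codebook[tokens]                             # (seq_len, 384) structure embedding
else:
    str_emb = np.zeros([len(seq), 384], dtype=np.float32)  # zero fallback, as in _generate_examples

msa = load_msa_txt(join("msa", md5_val + ".txt.gz"))       # first entry is the query sequence
print(str_emb.shape, len(msa))
```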