This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct for antibody sequence generation. It takes an antigen sequence as input and returns novel Fv portions of heavy- and light-chain antibody sequences.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftConfig, PeftModel

model_name = 'peleke-llama-3.1-8b-instruct'
config = PeftConfig.from_pretrained(model_name)  # LoRA adapter config
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model.resize_token_embeddings(len(tokenizer))  # account for the added epitope tokens
model = PeftModel.from_pretrained(model, model_name).cuda()
```
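As a quick sanity check, you can confirm the epitope tags are present in the tokenizer's vocabulary; this sketch assumes `<epi>` and `</epi>` were added as tokens during fine-tuning:

```python
# Assumption: <epi> and </epi> were added to the tokenizer during fine-tuning
print(tokenizer.convert_tokens_to_ids(['<epi>', '</epi>']))  # should print two valid token ids
```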
This model uses `<epi>` and `</epi>` tags to annotate epitope residues of interest. It may be easier to annotate with other characters, such as square brackets, e.g. `...CSFS[S][F][V]L[N]WY...`. Then use the following function to convert the bracket annotation into the properly formatted input:
```python
import re

def format_prompt(antigen_sequence):
    # Convert [X] bracket annotations into <epi>X</epi> tags
    epitope_seq = re.sub(r'\[([A-Z])\]', r'<epi>\1</epi>', antigen_sequence)
    formatted_str = f"Antigen: {epitope_seq}<|im_end|>\nAntibody:"
    return formatted_str
```
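For example, applying it to the bracket-annotated fragment above (the commented output follows from the regex):

```python
print(format_prompt("...CSFS[S][F][V]L[N]WY..."))
# Antigen: ...CSFS<epi>S</epi><epi>F</epi><epi>V</epi>L<epi>N</epi>WY...<|im_end|>
# Antibody:
```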
Generate with sampling and parse the antibody sequence from the decoded output:

```python
# `antigen` is the bracket-annotated antigen sequence from the previous step
prompt = format_prompt(antigen)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.cuda() for k, v in inputs.items()}

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=1000,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
        use_cache=False,
    )

# Keep special tokens so the prompt can be split off on <|im_end|>
full_text = tokenizer.decode(outputs[0], skip_special_tokens=False)
antibody_sequence = full_text.split('<|im_end|>')[1].replace('Antibody: ', '')
print(f"Antigen: {antigen}\nAntibody: {antibody_sequence}\n")
```
This will generate a `|`-delimited output containing the Fv portions of a heavy and a light chain:

```
Antigen: NPPTFSPALL...
Antibody: QVQLVQSGGG...|DIQMTQSPSS...
```
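If you need the chains separately, a minimal sketch that splits on the delimiter (assuming the output contains exactly one `|`):

```python
# Assumes the generated Fv contains exactly one '|' separating heavy and light chains
heavy_fv, light_fv = antibody_sequence.strip().split('|')
print(f"Heavy-chain Fv: {heavy_fv}\nLight-chain Fv: {light_fv}")
```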
This model was trained with supervised fine-tuning (SFT).