A modification of the base ProGen2-large model by Nijkamp et al. The `vocab_size` is reduced from 51200 to 32, matching ProGen2-small and ProGen2-medium. The model otherwise follows the original ProGen2 code.
Example usage:

```python
from models.modeling_progen import ProGenForCausalLM
from tokenizers import Tokenizer
import torch

# load model and tokenizer
model = ProGenForCausalLM.from_pretrained("IDEA-XL/progen2-large", torch_dtype="auto")
tokenizer = Tokenizer.from_pretrained("IDEA-XL/progen2-large")
tokenizer.no_padding()

# prepare input
prompt = "1MEVVIVTGMSGAGK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)

# forward
logits = model(input_ids).logits
```
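The logits from the forward pass can be turned into next-token probabilities with a softmax over the vocabulary dimension. A minimal sketch, using a synthetic logits tensor in place of the real model output (same shape: sequence length by `vocab_size=32`):

```python
import torch

# Synthetic stand-in for `model(input_ids).logits` on a 15-token prompt,
# with this checkpoint's reduced vocabulary of 32 tokens.
logits = torch.randn(15, 32)

# Probabilities for the token that would follow the last prompt position.
next_token_probs = torch.softmax(logits[-1], dim=-1)

# Greedy decoding step: pick the highest-probability next token id.
next_token_id = int(torch.argmax(next_token_probs))
```

`next_token_id` can then be mapped back to an amino-acid token with `tokenizer.decode`.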