# RadonSAI-Pretrained

## Overview
RadonSAI-Pretrained is a model in the Radon family, built on the GPT2LMHeadModel architecture and initialized from microsoft/DialoGPT-medium.
## Model Details
- Source Model: microsoft/DialoGPT-medium
- Architecture: GPT2LMHeadModel
- Parameters: 355M
- Model Type: gpt2
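
These values can be checked locally. The following is a minimal sketch, assuming the checkpoint ships with its config on the Hub; counting parameters requires downloading the full weights:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Inspect the published config without downloading the weights.
config = AutoConfig.from_pretrained("MagistrTheOne/RadonSAI-Pretrained")
print(config.model_type)      # expected: "gpt2"
print(config.architectures)   # expected: ["GPT2LMHeadModel"]

# Loading the model allows a parameter count (~355M expected).
model = AutoModelForCausalLM.from_pretrained("MagistrTheOne/RadonSAI-Pretrained")
print(sum(p.numel() for p in model.parameters()))
```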
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("MagistrTheOne/RadonSAI-Pretrained")
model = AutoModelForCausalLM.from_pretrained("MagistrTheOne/RadonSAI-Pretrained")

# Encode a prompt and generate a continuation.
prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
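
The default greedy decoding can produce repetitive replies for dialogue-style models. The variant below uses sampling instead; the temperature and top_p values are illustrative, not tuned settings:

```python
# Sampling tends to give more varied output than greedy decoding.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,   # illustrative value
    top_p=0.9,         # illustrative value
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```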
## Model Information
- Languages: English, Russian
- License: Apache 2.0
- Format: Safetensors
- Library: Transformers
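
To keep a local copy in the same Safetensors format, a minimal sketch (the output directory name is illustrative):

```python
# safe_serialization=True writes model.safetensors rather than a
# pickle-based checkpoint; it is the default in recent Transformers releases.
model.save_pretrained("radon-sai-local", safe_serialization=True)
tokenizer.save_pretrained("radon-sai-local")
```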
## Citation
If you use this model, please cite both the original source model (microsoft/DialoGPT-medium) and the Radon project.