RadonSAI-Pretrained

Overview

RadonSAI-Pretrained is a model in the Radon family, built on the GPT2LMHeadModel architecture.

Model Details

  • Source Model: microsoft/DialoGPT-medium
  • Architecture: GPT2LMHeadModel
  • Parameters: 355M
  • Model Type: gpt2
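
The 355M figure is consistent with the stock GPT-2 medium configuration that DialoGPT-medium uses (24 layers, 1024-dim embeddings, 50257-token vocabulary). A quick back-of-the-envelope check, assuming those standard hyperparameters (the card itself only states "355M"):

```python
# Approximate parameter count for a GPT-2 medium configuration.
# Hyperparameter values are the standard GPT-2 medium ones (an assumption;
# the model card only reports the 355M total).
n_layer, n_embd, n_vocab, n_ctx = 24, 1024, 50257, 1024

embeddings = n_vocab * n_embd + n_ctx * n_embd  # token + position embeddings

# Per transformer block: two LayerNorms, attention (fused QKV + output
# projection), and a 4x-wide MLP; every linear layer carries a bias.
per_block = (
    2 * (2 * n_embd)                      # ln_1 and ln_2 (weight + bias each)
    + (n_embd * 3 * n_embd + 3 * n_embd)  # attention QKV projection
    + (n_embd * n_embd + n_embd)          # attention output projection
    + (n_embd * 4 * n_embd + 4 * n_embd)  # MLP up-projection
    + (4 * n_embd * n_embd + n_embd)      # MLP down-projection
)

total = embeddings + n_layer * per_block + 2 * n_embd  # + final LayerNorm
print(f"{total / 1e6:.1f}M parameters")  # ~354.8M, reported as 355M
```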

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("MagistrTheOne/RadonSAI-Pretrained")
model = AutoModelForCausalLM.from_pretrained("MagistrTheOne/RadonSAI-Pretrained")

# Tokenize a prompt and generate up to 50 new tokens (greedy decoding by default)
prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
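
The snippet above uses greedy decoding. For a dialogue-derived base like DialoGPT, sampled decoding typically produces more varied replies; a minimal sketch of the extra keyword arguments for `model.generate` (the specific values here are illustrative assumptions, not recommendations from this model card):

```python
# Illustrative sampling settings for model.generate(); the values are
# assumptions for demonstration, not tuned settings from the model card.
sampling_kwargs = {
    "do_sample": True,     # sample from the distribution instead of argmax
    "max_new_tokens": 50,
    "temperature": 0.7,    # <1.0 sharpens the next-token distribution
    "top_p": 0.9,          # nucleus sampling: keep the smallest set covering 90% mass
}

# Usage, with `model` and `inputs` prepared as in the snippet above:
# outputs = model.generate(**inputs, **sampling_kwargs)
```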

Model Information

  • Languages: English, Russian
  • License: Apache 2.0
  • Format: Safetensors
  • Library: Transformers

Citation

If you use this model, please cite the original source model and the Radon project.
