
RoBERTa Style Classifier

This is a roberta-base model fine-tuned for writing style classification.

πŸ” Task

Given an input sentence, the model predicts the most likely writing style from labels such as:

  • Empathetic
  • Formal
  • Casual
  • Persuasive
  • Technical
  • ... and more

🧠 Model Details

  • Base model: roberta-base
  • Max sequence length: 256 tokens
  • Trained with PyTorch and Hugging Face Transformers
  • Dataset: custom-curated, balanced dataset covering 10+ writing styles
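
The full label set is stored in the model configuration. A minimal sketch for listing it (this assumes the checkpoint's id2label mapping is populated, as it is for classifiers exported with Transformers):

from transformers import AutoConfig

# Load only the config and print the id-to-style-label mapping
config = AutoConfig.from_pretrained("Akshay-Sai/roberta-style-classifier")
print(config.id2label)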

πŸ“Š Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("Akshay-Sai/roberta-style-classifier")
tokenizer = AutoTokenizer.from_pretrained("Akshay-Sai/roberta-style-classifier")
model.eval()

def predict_style(text):
    # Tokenize with the same 256-token limit used during training
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=256)
    with torch.no_grad():
        outputs = model(**inputs)
        pred = torch.argmax(outputs.logits, dim=-1)
    # Map the predicted class index back to its style label
    return model.config.id2label[pred.item()]

# Example
text = "I understand how tough this must be for you. Stay strong."
print("Predicted Style:", predict_style(text))
 