Author Regulatory Focus Classifier (German)

This model is a fine-tuned transformer-based classifier that detects the regulatory focus in German-language text, classifying whether the language expresses a promotion (aspirational, growth-oriented) or prevention (safety, obligation-oriented) focus.

It is fine-tuned from the German-language base model deepset/gbert-large for binary text classification.

Model Details

  • Base model: deepset/gbert-large
  • Fine-tuned for: Binary classification (Regulatory Focus)
  • Language: German
  • Framework: Hugging Face Transformers
  • Model format: safetensors
  • Model size: ~336M parameters (F32 tensors)

Use Cases

  • Social psychology and communication research
  • Marketing and consumer behavior analysis
  • Literary or political discourse analysis
  • Behavioral modeling and goal orientation profiling

Example Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("aveluth/author_regulatory_focus_classifier")
tokenizer = AutoTokenizer.from_pretrained("aveluth/author_regulatory_focus_classifier")
model.eval()  # disable dropout for deterministic inference

# "We must ensure that no mistakes happen. Safety has the highest priority."
text = "Wir müssen sicherstellen, dass keine Fehler passieren. Sicherheit hat höchste Priorität."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():  # no gradients needed at inference time
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()
print("Predicted class:", "prevention" if predicted_class == 0 else "promotion")

Labels

  • Class 0: Prevention-focused language
  • Class 1: Promotion-focused language
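
Rather than hard-coding this mapping, it can be read from the checkpoint's config. A small sketch; this assumes the config defines id2label, otherwise Transformers falls back to the generic names LABEL_0 and LABEL_1.

from transformers import AutoConfig

# Load only the config and print the id-to-label mapping.
config = AutoConfig.from_pretrained("aveluth/author_regulatory_focus_classifier")
print(config.id2label)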

Training Details

  • Training data: custom corpus labeled for regulatory focus (prevention vs. promotion) based on psychological framing
  • Loss function: Cross-entropy
  • Optimizer: AdamW
  • Epochs: 4
  • Learning rate: 3e-5
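
The training script and corpus are not published. The following is a minimal, hypothetical fine-tuning sketch that wires the hyperparameters above into the Hugging Face Trainer; the two-sentence dataset and the batch size are placeholders, not the actual setup.

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder data; the real training set is the custom labeled corpus.
raw = Dataset.from_dict({
    "text": [
        "Sicherheit hat höchste Priorität.",    # "Safety is the top priority." (label 0)
        "Wir wollen wachsen und Neues wagen.",  # "We want to grow and try new things." (label 1)
    ],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
train_ds = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

model = AutoModelForSequenceClassification.from_pretrained("deepset/gbert-large", num_labels=2)

args = TrainingArguments(
    output_dir="regfocus-gbert",
    num_train_epochs=4,             # from the card
    learning_rate=3e-5,             # from the card
    per_device_train_batch_size=8,  # assumption: batch size is not stated in the card
)

# Trainer uses AdamW and the model's built-in cross-entropy loss by default,
# matching the optimizer and loss listed above.
Trainer(model=model, args=args, train_dataset=train_ds).train()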

Limitations

  • Trained on German-language data only
  • Performance may vary on out-of-domain text (e.g., technical manuals, poetry)
  • May not generalize across all cultural framings of regulatory focus

License

MIT

Citation

If you use this model in your research, please cite:

@article{velutharambath2023prevention,
  title={Prevention or Promotion? Predicting Author's Regulatory Focus},
  author={Velutharambath, Aswathy and Sassenberg, Kai and Klinger, Roman},
  journal={Northern European Journal of Language Technology},
  volume={9},
  number={1},
  year={2023}
}