# qwen_tweet_generator_pro
This model is a fine-tuned version of Qwen/Qwen3-14B for German tweet generation.
## Model Description
- Developed by: Giordano De Marzo
- Model type: Causal Language Model
- Language(s): German
- License: Apache 2.0
- Finetuned from model: Qwen/Qwen3-14B
## Intended Uses
This model is designed to generate German tweets in the style of its training data.
### Direct Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the tokenizer, the Qwen3-14B base model, and the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("giordano-dm/qwen_tweet_generator_pro")
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-14B")
model = PeftModel.from_pretrained(base_model, "giordano-dm/qwen_tweet_generator_pro")

# Build the chat prompt; enable_thinking=False disables Qwen3's thinking mode
messages = [
    {"role": "user", "content": "Schreib einen Tweet."}
]
formatted_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)

# Generate a tweet and decode only the newly generated tokens
inputs = tokenizer(formatted_text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    do_sample=True
)
response = tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(response)
```
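For deployment it can be convenient to fold the adapter into the base weights so the model can be used without PEFT at load time. This is standard PEFT functionality rather than a procedure documented in this card; a minimal sketch, continuing from the loading code above (the output path is a placeholder):

```python
# Optional: merge the LoRA adapter into the base model for standalone use.
# Standard PEFT API, not part of the original card; the path below is a placeholder.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("qwen_tweet_generator_pro-merged")
tokenizer.save_pretrained("qwen_tweet_generator_pro-merged")
```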
## Training Details
### Training Data
- Dataset: Custom German tweet dataset (a plausible record format is sketched after this list)
- Data type: Social media posts
- Language: German
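The card does not document the exact record layout. Given the chat-style prompt used in the Direct Use example above, a record might plausibly look like the following; this is purely illustrative and the field names are assumptions:

```python
# Purely illustrative: the card does not specify the training record format.
# Assumed chat-style layout matching the inference prompt shown above.
example_record = {
    "messages": [
        {"role": "user", "content": "Schreib einen Tweet."},
        {"role": "assistant", "content": "<tweet text from the dataset>"},
    ]
}
```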
### Training Procedure
- Training regime: LoRA fine-tuning
- Base model: Qwen/Qwen3-14B
- Training framework: Transformers + PEFT
- Hardware: GPU
### Training Hyperparameters
- LoRA configuration: r=16, alpha=32, dropout=0.1 (mapped to code in the sketch after this list)
- Learning rate: 1e-4
- Batch size: 8
- Epochs: 3
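The card lists the LoRA and optimizer settings but not the training script itself. The sketch below only shows how these hyperparameters map onto PEFT's `LoraConfig` and a `TrainingArguments` object; the `target_modules` and `output_dir` values are assumptions, not taken from the card:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings as listed above; target_modules are an assumption,
# since the card does not specify which projections were adapted.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)

# Optimizer and schedule settings as listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="qwen_tweet_generator_pro",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
```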
## Limitations and Biases
- This model is trained on specific social media data and may reflect biases present in the training data
- Generated content should be reviewed before publication
- The model may not be suitable for all contexts or audiences
## Ethical Considerations
- Users should be aware of potential biases in generated content
- Generated tweets should be fact-checked before sharing
- Consider the social impact of automated content generation