# T5-Small Art Generation Bidirectional Prompt Converter
A fine-tuned T5-small model for bidirectional prompt transformation in AI art generation.
## Model Description
This model can convert between simple descriptions and elaborate art generation prompts in both directions:
- Simple → Elaborate: Transform basic descriptions into rich, detailed art prompts
- Elaborate → Simple: Extract core concepts from complex prompts
## Training Data
Trained on 53K+ high-quality prompt pairs with saturation control to reduce bias:
- Simple descriptions from BLIP2 image analysis
- Elaborate prompts from curated art generation datasets
- Bias reduction: Capped "beautiful woman" and similar oversaturated content
- Approximately balanced bidirectional training (roughly 50/50 task split)
## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("mitchins/t5-small-artgen-bidirectional")
model = T5ForConditionalGeneration.from_pretrained("mitchins/t5-small-artgen-bidirectional")

# Simple → elaborate
input_text = "Generate a detailed artistic prompt for: a cat sitting on a table"
inputs = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(inputs, max_length=200, num_beams=3, temperature=0.8, do_sample=True)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)

# Elaborate → simple
input_text = "Simplify this prompt: A majestic golden dragon soaring through storm clouds above a medieval castle, with lightning illuminating its scales in photorealistic detail"
inputs = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(inputs, max_length=200, num_beams=3, temperature=0.8, do_sample=True)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
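Since both directions are selected purely by the task prefix in the input string, a small helper can reduce copy-paste errors. This is an illustrative sketch, not part of the released model code; the prefixes are taken from the usage examples above, and the function name `build_input` is our own:

```python
def build_input(text: str, direction: str = "elaborate") -> str:
    """Build the task-prefixed input string the model expects.

    direction="elaborate": simple description → detailed art prompt.
    direction="simplify":  detailed art prompt → core concept.
    """
    if direction == "elaborate":
        return f"Generate a detailed artistic prompt for: {text}"
    if direction == "simplify":
        return f"Simplify this prompt: {text}"
    raise ValueError(f"unknown direction: {direction!r}")
```

The returned string is what you pass to `tokenizer.encode(...)` before calling `model.generate(...)`.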
## Examples
Simple → Elaborate:
- Input:
"Generate a detailed artistic prompt for: a robot in a garden"
- Output:
"A colossal, bioluminescent robot stands in a lush, bioluminescent garden, its scales shimmering with iridescent colors. The scene is bathed in the soft, ethereal light of the setting sun. Rendered in a detailed matte painting style, with deep colors, fantastical elements, and intricate details, reminiscent of fantasy concept art trending on Artstation."
Elaborate → Simple:
- Input:
"Simplify this prompt: Hyperrealistic 8K render of a majestic phoenix rising from crystalline flames, its feathers crafted from pure starlight, soaring above an ancient mystical forest at dawn with volumetric lighting"
- Output:
"A phoenix flying over a forest at sunset"
## Training Details
- Base Model: t5-small
- Training Samples: 53,372 bidirectional pairs
- Epochs: 3
- Saturation Control: Applied bias reduction techniques
- Task Balance: 25K elaborate→simple + 24K simple→elaborate
## Limitations

- Trained primarily on English prompts
- May occasionally repeat tokens (mitigate with `repetition_penalty=1.2`)
- Optimized for art generation prompts; may not work well for other domains
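To apply the repetition fix noted above, the extra argument can be folded into a reusable set of decoding settings. A minimal sketch; the values other than `repetition_penalty` simply mirror the usage example, and the dict name is our own:

```python
# Decoding settings for model.generate(); repetition_penalty=1.2 is the
# value suggested above for curbing repeated tokens.
GEN_KWARGS = {
    "max_length": 200,
    "num_beams": 3,
    "do_sample": True,
    "temperature": 0.8,
    "repetition_penalty": 1.2,
}

# Usage (model/inputs as in the Usage section):
# outputs = model.generate(inputs, **GEN_KWARGS)
```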
## Citation

If you use this model, please cite:
```bibtex
@misc{t5-small-artgen-bidirectional,
  author    = {mitchins},
  title     = {T5-Small Art Generation Bidirectional Prompt Converter},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/mitchins/t5-small-artgen-bidirectional}
}
```