---
license: apache-2.0
base_model: Mitchins/t5-base-artgen-multi-instruct
tags:
  - text2text-generation
  - prompt-enhancement
  - ai-art
  - openvino
  - t5
  - art-generation
  - stable-diffusion
  - intel
language:
  - en
library_name: optimum-intel
pipeline_tag: text2text-generation
model-index:
  - name: t5-base-artgen-multi-instruct-OpenVINO
    results: []
datasets:
  - art-prompts
widget:
  - text: 'Enhance this prompt: robot in space'
    example_title: Standard Enhancement
  - text: 'Enhance this prompt (no lora): beautiful landscape'
    example_title: Clean Enhancement
  - text: 'Enhance this prompt (with lora): anime girl'
    example_title: Technical Enhancement
  - text: 'Simplify this prompt: A stunning, highly detailed masterpiece'
    example_title: Simplification
---

# T5 Base Art Generation Multi-Instruct OpenVINO

OpenVINO version of [Mitchins/t5-base-artgen-multi-instruct](https://huggingface.co/Mitchins/t5-base-artgen-multi-instruct), converted for optimized inference on Intel hardware.

## Model Details

- **Base Model:** T5-base (Google)
- **Training Samples:** 297,282
- **Parameters:** 222M
- **Format:** OpenVINO IR (FP32)
- **Optimization:** Intel CPU, integrated GPU, and VPU

## Quad-Instruction Capabilities

1. **Standard Enhancement:** `Enhance this prompt: {text}`
2. **Clean Enhancement:** `Enhance this prompt (no lora): {text}`
3. **Technical Enhancement:** `Enhance this prompt (with lora): {text}`
4. **Simplification:** `Simplify this prompt: {text}`
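
Since the four modes differ only in their instruction prefix, a small helper can keep them consistent. A minimal sketch (`TEMPLATES` and `build_prompt` are illustrative names, not shipped with the model):

```python
# Illustrative helper; the mode names here are ours, not part of the model.
TEMPLATES = {
    "standard": "Enhance this prompt: {text}",
    "clean": "Enhance this prompt (no lora): {text}",
    "technical": "Enhance this prompt (with lora): {text}",
    "simplify": "Simplify this prompt: {text}",
}

def build_prompt(mode: str, text: str) -> str:
    """Wrap raw text in one of the four instruction prefixes."""
    return TEMPLATES[mode].format(text=text)

print(build_prompt("clean", "robot in space"))
# Enhance this prompt (no lora): robot in space
```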

## Usage

```python
from optimum.intel import OVModelForSeq2SeqLM
from transformers import T5Tokenizer

# Load the OpenVINO model and tokenizer
model = OVModelForSeq2SeqLM.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-OpenVINO")
tokenizer = T5Tokenizer.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-OpenVINO")

# Example: clean enhancement of a short prompt
text = "woman in red dress"
prompt = f"Enhance this prompt (no lora): {text}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=80)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
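
The pre-converted IR above is the simplest route; if you would rather convert the original PyTorch checkpoint yourself, `optimum-intel` can export it on the fly. A sketch (the local directory name is arbitrary):

```python
from optimum.intel import OVModelForSeq2SeqLM
from transformers import T5Tokenizer

# export=True converts the PyTorch weights to OpenVINO IR during loading
model = OVModelForSeq2SeqLM.from_pretrained(
    "Mitchins/t5-base-artgen-multi-instruct", export=True
)
tokenizer = T5Tokenizer.from_pretrained("Mitchins/t5-base-artgen-multi-instruct")

# Save the converted model for reuse (directory name is arbitrary)
model.save_pretrained("t5-artgen-openvino")
tokenizer.save_pretrained("t5-artgen-openvino")
```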

## Performance

The model is optimized for Intel hardware (CPU, integrated GPU, VPU) and typically runs significantly faster than standard PyTorch inference on the same machine.
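
The actual speedup depends on your CPU, drivers, and sequence lengths, so it is worth measuring on your own hardware. A rough timing sketch (not a rigorous benchmark; `time_generate` is an illustrative helper):

```python
import time

from optimum.intel import OVModelForSeq2SeqLM
from transformers import T5ForConditionalGeneration, T5Tokenizer

repo = "Mitchins/t5-base-artgen-multi-instruct-OpenVINO"
tokenizer = T5Tokenizer.from_pretrained(repo)
inputs = tokenizer("Enhance this prompt: robot in space", return_tensors="pt")

def time_generate(model, runs=5):
    """Average wall-clock time per generate() call after one warm-up."""
    model.generate(**inputs, max_length=80)  # warm-up / compile
    start = time.perf_counter()
    for _ in range(runs):
        model.generate(**inputs, max_length=80)
    return (time.perf_counter() - start) / runs

ov_model = OVModelForSeq2SeqLM.from_pretrained(repo)
print(f"OpenVINO: {time_generate(ov_model):.3f}s per call")

pt_model = T5ForConditionalGeneration.from_pretrained(
    "Mitchins/t5-base-artgen-multi-instruct"
)
print(f"PyTorch:  {time_generate(pt_model):.3f}s per call")
```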

## Deployment

Well suited to Intel NUC systems and other Intel-based edge devices.
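
For offline edge deployment, the model can be downloaded once and then loaded entirely from a local directory; on machines with an Intel integrated GPU, the OpenVINO GPU device can be selected. A sketch (the directory name is arbitrary, and GPU availability depends on your OpenVINO plugins and drivers):

```python
from optimum.intel import OVModelForSeq2SeqLM
from transformers import T5Tokenizer

local_dir = "t5-artgen-openvino"  # e.g. the directory saved in the Usage section

# Load entirely from disk -- no network access needed on the edge device
model = OVModelForSeq2SeqLM.from_pretrained(local_dir)
tokenizer = T5Tokenizer.from_pretrained(local_dir)

# Optionally recompile for an Intel integrated GPU instead of the CPU;
# requires the OpenVINO GPU plugin, otherwise keep the default CPU device
model.to("gpu")
```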