black-forest-labs/FLUX.1-schnell - AMD Optimized ONNX

Original Model

https://huggingface.co/black-forest-labs/FLUX.1-schnell

_io32/16

_io32: model inputs are fp32; the model converts them to fp16 internally, performs ops in fp16, and writes the final result in fp32.

_io16: model inputs are fp16; ops are performed in fp16, and the final result is written in fp16.
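
In practice the only difference for callers is the element type of the tensors bound to the session inputs and read back from the outputs. Below is a minimal ONNX Runtime C# sketch of the _io32 case; the model path, input name, and shape are placeholders for illustration only, and the real ones should be read from session.InputMetadata (the FLUX transformer takes several inputs, not just one).

// csharp sketch (illustrative only: path, input name, and shape are placeholders)
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

using var session = new InferenceSession("D:\\Models\\Flux.1-schnell_amdgpu\\model.onnx");

// _io32: bind fp32 tensors; the graph casts to fp16 internally and returns fp32.
var latents = new DenseTensor<float>(new[] { 1, 16, 64, 64 });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("hidden_states", latents)
};

using var results = session.Run(inputs);
var output = results.First().AsTensor<float>();

// _io16: same call pattern, but inputs/outputs use DenseTensor<Float16> instead of DenseTensor<float>.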

Running

1. Using Amuse GUI Application

Use the Amuse GUI application to run the model: https://www.amuse-ai.com/

Use the _io32 variant when running with the Amuse application.

2. Inference Demo

The C# example below uses OnnxStack: https://github.com/TensorStack-AI/OnnxStack

// csharp example
// Create Pipeline
var pipeline = FluxPipeline.CreatePipeline("D:\\Models\\Flux.1-schnell_amdgpu");
// Prompt
var promptOptions = new PromptOptions
{
    Prompt = "a majestic Royal Bengal Tiger on the mountain top overlooking beautiful Lake Tahoe snowy mountains and deep blue lake, deep blue sky, ultra hd, 8k, photorealistic"
};
// Scheduler Options
var schedulerOptions = pipeline.DefaultSchedulerOptions with
{  
    InferenceSteps = 4,
    GuidanceScale = 1.0f,
    SchedulerType = SchedulerType.FlowMatchEulerDiscrete,
};

// Run pipeline
var result = await pipeline.GenerateImageAsync(promptOptions, schedulerOptions);

// Save Image Result
await result.SaveAsync("Result.png");
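
To vary the output without rebuilding the pipeline, the same scheduler-options record can be adjusted per run using the with-expression shown above. The property names Seed, Width, and Height in the sketch below are assumptions about OnnxStack's SchedulerOptions and should be checked against the OnnxStack documentation.

// csharp sketch (Seed, Width, and Height are assumed SchedulerOptions properties)
var variantOptions = schedulerOptions with
{
    Seed = 42,        // fixed seed for a reproducible image (assumed property)
    Width = 1024,     // output width in pixels (assumed property)
    Height = 1024     // output height in pixels (assumed property)
};

var variantResult = await pipeline.GenerateImageAsync(promptOptions, variantOptions);
await variantResult.SaveAsync("Result_seed42.png");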

Inference Result

[Image: example inference result]
