Update README.md
README.md CHANGED
@@ -1,3 +1,22 @@
+---
+language: en
+library_name: optimum.neuron
+tags:
+- diffusion
+- image-generation
+- aws
+- neuronx
+- inf2
+- flux
+- compiled
+- bfloat16
+license: creativeml-openrail-m
+datasets:
+- n/a
+pipeline_tag: text-to-image
+base_model: Freepik/flux.1-lite-8B
+---
+
 # Flux Lite 8B – 1024×1024 (Tensor Parallelism 4, AWS Inf2)

 🚀 This repository contains the **compiled NeuronX graph** for running [Freepik’s Flux.1-Lite-8B](https://huggingface.co/Freepik/flux.1-lite-8B) model on **AWS Inferentia2 (Inf2)** instances, optimized for **1024×1024 image generation** with **tensor parallelism = 4**.
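The README above states that the repo ships a precompiled NeuronX graph meant to be consumed through `optimum.neuron` on an Inf2 instance, with the graph fixed at 1024×1024 output and tensor parallelism = 4. As a rough illustration of what loading it might look like, here is a minimal sketch; the `NeuronFluxPipeline` class name and the placeholder repo id are assumptions not confirmed by this commit.

```python
# Minimal usage sketch (untested), assuming optimum-neuron exposes a
# Flux-compatible Neuron pipeline class. Run on an AWS inf2 instance
# with the Neuron SDK and optimum-neuron installed.
from optimum.neuron import NeuronFluxPipeline  # assumed class name

# "<this-repo-id>" is a placeholder for the Hugging Face repo that holds
# the compiled NeuronX graph described in the README above.
pipe = NeuronFluxPipeline.from_pretrained("<this-repo-id>")

# The graph was compiled for a fixed 1024x1024 shape, so request exactly
# that resolution; other sizes would require recompilation.
image = pipe(
    prompt="a watercolor fox in a misty forest",
    height=1024,
    width=1024,
).images[0]
image.save("output.png")
```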