---
language: en
library_name: optimum.neuron
tags:
- diffusion
- image-generation
- aws
- neuronx
- inf2
- flux
- compiled
- bfloat16
license: creativeml-openrail-m
pipeline_tag: text-to-image
base_model: Freepik/flux.1-lite-8B
---

# Flux Lite 8B – 1024×1024 (Tensor Parallelism 4, AWS Inf2)

🚀 This repository contains the **compiled NeuronX graph** for running [Freepik’s Flux.1-Lite-8B](https://huggingface.co/Freepik/flux.1-lite-8B) model on **AWS Inferentia2 (Inf2)** instances, optimized for **1024×1024 image generation** with **tensor parallelism = 4**.

The model has been compiled using [🤗 Optimum Neuron](https://huggingface.co/docs/optimum/neuron/index) to leverage AWS NeuronCores for efficient inference at scale.

---

## 🔧 Compilation Details
- **Base model:** `Freepik/flux.1-lite-8B`
- **Framework:** [optimum-neuron](https://github.com/huggingface/optimum-neuron)
- **Tensor Parallelism:** `4` (splits model across 4 NeuronCores)
- **Input resolution:** `1024 × 1024`
- **Batch size:** `1`
- **Precision:** `bfloat16`
- **Auto-casting:** disabled (`auto_cast="none"`)

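As a rough illustration of why tensor parallelism is used here, the transformer weights alone occupy about 16 GB in bfloat16, which tensor parallelism spreads across the four NeuronCores. This back-of-the-envelope sketch assumes 8B parameters at 2 bytes each and ignores activations, the VAE, and the text encoders:

```python
# Back-of-the-envelope memory estimate for the transformer weights.
# Assumptions: 8e9 parameters, bfloat16 (2 bytes/param), weights split
# evenly across 4 NeuronCores by tensor parallelism.
params = 8e9
bytes_per_param = 2  # bfloat16
tp_degree = 4

total_gb = params * bytes_per_param / 1e9   # 16 GB of weights in total
per_core_gb = total_gb / tp_degree          # 4 GB per NeuronCore
print(f"total ~ {total_gb:.0f} GB, per NeuronCore ~ {per_core_gb:.0f} GB")
```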
---

## 📥 Installation

Make sure you are running on an **AWS Inf2 instance** with the [AWS Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/neuron-intro.html) installed.

```bash
pip install "optimum[neuron]" torch torchvision
```

---


## 🚀 Usage

```python
import torch
from optimum.neuron import NeuronFluxPipeline

# Load the compiled pipeline from the Hugging Face Hub
pipe = NeuronFluxPipeline.from_pretrained(
    "kutayozbay/flux-lite-8B-1024x1024-tp4",
    device="neuron",              # run on AWS Inf2 NeuronCores
    torch_dtype=torch.bfloat16,
    batch_size=1,
    height=1024,
    width=1024,
    tensor_parallel_size=4,
)

# Generate an image
prompt = "A futuristic city skyline at sunset"
image = pipe(prompt).images[0]
image.save("flux_output.png")
```
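Note that `height` and `width` are fixed at compile time: NeuronX graphs use static shapes, so this 1024×1024 compilation cannot generate other resolutions without re-compiling. A small sketch of the latent sequence length this resolution implies (assuming Flux's usual 8× VAE downsampling and 2×2 latent patching, which are not stated in this card):

```python
# Latent sequence length baked into a fixed 1024x1024 compilation.
# Assumptions: the VAE downsamples by 8x and the transformer
# patchifies latents into 2x2 patches (standard for Flux-style models).
height = width = 1024
vae_factor, patch = 8, 2

latent_h = height // vae_factor               # 128 latent rows
latent_w = width // vae_factor                # 128 latent cols
tokens = (latent_h // patch) * (latent_w // patch)
print(tokens)  # 4096 image tokens per sample
```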


## 🛠 Re-compilation Example

To compile this model yourself:

```python
import torch
from optimum.neuron import NeuronFluxPipeline

compiler_args = {"auto_cast": "none"}
input_shapes = {"batch_size": 1, "height": 1024, "width": 1024}

# export=True triggers NeuronX compilation with the given static shapes
pipe = NeuronFluxPipeline.from_pretrained(
    "Freepik/flux.1-lite-8B",
    torch_dtype=torch.bfloat16,
    export=True,
    tensor_parallel_size=4,
    **compiler_args,
    **input_shapes,
)

# Persist the compiled artifacts for reuse
pipe.save_pretrained("flux_lite_neuronx_1024_tp4/")
```

