FP4 Inference TensorRT error
#4 opened by Ashoka74
```
[I] Initializing Flux txt2img demo using TensorRT
[I] Autoselected scheduler: FlowMatchEuler
[I] Load Scheduler FlowMatchEulerDiscreteScheduler from: pytorch_model/flux.1-dev/TXT2IMG/flowmatcheulerdiscretescheduler/scheduler
[I] Load CLIPTokenizer model from: pytorch_model/flux.1-dev/TXT2IMG/tokenizer
[I] Load T5TokenizerFast model from: pytorch_model/flux.1-dev/TXT2IMG/tokenizer_2
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading TensorRT engine to cpu bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/clip/engine_trt10.12.0.36.plan
[I] Loading bytes from /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/clip/engine_trt10.12.0.36.plan
Loading TensorRT engine from bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/clip/engine_trt10.12.0.36.plan
Loading TensorRT engine to cpu bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/t5/engine_trt10.12.0.36.plan
[I] Loading bytes from /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/t5/engine_trt10.12.0.36.plan
Loading TensorRT engine from bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/t5/engine_trt10.12.0.36.plan
Loading TensorRT engine to cpu bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/transformer_fp4/engine_trt10.12.0.36.plan
[I] Loading bytes from /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/transformer_fp4/engine_trt10.12.0.36.plan
Loading TensorRT engine from bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/transformer_fp4/engine_trt10.12.0.36.plan
Loading TensorRT engine to cpu bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/vae/engine_trt10.12.0.36.plan
[I] Loading bytes from /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/vae/engine_trt10.12.0.36.plan
Loading TensorRT engine from bytes: /workspace2/TensorRT/demo/Diffusion/engines/FluxPipeline_flux.1-dev/vae/engine_trt10.12.0.36.plan
/workspace2/TensorRT/demo/Diffusion/demo_diffusion/pipeline/diffusion_pipeline.py:865: DeprecationWarning: Use Deprecated in TensorRT 10.1. Superseded by get_device_memory_size_v2 instead.
max_device_memory = max(max_device_memory, engine.engine.device_memory_size)
/workspace2/TensorRT/demo/Diffusion/demo_diffusion/engine.py:276: DeprecationWarning: Use create_execution_context instead.
self.context = self.engine.create_execution_context_without_device_memory()
[I] Warming up ..
Traceback (most recent call last):
File "/workspace2/TensorRT/demo/Diffusion/demo_txt2img_flux.py", line 160, in <module>
demo.run(**kwargs_run_demo)
File "/workspace2/TensorRT/demo/Diffusion/demo_diffusion/pipeline/flux_pipeline.py", line 863, in run
self.infer(prompt, prompt2, height, width, warmup=True, **kwargs)
File "/workspace2/TensorRT/demo/Diffusion/demo_diffusion/pipeline/flux_pipeline.py", line 693, in infer
assert len(prompt) == len(prompt2)
AssertionError
```
I managed to convert the FP4 files to TensorRT engines successfully, but then got this error when running inference.
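For context, the assertion that fires compares the lengths of the two prompt lists Flux feeds to its CLIP and T5 encoders. Below is a minimal, self-contained sketch of that check and a hypothetical workaround; the helper name `check_prompts` and the example prompts are mine, not from the repo, and the real fix on the demo side may simply be passing a second prompt so both lists have the same batch size.

```python
# Minimal sketch of the check that fails in flux_pipeline.py (paraphrased
# from the traceback above, not copied from the repo). Flux encodes
# `prompt` with CLIP and `prompt2` with T5, so the batch sizes must match.
def check_prompts(prompt: list[str], prompt2: list[str]) -> None:
    assert len(prompt) == len(prompt2)

prompt = ["a photo of an astronaut riding a horse"]
prompt2: list[str] = []  # e.g. no second prompt reached the pipeline

# Hypothetical workaround: mirror the first prompt so the lengths match.
if len(prompt2) != len(prompt):
    prompt2 = list(prompt)

check_prompts(prompt, prompt2)  # now passes
```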
Are there differences in the VAE? I had to take it from Flux.1-dev-onnx, since there was no VAE in this repo.
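In case it helps narrow things down, one way to sanity-check the substituted VAE engine is to print its I/O tensors and compare them against what the pipeline binds. A minimal sketch using the TensorRT 10 Python API (the engine path is the one from my log; adjust as needed):

```python
import tensorrt as trt

# Print the I/O tensor names, shapes, and dtypes of the VAE engine so they
# can be compared with the bindings the Flux pipeline expects.
ENGINE_PATH = "engines/FluxPipeline_flux.1-dev/vae/engine_trt10.12.0.36.plan"

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(
        name,
        engine.get_tensor_mode(name),    # TensorIOMode.INPUT or .OUTPUT
        engine.get_tensor_shape(name),   # -1 marks dynamic dimensions
        engine.get_tensor_dtype(name),
    )
```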