Regarding the loading of the model

#2
by someshijun - opened

Hello, may I ask: if I load with the latest INT8 (W8A8) model loading node, how should I set model_type? Could you provide a screenshot for reference? Thank you.

The latest ComfyUI-Flux2-INT8 has an on-the-fly option; if you enable it, the model you select will be quantized on the fly according to the model_type you set.

In other words, when you use a prequantized model you don't need to set model_type (keep it at the default - Flux2, afaik) - just disable the on-the-fly option.
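For context, here is an illustrative sketch of what on-the-fly int8 (W8A8) weight quantization does conceptually, in both the tensorwise and rowwise flavors mentioned in this thread. This is plain numpy for illustration only, not the node's actual implementation:

```python
# Illustrative sketch of symmetric int8 weight quantization.
# NOT the ComfyUI-Flux2-INT8 node's actual code.
import numpy as np

def quantize_tensorwise(w):
    """One scale for the whole tensor (the 'tensorwise' int8 variant)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale  # dequantize with q * scale

def quantize_rowwise(w):
    """One scale per output row (the 'int8rowwise' variant)."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale  # dequantize with q * scale (broadcasts per row)

w = np.random.randn(4, 8).astype(np.float32)
q_t, s_t = quantize_tensorwise(w)
q_r, s_r = quantize_rowwise(w)
# Rowwise keeps a separate scale per row, so its reconstruction
# error is at most that of the single tensorwise scale.
print(np.abs(q_t * s_t - w).max(), np.abs(q_r * s_r - w).max())
```

Either way, the loader only needs to know which scheme the stored weights use, which is why model_type matters for on-the-fly quantization but not for an already prequantized checkpoint.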

someshijun changed discussion status to closed
someshijun changed discussion status to open

It doesn't work! Can you provide examples?

You can use the int8 (Tensorwide) model with the custom node https://github.com/BobJohnson24/ComfyUI-Flux2-INT8
If you want to use the int8rowwise model, you should check that the custom node is on the Quip branch.

~/Servers/ComfyUI/custom_nodes/ComfyUI-Flux2-INT8$ git branch
* Quip
  main
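If you are still on main, switching to the Quip branch looks like this (the path below matches the example output above; adjust it to your own ComfyUI install):

```shell
# Path is an example; point this at your own custom_nodes directory.
cd ~/Servers/ComfyUI/custom_nodes/ComfyUI-Flux2-INT8
git fetch origin       # make sure the Quip branch is known locally
git checkout Quip      # switch to the branch with int8rowwise support
git branch             # '* Quip' marks the active branch
```

Restart ComfyUI after switching so the node is reloaded from the new branch.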

Then, use the model as below.
[screenshot of the node setup]
