RTX 5090 CUSTOM COMFY BUILD
Hi there @Jt-Zhang,
Thanks again for early access to SageAttention 2.1 — it looks amazing, and I’m really eager to integrate it into my pipeline.
Before I do, I wanted to check whether it's compatible with my current setup, or whether I need to downgrade or upgrade anything to avoid breakage (last time, an attention kernel mismatch wrecked my custom Comfy build 😅).
Here’s my current system:
- GPU: NVIDIA RTX 5090 (SM_120, Blackwell, 32GB VRAM)
- VRAM Mode: NORMAL or HIGH VRAM
- Python: 3.10.9 (ComfyUI embedded)
- PyTorch: 2.6.0.dev20241112+cu121
- Triton: Included with this nightly PyTorch
- CUDA: 12.5
- ComfyUI Version: 0.3.43 (run via `run_nvidia_gpu.bat`, custom setup)
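For reference, this is the quick check I'm running inside the embedded Python to read those versions back out (just a sketch; the `triton` import may not succeed on every build):

```python
# Quick environment report for the ComfyUI embedded Python.
# Sketch only: assumes torch is importable; triton may or may not be installed.
import torch

print("PyTorch:", torch.__version__)              # e.g. 2.6.0.dev20241112+cu121
print("CUDA (torch build):", torch.version.cuda)  # CUDA version torch was built against
print("GPU:", torch.cuda.get_device_name(0))
print("Compute capability:", torch.cuda.get_device_capability(0))  # (12, 0) == sm_120

try:
    import triton
    print("Triton:", triton.__version__)
except ImportError:
    print("Triton: not installed")
```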
My goal is to integrate SageAttention2++ into this workflow for the speed-up; I just want to make sure my current environment won't conflict with the kernel requirements or need any changes first.
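If I've understood the README correctly, the drop-in I'm aiming for looks roughly like this (untested on my build yet; the `tensor_layout` value is my assumption from the docs):

```python
# Rough sketch of the swap I have in mind: call sageattn where the model's
# attention is computed, instead of scaled_dot_product_attention.
# Untested on my setup; tensor_layout is assumed from the SageAttention README.
import torch.nn.functional as F
from sageattention import sageattn

def attention(q, k, v, is_causal=False, use_sage=True):
    # q, k, v: (batch, heads, seq_len, head_dim), i.e. "HND" layout
    if use_sage:
        return sageattn(q, k, v, tensor_layout="HND", is_causal=is_causal)
    return F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
```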
Let me know if anything should be adjusted on my end!
Really appreciate your hard work, and thanks again.
DJ
Hi again,
Quick follow-up — I’m thinking of updating my stack to:
- **Python:** 3.10.9
- **PyTorch:** 2.7.1 (stable release)
- **CUDA:** 12.8
- **GPU:** RTX 5090 (Blackwell, sm_120)
Would that configuration be fully compatible with SageAttention 2.1 / 2++?
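Once that stack is installed, I was planning to sanity-check that the wheel actually ships Blackwell kernels with something like this (sketch only; assumes a CUDA-enabled torch 2.7.x build from the cu128 index):

```python
# Sanity check after upgrading: confirm the installed PyTorch build includes
# sm_120 (Blackwell) kernels. Assumes a CUDA-enabled torch 2.7.x wheel.
import torch

arch_list = torch.cuda.get_arch_list()      # list of compiled arches, e.g. [..., 'sm_90', 'sm_120']
print("Compiled arches:", arch_list)
print("Blackwell (sm_120) supported:", "sm_120" in arch_list)
print("Runtime CUDA:", torch.version.cuda)  # expecting 12.8 for a cu128 build
```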
Appreciate your input — just want to be sure this new combo gives me the most stable performance possible with Blackwell and SageAttention!
Thanks so much.