Furkan Gözükara

MonsterMMORPG

AI & ML interests

Check out my YouTube channel SECourses for Stable Diffusion tutorials; they will help you tremendously with every topic.

Posts (46)

Kohya brought massive improvements to FLUX LoRA training (as low as 4 GB GPUs) and DreamBooth / full fine-tuning (as low as 6 GB GPUs). Check the attached images at full size for all the details.

You can download all configs and full instructions:

> https://www.patreon.com/posts/112099700 - Fine Tuning post

> https://www.patreon.com/posts/110879657 - LoRA post

GPUs with as little as 4 GB of VRAM can now train a FLUX LoRA with decent quality, and GPUs with 24 GB or less got a huge speed boost for full DreamBooth / fine-tuning training. You need a minimum 4 GB GPU for FLUX LoRA training and a minimum 6 GB GPU for FLUX DreamBooth / full fine-tuning. It is just mind blowing.


The fine-tuning post above also includes 1-click installers and downloaders for Windows, RunPod, and Massed Compute.

The model downloader scripts were also updated; downloading 30+ GB of models takes about one minute in total on Massed Compute.

You can read the recent updates here: https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#recent-updates

This is the Kohya GUI branch: https://github.com/bmaltais/kohya_ss/tree/sd3-flux.1

The key to reducing VRAM usage is block swapping.

Kohya implemented OneTrainer's block-swapping logic, significantly improving swap speed, and block swapping is now supported for LoRA training as well.
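The block-swap idea can be sketched in plain Python. This is a toy simulation of the scheduling, not Kohya's or OneTrainer's actual implementation: only a fixed budget of transformer blocks "lives on the GPU" at once, and older blocks are evicted back to CPU RAM as the forward pass walks through the model. The block count of 19 is just an illustrative number.

```python
# Toy simulation of block swapping: keep at most `resident` transformer
# blocks "on the GPU" at a time, streaming the rest in from CPU RAM as
# the forward pass walks through the model.
from collections import deque

def run_forward(num_blocks: int, resident: int):
    """Walk blocks 0..num_blocks-1, keeping at most `resident` on the GPU."""
    on_gpu = deque()   # FIFO of block indices currently resident on the GPU
    peak = 0           # maximum simultaneous resident blocks observed
    swaps = 0          # number of CPU<->GPU weight transfers performed
    for block in range(num_blocks):
        if block not in on_gpu:
            if len(on_gpu) >= resident:
                on_gpu.popleft()   # evict the oldest block back to CPU RAM
                swaps += 1
            on_gpu.append(block)   # copy this block's weights to the GPU
            swaps += 1
        peak = max(peak, len(on_gpu))
        # ... run this block's forward computation here ...
    return peak, swaps

peak, swaps = run_forward(num_blocks=19, resident=4)
print(peak, swaps)  # peak resident blocks never exceeds the budget of 4
```

The VRAM saving is exactly this cap: weight memory scales with the `resident` budget rather than the full block count, at the cost of the extra transfers that the improved swapping logic makes faster.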

You can now do FP16 LoRA training on GPUs with 24 GB of VRAM or less.
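Some back-of-the-envelope math shows why precision and block swapping matter so much here. The ~12 billion parameter figure for the FLUX.1-dev transformer is an approximation on my part, and real training adds activations, gradients, optimizer state, and framework overhead on top of the weights:

```python
# Rough VRAM math for holding FLUX transformer weights. The ~12B parameter
# count is approximate; actual training needs more memory than weights alone.
PARAMS = 12e9  # approximate FLUX.1-dev transformer parameter count

def weight_gb(bytes_per_param: float) -> float:
    """GiB needed just to hold the weights at a given precision."""
    return PARAMS * bytes_per_param / 1024**3

fp16 = weight_gb(2)  # ~22.4 GiB: weights alone nearly fill a 24 GB card
fp8  = weight_gb(1)  # ~11.2 GiB: halved, leaving headroom for activations
print(round(fp16, 1), round(fp8, 1))
```

This is why FP16 training on a 24 GB card only becomes practical once block swapping keeps part of the model off the GPU, and why FP8 plus block swapping can push the floor down to very small GPUs.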

You can now train a FLUX LoRA on a 4 GB GPU. The keys are FP8 precision, block swapping, and training only certain layers (remember single-layer LoRA training).
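Training only certain layers amounts to filtering which modules get LoRA adapters. The sketch below shows the idea with a simple regex filter; the module names are illustrative (loosely mimicking diffusers-style FLUX naming), not the exact strings Kohya's scripts use:

```python
# Hypothetical sketch of single-layer LoRA target selection: given module
# names, keep only the block you want to train. Names are illustrative.
import re

ALL_MODULES = [
    "transformer_blocks.0.attn.to_q",
    "transformer_blocks.0.attn.to_k",
    "single_transformer_blocks.7.attn.to_q",
    "single_transformer_blocks.7.proj_out",
    "single_transformer_blocks.8.attn.to_q",
]

def select_targets(modules, pattern):
    """Return only the module names matching the regex pattern."""
    rx = re.compile(pattern)
    return [m for m in modules if rx.search(m)]

# "Single layer" LoRA: attach adapters only to single_transformer_blocks.7
targets = select_targets(ALL_MODULES, r"^single_transformer_blocks\.7\.")
print(targets)
```

Fewer targeted layers means fewer trainable parameters and less optimizer state, which is part of how the 4 GB configs stay within budget.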

It took me more than a day to test all the newer configs, measure their VRAM demands and relative step speeds, and prepare the configs :)
How To Use Mochi 1 Open Source Video Generation Model On Your Windows PC, RunPod and Massed Compute

Tutorial Link : https://youtu.be/iqBV7bCbDJY

Mochi 1 from Genmo is the newest state-of-the-art open-source video generation model that you can use for free on your computer. It is a breakthrough like the very first Stable Diffusion model, but this time for video generation. In this tutorial, I show you how to use the Genmo Mochi 1 video generation model locally on your Windows computer with the most advanced and very easy-to-use SwarmUI. SwarmUI is as fast as ComfyUI but as easy to use as the Automatic1111 Stable Diffusion web UI. Moreover, if you don't have a powerful GPU to run this model locally, I show you how to use it on the best cloud providers, RunPod and Massed Compute.

🔗 Public Open Access Article Used in Video ⤵️
▶️ https://www.patreon.com/posts/106135985

Amazing Ultra Important Tutorials with Chapters and Manually Written Subtitles / Captions
Stable Diffusion 3.5 Large How To Use Tutorial With Best Configuration and Comparison With FLUX DEV: https://youtu.be/-zOKhoO9a5s

FLUX Full Fine-Tuning / DreamBooth Tutorial That Shows A Lot of Info Regarding the Latest SwarmUI: https://youtu.be/FvpWy1x5etM

Full FLUX Tutorial — FLUX Beats Midjourney for Real: https://youtu.be/bupRePUOA18

Main Windows SwarmUI Tutorial (Watch To Learn How to Use)

This covers how to install and use SwarmUI; you have to watch it to learn how to use the UI.
It has 70 chapters and manually fixed captions: https://youtu.be/HKX8_F1Er_w