Kaizhao Liang PRO

kz919

AI & ML interests

Search = AGI?

Recent Activity

liked a dataset about 1 hour ago
open-thoughts/OpenThoughts-114k
liked a model about 1 hour ago
deepseek-ai/DeepSeek-R1

Organizations

SambaNova Systems · Ontocord's M*DEL · Sambanova-Gradio-Hackathon

kz919's activity

reacted to m-ric's post with ➕🤗🚀❤️🔥 2 days ago
๐—ง๐—ต๐—ฒ ๐—›๐˜‚๐—ฏ ๐˜„๐—ฒ๐—น๐—ฐ๐—ผ๐—บ๐—ฒ๐˜€ ๐—ฒ๐˜…๐˜๐—ฒ๐—ฟ๐—ป๐—ฎ๐—น ๐—ถ๐—ป๐—ณ๐—ฒ๐—ฟ๐—ฒ๐—ป๐—ฐ๐—ฒ ๐—ฝ๐—ฟ๐—ผ๐˜ƒ๐—ถ๐—ฑ๐—ฒ๐—ฟ๐˜€!

✅ Hosting our own inference was not enough: the Hub now has 4 new inference providers: fal, Replicate, SambaNova Systems, & Together AI.

Check the model cards on the Hub: you can now use inference from various providers in 1 click (cf. video demo)

Their inference can also be used through our Inference API client. There, you can use either your own provider key or your HF token; with the HF token, billing is handled directly on your HF account, as a way to centralize all expenses.
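
A minimal sketch of what that client-side flow can look like, assuming a recent huggingface_hub release that accepts a provider argument; the token is a placeholder, and the model is simply the DeepSeek-R1 checkpoint mentioned elsewhere on this page:

```python
# Sketch: routing a chat completion through an external provider via the
# Hugging Face InferenceClient (assumes huggingface_hub >= 0.28).
from huggingface_hub import InferenceClient

# Passing an HF token bills the request to your HF account; a provider key
# could be supplied instead for direct billing with that provider.
client = InferenceClient(provider="sambanova", api_key="hf_xxx")  # placeholder token

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)
```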

💸 Also, PRO users get $2 of inference credits per month!

Read more in the announcement 👉 https://huggingface.co/blog/inference-providers
reacted to their post with 👍 16 days ago
posted an update 21 days ago
reacted to rwightman's post with 🔥🚀 about 2 months ago
There's a new timm release, v1.0.12, with a focus on optimizers. The optimizer factory has been refactored; there's now a timm.optim.list_optimizers() and a new way to register optimizers and their attributes. As always, you can use a timm optimizer like a torch one: just replace torch.optim with timm.optim.
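
A minimal sketch of that factory flow, assuming timm >= 1.0.12; the model choice and hyperparameters below are placeholders, not from the post:

```python
# Sketch of the refactored optimizer factory (assumes timm >= 1.0.12).
import timm
import timm.optim

print(timm.optim.list_optimizers())  # enumerate registered optimizer names

model = timm.create_model("resnet18", pretrained=False)  # placeholder model

# create_optimizer_v2 takes the registered name string and returns an object
# that behaves like a regular torch.optim optimizer (step(), zero_grad(), ...).
optimizer = timm.optim.create_optimizer_v2(
    model, opt="adamw", lr=1e-3, weight_decay=0.05
)
```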

New optimizers include:
* AdafactorBigVision - adafactorbv
* ADOPT - adopt / adoptw (decoupled decay)
* MARS - mars
* LaProp - laprop
* Cautious Optimizers - a modification applicable to all of the above; prefix the name with c, e.g. cadamw, cnadamw, csgdw, clamb, crmsproptf

I shared some caution comparisons in this model repo: rwightman/timm-optim-caution

For details, references, see the code: https://github.com/huggingface/pytorch-image-models/tree/main/timm/optim

reacted to di-zhang-fdu's post with 🔥 about 2 months ago
reacted to MonsterMMORPG's post with 👍🤝🤯🧠➕😎🤗❤️👀🚀 4 months ago
Huge FLUX LoRA vs Fine Tuning / DreamBooth experiments completed; batch size 1 vs 7 fully tested as well, not only for realism but also for stylization. Datasets of 15 vs 256 images were compared too (expressions / emotions tested as well). Kohya GUI was used for training.

Full files and article : https://www.patreon.com/posts/112099700

Download images in full resolution to see prompts and model names

All training was done with Kohya GUI, can be done locally on Windows, and all runs used 1024x1024 resolution

Fine Tuning / DreamBooth works on GPUs with as little as 6 GB (zero quality degradation, exactly the same as the 48 GB config)

Best LoRA quality requires a 48 GB GPU; 24 GB also works really well, and a minimum 8 GB GPU is necessary for LoRA (with significant quality degradation)

Full size grids are also shared for the following: https://www.patreon.com/posts/112099700

Additionally, I have shared the full training logs so you can see how long each checkpoint took. I have shared the best checkpoints, their step counts, and their training times for each setup (LoRA vs Fine Tuning, batch size 1 vs 7, 15 vs 256 images), so the article covering the completed experiments is very detailed.

Check the images to see all shared files in the post.

Furthermore, a very detailed analysis article has been written, and all of the latest DreamBooth / Fine Tuning configs and LoRA configs are shared, along with Kohya GUI installers for Windows, RunPod, and Massed Compute.

Moreover, I have shared 28 new realism and 37 new stylization testing prompts.

Current tutorials are as below:

Windows requirements (CUDA, Python, cuDNN, and such): https://youtu.be/DrhUHnYfwC0

How to use SwarmUI : https://youtu.be/HKX8_F1Er_w

How to use FLUX on SwarmUI : https://youtu.be/bupRePUOA18

How to use Kohya GUI for FLUX training : https://youtu.be/nySGu12Y05k

How to use Kohya GUI for FLUX training on Cloud (RunPod and Massed Compute) : https://youtu.be/-uhL2nW7Ddw