Nishith Jain
KingNish
1223 followers · 106 following
kingnish24 · KingNish24
AI & ML interests
AI is fun actually.
Recent Activity
updated a Space KingNish/Realtime-FLUX · 9 minutes ago
reacted to a-r-r-o-w's post with 🧠 · 14 minutes ago
Caching is an essential technique for speeding up image and video generation in diffusion inference serving. Diffusers just added support for another caching method: First Block Cache, a technique developed by @chengzeyi building on the ideas of TeaCache.

The idea in short: if the model predictions do not vary much over successive inference steps, we can skip steps where the prediction difference is small. To decide whether an inference step will meaningfully change the overall velocity/noise prediction, we compute the relative difference between the output of the first transformer block at timestep $t$ and at $t-1$, and compare it against a chosen threshold. If the difference is below the threshold, we skip the step. A higher threshold leads to more steps being skipped, but skipping too many steps can throw off the model predictions, so the threshold has to be tuned for each model to hit the desired quality-speed tradeoff.

Diffusers usage with CogView4:

```python
import torch
from diffusers import CogView4Pipeline
from diffusers.hooks import apply_first_block_cache, FirstBlockCacheConfig

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
pipe.to("cuda")
apply_first_block_cache(pipe.transformer, FirstBlockCacheConfig(threshold=0.2))

prompt = "A photo of an astronaut riding a horse on mars"
image = pipe(prompt, generator=torch.Generator().manual_seed(42)).images[0]
image.save("output.png")
```

Below, you'll find the benchmarks and visualizations of the predicted output at different blocks of the Flux DiT.

Docs: https://huggingface.co/docs/diffusers/main/en/optimization/cache
PR: https://github.com/huggingface/diffusers/pull/11180

References:
- First Block Cache: https://github.com/chengzeyi/ParaAttention
- TeaCache: https://github.com/ali-vilab/TeaCache
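To make the skip decision concrete, here is a minimal sketch of the relative-difference test described in the post. This is an illustration of the idea, not the actual Diffusers internals: the helper `should_skip_step` and the mean-absolute-value metric are my assumptions; the real implementation lives behind `apply_first_block_cache`.

```python
import torch

def should_skip_step(first_block_out: torch.Tensor,
                     prev_first_block_out: torch.Tensor,
                     threshold: float) -> bool:
    """Hypothetical helper: compare the first transformer block's output at
    timestep t against the cached output from t-1. If the relative difference
    is below the threshold, the remaining blocks are skipped and the cached
    residual is reused."""
    diff = (first_block_out - prev_first_block_out).abs().mean()
    norm = prev_first_block_out.abs().mean()
    return (diff / norm).item() < threshold

torch.manual_seed(0)
prev = torch.randn(2, 16, 64)           # cached first-block output at t-1

# Nearly identical output at t -> tiny relative difference -> skip the step
almost_same = prev + 0.001 * torch.randn_like(prev)
print(should_skip_step(almost_same, prev, threshold=0.2))

# Very different output at t -> large relative difference -> run the full step
different = torch.randn_like(prev)
print(should_skip_step(different, prev, threshold=0.2))
```

Note how the threshold directly controls aggressiveness: raising it makes more steps pass the test and get skipped, which is exactly why the post recommends tuning it per model.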
reacted to a-r-r-o-w's post with 🔥 · 14 minutes ago
KingNish's datasets (18)
KingNish/FineMath-3plus · Viewer · Updated 15 days ago · 21.4M · 398 · 2
KingNish/OpenThoughts3-100k · Viewer · Updated 23 days ago · 100k · 121 · 1
KingNish/Finetome-Deepseek-100k · Viewer · Updated 25 days ago · 100 · 101
KingNish/Wikipedia-Convo-10k · Viewer · Updated Jun 7 · 10k · 157 · 2
KingNish/Wikipedia-10k · Viewer · Updated May 28 · 10k · 34
KingNish/OpenHermes_filtered · Viewer · Updated May 23 · 524k · 29
KingNish/reasoning-base-20k · Viewer · Updated May 15 · 19.9k · 573 · 224
KingNish/deny-harmful-behaviour · Viewer · Updated May 1 · 416 · 21 · 3
KingNish/AIME-COD · Viewer · Updated Apr 27 · 933 · 50 · 3
KingNish/mini_reasoning_1k · Viewer · Updated Apr 27 · 1k · 35 · 5
KingNish/libritts_r_clean_100_snac_codes · Viewer · Updated Apr 10 · 33.2k · 27
KingNish/svarah_snac_codes · Viewer · Updated Apr 10 · 6.66k · 38
KingNish/libritts_r_dev_clean_snac_codes · Viewer · Updated Apr 10 · 5.74k · 32
KingNish/YARD · Viewer · Updated Oct 22, 2024 · 1.46k · 34 · 4
KingNish/my-distiset · Viewer · Updated Sep 18, 2024 · 100 · 16 · 1
KingNish/Image-Gen-or-Image-Editing · Viewer · Updated Aug 25, 2024 · 8.05k · 92 · 3
KingNish/huggingface-docs · Updated Aug 22, 2024 · 143 · 1
KingNish/test-half-data · Viewer · Updated Jun 17, 2024 · 6.7k · 17