Custom `generate` methods discussion

#10 opened by joaogante (Transformers Community org)

Hi everyone 👋

This thread aims to centralize the discussion about custom generate methods. Questions about how they work, improvement suggestions, and requests for new generation methods are all welcome!

Resources:
👉 docs
👉 all custom generate methods
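
For newcomers to the thread: loading a custom generate method from the Hub only takes one extra argument to `generate`. Below is a minimal sketch, using the sep_cache repo discussed later in this thread as the example method; the base model is a placeholder and any method-specific kwargs are omitted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder base model; any generative model should work.
model_id = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me a story:", return_tensors="pt").to(model.device)

# `custom_generate` points at a Hub repo that ships the decoding method;
# `trust_remote_code=True` is required because the repo's code runs locally.
out = model.generate(
    **inputs,
    custom_generate="transformers-community/sep_cache",
    trust_remote_code=True,
    max_new_tokens=32,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```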

Transformers Community org • edited 8 days ago

Hi @joaogante

We want to integrate our method into the transformers-community org, as advised by your official team. See details in https://github.com/huggingface/transformers/pull/38824

Could you please provide detailed steps for opening a pull request to transformers-community?

Summary:
We want to merge https://huggingface.co/Gausson/sep_cache [ICML 2025] into transformers-community/sep_cache, but we couldn't find a pull request option on the Hugging Face Hub. Could you guide us through the correct procedure?

Best Regards

Transformers Community org

Hi @Gausson, amazing contribution! We'll invite you as a Contributor to the organization so you can transfer your repo using the "Rename or transfer this model" section of your repo's settings. This way, the model URL will be automatically redirected, and download and like counts will be preserved.

Transformers Community org • edited 1 day ago

Hey @Gausson !

Veeery cool repo -- I'm impressed with the complexity of what you've built with custom_generate, and with how well documented it is 🔥

I'm going to invite you as a contributor to transformers-community, so you can move your repo there. In a nutshell, you would still have complete control over the Hub repo, but it gets higher visibility because of the org. After it is moved, we're going to create a new collection of community-contributed custom_generate repos. Let us know if you have any questions! 🤗


Meanwhile, two minor comments on the repo itself:

  • In transformers==4.54, we've released a cache refactor: caches are now built as a composition of layers, as opposed to being isolated objects. Sadly, I think your current abstraction doesn't work with it. You might want to update references from transformers>=4.53 to transformers==4.53;
  • In your demo script in README.md, I suggest adding torch_dtype=torch.bfloat16 when loading the model, so that the demo is immediately compatible with 24GB GPUs (see the sketch after this list).
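
To make the second suggestion concrete, here is a minimal sketch of the bfloat16 loading change; the model id is a placeholder rather than the one from the sep_cache demo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id; substitute the demo's actual model.
model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32: ~14GB of weights for a 7B model, fitting a 24GB GPU
    device_map="auto",
)
```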

Discussion centralized from this GH issue and this Hub issue.

Transformers Community org

@Gausson you should have an invite to transformers-community

Transformers Community org

@joaogante @pcuenq

Thank you very much for your reply.

I have moved sep_cache into the transformers-community org. Next, I will update my README.md according to your suggestions. When you have time, please kindly add sep_cache to the custom_generate collection. Thank you very much. 😊😊😊

Transformers Community org

@Gausson added to a new collection here 🤗
