source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache | .md | >>> out = model.generate(**inputs, do_sample=False, max_new_tokens=20, cache_implementation="quantized", cache_config={"nbits": 4, "backend": "quanto"})
>>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
I like rock music because it's loud and energetic. It's a great way to express myself and rel | 6_5_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache | .md | >>> out = model.generate(**inputs, do_sample=False, max_new_tokens=20)
>>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
I like rock music because it's loud and energetic. I like to listen to it when I'm feeling
``` | 6_5_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | Similar to KV cache quantization, the [`~OffloadedCache`] strategy aims to reduce GPU VRAM usage.
It does so by moving the KV cache for most layers to the CPU.
As the model's `forward()` method iterates over the layers, this strategy maintains the current layer's cache on the GPU.
At the same time, it asynchronously prefetches the next layer's cache and sends the previous layer's cache back to the CPU. | 6_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | At the same time, it asynchronously prefetches the next layer's cache and sends the previous layer's cache back to the CPU.
Unlike KV cache quantization, this strategy always produces the same result as the default KV cache implementation.
Thus, it can serve as a drop-in replacement or a fallback for it.
Depending on your model and the characteristics of your generation task (size of context, number of generated tokens, number of beams, etc.) | 6_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | you may notice a small degradation in generation throughput compared to the default KV cache implementation.
To enable KV cache offloading, pass `cache_implementation="offloaded"` in the `generation_config` or directly to the `generate()` call.
Use `cache_implementation="offloaded_static"` for an offloaded static cache (see also [Offloaded Static Cache](#offloaded-static-cache) below).
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM | 6_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | ```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> ckpt = "microsoft/Phi-3-mini-4k-instruct" | 6_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | >>> tokenizer = AutoTokenizer.from_pretrained(ckpt)
>>> model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16).to("cuda:0")
>>> inputs = tokenizer("Fun fact: The shortest", return_tensors="pt").to(model.device)
>>> out = model.generate(**inputs, do_sample=False, max_new_tokens=23, cache_implementation="offloaded")
>>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
Fun fact: The shortest war in history was between Britain and Zanzibar on August 27, 1896. | 6_6_4 |
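>>> # A hedged variant, not part of the original example: the text above notes that
>>> # `cache_implementation="offloaded"` can also be set through the generation config.
>>> from transformers import GenerationConfig
>>> generation_config = GenerationConfig(do_sample=False, max_new_tokens=23, cache_implementation="offloaded")
>>> out = model.generate(**inputs, generation_config=generation_config)  # decodes to the same "Fun fact" text as above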
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | >>> out = model.generate(**inputs, do_sample=False, max_new_tokens=23)
>>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
Fun fact: The shortest war in history was between Britain and Zanzibar on August 27, 1896.
```
<Tip warning={true}>
Cache offloading requires a CUDA GPU and can be slower than dynamic KV cache. Use it if you are getting CUDA out of memory errors.
</Tip>
The example below shows how KV cache offloading can be used as a fallback strategy.
```python | 6_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | </Tip>
The example below shows how KV cache offloading can be used as a fallback strategy.
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> def resilient_generate(model, *args, **kwargs):
... oom = False
... try:
... return model.generate(*args, **kwargs)
... except torch.cuda.OutOfMemoryError as e:
... print(e)
... print("retrying with cache_implementation='offloaded'")
... oom = True
... if oom: | 6_6_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | ... print(e)
... print("retrying with cache_implementation='offloaded'")
... oom = True
... if oom:
... torch.cuda.empty_cache()
... kwargs["cache_implementation"] = "offloaded"
... return model.generate(*args, **kwargs)
...
...
>>> ckpt = "microsoft/Phi-3-mini-4k-instruct"
>>> tokenizer = AutoTokenizer.from_pretrained(ckpt)
>>> model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16).to("cuda:0") | 6_6_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | >>> model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16).to("cuda:0")
>>> prompt = ["okay "*1000 + "Fun fact: The most"]
>>> inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
>>> beams = { "num_beams": 40, "num_beam_groups": 40, "num_return_sequences": 40, "diversity_penalty": 1.0, "max_new_tokens": 23, "early_stopping": True, }
>>> out = resilient_generate(model, **inputs, **beams)
>>> responses = tokenizer.batch_decode(out[:,-28:], skip_special_tokens=True)
``` | 6_6_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-cache | .md | >>> responses = tokenizer.batch_decode(out[:,-28:], skip_special_tokens=True)
```
On a GPU with 50 GB of RAM, running this code will print
```
CUDA out of memory. Tried to allocate 4.83 GiB. GPU
retrying with cache_implementation='offloaded'
```
before successfully generating 40 beams. | 6_6_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#static-cache | .md | Since the `DynamicCache` dynamically grows with each generation step, it prevents you from taking advantage of JIT optimizations. The [`~StaticCache`] pre-allocates
a specific maximum size for the keys and values, allowing you to generate up to the maximum length without having to modify the cache size. Check the usage example below.
For more examples with Static Cache and JIT compilation, take a look at [StaticCache & torchcompile](./llm_optims#static-kv-cache-and-torchcompile).
```python
>>> import torch | 6_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#static-cache | .md | ```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM | 6_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#static-cache | .md | >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto")
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device) | 6_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#static-cache | .md | >>> # simply pass the cache implementation="static"
>>> out = model.generate(**inputs, do_sample=False, max_new_tokens=20, cache_implementation="static")
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
"Hello, my name is [Your Name], and I am a [Your Profession] with [Number of Years] of"
``` | 6_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-static-cache | .md | Just as [`~OffloadedCache`] exists for offloading a `DynamicCache`, there is also an offloaded static cache. It fully supports
JIT optimizations. Just pass `cache_implementation="offloaded_static"` in the `generation_config` or directly to the `generate()` call.
This will use the [`~OffloadedStaticCache`] implementation instead.
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM | 6_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-static-cache | .md | >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto")
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device) | 6_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#offloaded-static-cache | .md | >>> # simply pass the cache implementation="static"
>>> out = model.generate(**inputs, do_sample=False, max_new_tokens=20, cache_implementation="offloaded_static")
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
"Hello, my name is [Your Name], and I am a [Your Profession] with [Number of Years] of"
```
Cache offloading requires a CUDA GPU. | 6_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sliding-window-cache | .md | As the name suggests, this cache type implements a sliding window over previous keys and values, retaining only the last `sliding_window` tokens. It should be used with models like Mistral that support sliding window attention. Additionally, similar to Static Cache, this one is JIT-friendly and can be used with the same compile techniques as Static Cache.
Note that you can use this cache only for models that support sliding window, e.g. Mistral models.
```python
>>> import torch | 6_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sliding-window-cache | .md | Note that you can use this cache only for models that support sliding window, e.g. Mistral models.
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM | 6_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sliding-window-cache | .md | >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16).to("cuda:0")
>>> inputs = tokenizer("Yesterday I was on a rock concert and.", return_tensors="pt").to(model.device) | 6_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sliding-window-cache | .md | >>> # can be used by passing in cache implementation
>>> out = model.generate(**inputs, do_sample=False, max_new_tokens=30, cache_implementation="sliding_window")
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
"Yesterday I was on a rock concert and. I was so excited to see my favorite band. I was so excited that I was jumping up and down and screaming. I was so excited that I"
``` | 6_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sink-cache | .md | Sink Cache was introduced in ["Efficient Streaming Language Models with Attention Sinks"](https://arxiv.org/abs/2309.17453). It allows you to generate long sequences of text ("infinite length" according to the paper) without any fine-tuning. That is achieved by smart handling of previous keys and values; specifically, it retains a few initial tokens from the sequence, called "sink tokens". This is based on the observation that these initial tokens attract a significant portion of attention scores during the | 6_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sink-cache | .md | This is based on the observation that these initial tokens attract a significant portion of attention scores during the generation process. Tokens that come after the "sink tokens" are discarded on a sliding-window basis, keeping only the latest `window_size` tokens. By keeping these initial tokens as "attention sinks," the model maintains stable performance even on very long texts, despite discarding most of the previous context. | 6_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sink-cache | .md | Unlike other cache classes, this one can't be used directly by indicating a `cache_implementation`. You have to initialize the Cache before calling `generate()`, as follows.
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache | 6_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sink-cache | .md | >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16).to("cuda:0")
>>> inputs = tokenizer("This is a long story about unicorns, fairies and magic.", return_tensors="pt").to(model.device) | 6_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sink-cache | .md | >>> # get our cache, specify number of sink tokens and window size
>>> # Note that the window size already includes the sink tokens, so it has to be larger
>>> past_key_values = SinkCache(window_length=256, num_sink_tokens=4)
>>> out = model.generate(**inputs, do_sample=False, max_new_tokens=30, past_key_values=past_key_values)
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0] | 6_10_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#sink-cache | .md | >>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
"This is a long story about unicorns, fairies and magic. It is a fantasy world where unicorns and fairies live together in harmony. The story follows a young girl named Lily"
``` | 6_10_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#encoder-decoder-cache | .md | The [`~EncoderDecoderCache`] is a wrapper designed to handle the caching needs of encoder-decoder models. This cache type is specifically built to manage both self-attention and cross-attention caches, ensuring storage and retrieval of past key/values required for these complex models. A nice thing about the Encoder-Decoder Cache is that you can set different cache types for the encoder and for the decoder, depending on your use case. Currently this cache is only supported in [Whisper](./model_doc/whisper) models | 6_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#encoder-decoder-cache | .md | for the decoder, depending on your use case. Currently this cache is only supported in [Whisper](./model_doc/whisper) models but we will be adding more models soon. | 6_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#encoder-decoder-cache | .md | In terms of usage, there is nothing special to be done and calling `generate()` or `forward()` will handle everything for you. | 6_11_2 |
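Since `generate()` builds the [`~EncoderDecoderCache`] for you, there is normally nothing to write yourself. The sketch below only makes the wrapper explicit for a Whisper checkpoint; the checkpoint name, the random input features, and the explicit `past_key_values` argument are illustrative assumptions rather than required usage.
```python
>>> # Hedged sketch: `generate()` would create this wrapper internally anyway.
>>> import torch
>>> from transformers import WhisperForConditionalGeneration
>>> from transformers.cache_utils import DynamicCache, EncoderDecoderCache

>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
>>> # placeholder log-mel features (80 mel bins x 3000 frames); a real run would use a processor on audio
>>> input_features = torch.randn(1, 80, 3000)

>>> # one cache for decoder self-attention, one for cross-attention over the encoder outputs
>>> past_key_values = EncoderDecoderCache(DynamicCache(), DynamicCache())
>>> generated_ids = model.generate(input_features, past_key_values=past_key_values, max_new_tokens=20)
```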
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#model-specific-cache-classes | .md | Some models require storing previous keys, values, or states in a specific way, and the above cache classes cannot be used. For such cases, we have several specialized cache classes that are designed for specific models. These models only accept their own dedicated cache classes and do not support using any other cache types. Some examples include [`~HybridCache`] for [Gemma2](./model_doc/gemma2) series models or [`~MambaCache`] for [Mamba](./model_doc/mamba) architecture models. | 6_12_0 |
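Because these dedicated caches are created inside the model, there is nothing to configure when generating. The sketch below illustrates this with a Gemma2 checkpoint; the checkpoint name is only an example (access to it may be gated on the Hub), and the point is simply that plain `generate()` lets the model build its own [`~HybridCache`].
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> # illustrative checkpoint; Gemma2 instantiates its dedicated HybridCache internally
>>> tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
>>> model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.float16, device_map="auto")
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)

>>> # no `cache_implementation` and no cache class to pick -- the model handles it
>>> out = model.generate(**inputs, do_sample=False, max_new_tokens=20)
```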
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | We have seen how to use each of the cache types when generating. What if you want to use the cache in an iterative generation setting, for example in applications like chatbots, where interactions involve multiple turns and continuous back-and-forth exchanges? Iterative generation with a cache allows these systems to handle ongoing conversations effectively without reprocessing the entire context at each step. But there are some tips that you should know before you start implementing: | 6_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | The general format when doing iterative generation is as below. First you have to initialize an empty cache of the type you want, and then you can start feeding in new prompts iteratively. Keeping track of the dialogue history and formatting can be done with chat templates; read more on that in [chat_templating](./chat_templating). | 6_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | In case you are using Sink Cache, you have to crop your inputs to that maximum length because Sink Cache can generate text longer than its maximum window size, but it expects the first input to not exceed the maximum cache length.
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> from transformers.cache_utils import (
>>> DynamicCache,
>>> SinkCache,
>>> StaticCache,
>>> SlidingWindowCache,
>>> QuantoQuantizedCache, | 6_13_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | >>> DynamicCache,
>>> SinkCache,
>>> StaticCache,
>>> SlidingWindowCache,
>>> QuantoQuantizedCache,
>>> QuantizedCacheConfig,
>>> ) | 6_13_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | >>> model_id = "meta-llama/Llama-2-7b-chat-hf"
>>> model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map='auto')
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> user_prompts = ["Hello, what's your name?", "Btw, yesterday I was on a rock concert."]
>>> past_key_values = DynamicCache()
>>> max_cache_length = past_key_values.get_max_length() | 6_13_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | >>> messages = []
>>> for prompt in user_prompts:
... messages.append({"role": "user", "content": prompt})
... inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
... if isinstance(past_key_values, SinkCache):
... inputs = {k: v[:, -max_cache_length:] for k, v in inputs.items()}
...
... input_length = inputs["input_ids"].shape[1]
... | 6_13_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | ...
... input_length = inputs["input_ids"].shape[1]
...
... outputs = model.generate(**inputs, do_sample=False, max_new_tokens=256, past_key_values=past_key_values)
... completion = tokenizer.decode(outputs[0, input_length: ], skip_special_tokens=True)
... messages.append({"role": "assistant", "content": completion}) | 6_13_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#iterative-generation-with-cache | .md | print(messages)
[{'role': 'user', 'content': "Hello, what's your name?"}, {'role': 'assistant', 'content': " Hello! My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI. 😊"}, {'role': 'user', 'content': 'Btw, yesterday I was on a rock concert.'}, {'role': 'assistant', 'content': ' Oh, cool! That sounds like a lot of fun! 🎉 Did you enjoy the concert? What was the band like? 🤔'}]
``` | 6_13_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#re-use-cache-to-continue-generation | .md | Sometimes you may want to first fill a cache object with the key/values of a certain prefix prompt and re-use it several times to generate different sequences from it. In that case you can construct a `Cache` object that holds the instruction prompt, and re-use it several times with different text sequences.
```python
>>> import copy
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache, StaticCache | 6_14_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#re-use-cache-to-continue-generation | .md | >>> model_id = "meta-llama/Llama-2-7b-chat-hf"
>>> model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> # Init StaticCache with big enough max-length (1024 tokens for the below example)
>>> # You can also init a DynamicCache, if that suits you better
>>> prompt_cache = StaticCache(config=model.config, max_batch_size=1, max_cache_len=1024, device="cuda", dtype=torch.bfloat16) | 6_14_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#re-use-cache-to-continue-generation | .md | >>> INITIAL_PROMPT = "You are a helpful assistant. "
>>> inputs_initial_prompt = tokenizer(INITIAL_PROMPT, return_tensors="pt").to("cuda")
>>> # This is the common prompt cached; we need to run a forward pass without grad to be able to copy it
>>> with torch.no_grad():
... prompt_cache = model(**inputs_initial_prompt, past_key_values = prompt_cache).past_key_values | 6_14_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#re-use-cache-to-continue-generation | .md | >>> prompts = ["Help me to write a blogpost about travelling.", "What is the capital of France?"]
>>> responses = []
>>> for prompt in prompts:
... new_inputs = tokenizer(INITIAL_PROMPT + prompt, return_tensors="pt").to("cuda")
... past_key_values = copy.deepcopy(prompt_cache)
...     outputs = model.generate(**new_inputs, past_key_values=past_key_values, max_new_tokens=20)
... response = tokenizer.batch_decode(outputs)[0]
... responses.append(response) | 6_14_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#re-use-cache-to-continue-generation | .md | >>> print(responses)
['<s> You are a helpful assistant. Help me to write a blogpost about travelling.\n\nTitle: The Ultimate Guide to Travelling: Tips, Tricks, and', '<s> You are a helpful assistant. What is the capital of France?\n\nYes, the capital of France is Paris.</s>']
``` | 6_14_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#legacy-cache-format | .md | Prior to the introduction of the `Cache` object, the cache of LLMs used to be a tuple of tuples of tensors. The legacy
format has a dynamic size, growing as we generate text -- very similar to `DynamicCache`. If your project depends on
this legacy format, you can seamlessly convert it to a `DynamicCache` and back.
```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, DynamicCache | 6_15_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#legacy-cache-format | .md | >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto")
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device) | 6_15_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#legacy-cache-format | .md | >>> # `return_dict_in_generate=True` is required to return the cache. `return_legacy_cache` forces the returned cache
>>> # to be of the legacy type
>>> generation_outputs = model.generate(**inputs, return_dict_in_generate=True, return_legacy_cache=True, max_new_tokens=5) | 6_15_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#legacy-cache-format | .md | >>> # We can convert a legacy cache to a DynamicCache -- and the other way around. This is helpful if you have custom
>>> # logic to manipulate a cache in a specific format.
>>> cache = DynamicCache.from_legacy_cache(generation_outputs.past_key_values)
>>> legacy_format_cache = cache.to_legacy_cache()
``` | 6_15_3 |
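As a follow-up, the converted `DynamicCache` can be fed back into `generate()` the same way as in the iterative-generation example above. The continuation below is a hedged sketch that reuses the sequence generated so far; it assumes the round-tripped cache still matches those tokens.
```python
>>> # hedged sketch: reuse the round-tripped cache together with the tokens generated so far
>>> continued = model.generate(generation_outputs.sequences, past_key_values=cache, max_new_tokens=5)
>>> continued_text = tokenizer.batch_decode(continued, skip_special_tokens=True)[0]
```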
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 7_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 7_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#distributed-training-with--accelerate | .md | As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop | 7_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#distributed-training-with--accelerate | .md | machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment. | 7_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#setup | .md | Get started by installing 🤗 Accelerate:
```bash
pip install accelerate
```
Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
```py
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
``` | 7_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#prepare-to-accelerate | .md | The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer:
```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
... train_dataloader, eval_dataloader, model, optimizer
... )
``` | 7_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#backward | .md | The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:
```py
>>> for epoch in range(num_epochs):
... for batch in train_dataloader:
... outputs = model(**batch)
... loss = outputs.loss
... accelerator.backward(loss) | 7_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#backward | .md | ... optimizer.step()
... lr_scheduler.step()
... optimizer.zero_grad()
... progress_bar.update(1)
```
As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
```diff
+ from accelerate import Accelerator
from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+ accelerator = Accelerator() | 7_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#backward | .md | + accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)
+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+ train_dataloader, eval_dataloader, model, optimizer
+ ) | 7_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#backward | .md | num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
for batch in train_dataloader:
- batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
- loss.backward()
+ accelerator.backward(loss) | 7_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#backward | .md | optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
``` | 7_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#train | .md | Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory. | 7_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#train-with-a-script | .md | If you are running your training from a script, run the following command to create and save a configuration file:
```bash
accelerate config
```
Then launch your training with:
```bash
accelerate launch train.py
``` | 7_6_0 |
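For reference, here is a hedged sketch of what such a `train.py` might contain, assembled from the pieces shown earlier. The checkpoint, the dummy dataset, and the hyperparameters are placeholders rather than a recommended recipe.
```python
# train.py -- minimal, self-contained sketch of a script that `accelerate launch` could run
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from accelerate import Accelerator
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_scheduler

checkpoint = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# dummy classification data so the script runs end to end; replace with a real dataset
texts = ["I love this!", "This is terrible."] * 32
labels = torch.tensor([1, 0] * 32)
encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
train_data = [
    {"input_ids": encodings["input_ids"][i],
     "attention_mask": encodings["attention_mask"][i],
     "labels": labels[i]}
    for i in range(len(texts))
]
train_dataloader = DataLoader(train_data, batch_size=8, shuffle=True)

optimizer = AdamW(model.parameters(), lr=3e-5)

accelerator = Accelerator()
train_dataloader, model, optimizer = accelerator.prepare(train_dataloader, model, optimizer)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)      # batch is already placed on the right device by Accelerate
        loss = outputs.loss
        accelerator.backward(loss)    # replaces loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```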
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#train-with-a-notebook | .md | 🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:
```py
>>> from accelerate import notebook_launcher
>>> notebook_launcher(training_function)
```
For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate). | 7_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/ | .md | <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 8_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community | .md | This page regroups resources around 🤗 Transformers developed by the community. | 8_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-resources | .md | | Resource | Description | Author |
|:----------|:-------------|------:| | 8_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-resources | .md | | [Hugging Face Transformers Glossary Flashcards](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | A set of flashcards based on the [Transformers Docs Glossary](glossary) that has been put into a form which can be easily learned/revised using [Anki](https://apps.ankiweb.net/) an open source, cross platform app specifically designed for long term knowledge retention. See this [Introductory video on how to use the flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | | 8_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-resources | .md | knowledge retention. See this [Introductory video on how to use the flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) | | 8_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | Notebook | Description | Author | |
|:----------|:-------------|:-------------|------:| | 8_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |:----------|:-------------|:-------------|------:|
| [Fine-tune a pre-trained Transformer to generate lyrics](https://github.com/AlekseyKorshuk/huggingartists) | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | 8_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Train T5 in Tensorflow 2](https://github.com/snapthat/TF-T5-text-to-text) | How to train T5 for any task using Tensorflow 2. This notebook demonstrates a Question & Answer task implemented in Tensorflow 2 using SQUAD | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | 8_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Train T5 on TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | How to train T5 on SQUAD with Transformers and Nlp | [Suraj Patil](https://github.com/patil-suraj) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | 8_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Fine-tune T5 for Classification and Multiple Choice](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | 8_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Fine-tune DialoGPT on New Datasets and Languages](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | [Nathan Cooper](https://github.com/ncoop57) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | 8_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Long Sequence Modeling with Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | How to train on sequences as long as 500,000 tokens with Reformer | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | 8_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Fine-tune BART for Summarization](https://github.com/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | How to fine-tune BART for summarization with fastai using blurr | [Wayde Gilliam](https://ohmeow.com/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | | 8_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Fine-tune a pre-trained Transformer on anyone's tweets](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | 8_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Optimize 🤗 Hugging Face models with Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | A complete tutorial showcasing W&B integration with Hugging Face | [Boris Dayma](https://github.com/borisdayma) | [![Open In | 8_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | A complete tutorial showcasing W&B integration with Hugging Face | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | 8_3_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Pretrain Longformer](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | How to build a "long" version of existing pretrained models | [Iz Beltagy](https://beltagy.net) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | 8_3_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Fine-tune Longformer for QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | How to fine-tune longformer model for QA task | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | 8_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Evaluate Model with 🤗nlp](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | How to evaluate longformer on TriviaQA with `nlp` | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | 8_3_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Fine-tune T5 for Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | [Lorenzo Ampil](https://github.com/enzoampil) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | 8_3_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | | [Fine-tune DistilBert for Multiclass Classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | How to fine-tune DistilBert for multiclass classification with PyTorch | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In | 8_3_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | DistilBert for multiclass classification with PyTorch | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| | 8_3_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Fine-tune BERT for Multi-label Classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|How to fine-tune BERT for multi-label classification using PyTorch|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| | 8_3_17 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Fine-tune T5 for Summarization](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)|How to fine-tune T5 for summarization in PyTorch and track experiments with WandB|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| | 8_3_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)|How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing|[Michael Benesty](https://github.com/pommedeterresautee) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| | 8_3_19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Pretrain Reformer for Masked Language Modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| How to train a Reformer model with bi-directional self-attention layers | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| | 8_3_20 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Expand and Fine Tune Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| How to increase vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it. | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| | 8_3_21 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Fine Tune BlenderBotSmall for Summarization using the Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API. | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| | 8_3_22 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Fine-tune Electra and interpret with Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | [Eliza Szczechla](https://elsanns.github.io) | [![Open In | 8_3_23 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | analysis and interpret predictions with Captum Integrated Gradients | [Eliza Szczechla](https://elsanns.github.io) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| | 8_3_24 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[fine-tune a non-English GPT-2 Model with Trainer class](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | How to fine-tune a non-English GPT-2 Model with Trainer class | [Philipp Schmid](https://www.philschmid.de) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| | 8_3_25 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Fine-tune a DistilBERT Model for Multi Label Classification task](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | How to fine-tune a DistilBERT Model for Multi Label Classification task | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| | 8_3_26 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Fine-tune ALBERT for sentence-pair classification](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task | [Nadir El Manouzi](https://github.com/NadirEM) | [![Open In | 8_3_27 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | BERT-based model for the sentence-pair classification task | [Nadir El Manouzi](https://github.com/NadirEM) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| | 8_3_28 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Fine-tune Roberta for sentiment analysis](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | How to fine-tune a Roberta model for sentiment analysis | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| | 8_3_29 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | .md | |[Evaluating Question Generation Models](https://github.com/flexudy-pipe/qugeev) | How accurate are the answers to questions generated by your seq2seq transformer model? | [Pascal Zoleko](https://github.com/zolekode) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| | 8_3_30 |