source (stringclasses, 470 values) | url (stringlengths 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths 1–512) | chunk_id (stringlengths 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | google-bert/bert-base-uncased 8 32 1281
google-bert/bert-base-uncased 8 128 1307
google-bert/bert-base-uncased 8 512 1539
-------------------------------------------------------------------------------- | 14_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | ==================== ENVIRONMENT INFORMATION ==================== | 14_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 08:58:43.371351
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</pt>
<tf>
```bash | 14_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</pt>
<tf>
```bash
python examples/tensorflow/benchmarking/run_benchmark_tf.py --help
```
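In case a benchmark object has not been created yet in your session, a minimal sketch of instantiating one for TensorFlow might look as follows (the model identifier and sizes are illustrative, mirroring the PyTorch example above):
```python
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments

>>> args = TensorFlowBenchmarkArguments(
...     models=["google-bert/bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> benchmark = TensorFlowBenchmark(args)
```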
An instantiated benchmark object can then simply be run by calling `benchmark.run()`.
```py
>>> results = benchmark.run()
>>> print(results)
==================== INFERENCE - SPEED - RESULT ==================== | 14_2_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | >>> print(results)
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
google-bert/bert-base-uncased 8 8 0.005
google-bert/bert-base-uncased 8 32 0.008 | 14_2_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | google-bert/bert-base-uncased 8 32 0.008
google-bert/bert-base-uncased 8 128 0.022
google-bert/bert-base-uncased 8 512 0.105
-------------------------------------------------------------------------------- | 14_2_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | ==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
google-bert/bert-base-uncased 8 8 1330
google-bert/bert-base-uncased 8 32 1330 | 14_2_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | google-bert/bert-base-uncased 8 32 1330
google-bert/bert-base-uncased 8 128 1330
google-bert/bert-base-uncased 8 512 1770
-------------------------------------------------------------------------------- | 14_2_17 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | ==================== ENVIRONMENT INFORMATION ==================== | 14_2_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:26:35.617317
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</tf>
</frameworkcontent> | 14_2_19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</tf>
</frameworkcontent>
By default, the _time_ and the _required memory_ for _inference_ are benchmarked. In the example output above, the first
two sections show the results corresponding to _inference time_ and _inference memory_. In addition, all relevant
information about the computing environment, _e.g._ the GPU type, the system, the library versions, etc., is printed | 14_2_20 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | information about the computing environment, _e.g._ the GPU type, the system, the library versions, etc., is printed
out in the third section under _ENVIRONMENT INFORMATION_. This information can optionally be saved in a _.csv_ file
when adding the argument `save_to_csv=True` to [`PyTorchBenchmarkArguments`] and
[`TensorFlowBenchmarkArguments`] respectively. In this case, every section is saved in a separate | 14_2_21 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | [`TensorFlowBenchmarkArguments`] respectively. In this case, every section is saved in a separate
_.csv_ file. The path to each _.csv_ file can optionally be defined via the argument data classes.
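For example, a minimal sketch of writing each section to its own file might look as follows; the per-section CSV-path field names used here (e.g. `inference_time_csv_file`) are assumptions about the argument data classes and may differ between versions:
```python
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

>>> args = PyTorchBenchmarkArguments(
...     models=["google-bert/bert-base-uncased"],
...     batch_sizes=[8],
...     sequence_lengths=[8, 32],
...     save_to_csv=True,
...     # assumed field names for the per-section CSV paths
...     inference_time_csv_file="inference_time.csv",
...     inference_memory_csv_file="inference_memory.csv",
...     env_info_csv_file="env_info.csv",
... )
>>> results = PyTorchBenchmark(args).run()
```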
Instead of benchmarking pre-trained models via their model identifier, _e.g._ `google-bert/bert-base-uncased`, the user can
alternatively benchmark an arbitrary configuration of any available model class. In this case, a `list` of
configurations must be inserted with the benchmark args as follows. | 14_2_22 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | configurations must be inserted with the benchmark args as follows.
<frameworkcontent>
<pt>
```py
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig | 14_2_23 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | >>> args = PyTorchBenchmarkArguments(
... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6) | 14_2_24 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | >>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base 8 8 0.006 | 14_2_25 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-base 8 8 0.006
bert-base 8 32 0.006
bert-base 8 128 0.018
bert-base 8 512 0.088
bert-384-hid 8 8 0.006
bert-384-hid 8 32 0.006
bert-384-hid 8 128 0.011
bert-384-hid 8 512 0.054 | 14_2_26 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-384-hid 8 128 0.011
bert-384-hid 8 512 0.054
bert-6-lay 8 8 0.003
bert-6-lay 8 32 0.004
bert-6-lay 8 128 0.009
bert-6-lay 8 512 0.044
-------------------------------------------------------------------------------- | 14_2_27 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | ==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base 8 8 1277
bert-base 8 32 1281
bert-base 8 128 1307 | 14_2_28 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-base 8 32 1281
bert-base 8 128 1307
bert-base 8 512 1539
bert-384-hid 8 8 1005
bert-384-hid 8 32 1027
bert-384-hid 8 128 1035
bert-384-hid 8 512 1255
bert-6-lay 8 8 1097 | 14_2_29 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-384-hid 8 512 1255
bert-6-lay 8 8 1097
bert-6-lay 8 32 1101
bert-6-lay 8 128 1127
bert-6-lay 8 512 1359
-------------------------------------------------------------------------------- | 14_2_30 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | ==================== ENVIRONMENT INFORMATION ==================== | 14_2_31 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:35:25.143267
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</pt>
<tf>
```py | 14_2_32 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</pt>
<tf>
```py
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig | 14_2_33 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | >>> args = TensorFlowBenchmarkArguments(
... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6) | 14_2_34 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | >>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base 8 8 0.005 | 14_2_35 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-base 8 8 0.005
bert-base 8 32 0.008
bert-base 8 128 0.022
bert-base 8 512 0.106
bert-384-hid 8 8 0.005
bert-384-hid 8 32 0.007
bert-384-hid 8 128 0.018
bert-384-hid 8 512 0.064 | 14_2_36 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-384-hid 8 128 0.018
bert-384-hid 8 512 0.064
bert-6-lay 8 8 0.002
bert-6-lay 8 32 0.003
bert-6-lay 8 128 0.0011
bert-6-lay 8 512 0.074
-------------------------------------------------------------------------------- | 14_2_37 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | ==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base 8 8 1330
bert-base 8 32 1330
bert-base 8 128 1330 | 14_2_38 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-base 8 32 1330
bert-base 8 128 1330
bert-base 8 512 1770
bert-384-hid 8 8 1330
bert-384-hid 8 32 1330
bert-384-hid 8 128 1330
bert-384-hid 8 512 1540
bert-6-lay 8 8 1330 | 14_2_39 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | bert-384-hid 8 512 1540
bert-6-lay 8 8 1330
bert-6-lay 8 32 1330
bert-6-lay 8 128 1330
bert-6-lay 8 512 1540
-------------------------------------------------------------------------------- | 14_2_40 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | ==================== ENVIRONMENT INFORMATION ==================== | 14_2_41 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:38:15.487125
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</tf>
</frameworkcontent> | 14_2_42 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#how-to-benchmark--transformers-models | .md | - gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</tf>
</frameworkcontent>
Again, _inference time_ and _required memory_ for _inference_ are measured, but this time for customized configurations
of the `BertModel` class. This feature can be especially helpful when deciding which configuration the model
should be trained with. | 14_2_43 |
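If the decision also depends on training cost, the benchmark argument data classes expose a `training` flag (the exact flag name is an assumption and worth checking against your installed version) that additionally reports _train_ speed and memory for the same configurations; a minimal sketch:
```python
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig

>>> args = PyTorchBenchmarkArguments(
...     models=["bert-6-lay"], batch_sizes=[8], sequence_lengths=[128], training=True
... )
>>> benchmark = PyTorchBenchmark(args, configs=[BertConfig(num_hidden_layers=6)])
>>> results = benchmark.run()
```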
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#benchmark-best-practices | .md | This section lists a couple of best practices one should be aware of when benchmarking a model.
- Currently, only single device benchmarking is supported. When benchmarking on GPU, it is recommended that the user
specifies on which device the code should be run by setting the `CUDA_VISIBLE_DEVICES` environment variable in the
shell, _e.g._ `export CUDA_VISIBLE_DEVICES=0` before running the code.
- The option `no_multi_processing` should only be set to `True` for testing and debugging. To ensure accurate | 14_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#benchmark-best-practices | .md | - The option `no_multi_processing` should only be set to `True` for testing and debugging. To ensure accurate
memory measurement it is recommended to run each memory benchmark in a separate process by making sure
`no_multi_processing` is set to `False` (see the sketch after this list).
- One should always state the environment information when sharing the results of a model benchmark. Results can vary
heavily between different GPU devices, library versions, etc.; as a consequence, benchmark results on their own are not very | 14_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#benchmark-best-practices | .md | heavily between different GPU devices, library versions, etc.; as a consequence, benchmark results on their own are not very
useful for the community. | 14_3_2 |
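Putting the first two recommendations together, a minimal setup might look like the sketch below, which pins the process to a single GPU and leaves multiprocessing enabled:
```python
import os

# Pin the benchmark to one GPU (equivalent to `export CUDA_VISIBLE_DEVICES=0` in the shell);
# set this before any CUDA context is created.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Multiprocessing is left at its default (enabled) so every memory benchmark
# runs in its own process and the measurements stay accurate.
args = PyTorchBenchmarkArguments(
    models=["google-bert/bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32]
)
results = PyTorchBenchmark(args).run()
```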
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#sharing-your-benchmark | .md | Previously, all available core models (10 at the time) were benchmarked for _inference time_ across many different
settings: using PyTorch, with and without TorchScript, using TensorFlow, with and without XLA. All of those tests were
done across CPUs (except for TensorFlow XLA) and GPUs.
The approach is detailed in the [following blogpost](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2) and the results are | 14_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md | https://huggingface.co/docs/transformers/en/benchmarks/#sharing-your-benchmark | .md | available [here](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit?usp=sharing).
With the new _benchmark_ tools, it is easier than ever to share your benchmark results with the community:
- [PyTorch Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/pytorch/benchmarking/README.md).
- [TensorFlow Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/benchmarking/README.md). | 14_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 15_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 15_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#text-generation-strategies | .md | Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and
more. It also plays a role in a variety of mixed-modality applications that have text as an output like speech-to-text
and vision-to-text. Some of the models that can generate text include
GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper.
Check out a few examples that use the [`~generation.GenerationMixin.generate`] method to produce
text outputs for different tasks: | 15_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#text-generation-strategies | .md | Check out a few examples that use the [`~generation.GenerationMixin.generate`] method to produce
text outputs for different tasks:
* [Text summarization](./tasks/summarization#inference)
* [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example)
* [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example)
Note that the inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor | 15_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#text-generation-strategies | .md | Note that the inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor
class, such as AutoTokenizer or AutoProcessor. If a model's preprocessor creates more than one kind of input, pass all
the inputs to generate(). You can learn more about the individual model's preprocessor in the corresponding model's documentation.
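For a text-only model the preprocessor is a tokenizer, and everything it returns is passed straight to `generate()`; a minimal sketch:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

>>> # The tokenizer returns `input_ids` and `attention_mask`; pass all of it to `generate()`.
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=10)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```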
The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy | 15_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#text-generation-strategies | .md | The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy
that the `generate()` method will use. Modifying a decoding strategy does not change the values of any trainable parameters.
However, it can have a noticeable impact on the quality of the generated output. It can help reduce repetition in the text
and make it more coherent.
This guide describes:
* default generation configuration
* common decoding strategies and their main parameters | 15_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#text-generation-strategies | .md | This guide describes:
* default generation configuration
* common decoding strategies and their main parameters
* saving and sharing custom generation configurations with your fine-tuned model on 🤗 Hub | 15_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#default-text-generation-configuration | .md | A decoding strategy for a model is defined in its generation configuration. When using pre-trained models for inference
within a [`pipeline`], the models call the `PreTrainedModel.generate()` method that applies a default generation
configuration under the hood. The default configuration is also used when no custom configuration has been saved with
the model.
When you load a model explicitly, you can inspect the generation configuration that comes with it through
`model.generation_config`:
```python | 15_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#default-text-generation-configuration | .md | `model.generation_config`:
```python
>>> from transformers import AutoModelForCausalLM | 15_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#default-text-generation-configuration | .md | >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> model.generation_config
GenerationConfig {
"bos_token_id": 50256,
"eos_token_id": 50256
}
<BLANKLINE>
```
Printing out the `model.generation_config` reveals only the values that are different from the default generation
configuration, and does not list any of the default values.
The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 | 15_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#default-text-generation-configuration | .md | The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20
tokens to avoid running into resource limitations. The default decoding strategy is greedy search, which is the simplest decoding strategy that picks the token with the highest probability as the next token. For many tasks
and small output sizes this works well. However, when used to generate longer outputs, greedy search can start
producing highly repetitive results. | 15_2_3 |
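For reference, the library-wide defaults can be inspected on a bare [`GenerationConfig`]; the values shown below are what I would expect and are worth confirming against your installed version:
```python
>>> from transformers import GenerationConfig

>>> default_config = GenerationConfig()
>>> default_config.max_length, default_config.num_beams, default_config.do_sample
(20, 1, False)
```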
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#customize-text-generation | .md | You can override any `generation_config` by passing the parameters and their values directly to the [`generate`] method:
```python
>>> my_model.generate(**inputs, num_beams=4, do_sample=True) # doctest: +SKIP
```
Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the
commonly adjusted parameters include:
- `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not | 15_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#customize-text-generation | .md | - `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not
including the tokens in the prompt. As an alternative to using the output's length as a stopping criterion, you can choose
to stop generation whenever the full generation exceeds some amount of time. To learn more, check [`StoppingCriteria`].
- `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to | 15_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#customize-text-generation | .md | - `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to
beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that
has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability | 15_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#customize-text-generation | .md | has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability
sequences that start with lower-probability initial tokens and would have been ignored by greedy search. Visualize how it works [here](https://huggingface.co/spaces/m-ric/beam_search_visualizer).
- `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search | 15_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#customize-text-generation | .md | - `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search
multinomial sampling, Top-K sampling and Top-p sampling. All these strategies select the next token from the probability
distribution over the entire vocabulary with various strategy-specific adjustments.
- `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for | 15_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#customize-text-generation | .md | - `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for
the decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding
strategies like greedy search and contrastive search return a single output sequence.
It is also possible to extend `generate()` with external libraries or handcrafted code. The `logits_processor` argument | 15_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#customize-text-generation | .md | It is also possible to extend `generate()` with external libraries or handcrafted code. The `logits_processor` argument
allows you to pass custom [`LogitsProcessor`] instances that manipulate the next token probability
distributions. Likewise, the `stopping_criteria` argument lets you set custom [`StoppingCriteria`] to stop text generation.
The [`logits-processor-zoo`](https://github.com/NVIDIA/logits-processor-zoo) library contains examples of external
`generate()`-compatible extensions. | 15_3_6 |
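As a rough illustration of both hooks, the sketch below bans a single token with a custom [`LogitsProcessor`] and caps generation time with a stopping criterion; the processor class is made up for this example, and the availability of `MaxTimeCriteria` should be checked against your installed version:
```python
>>> import torch
>>> from transformers import (
...     AutoModelForCausalLM,
...     AutoTokenizer,
...     LogitsProcessor,
...     LogitsProcessorList,
...     MaxTimeCriteria,
...     StoppingCriteriaList,
... )

>>> class BanTokenLogitsProcessor(LogitsProcessor):  # hypothetical helper, not a library class
...     """Sets the score of one token id to -inf so it is never generated."""
...     def __init__(self, banned_token_id: int):
...         self.banned_token_id = banned_token_id
...
...     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
...         scores[:, self.banned_token_id] = -float("inf")
...         return scores

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("The quick brown fox", return_tensors="pt")
>>> banned_id = tokenizer(" jumps", add_special_tokens=False).input_ids[0]

>>> outputs = model.generate(
...     **inputs,
...     logits_processor=LogitsProcessorList([BanTokenLogitsProcessor(banned_id)]),
...     stopping_criteria=StoppingCriteriaList([MaxTimeCriteria(max_time=5.0)]),
...     max_new_tokens=20,
... )
```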
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#save-a-custom-decoding-strategy-with-your-model | .md | If you would like to share your fine-tuned model with a specific generation configuration, you can:
* Create a [`GenerationConfig`] class instance
* Specify the decoding strategy parameters
* Save your generation configuration with [`GenerationConfig.save_pretrained`], making sure to leave its `config_file_name` argument empty
* Set `push_to_hub` to `True` to upload your config to the model's repo
```python
>>> from transformers import AutoModelForCausalLM, GenerationConfig | 15_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#save-a-custom-decoding-strategy-with-your-model | .md | >>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP
>>> generation_config = GenerationConfig(
... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id
... )
>>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP
```
You can also store several generation configurations in a single directory, making use of the `config_file_name` | 15_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#save-a-custom-decoding-strategy-with-your-model | .md | ```
You can also store several generation configurations in a single directory, making use of the `config_file_name`
argument in [`GenerationConfig.save_pretrained`]. You can later instantiate them with [`GenerationConfig.from_pretrained`]. This is useful if you want to
store several generation configurations for a single model (e.g. one for creative text generation with sampling, and
one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model. | 15_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#save-a-custom-decoding-strategy-with-your-model | .md | one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig | 15_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#save-a-custom-decoding-strategy-with-your-model | .md | >>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
>>> translation_generation_config = GenerationConfig(
... num_beams=4,
... early_stopping=True,
... decoder_start_token_id=0,
... eos_token_id=model.config.eos_token_id,
... pad_token=model.config.pad_token_id,
... ) | 15_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#save-a-custom-decoding-strategy-with-your-model | .md | >>> # Tip: add `push_to_hub=True` to push to the Hub
>>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json") | 15_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#save-a-custom-decoding-strategy-with-your-model | .md | >>> # You could then use the named generation config file to parameterize generation
>>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json")
>>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt")
>>> outputs = model.generate(**inputs, generation_config=generation_config)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Les fichiers de configuration sont faciles à utiliser!']
``` | 15_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#streaming | .md | The `generate()` method supports streaming through its `streamer` input. The `streamer` input is compatible with any instance
of a class that implements the following methods: `put()` and `end()`. Internally, `put()` is used to push new tokens and
`end()` is used to flag the end of text generation.
<Tip warning={true}>
The API for the streamer classes is still under development and may change in the future.
</Tip> | 15_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#streaming | .md | <Tip warning={true}>
The API for the streamer classes is still under development and may change in the future.
</Tip>
In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes
ready for you to use. For example, you can use the [`TextStreamer`] class to stream the output of `generate()` into
your screen, one word at a time:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer | 15_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#streaming | .md | >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextStreamer(tok) | 15_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#streaming | .md | >>> # Despite returning the usual output, the streamer will also print the generated text to stdout.
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
``` | 15_5_3 |
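As a sketch of the `put()`/`end()` protocol, here is a hypothetical streamer that simply counts the token ids pushed by `generate()` (the class is made up for illustration; any object with these two methods works):
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> class CountingStreamer:  # hypothetical class, not part of the library
...     """Counts the token ids pushed by `generate()` and reports the total at the end."""
...     def __init__(self):
...         self.num_tokens = 0
...
...     def put(self, value):
...         # `value` is a tensor of token ids; the prompt is pushed first, then the new tokens.
...         self.num_tokens += value.numel()
...
...     def end(self):
...         print(f"done, {self.num_tokens} token ids were pushed")

>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> _ = model.generate(**inputs, streamer=CountingStreamer(), max_new_tokens=20)
```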
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#watermarking | .md | The `generate()` method supports watermarking the generated text by randomly marking a portion of tokens as "green".
During generation, the "green" tokens have a small 'bias' value added to their logits and thus have a higher chance of being generated.
The watermarked text can be detected by calculating the proportion of "green" tokens in the text and estimating how likely it is
statistically to obtain that amount of "green" tokens for human-generated text. This watermarking strategy was proposed in the paper | 15_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#watermarking | .md | ["On the Reliability of Watermarks for Large Language Models"](https://arxiv.org/abs/2306.04634). For more information on
the inner functioning of watermarking, it is recommended to refer to the paper.
The watermarking can be used with any generative model in `transformers` and does not require an extra classification model
to detect watermarked text. To trigger watermarking, pass in a [`WatermarkingConfig`] with needed arguments directly to the | 15_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#watermarking | .md | to detect watermarked text. To trigger watermarking, pass in a [`WatermarkingConfig`] with needed arguments directly to the
`.generate()` method or add it to the [`GenerationConfig`]. Watermarked text can be later detected with a [`WatermarkDetector`].
<Tip warning={true}>
The WatermarkDetector internally relies on the proportion of "green" tokens, and whether generated text follows the coloring pattern. | 15_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#watermarking | .md | That is why it is recommended to strip off the prompt text if it is much longer than the generated text.
This can also have an effect when one sequence in the batch is much longer than the others, causing those rows to be padded.
Additionally, the detector **must** be initialized with the same watermarking configuration arguments that were used when generating.
</Tip>
Let's generate some text with watermarking. In the below code snippet, we set the bias to 2.5 which is a value that | 15_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#watermarking | .md | </Tip>
Let's generate some text with watermarking. In the below code snippet, we set the bias to 2.5 which is a value that
will be added to "green" tokens' logits. After generating watermarked text, we can pass it directly to the `WatermarkDetector`
to check if the text is machine-generated (outputs `True` for machine-generated and `False` otherwise).
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkDetector, WatermarkingConfig | 15_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#watermarking | .md | >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> tok.pad_token_id = tok.eos_token_id
>>> tok.padding_side = "left"
>>> inputs = tok(["This is the beginning of a long story", "Alice and Bob are"], padding=True, return_tensors="pt")
>>> input_len = inputs["input_ids"].shape[-1] | 15_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#watermarking | .md | >>> watermarking_config = WatermarkingConfig(bias=2.5, seeding_scheme="selfhash")
>>> out = model.generate(**inputs, watermarking_config=watermarking_config, do_sample=False, max_length=20)
>>> detector = WatermarkDetector(model_config=model.config, device="cpu", watermarking_config=watermarking_config)
>>> detection_out = detector(out, return_dict=True)
>>> detection_out.prediction
array([True, True])
``` | 15_6_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#decoding-strategies | .md | Certain combinations of the `generate()` parameters, and ultimately `generation_config`, can be used to enable specific
decoding strategies. If you are new to this concept, we recommend reading
[this blog post that illustrates how common decoding strategies work](https://huggingface.co/blog/how-to-generate).
Here, we'll show some of the parameters that control the decoding strategies and illustrate how you can use them.
<Tip> | 15_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#decoding-strategies | .md | Here, we'll show some of the parameters that control the decoding strategies and illustrate how you can use them.
<Tip>
Selecting a given decoding strategy is not the only way you can influence the outcome of `generate()` with your model.
The decoding strategies act based (mostly) on the logits, the distribution of probabilities for the next token, and
thus selecting a good logits manipulation strategy can go a long way! In other words, manipulating the logits is another | 15_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#decoding-strategies | .md | thus selecting a good logits manipulation strategy can go a long way! In other words, manipulating the logits is another
dimension you can act upon, in addition to selecting a decoding strategy. Popular logits manipulation strategies include
`top_p`, `min_p`, and `repetition_penalty` -- you can check the full list in the [`GenerationConfig`] class.
</Tip> | 15_7_2 |
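For instance, a minimal sketch combining sampling with two of these logits manipulation parameters (the values are illustrative, not recommendations):
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

>>> set_seed(0)  # For reproducibility
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("Hugging Face is", return_tensors="pt")

>>> outputs = model.generate(
...     **inputs, do_sample=True, top_p=0.9, repetition_penalty=1.2, max_new_tokens=30
... )
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```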
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#greedy-search | .md | [`generate`] uses greedy search decoding by default so you don't have to pass any parameters to enable it. This means the parameter `num_beams` is set to 1 and `do_sample` to `False`.
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> prompt = "I look forward to"
>>> checkpoint = "distilbert/distilgpt2"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt") | 15_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#greedy-search | .md | >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n']
``` | 15_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#contrastive-search | .md | The contrastive search decoding strategy was proposed in the 2022 paper [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417).
It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search
works, check out [this blog post](https://huggingface.co/blog/introducing-csearch).
The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`:
```python | 15_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#contrastive-search | .md | The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM | 15_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#contrastive-search | .md | >>> checkpoint = "openai-community/gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> prompt = "Hugging Face Company is"
>>> inputs = tokenizer(prompt, return_tensors="pt") | 15_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#contrastive-search | .md | >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best
in the business and our customer service is second to none.\n\nIf you have any questions about our
products or services, feel free to contact us at any time. We look forward to hearing from you!']
``` | 15_9_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#multinomial-sampling | .md | As opposed to greedy search, which always chooses the token with the highest probability as the
next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire
vocabulary given by the model. Every token with a non-zero probability has a chance of being selected, thus reducing the
risk of repetition.
To enable multinomial sampling set `do_sample=True` and `num_beams=1`.
```python | 15_10_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#multinomial-sampling | .md | risk of repetition.
To enable multinomial sampling set `do_sample=True` and `num_beams=1`.
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0) # For reproducibility | 15_10_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#multinomial-sampling | .md | >>> checkpoint = "openai-community/gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> prompt = "Today was an amazing day because"
>>> inputs = tokenizer(prompt, return_tensors="pt") | 15_10_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#multinomial-sampling | .md | >>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True) | 15_10_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#multinomial-sampling | .md | ["Today was an amazing day because we received these wonderful items by the way of a gift shop. The box arrived on a Thursday and I opened it on Monday afternoon to receive the gifts. Both bags featured pieces from all the previous years!\n\nThe box had lots of surprises in it, including some sweet little mini chocolate chips! I don't think I'd eat all of these. This was definitely one of the most expensive presents I have ever got, I actually got most of them for free!\n\nThe first package came"]
``` | 15_10_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#beam-search-decoding | .md | Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses
the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability
sequences that start with lower probability initial tokens and would've been ignored by the greedy search.
<a href="https://huggingface.co/spaces/m-ric/beam_search_visualizer" class="flex flex-col justify-center"> | 15_11_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#beam-search-decoding | .md | <a href="https://huggingface.co/spaces/m-ric/beam_search_visualizer" class="flex flex-col justify-center">
<img style="max-width: 90%; margin: auto;" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beam_search.png"/>
</a>
You can visualize how beam-search decoding works in [this interactive demo](https://huggingface.co/spaces/m-ric/beam_search_visualizer): type your input sentence, and play with the parameters to see how the decoding beams change. | 15_11_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#beam-search-decoding | .md | To enable this decoding strategy, specify `num_beams` (i.e. the number of hypotheses to keep track of) greater than 1.
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer | 15_11_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#beam-search-decoding | .md | >>> prompt = "It is astonishing how one can"
>>> checkpoint = "openai-community/gpt2-medium"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint) | 15_11_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#beam-search-decoding | .md | >>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of
time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have']
``` | 15_11_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#beam-search-multinomial-sampling | .md | As the name implies, this decoding strategy combines beam search with multinomial sampling. To use this decoding strategy, set
`num_beams` to a value greater than 1, and `do_sample=True`.
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed
>>> set_seed(0) # For reproducibility
>>> prompt = "translate English to German: The house is wonderful."
>>> checkpoint = "google-t5/t5-small" | 15_12_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#beam-search-multinomial-sampling | .md | >>> prompt = "translate English to German: The house is wonderful."
>>> checkpoint = "google-t5/t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, do_sample=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Das Haus ist wunderbar.'
``` | 15_12_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#diverse-beam-search-decoding | .md | The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse
set of beam sequences to choose from. To learn how it works, refer to [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf).
This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`. | 15_13_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#diverse-beam-search-decoding | .md | This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`.
The diversity penalty ensures the outputs are distinct across groups, and beam search is used within each group.
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM | 15_13_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#diverse-beam-search-decoding | .md | >>> checkpoint = "google/pegasus-xsum"
>>> prompt = (
... "The Permaculture Design Principles are a set of universal design principles "
... "that can be applied to any location, climate and culture, and they allow us to design "
... "the most efficient and sustainable human habitation and food production systems. "
... "Permaculture is a design system that encompasses a wide variety of disciplines, such " | 15_13_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md | https://huggingface.co/docs/transformers/en/generation_strategies/#diverse-beam-search-decoding | .md | ... "Permaculture is a design system that encompasses a wide variety of disciplines, such "
... "as ecology, landscape design, environmental science and energy conservation, and the "
... "Permaculture design principles are drawn from these various disciplines. Each individual "
... "design principle itself embodies a complete conceptual framework based on sound "
... "scientific principles. When we bring all these separate principles together, we can " | 15_13_3 |