| source | url | file_type | chunk | chunk_id |
|---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#logging | .md | verbose to the most verbose), those levels (with their corresponding int values in parenthesis) are:
- `transformers.logging.CRITICAL` or `transformers.logging.FATAL` (int value, 50): only report the most
critical errors.
- `transformers.logging.ERROR` (int value, 40): only report errors.
- `transformers.logging.WARNING` or `transformers.logging.WARN` (int value, 30): only reports errors and
warnings. This is the default level used by the library. | 464_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#logging | .md | warnings. This is the default level used by the library.
- `transformers.logging.INFO` (int value, 20): reports errors, warnings, and basic information.
- `transformers.logging.DEBUG` (int value, 10): reports all information.
By default, `tqdm` progress bars will be displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] can be used to suppress or unsuppress this behavior. | 464_1_5 |
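As a quick illustration of the setters mentioned above, here is a minimal sketch of changing the library's verbosity and silencing the download progress bars; it only assumes that `transformers` is installed:
```python
import transformers

# Report errors, warnings and basic information (level 20)
transformers.logging.set_verbosity_info()

# Equivalent call using the level constant directly
transformers.logging.set_verbosity(transformers.logging.INFO)

# Silence the tqdm progress bars shown during model download
transformers.logging.disable_progress_bar()
```
The same level can also be selected without touching code by setting the `TRANSFORMERS_VERBOSITY` environment variable (e.g. `TRANSFORMERS_VERBOSITY=error`).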
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#logging-vs-warnings | .md | Python has two logging systems that are often used in conjunction: `logging`, which is explained above, and `warnings`,
which allows further classification of warnings in specific buckets, e.g., `FutureWarning` for a feature or path
that has already been deprecated and `DeprecationWarning` to indicate an upcoming deprecation.
We use both in the `transformers` library. We leverage and adapt `logging`'s `captureWarnings` method to allow
management of these warning messages by the verbosity setters above. | 464_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#logging-vs-warnings | .md | management of these warning messages by the verbosity setters above.
What does that mean for developers of the library? We should respect the following heuristics:
- `warnings` should be favored for developers of the library and libraries dependent on `transformers`
- `logging` should be used for end-users of the library using it in every-day projects
See reference of the `captureWarnings` method below.
If capture is true, redirect all warnings to the logging package. | 464_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#logging-vs-warnings | .md | See reference of the `captureWarnings` method below.
If capture is true, redirect all warnings to the logging package.
If capture is False, ensure that warnings are not redirected to logging
but to their original destinations. | 464_2_2 |
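A minimal sketch of how this capture is typically switched on, assuming the `captureWarnings` helper is exposed under `transformers.utils.logging` as the reference above suggests:
```python
import warnings

from transformers.utils import logging

# Redirect Python warnings into the transformers logging system so that the
# verbosity setters above also govern which warnings are shown
logging.captureWarnings(True)

# This FutureWarning is now routed through the logging package instead of
# being printed directly to stderr
warnings.warn("this API will change in a future release", FutureWarning)
```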
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#base-setters | .md | [[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug | 464_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#other-functions | .md | [[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format | 464_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/logging.md | https://huggingface.co/docs/transformers/en/main_classes/logging/#other-functions | .md | [[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar | 464_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/ | .md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 465_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 465_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | An image processor is in charge of preparing input features for vision models and post processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax and Numpy tensors. It may also include model specific post-processing such as converting logits to segmentation masks. | 465_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | Fast image processors are available for a few models and more will be added in the future. They are based on the [torchvision](https://pytorch.org/vision/stable/index.html) library and provide a significant speed-up, especially when processing on GPU.
They have the same API as the base image processors and can be used as drop-in replacements.
To use a fast image processor, you need to install the `torchvision` library, and set the `use_fast` argument to `True` when instantiating the image processor: | 465_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | ```python
from transformers import AutoImageProcessor | 465_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True)
```
Note that `use_fast` will be set to `True` by default in a future release.
When using a fast image processor, you can also set the `device` argument to specify the device on which the processing should be done. By default, the processing is done on the same device as the inputs if the inputs are tensors, or on the CPU otherwise.
```python
from torchvision.io import read_image | 465_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | ```python
from torchvision.io import read_image
from transformers import DetrImageProcessorFast | 465_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | images = read_image("image.jpg")
processor = DetrImageProcessorFast.from_pretrained("facebook/detr-resnet-50")
images_processed = processor(images, return_tensors="pt", device="cuda")
```
Here are some speed comparisons between the base and fast image processors for the `DETR` and `RT-DETR` models, and how they impact overall inference time:
<div class="flex"> | 465_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | <div class="flex">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_detr_fast_padded.png" />
</div>
<div class="flex">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_detr_fast_batched_compiled.png" />
</div>
<div class="flex"> | 465_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | </div>
<div class="flex">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_rt_detr_fast_single.png" />
</div>
<div class="flex">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_rt_detr_fast_batched.png" />
</div> | 465_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#image-processor | .md | </div>
These benchmarks were run on an [AWS EC2 g5.2xlarge instance](https://aws.amazon.com/ec2/instance-types/g5/), utilizing an NVIDIA A10G Tensor Core GPU. | 465_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#imageprocessingmixin | .md | image_processing_utils.ImageProcessingMixin
This is an image processor mixin used to provide saving/loading functionality for sequential and image feature
extractors.
- from_pretrained
- save_pretrained | 465_2_0 |
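For example, the two methods listed above can be used to round-trip an image processor configuration; the checkpoint name below is just an illustrative public model:
```python
from transformers import AutoImageProcessor

# Download the image processor configuration from the Hub
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

# Save it to a local directory ...
image_processor.save_pretrained("./my_image_processor")

# ... and reload it from that directory later
image_processor = AutoImageProcessor.from_pretrained("./my_image_processor")
```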
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#batchfeature | .md | Holds the output of the [`~SequenceFeatureExtractor.pad`] and feature extractor specific `__call__` methods.
This class is derived from a python dictionary and can be used as a dictionary.
Args:
data (`dict`, *optional*):
Dictionary of lists/arrays/tensors returned by the __call__/pad methods ('input_values', 'attention_mask',
etc.).
tensor_type (`Union[None, str, TensorType]`, *optional*):
You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at | 465_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#batchfeature | .md | You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at
initialization. | 465_3_1 |
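As a small sketch of the `data`/`tensor_type` arguments described above (the array shape is arbitrary):
```python
import numpy as np

from transformers import BatchFeature

# Wrap pre-computed features and convert them to PyTorch tensors at initialization
features = BatchFeature(
    data={"pixel_values": np.zeros((1, 3, 224, 224), dtype=np.float32)},
    tensor_type="pt",
)

# BatchFeature behaves like a regular dictionary
print(features.keys())                 # the available feature keys
print(features["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```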
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#baseimageprocessor | .md | image_processing_utils.BaseImageProcessor | 465_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/image_processor.md | https://huggingface.co/docs/transformers/en/main_classes/image_processor/#baseimageprocessorfast | .md | image_processing_utils_fast.BaseImageProcessorFast | 465_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 466_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 466_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#callbacks | .md | Callbacks are objects that customize the behavior of the training loop in the PyTorch
[`Trainer`] (this feature is not yet implemented in TensorFlow). They can inspect the training loop
state (for progress reporting, logging to TensorBoard or other ML platforms...) and take decisions (like early
stopping).
Callbacks are "read only" pieces of code, apart from the [`TrainerControl`] object they return, they | 466_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#callbacks | .md | stopping).
Callbacks are "read only" pieces of code, apart from the [`TrainerControl`] object they return, they
cannot change anything in the training loop. For customizations that require changes in the training loop, you should
subclass [`Trainer`] and override the methods you need (see [trainer](trainer) for examples).
By default, `TrainingArguments.report_to` is set to `"all"`, so a [`Trainer`] will use the following callbacks. | 466_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#callbacks | .md | By default, `TrainingArguments.report_to` is set to `"all"`, so a [`Trainer`] will use the following callbacks.
- [`DefaultFlowCallback`] which handles the default behavior for logging, saving and evaluation.
- [`PrinterCallback`] or [`ProgressCallback`] to display progress and print the
logs (the first one is used if you deactivate tqdm through the [`TrainingArguments`], otherwise
it's the second one).
- [`~integrations.TensorBoardCallback`] if tensorboard is accessible (either through PyTorch >= 1.4 | 466_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#callbacks | .md | it's the second one).
- [`~integrations.TensorBoardCallback`] if tensorboard is accessible (either through PyTorch >= 1.4
or tensorboardX).
- [`~integrations.WandbCallback`] if [wandb](https://www.wandb.com/) is installed.
- [`~integrations.CometCallback`] if [comet_ml](https://www.comet.com/site/) is installed.
- [`~integrations.MLflowCallback`] if [mlflow](https://www.mlflow.org/) is installed.
- [`~integrations.NeptuneCallback`] if [neptune](https://neptune.ai/) is installed. | 466_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#callbacks | .md | - [`~integrations.NeptuneCallback`] if [neptune](https://neptune.ai/) is installed.
- [`~integrations.AzureMLCallback`] if [azureml-sdk](https://pypi.org/project/azureml-sdk/) is
installed.
- [`~integrations.CodeCarbonCallback`] if [codecarbon](https://pypi.org/project/codecarbon/) is
installed.
- [`~integrations.ClearMLCallback`] if [clearml](https://github.com/allegroai/clearml) is installed.
- [`~integrations.DagsHubCallback`] if [dagshub](https://dagshub.com/) is installed. | 466_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#callbacks | .md | - [`~integrations.DagsHubCallback`] if [dagshub](https://dagshub.com/) is installed.
- [`~integrations.FlyteCallback`] if [flyte](https://flyte.org/) is installed.
- [`~integrations.DVCLiveCallback`] if [dvclive](https://dvc.org/doc/dvclive) is installed.
If a package is installed but you don't wish to use the accompanying integration, you can change `TrainingArguments.report_to` to a list of just those integrations you want to use (e.g. `["azure_ml", "wandb"]`). | 466_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#callbacks | .md | The main class that implements callbacks is [`TrainerCallback`]. It gets the
[`TrainingArguments`] used to instantiate the [`Trainer`], can access that
Trainer's internal state via [`TrainerState`], and can take some actions on the training loop via
[`TrainerControl`]. | 466_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | Here is the list of the available [`TrainerCallback`] in the library:
integrations.CometCallback
A [`TrainerCallback`] that sends the logs to [Comet ML](https://www.comet.com/site/).
- setup
DefaultFlowCallback
A [`TrainerCallback`] that handles the default flow of the training loop for logs, evaluation and checkpoints.
PrinterCallback
A bare [`TrainerCallback`] that just prints the logs.
ProgressCallback
A [`TrainerCallback`] that displays the progress of training or evaluation. | 466_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | A [`TrainerCallback`] that displays the progress of training or evaluation.
You can modify `max_str_len` to control how long strings are truncated when logging.
EarlyStoppingCallback
A [`TrainerCallback`] that handles early stopping.
Args:
early_stopping_patience (`int`):
Use with `metric_for_best_model` to stop training when the specified metric worsens for
`early_stopping_patience` evaluation calls.
early_stopping_threshold (`float`, *optional*): | 466_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | `early_stopping_patience` evaluation calls.
early_stopping_threshold (`float`, *optional*):
Use with TrainingArguments `metric_for_best_model` and `early_stopping_patience` to denote how much the
specified metric must improve to satisfy early stopping conditions.
This callback depends on [`TrainingArguments`] argument *load_best_model_at_end* functionality to set best_metric
in [`TrainerState`]. Note that if the [`TrainingArguments`] argument *save_steps* differs from *eval_steps*, the | 466_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | in [`TrainerState`]. Note that if the [`TrainingArguments`] argument *save_steps* differs from *eval_steps*, the
early stopping will not occur until the next save step.
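A sketch of wiring the callback into a [`Trainer`] under the constraints described above; `model`, `train_dataset` and `eval_dataset` are assumed to be defined elsewhere, and the concrete step counts and thresholds are only illustrative:
```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    eval_strategy="steps",        # evaluate on a step schedule ...
    eval_steps=500,
    save_steps=500,               # ... and save on the same schedule
    load_best_model_at_end=True,  # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,                  # assumed to be defined elsewhere
    args=args,
    train_dataset=train_dataset,  # assumed to be defined elsewhere
    eval_dataset=eval_dataset,    # assumed to be defined elsewhere
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0.01)],
)
```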
integrations.TensorBoardCallback
A [`TrainerCallback`] that sends the logs to [TensorBoard](https://www.tensorflow.org/tensorboard).
Args:
tb_writer (`SummaryWriter`, *optional*):
The writer to use. Will instantiate one if not set.
integrations.WandbCallback | 466_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | tb_writer (`SummaryWriter`, *optional*):
The writer to use. Will instantiate one if not set.
integrations.WandbCallback
A [`TrainerCallback`] that logs metrics, media, and model checkpoints to [Weights & Biases](https://www.wandb.com/).
- setup
integrations.MLflowCallback
A [`TrainerCallback`] that sends the logs to [MLflow](https://www.mlflow.org/). Can be disabled by setting
environment variable `DISABLE_MLFLOW_INTEGRATION = TRUE`.
- setup
integrations.AzureMLCallback | 466_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | environment variable `DISABLE_MLFLOW_INTEGRATION = TRUE`.
- setup
integrations.AzureMLCallback
A [`TrainerCallback`] that sends the logs to [AzureML](https://pypi.org/project/azureml-sdk/).
integrations.CodeCarbonCallback
A [`TrainerCallback`] that tracks the CO2 emission of training.
integrations.NeptuneCallback
TrainerCallback that sends the logs to [Neptune](https://app.neptune.ai).
Args:
api_token (`str`, *optional*): Neptune API token obtained upon registration. | 466_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | Args:
api_token (`str`, *optional*): Neptune API token obtained upon registration.
You can leave this argument out if you have saved your token to the `NEPTUNE_API_TOKEN` environment
variable (strongly recommended). See full setup instructions in the
[docs](https://docs.neptune.ai/setup/installation).
project (`str`, *optional*): Name of an existing Neptune project, in the form "workspace-name/project-name". | 466_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | project (`str`, *optional*): Name of an existing Neptune project, in the form "workspace-name/project-name".
You can find and copy the name in Neptune from the project settings -> Properties. If None (default), the
value of the `NEPTUNE_PROJECT` environment variable is used.
name (`str`, *optional*): Custom name for the run.
base_namespace (`str`, *optional*, defaults to "finetuning"): In the Neptune run, the root namespace
that will contain all of the metadata logged by the callback. | 466_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | that will contain all of the metadata logged by the callback.
log_parameters (`bool`, *optional*, defaults to `True`):
If True, logs all Trainer arguments and model parameters provided by the Trainer.
log_checkpoints (`str`, *optional*): If "same", uploads checkpoints whenever they are saved by the Trainer.
If "last", uploads only the most recently saved checkpoint. If "best", uploads the best checkpoint (among
the ones saved by the Trainer). If `None`, does not upload checkpoints. | 466_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | the ones saved by the Trainer). If `None`, does not upload checkpoints.
run (`Run`, *optional*): Pass a Neptune run object if you want to continue logging to an existing run.
Read more about resuming runs in the [docs](https://docs.neptune.ai/logging/to_existing_object).
**neptune_run_kwargs (*optional*):
Additional keyword arguments to be passed directly to the
[`neptune.init_run()`](https://docs.neptune.ai/api/neptune#init_run) function when a new run is created. | 466_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | [`neptune.init_run()`](https://docs.neptune.ai/api/neptune#init_run) function when a new run is created.
For instructions and examples, see the [Transformers integration
guide](https://docs.neptune.ai/integrations/transformers) in the Neptune documentation.
integrations.ClearMLCallback
A [`TrainerCallback`] that sends the logs to [ClearML](https://clear.ml/).
Environment:
- **CLEARML_PROJECT** (`str`, *optional*, defaults to `HuggingFace Transformers`):
ClearML project name. | 466_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | Environment:
- **CLEARML_PROJECT** (`str`, *optional*, defaults to `HuggingFace Transformers`):
ClearML project name.
- **CLEARML_TASK** (`str`, *optional*, defaults to `Trainer`):
ClearML task name.
- **CLEARML_LOG_MODEL** (`bool`, *optional*, defaults to `False`):
Whether to log models as artifacts during training.
integrations.DagsHubCallback
A [`TrainerCallback`] that logs to [DagsHub](https://dagshub.com/). Extends [`MLflowCallback`]
integrations.FlyteCallback | 466_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | A [`TrainerCallback`] that logs to [DagsHub](https://dagshub.com/). Extends [`MLflowCallback`]
integrations.FlyteCallback
A [`TrainerCallback`] that sends the logs to [Flyte](https://flyte.org/).
NOTE: This callback only works within a Flyte task.
Args:
save_log_history (`bool`, *optional*, defaults to `True`):
When set to True, the training logs are saved as a Flyte Deck.
sync_checkpoints (`bool`, *optional*, defaults to `True`): | 466_2_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | When set to True, the training logs are saved as a Flyte Deck.
sync_checkpoints (`bool`, *optional*, defaults to `True`):
When set to True, checkpoints are synced with Flyte and can be used to resume training in the case of an
interruption.
Example:
```python
# Note: This example skips over some setup steps for brevity.
from flytekit import current_context, task | 466_2_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | @task
def train_hf_transformer():
    cp = current_context().checkpoint
    trainer = Trainer(..., callbacks=[FlyteCallback()])
    output = trainer.train(resume_from_checkpoint=cp.restore())
```
integrations.DVCLiveCallback
A [`TrainerCallback`] that sends the logs to [DVCLive](https://www.dvc.org/doc/dvclive).
Use the environment variables below in `setup` to configure the integration. To customize this callback beyond | 466_2_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | Use the environment variables below in `setup` to configure the integration. To customize this callback beyond
those environment variables, see [here](https://dvc.org/doc/dvclive/ml-frameworks/huggingface).
Args:
live (`dvclive.Live`, *optional*, defaults to `None`):
Optional Live instance. If None, a new instance will be created using **kwargs.
log_model (Union[Literal["all"], bool], *optional*, defaults to `None`): | 466_2_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#available-callbacks | .md | log_model (Union[Literal["all"], bool], *optional*, defaults to `None`):
Whether to use `dvclive.Live.log_artifact()` to log checkpoints created by [`Trainer`]. If set to `True`,
the final checkpoint is logged at the end of training. If set to `"all"`, the entire
[`TrainingArguments`]'s `output_dir` is logged at each checkpoint.
- setup | 466_2_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md | A class for objects that will inspect the state of the training loop at some events and take some decisions. At
each of those events the following arguments are available:
Args:
args ([`TrainingArguments`]):
The training arguments used to instantiate the [`Trainer`].
state ([`TrainerState`]):
The current state of the [`Trainer`].
control ([`TrainerControl`]):
The object that is returned to the [`Trainer`] and can be used to make some decisions.
model ([`PreTrainedModel`] or `torch.nn.Module`): | 466_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md | model ([`PreTrainedModel`] or `torch.nn.Module`):
The model being trained.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer used for encoding the data. This is deprecated in favour of `processing_class`.
processing_class ([`PreTrainedTokenizer` or `BaseImageProcessor` or `ProcessorMixin` or `FeatureExtractionMixin`]):
The processing class used for encoding the data. Can be a tokenizer, a processor, an image processor or a feature extractor.
optimizer (`torch.optim.Optimizer`): | 466_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md | optimizer (`torch.optim.Optimizer`):
The optimizer used for the training steps.
lr_scheduler (`torch.optim.lr_scheduler.LambdaLR`):
The scheduler used for setting the learning rate.
train_dataloader (`torch.utils.data.DataLoader`, *optional*):
The current dataloader used for training.
eval_dataloader (`torch.utils.data.DataLoader`, *optional*):
The current dataloader used for evaluation.
metrics (`Dict[str, float]`):
The metrics computed by the last evaluation phase. | 466_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md | The current dataloader used for evaluation.
metrics (`Dict[str, float]`):
The metrics computed by the last evaluation phase.
Those are only accessible in the event `on_evaluate`.
logs (`Dict[str, float]`):
The values to log.
Those are only accessible in the event `on_log`.
The `control` object is the only one that can be changed by the callback, in which case the event that changes it
should return the modified version. | 466_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md | should return the modified version.
The arguments `args`, `state` and `control` are positional for all events; all the others are grouped in `kwargs`.
You can unpack the ones you need in the signature of the event using them. As an example, see the code of the
simple [`~transformers.PrinterCallback`].
Example:
```python
class PrinterCallback(TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        _ = logs.pop("total_flos", None)
        if state.is_local_process_zero:
            print(logs)
``` | 466_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md | _ = logs.pop("total_flos", None)
if state.is_local_process_zero:
print(logs)
```
Here is an example of how to register a custom callback with the PyTorch [`Trainer`]:
```python
class MyCallback(TrainerCallback):
    "A callback that prints a message at the beginning of training" | 466_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md |     def on_train_begin(self, args, state, control, **kwargs):
        print("Starting training") | 466_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercallback | .md | trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[MyCallback],  # We can either pass the callback class this way or an instance of it (MyCallback())
)
```
Another way to register a callback is to call `trainer.add_callback()` as follows:
```python
trainer = Trainer(...)
trainer.add_callback(MyCallback)
# Alternatively, we can pass an instance of the callback class
trainer.add_callback(MyCallback())
``` | 466_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainerstate | .md | A class containing the [`Trainer`] inner state that will be saved along the model and optimizer when checkpointing
and passed to the [`TrainerCallback`].
<Tip>
In all this class, one step is to be understood as one update step. When using gradient accumulation, one update
step may require several forward and backward passes: if you use `gradient_accumulation_steps=n`, then one update
step requires going through *n* batches.
</Tip>
Args:
epoch (`float`, *optional*): | 466_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainerstate | .md | step requires going through *n* batches.
</Tip>
Args:
epoch (`float`, *optional*):
Only set during training, will represent the epoch the training is at (the decimal part being the
percentage of the current epoch completed).
global_step (`int`, *optional*, defaults to 0):
During training, represents the number of update steps completed.
max_steps (`int`, *optional*, defaults to 0):
The number of update steps to do during the current training.
logging_steps (`int`, *optional*, defaults to 500): | 466_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainerstate | .md | The number of update steps to do during the current training.
logging_steps (`int`, *optional*, defaults to 500):
Log every X update steps.
eval_steps (`int`, *optional*):
Run an evaluation every X steps.
save_steps (`int`, *optional*, defaults to 500):
Save a checkpoint every X update steps.
train_batch_size (`int`, *optional*):
The batch size for the training dataloader. Only needed when
`auto_find_batch_size` has been used.
num_input_tokens_seen (`int`, *optional*, defaults to 0): | 466_4_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainerstate | .md | `auto_find_batch_size` has been used.
num_input_tokens_seen (`int`, *optional*, defaults to 0):
When tracking the inputs tokens, the number of tokens seen during training (number of input tokens, not the
number of prediction tokens).
total_flos (`float`, *optional*, defaults to 0):
The total number of floating operations done by the model since the beginning of training (stored as floats
to avoid overflow).
log_history (`List[Dict[str, float]]`, *optional*): | 466_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainerstate | .md | to avoid overflow).
log_history (`List[Dict[str, float]]`, *optional*):
The list of logs done since the beginning of training.
best_metric (`float`, *optional*):
When tracking the best model, the value of the best metric encountered so far.
best_model_checkpoint (`str`, *optional*):
When tracking the best model, the value of the name of the checkpoint for the best model encountered so
far.
is_local_process_zero (`bool`, *optional*, defaults to `True`): | 466_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainerstate | .md | far.
is_local_process_zero (`bool`, *optional*, defaults to `True`):
Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on
several machines) main process.
is_world_process_zero (`bool`, *optional*, defaults to `True`):
Whether or not this process is the global main process (when training in a distributed fashion on several
machines, this is only going to be `True` for one process).
is_hyper_param_search (`bool`, *optional*, defaults to `False`): | 466_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainerstate | .md | machines, this is only going to be `True` for one process).
is_hyper_param_search (`bool`, *optional*, defaults to `False`):
Whether we are in the process of a hyperparameter search using Trainer.hyperparameter_search. This will
impact the way data will be logged in TensorBoard.
stateful_callbacks (`List[StatefulTrainerCallback]`, *optional*):
Callbacks attached to the `Trainer` that should have their states be saved or restored.
Relevant callbacks should implement a `state` and `from_state` function. | 466_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercontrol | .md | A class that handles the [`Trainer`] control flow. This class is used by the [`TrainerCallback`] to activate some
switches in the training loop.
Args:
should_training_stop (`bool`, *optional*, defaults to `False`):
Whether or not the training should be interrupted.
If `True`, this variable will not be set back to `False`. The training will just stop.
should_epoch_stop (`bool`, *optional*, defaults to `False`):
Whether or not the current epoch should be interrupted. | 466_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercontrol | .md | should_epoch_stop (`bool`, *optional*, defaults to `False`):
Whether or not the current epoch should be interrupted.
If `True`, this variable will be set back to `False` at the beginning of the next epoch.
should_save (`bool`, *optional*, defaults to `False`):
Whether or not the model should be saved at this step.
If `True`, this variable will be set back to `False` at the beginning of the next step.
should_evaluate (`bool`, *optional*, defaults to `False`): | 466_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/callback.md | https://huggingface.co/docs/transformers/en/main_classes/callback/#trainercontrol | .md | should_evaluate (`bool`, *optional*, defaults to `False`):
Whether or not the model should be evaluated at this step.
If `True`, this variable will be set back to `False` at the beginning of the next step.
should_log (`bool`, *optional*, defaults to `False`):
Whether or not the logs should be reported at this step.
If `True`, this variable will be set back to `False` at the beginning of the next step. | 466_5_2 |
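To make these control-flow switches concrete, here is a minimal, hypothetical callback that flips some of them; the step numbers are arbitrary:
```python
from transformers import TrainerCallback


class StepBudgetCallback(TrainerCallback):
    """Hypothetical callback that logs/evaluates every 100 steps and stops after a step budget."""

    def __init__(self, step_budget=1000):
        self.step_budget = step_budget

    def on_step_end(self, args, state, control, **kwargs):
        # Ask the Trainer to log and evaluate at this step
        if state.global_step % 100 == 0:
            control.should_log = True
            control.should_evaluate = True
        # Stop training entirely once the step budget is reached
        if state.global_step >= self.step_budget:
            control.should_training_stop = True
        return control
```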
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 467_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 467_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#backbone | .md | A backbone is a model used for feature extraction for higher level computer vision tasks such as object detection and image classification. Transformers provides an [`AutoBackbone`] class for initializing a Transformers backbone from pretrained model weights, and two utility classes:
* [`~utils.BackboneMixin`] enables initializing a backbone from Transformers or [timm](https://hf.co/docs/timm/index) and includes functions for returning the output features and indices. | 467_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#backbone | .md | * [`~utils.BackboneConfigMixin`] sets the output features and indices of the backbone configuration.
[timm](https://hf.co/docs/timm/index) models are loaded with the [`TimmBackbone`] and [`TimmBackboneConfig`] classes.
Backbones are supported for the following models:
* [BEiT](../model_doc/beit)
* [BiT](../model_doc/bit)
* [ConvNext](../model_doc/convnext)
* [ConvNextV2](../model_doc/convnextv2)
* [DiNAT](../model_doc/dinat)
* [DINOV2](../model_doc/dinov2)
* [FocalNet](../model_doc/focalnet) | 467_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#backbone | .md | * [DiNAT](../model_doc/dinat)
* [DINOV2](../model_doc/dinov2)
* [FocalNet](../model_doc/focalnet)
* [MaskFormer](../model_doc/maskformer)
* [NAT](../model_doc/nat)
* [ResNet](../model_doc/resnet)
* [Swin Transformer](../model_doc/swin)
* [Swin Transformer v2](../model_doc/swinv2)
* [ViTDet](../model_doc/vitdet) | 467_1_2 |
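As a short example of the backbone API described above (the Swin checkpoint is one of the supported models listed here, and the input shape is arbitrary):
```python
import torch

from transformers import AutoBackbone

# Load a Swin Transformer backbone and request the feature maps of stages 1-4
backbone = AutoBackbone.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224", out_indices=(1, 2, 3, 4)
)

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)

# One feature map per requested stage, from high to low resolution
print([feature_map.shape for feature_map in outputs.feature_maps])
```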
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#autobackbone | .md | AutoBackbone | 467_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#backbonemixin | .md | utils.BackboneMixin | 467_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#backboneconfigmixin | .md | utils.BackboneConfigMixin
A Mixin to support handling the `out_features` and `out_indices` attributes for the backbone configurations. | 467_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#timmbackbone | .md | models.timm_backbone.TimmBackbone
Wrapper class for timm models to be used as backbones. This enables using the timm models interchangeably with the
other models in the library keeping the same API. | 467_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/backbones.md | https://huggingface.co/docs/transformers/en/main_classes/backbones/#timmbackboneconfig | .md | models.timm_backbone.TimmBackbone
Wrapper class for timm models to be used as backbones. This enables using the timm models interchangeably with the
other models in the library keeping the same API.
Config | 467_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 468_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 468_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#quantization | .md | Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This enables loading larger models you normally wouldn't be able to fit into memory, and speeding up inference. Transformers supports the AWQ and GPTQ quantization algorithms, as well as 8-bit and 4-bit quantization with bitsandbytes.
Quantization techniques that aren't supported in Transformers can be added with the [`HfQuantizer`] class. | 468_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#quantization | .md | Quantization techniques that aren't supported in Transformers can be added with the [`HfQuantizer`] class.
<Tip>
Learn how to quantize models in the [Quantization](../quantization) guide.
</Tip> | 468_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#quantoconfig | .md | This is a wrapper class for all the attributes and features that you can adjust on a model that has been
loaded using `quanto`.
Args:
weights (`str`, *optional*, defaults to `"int8"`):
The target dtype for the weights after quantization. Supported values are ("float8","int8","int4","int2")
activations (`str`, *optional*):
The target dtype for the activations after quantization. Supported values are (None,"int8","float8")
modules_to_not_convert (`list`, *optional*, defaults to `None`): | 468_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#quantoconfig | .md | modules_to_not_convert (`list`, *optional*, defaults to `None`):
The list of modules to not quantize, useful for quantizing models that explicitly require to have
some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers). | 468_2_1 |
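A minimal sketch of passing this configuration to `from_pretrained`; the checkpoint is a small example model, and loading it this way assumes the quanto backend (and `accelerate` for `device_map`) is installed:
```python
from transformers import AutoModelForCausalLM, QuantoConfig

# Quantize the weights to int8 on the fly while loading
quantization_config = QuantoConfig(weights="int8")

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",                      # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```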
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#aqlmconfig | .md | This is a wrapper class for `aqlm` parameters.
Args:
in_group_size (`int`, *optional*, defaults to 8):
The group size along the input dimension.
out_group_size (`int`, *optional*, defaults to 1):
The group size along the output dimension. It's recommended to always use 1.
num_codebooks (`int`, *optional*, defaults to 1):
Number of codebooks for the Additive Quantization procedure.
nbits_per_codebook (`int`, *optional*, defaults to 16): | 468_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#aqlmconfig | .md | Number of codebooks for the Additive Quantization procedure.
nbits_per_codebook (`int`, *optional*, defaults to 16):
Number of bits encoding a single codebook vector. Codebooks size is 2**nbits_per_codebook.
linear_weights_not_to_quantize (`Optional[List[str]]`, *optional*):
List of full paths of `nn.Linear` weight parameters that shall not be quantized.
kwargs (`Dict[str, Any]`, *optional*):
Additional parameters from which to initialize the configuration object. | 468_3_1 |
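For illustration, the defaults described above can be spelled out explicitly; in practice AQLM checkpoints usually ship this configuration with the model, so constructing it by hand is rarely needed:
```python
from transformers import AqlmConfig

# 1 codebook with 2**16 entries, grouping 8 input channels and 1 output channel
aqlm_config = AqlmConfig(
    in_group_size=8,
    out_group_size=1,
    num_codebooks=1,
    nbits_per_codebook=16,
)
print(aqlm_config.to_dict())
```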
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#vptqconfig | .md | This is a wrapper class for `vptq` parameters.
Args:
enable_proxy_error (`bool`, *optional*, defaults to `False`): calculate proxy error for each layer
config_for_layers (`Dict`, *optional*, defaults to `{}`): quantization params for each layer
shared_layer_config (`Dict`, *optional*, defaults to `{}`): shared quantization params among layers
modules_to_not_convert (`list`, *optional*, defaults to `None`):
The list of modules to not quantize, useful for quantizing models that explicitly require to have | 468_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#vptqconfig | .md | The list of modules to not quantize, useful for quantizing models that explicitly require to have
some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers).
kwargs (`Dict[str, Any]`, *optional*):
Additional parameters from which to initialize the configuration object. | 468_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#awqconfig | .md | This is a wrapper class for all the attributes and features that you can adjust on a model that has been
loaded with AWQ quantization through the `auto-awq` library, relying on the auto_awq backend.
Args:
bits (`int`, *optional*, defaults to 4):
The number of bits to quantize to.
group_size (`int`, *optional*, defaults to 128):
The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.
zero_point (`bool`, *optional*, defaults to `True`): | 468_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#awqconfig | .md | zero_point (`bool`, *optional*, defaults to `True`):
Whether to use zero point quantization.
version (`AWQLinearVersion`, *optional*, defaults to `AWQLinearVersion.GEMM`):
The version of the quantization algorithm to use. GEMM is better for big batch_size (e.g. >= 8) otherwise,
GEMV is better (e.g. < 8 ). GEMM models are compatible with Exllama kernels.
backend (`AwqBackendPackingMethod`, *optional*, defaults to `AwqBackendPackingMethod.AUTOAWQ`): | 468_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#awqconfig | .md | backend (`AwqBackendPackingMethod`, *optional*, defaults to `AwqBackendPackingMethod.AUTOAWQ`):
The quantization backend. Some models might be quantized using `llm-awq` backend. This is useful for users
that quantize their own models using `llm-awq` library.
do_fuse (`bool`, *optional*, defaults to `False`):
Whether to fuse attention and mlp layers together for faster inference
fuse_max_seq_len (`int`, *optional*):
The Maximum sequence length to generate when using fusing. | 468_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#awqconfig | .md | fuse_max_seq_len (`int`, *optional*):
The Maximum sequence length to generate when using fusing.
modules_to_fuse (`dict`, *optional*, defaults to `None`):
Overwrite the natively supported fusing scheme with the one specified by the users.
modules_to_not_convert (`list`, *optional*, defaults to `None`):
The list of modules to not quantize, useful for quantizing models that explicitly require to have
some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers). | 468_5_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#awqconfig | .md | some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers).
Note you cannot quantize directly with transformers, please refer to `AutoAWQ` documentation for quantizing HF models.
exllama_config (`Dict[str, Any]`, *optional*):
You can specify the version of the exllama kernel through the `version` key, the maximum sequence
length through the `max_input_len` key, and the maximum batch size through the `max_batch_size` key. | 468_5_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#awqconfig | .md | length through the `max_input_len` key, and the maximum batch size through the `max_batch_size` key.
Defaults to `{"version": 2, "max_input_len": 2048, "max_batch_size": 8}` if unset. | 468_5_5 |
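Since quantization itself has to be done with AutoAWQ (see the note above), this config is typically passed when loading an already-quantized checkpoint, for example to turn on module fusing; the repository name below is only an example of an AWQ-quantized model, and running it assumes the `autoawq` package and a CUDA GPU:
```python
from transformers import AutoModelForCausalLM, AwqConfig

# Fuse attention/MLP modules for faster generation up to 512 tokens
quantization_config = AwqConfig(bits=4, do_fuse=True, fuse_max_seq_len=512)

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-OpenOrca-AWQ",       # example AWQ checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```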
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#eetqconfig | .md | This is a wrapper class for all the attributes and features that you can adjust on a model that has been
loaded using `eetq`.
Args:
weights (`str`, *optional*, defaults to `"int8"`):
The target dtype for the weights. Supported value is only "int8"
modules_to_not_convert (`list`, *optional*, defaults to `None`):
The list of modules to not quantize, useful for quantizing models that explicitly require to have
some modules left in their original precision. | 468_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#gptqconfig | .md | This is a wrapper class for all the attributes and features that you can adjust on a model that has been
loaded using the `optimum` API for GPTQ quantization, relying on the auto_gptq backend.
Args:
bits (`int`):
The number of bits to quantize to, supported numbers are (2, 3, 4, 8).
tokenizer (`str` or `PreTrainedTokenizerBase`, *optional*):
The tokenizer used to process the dataset. You can pass either:
- A custom tokenizer object. | 468_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#gptqconfig | .md | The tokenizer used to process the dataset. You can pass either:
- A custom tokenizer object.
- A string, the *model id* of a predefined tokenizer hosted inside a model repo on huggingface.co.
- A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved
using the [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`.
dataset (`Union[List[str]]`, *optional*): | 468_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#gptqconfig | .md | dataset (`Union[List[str]]`, *optional*):
The dataset used for quantization. You can provide your own dataset in a list of string or just use the
original datasets used in GPTQ paper ['wikitext2','c4','c4-new']
group_size (`int`, *optional*, defaults to 128):
The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.
damp_percent (`float`, *optional*, defaults to 0.1):
The percent of the average Hessian diagonal to use for dampening. Recommended value is 0.1. | 468_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#gptqconfig | .md | The percent of the average Hessian diagonal to use for dampening. Recommended value is 0.1.
desc_act (`bool`, *optional*, defaults to `False`):
Whether to quantize columns in order of decreasing activation size. Setting it to False can significantly
speed up inference but the perplexity may become slightly worse. Also known as act-order.
sym (`bool`, *optional*, defaults to `True`):
Whether to use symmetric quantization.
true_sequential (`bool`, *optional*, defaults to `True`): | 468_7_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/quantization.md | https://huggingface.co/docs/transformers/en/main_classes/quantization/#gptqconfig | .md | Whether to use symmetric quantization.
true_sequential (`bool`, *optional*, defaults to `True`):
Whether to perform sequential quantization even within a single Transformer block. Instead of quantizing
the entire block at once, we perform layer-wise quantization. As a result, each layer undergoes
quantization using inputs that have passed through the previously quantized layers.
use_cuda_fp16 (`bool`, *optional*, defaults to `False`): | 468_7_4 |