source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_infer_gpu_one/#-optimum
.md
```py from optimum.onnxruntime import ORTModelForSequenceClassification
3_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_infer_gpu_one/#-optimum
.md
ort_model = ORTModelForSequenceClassification.from_pretrained( "distilbert/distilbert-base-uncased-finetuned-sst-2-english", export=True, provider="CUDAExecutionProvider", ) ``` Now you're free to use the model for inference: ```py from optimum.pipelines import pipeline from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased-finetuned-sst-2-english")
3_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_infer_gpu_one/#-optimum
.md
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased-finetuned-sst-2-english") pipeline = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0") result = pipeline("Both the music and visual were astounding, not to mention the actors performance.") ```
3_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_infer_gpu_one/#combine-optimizations
.md
It is often possible to combine several of the optimization techniques described above to get the best inference performance possible for your model. For example, you can load a model in 4-bit, and then enable BetterTransformer with FlashAttention: ```py import torch from torch.nn.attention import SDPBackend, sdpa_kernel from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
3_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_infer_gpu_one/#combine-optimizations
.md
# load model in 4-bit quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16 ) tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype="auto", quantization_config=quantization_config) # enable BetterTransformer model = model.to_bettertransformer() input_text = "Hello my dog is cute and" inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
3_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_infer_gpu_one.md
https://huggingface.co/docs/transformers/en/perf_infer_gpu_one/#combine-optimizations
.md
input_text = "Hello my dog is cute and" inputs = tokenizer(input_text, return_tensors="pt").to("cuda") # enable FlashAttention with sdpa_kernel(SDPBackend.FLASH_ATTENTION): outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
3_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
4_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
4_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#efficient-training-on-multiple-cpus
.md
When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP enabling distributed CPU training efficiently on [bare metal](#usage-in-trainer) and [Kubernetes](#usage-with-kubernetes).
4_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#intel-oneccl-bindings-for-pytorch
.md
[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training that implements collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).
4_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#intel-oneccl-bindings-for-pytorch
.md
The `oneccl_bindings_for_pytorch` module (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup. It currently only works on Linux. Check [oneccl_bind_pt](https://github.com/intel/torch-ccl) for more detailed information.
4_2_1
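As a minimal sketch of how the bindings are typically used (not taken from the guide; the rendezvous address, port, and the smoke-test tensor are illustrative), importing the module is what registers the `ccl` backend with `torch.distributed`:

```python
import os

import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401 -- importing registers the "ccl" backend

# Illustrative rendezvous settings; a launcher such as mpirun or torchrun normally provides these
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(
    backend="ccl",
    rank=int(os.environ.get("RANK", "0")),
    world_size=int(os.environ.get("WORLD_SIZE", "1")),
)

# Smoke test: all-reduce a tensor across the CPU processes
t = torch.ones(2)
dist.all_reduce(t)
print(f"rank {dist.get_rank()}: {t}")

dist.destroy_process_group()
```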
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#intel-oneccl-bindings-for-pytorch-installation
.md
Wheel files are available for the following Python versions: | Extension Version | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 | | :---------------: | :--------: | :--------: | :--------: | :---------: | :---------: | | 2.5.0 | | √ | √ | √ | √ | | 2.4.0 | | √ | √ | √ | √ | | 2.3.0 | | √ | √ | √ | √ |
4_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#intel-oneccl-bindings-for-pytorch-installation
.md
| 2.3.0 | | √ | √ | √ | √ | | 2.2.0 | | √ | √ | √ | √ | Please run `pip list | grep torch` to get your `pytorch_version`. ```bash pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu ``` where `{pytorch_version}` should be your PyTorch version, for instance 2.4.0.
4_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#intel-oneccl-bindings-for-pytorch-installation
.md
``` where `{pytorch_version}` should be your PyTorch version, for instance 2.4.0. Check [oneccl_bind_pt installation](https://github.com/intel/torch-ccl) for more installation approaches. Versions of oneCCL and PyTorch must match.
4_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#intel-mpi-library
.md
Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit. oneccl_bindings_for_pytorch is installed along with the MPI tool set, and you need to source the environment before using it. ```bash oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh ```
4_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#intel-extension-for-pytorch-installation
.md
Intel Extension for PyTorch (IPEX) provides performance optimizations for CPU training with both Float32 and BFloat16 (refer to the [single CPU section](./perf_train_cpu) to learn more). The following "Usage in Trainer" takes mpirun in Intel® MPI library as an example.
4_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#usage-in-trainer
.md
To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** to the command arguments. Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering). The following command enables training with 2 processes on one Xeon node, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.
4_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#usage-in-trainer
.md
```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=127.0.0.1 mpirun -n 2 -genv OMP_NUM_THREADS=23 \ python3 examples/pytorch/question-answering/run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex ```
4_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#usage-in-trainer
.md
--max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex ``` The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process). ppn (processes per node) is set to 2, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.
4_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#usage-in-trainer
.md
On node0, you need to create a configuration file that contains the IP address of each node (for example, hostfile) and pass that configuration file path as an argument. ```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` Now, run the following command in node0 and **4DDP** will be enabled in node0 and node1 with BF16 auto mixed precision: ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \
4_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#usage-in-trainer
.md
```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 examples/pytorch/question-answering/run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \
4_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#usage-in-trainer
.md
--max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ```
4_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#usage-with-kubernetes
.md
The same distributed training job from the previous section can be deployed to a Kubernetes cluster using the [Kubeflow PyTorchJob training operator](https://www.kubeflow.org/docs/components/training/user-guides/pytorch).
4_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#setup
.md
This example assumes that you have: * Access to a Kubernetes cluster with [Kubeflow installed](https://www.kubeflow.org/docs/started/installing-kubeflow) * [`kubectl`](https://kubernetes.io/docs/tasks/tools) installed and configured to access the Kubernetes cluster * A [Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes) that can be used to store datasets and model files. There are multiple options for setting up the PVC including using an NFS
4_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#setup
.md
to store datasets and model files. There are multiple options for setting up the PVC including using an NFS [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes) or a cloud storage bucket. * A Docker container that includes your model training script and all the dependencies needed to run the script. For distributed CPU training jobs, this typically includes PyTorch, Transformers, Intel Extension for PyTorch, Intel
4_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#setup
.md
distributed CPU training jobs, this typically includes PyTorch, Transformers, Intel Extension for PyTorch, Intel oneCCL Bindings for PyTorch, and OpenSSH to communicate between the containers. The snippet below is an example of a Dockerfile that uses a base image that supports distributed CPU training and then extracts a Transformers release to the `/workspace` directory, so that the example scripts are included in the image: ```dockerfile FROM intel/intel-optimized-pytorch:2.4.0-pip-multinode
4_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#setup
.md
RUN apt-get update -y && \ apt-get install -y --no-install-recommends --fix-missing \ google-perftools \ libomp-dev WORKDIR /workspace
4_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#setup
.md
WORKDIR /workspace # Download and extract the transformers code ARG HF_TRANSFORMERS_VER="4.46.0" RUN pip install --no-cache-dir \ transformers==${HF_TRANSFORMERS_VER} && \ mkdir transformers && \ curl -sSL --retry 5 https://github.com/huggingface/transformers/archive/refs/tags/v${HF_TRANSFORMERS_VER}.tar.gz | tar -C transformers --strip-components=1 -xzf - ``` The image needs to be built and copied to the cluster's nodes or pushed to a container registry prior to deploying the PyTorchJob to the cluster.
4_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
The [Kubeflow PyTorchJob](https://www.kubeflow.org/docs/components/training/user-guides/pytorch) is used to run the distributed training job on the cluster. The yaml file for the PyTorchJob defines parameters such as: * The name of the PyTorchJob * The number of replicas (workers) * The Python script and its parameters that will be used to run the training job * The types of resources (node selector, memory, and CPU) needed for each worker * The image/tag for the Docker container to use
4_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
* The image/tag for the Docker container to use * Environment variables * A volume mount for the PVC The volume mount defines a path where the PVC will be mounted in the container for each worker pod. This location can be used for the dataset, checkpoint files, and the saved model after training completes. The snippet below is an example of a yaml file for a PyTorchJob with 4 workers running the
4_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
The snippet below is an example of a yaml file for a PyTorchJob with 4 workers running the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering). ```yaml apiVersion: "kubeflow.org/v1" kind: PyTorchJob metadata: name: transformers-pytorchjob spec: elasticPolicy: rdzvBackend: c10d minReplicas: 1 maxReplicas: 4 maxRestarts: 10 pytorchReplicaSpecs: Worker: replicas: 4 # The number of worker pods restartPolicy: OnFailure template: spec:
4_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
maxRestarts: 10 pytorchReplicaSpecs: Worker: replicas: 4 # The number of worker pods restartPolicy: OnFailure template: spec: containers: - name: pytorch image: <image name>:<tag> # Specify the docker image to use for the worker pods imagePullPolicy: IfNotPresent command: ["/bin/bash", "-c"] args: - >- cd /workspace/transformers; pip install -r /workspace/transformers/examples/pytorch/question-answering/requirements.txt;
4_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
- >- cd /workspace/transformers; pip install -r /workspace/transformers/examples/pytorch/question-answering/requirements.txt; source /usr/local/lib/python3.10/dist-packages/oneccl_bindings_for_pytorch/env/setvars.sh; torchrun /workspace/transformers/examples/pytorch/question-answering/run_qa.py \ --model_name_or_path distilbert/distilbert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \
4_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
--do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/pvc-mount/output_$(date +%Y%m%d_%H%M%S) \ --no_cuda \ --ddp_backend ccl \ --bf16 \ --use_ipex; env: - name: LD_PRELOAD value: "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4.5.9:/usr/local/lib/libiomp5.so" - name: TRANSFORMERS_CACHE value: "/tmp/pvc-mount/transformers_cache" - name: HF_DATASETS_CACHE value: "/tmp/pvc-mount/hf_datasets_cache" - name: LOGLEVEL
4_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
value: "/tmp/pvc-mount/transformers_cache" - name: HF_DATASETS_CACHE value: "/tmp/pvc-mount/hf_datasets_cache" - name: LOGLEVEL value: "INFO" - name: CCL_WORKER_COUNT value: "1" - name: OMP_NUM_THREADS # Can be tuned for optimal performance value: "240" resources: limits: cpu: 240 # Update the CPU and memory limit values based on your nodes memory: 128Gi requests: cpu: 240 # Update the CPU and memory request values based on your nodes memory: 128Gi volumeMounts: - name: pvc-volume
4_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
cpu: 240 # Update the CPU and memory request values based on your nodes memory: 128Gi volumeMounts: - name: pvc-volume mountPath: /tmp/pvc-mount - mountPath: /dev/shm name: dshm restartPolicy: Never nodeSelector: # Optionally use nodeSelector to match a certain node label for the worker pods node-type: gnr volumes: - name: pvc-volume persistentVolumeClaim: claimName: transformers-pvc - name: dshm emptyDir: medium: Memory ```
4_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
volumes: - name: pvc-volume persistentVolumeClaim: claimName: transformers-pvc - name: dshm emptyDir: medium: Memory ``` To run this example, update the yaml based on your training script and the nodes in your cluster. <Tip> The CPU resource limits/requests in the yaml are defined in [cpu units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu)
4_9_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
[cpu units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu) where 1 CPU unit is equivalent to 1 physical CPU core or 1 virtual core (depending on whether the node is a physical host or a VM). The amount of CPU and memory limits/requests defined in the yaml should be less than the amount of available CPU/memory capacity on a single machine. It is usually a good idea to not use the entire machine's capacity in
4_9_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#pytorchjob-specification-file
.md
available CPU/memory capacity on a single machine. It is usually a good idea to not use the entire machine's capacity in order to leave some resources for the kubelet and OS. In order to get ["guaranteed"](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#guaranteed) [quality of service](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod) for the worker pods, set the same CPU and memory amounts for both the resource limits and requests. </Tip>
4_9_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#deploy
.md
After the PyTorchJob spec has been updated with values appropriate for your cluster and training job, it can be deployed to the cluster using: ```bash export NAMESPACE=<specify your namespace>
4_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#deploy
.md
kubectl create -f pytorchjob.yaml -n ${NAMESPACE} ``` The `kubectl get pods -n ${NAMESPACE}` command can then be used to list the pods in your namespace. You should see the worker pods for the PyTorchJob that was just deployed. At first, they will probably have a status of "Pending" as the containers get pulled and created, then the status should change to "Running". ``` NAME READY STATUS RESTARTS AGE ...
4_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#deploy
.md
``` NAME READY STATUS RESTARTS AGE ... transformers-pytorchjob-worker-0 1/1 Running 0 7m37s transformers-pytorchjob-worker-1 1/1 Running 0 7m37s transformers-pytorchjob-worker-2 1/1 Running 0 7m37s
4_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#deploy
.md
transformers-pytorchjob-worker-2 1/1 Running 0 7m37s transformers-pytorchjob-worker-3 1/1 Running 0 7m37s ... ``` The logs for worker can be viewed using `kubectl logs <pod name> -n ${NAMESPACE}`. Add `-f` to stream the logs, for example: ```bash kubectl logs transformers-pytorchjob-worker-0 -n ${NAMESPACE} -f ```
4_10_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#deploy
.md
```bash kubectl logs transformers-pytorchjob-worker-0 -n ${NAMESPACE} -f ``` After the training job completes, the trained model can be copied from the PVC or storage location. When you are done with the job, the PyTorchJob resource can be deleted from the cluster using `kubectl delete -f pytorchjob.yaml -n ${NAMESPACE}`.
4_10_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/perf_train_cpu_many.md
https://huggingface.co/docs/transformers/en/perf_train_cpu_many/#summary
.md
This guide covered running distributed PyTorch training jobs using multiple CPUs on bare metal and on a Kubernetes cluster. Both cases utilize Intel Extension for PyTorch and Intel oneCCL Bindings for PyTorch for optimal training performance, and can be used as a template to run your own workload on multiple nodes.
4_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
5_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
5_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special **padding token** to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences.
5_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: `padding`, `truncation` and `max_length`. The `padding` argument controls padding. It can be a boolean or a string: - `True` or `'longest'`: pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence).
5_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
a single sequence). - `'max_length'`: pad to a length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). Padding will still be applied if you only provide a single sequence. - `False` or `'do_not_pad'`: no padding is applied. This is the default behavior. The `truncation` argument controls truncation. It can be a boolean or a string:
5_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
The `truncation` argument controls truncation. It can be a boolean or a string: - `True` or `'longest_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached. - `'only_second'`: truncate to a maximum length specified by the `max_length` argument or the maximum
5_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
reached. - `'only_second'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided. - `'only_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum
5_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
- `'only_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided. - `False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior.
5_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
- `False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior. The `max_length` argument controls the length of the padding and truncation. It can be an integer or `None`, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to `max_length` is deactivated.
5_1_6
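As a quick illustration of how these three arguments interact (a minimal sketch; the checkpoint and example sentences are chosen only for illustration):

```python
from transformers import AutoTokenizer

# Any checkpoint works here; this one is used purely as an example
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]

# Pad to the longest sequence in the batch; no truncation by default
batch_longest = tokenizer(batch_sentences, padding=True)

# Pad and truncate every sequence to a fixed length of 16 tokens
batch_fixed = tokenizer(batch_sentences, padding="max_length", truncation=True, max_length=16)

print([len(ids) for ids in batch_longest["input_ids"]])  # all equal to the longest sequence in the batch
print([len(ids) for ids in batch_fixed["input_ids"]])    # all equal to 16
```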
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
The following table summarizes the recommended way to set up padding and truncation. If you use pairs of input sequences in any of the following examples, you can replace `truncation=True` with a `STRATEGY` selected from `['only_first', 'only_second', 'longest_first']`, i.e. `truncation='only_second'` or `truncation='longest_first'`, to control how both sequences in the pair are truncated as detailed before.
5_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| Truncation | Padding | Instruction | |--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------| | no truncation | no padding | `tokenizer(batch_sentences)` |
5_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True)` or | | | | `tokenizer(batch_sentences, padding='longest')` | | | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')` |
5_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| | padding to specific length | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | | | padding to a multiple of a value | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)` | | truncation to max model input length | no padding | `tokenizer(batch_sentences, truncation=True)` or |
5_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| | | `tokenizer(batch_sentences, truncation=STRATEGY)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True)` or | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` |
5_1_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | | | padding to specific length | Not possible |
5_1_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| truncation to specific length | no padding | `tokenizer(batch_sentences, truncation=True, max_length=42)` or | | | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or |
5_1_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | | | padding to max model input length | Not possible | | | padding to specific length | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or |
5_1_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/pad_truncation.md
https://huggingface.co/docs/transformers/en/pad_truncation/#padding-and-truncation
.md
| | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
5_1_15
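For sequence pairs, the `STRATEGY` values above control which member of the pair gets truncated. A minimal sketch (the checkpoint, texts, and `max_length` are illustrative only):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France. " * 20  # deliberately longer than max_length

# Truncate only the second sequence (the context), leaving the question intact
encoded = tokenizer(question, context, truncation="only_second", max_length=64)
print(len(encoded["input_ids"]))  # 64
```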
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
6_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
6_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#best-practices-for-generation-with-cache
.md
Efficient caching is crucial for optimizing the performance of models in various generative tasks, including text generation, translation, summarization and other transformer-based applications. Effective caching helps reduce computation time and improve response rates, especially in real-time or resource-intensive applications. Transformers supports various caching methods, leveraging "Cache" classes to abstract and manage the caching logic.
6_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#best-practices-for-generation-with-cache
.md
Transformers supports various caching methods, leveraging "Cache" classes to abstract and manage the caching logic. This document outlines best practices for using these classes to maximize performance and efficiency. Check out all the available `Cache` classes in the [API documentation](./internal/generation_utils).
6_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#what-is-cache-and-why-we-should-care
.md
Imagine you’re having a conversation with someone, and instead of remembering what was said previously, you have to start from scratch every time you respond. This would be slow and inefficient, right? In the world of Transformer models, a similar concept applies, and that's where caching keys and values comes into play. From now on, I'll refer to the concept as the KV cache.
6_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#what-is-cache-and-why-we-should-care
.md
KV cache is needed to optimize the generation in autoregressive models, where the model predicts text token by token. This process can be slow since the model can generate only one token at a time, and each new prediction is dependent on the previous context. That means, to predict token number 1000 in the generation, you need information from the previous 999 tokens, which comes in the form of some matrix multiplications across the representations of those tokens. But to predict token number 1001, you
6_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#what-is-cache-and-why-we-should-care
.md
in the form of some matrix multiplications across the representations of those tokens. But to predict token number 1001, you also need the same information from the first 999 tokens, plus additional information from token number 1000. That is where key-value cache is used to optimize the sequential generation process by storing previous calculations to reuse in subsequent tokens, so they don't need to be computed again.
6_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#what-is-cache-and-why-we-should-care
.md
More concretely, the key-value cache acts as a memory bank for these generative models, where the model stores key-value pairs derived from self-attention layers for previously processed tokens. By storing this information, the model can avoid redundant computations and instead retrieve keys and values of previous tokens from the cache. Note that caching can be used only during inference and should be disabled during training; otherwise it might cause unexpected errors. <details>
6_2_3
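As a small sketch of this behavior (the checkpoint is chosen only for illustration), a forward pass with `use_cache=True` returns the stored key-value pairs, while `use_cache=False` skips them, which is what you want during training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")

with torch.no_grad():
    with_cache = model(**inputs, use_cache=True)
    without_cache = model(**inputs, use_cache=False)

print(with_cache.past_key_values is not None)   # True: keys and values are stored for reuse
print(without_cache.past_key_values is None)    # True: nothing is cached
```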
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#what-is-cache-and-why-we-should-care
.md
<details> <summary><em>For the Curious Minds Who Like to Dive Deep</em></summary>
6_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
When utilizing a cache object in the input, the Attention module performs several critical steps to integrate past and present information seamlessly.
6_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
The Attention module concatenates the current key-values with the past key-values stored in the cache. This results in attention weights of shape `(new_tokens_length, past_kv_length + new_tokens_length)`. Essentially, the past and current key-values are combined to compute attention scores, ensuring that the model considers both previous context and new input. The concatenated key-values are used to compute the attention scores resulting in attention weights of shape `(new_tokens_length, past_kv_length +
6_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
are used to compute the attention scores resulting in attention weights of shape `(new_tokens_length, past_kv_length + new_tokens_length)`.
6_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
Therefore, when iteratively calling `forward()` instead of the `generate()` method, it’s crucial to ensure that the attention mask shape matches the combined length of past and current key-values. The attention mask should have the shape `(batch_size, past_kv_length + new_tokens_length)`. This is usually handled internally when you call the `generate()` method. If you want to implement your own generation loop with Cache classes, take this into consideration and prepare the attention mask to hold values for
6_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
your own generation loop with Cache classes, take this into consideration and prepare the attention mask to hold values for both current and past tokens.
6_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
<Tip warning={true}>
6_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
One important concept you need to know when writing your own generation loop is `cache_position`. If you want to reuse an already filled Cache object by calling `forward()`, you have to pass in a valid `cache_position`, which indicates the positions of the inputs in the sequence. Note that `cache_position` is not affected by padding, and always adds one more position for each token. For example, if the key/value cache contains 10 tokens (no matter how many of them are pad tokens), the cache position for the
6_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
token. For example, if the key/value cache contains 10 tokens (no matter how many of them are pad tokens), the cache position for the next token should be `torch.tensor([10])`.
6_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
</Tip> See an example below for how to implement your own generation loop. ```python >>> import torch >>> from transformers import AutoTokenizer, AutoModelForCausalLM, DynamicCache
6_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
>>> model_id = "meta-llama/Llama-2-7b-chat-hf" >>> model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda:0") >>> tokenizer = AutoTokenizer.from_pretrained(model_id) >>> past_key_values = DynamicCache() >>> messages = [{"role": "user", "content": "Hello, what's your name."}] >>> inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda:0")
6_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
>>> generated_ids = inputs.input_ids >>> cache_position = torch.arange(inputs.input_ids.shape[1], dtype=torch.int64, device="cuda:0") >>> max_new_tokens = 10
6_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
>>> for _ in range(max_new_tokens): ... outputs = model(**inputs, cache_position=cache_position, past_key_values=past_key_values, use_cache=True) ... # Greedily sample one next token ... next_token_ids = outputs.logits[:, -1:].argmax(-1) ... generated_ids = torch.cat([generated_ids, next_token_ids], dim=-1) ... ... # Prepare inputs for the next generation step by leaving unprocessed tokens, in our case we have only one new token
6_3_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
... # Prepare inputs for the next generation step by leaving unprocessed tokens, in our case we have only one new token ... # and expanding attn mask for the new token, as explained above ... attention_mask = inputs["attention_mask"] ... attention_mask = torch.cat([attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1) ... inputs = {"input_ids": next_token_ids, "attention_mask": attention_mask}
6_3_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
... inputs = {"input_ids": next_token_ids, "attention_mask": attention_mask} ... cache_position = cache_position[-1:] + 1 # add one more position for the next token
6_3_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#under-the-hood-how-cache-object-works-in-attention-mechanism
.md
>>> print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]) "[INST] Hello, what's your name. [/INST] Hello! My name is LLaMA," ``` </details>
6_3_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#generate-with-cache
.md
In 🤗 Transformers, we support various Cache types to optimize the performance across different models and tasks. By default, all models generate with caching, with the [`~DynamicCache`] class being the default cache for most models. It allows us to grow the cache size dynamically by saving more and more keys and values as we generate. If for some reason you don't want to use caches, you can pass `use_cache=False` to the `generate()` method.
6_4_0
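For instance (a minimal sketch; the checkpoint and prompt are illustrative), caching is on by default and can be switched off explicitly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")

# Default: a DynamicCache is created and grown internally as tokens are generated
out_cached = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)

# Caching disabled: every step recomputes attention over the full sequence, which is slower
out_uncached = model.generate(**inputs, max_new_tokens=20, use_cache=False, pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(out_cached[0], skip_special_tokens=True))
```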
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#generate-with-cache
.md
Refer to the table below to see the difference between cache types and choose the one that best suits your use case. Models for which initialization is recommended should be initialized before calling the model and passed to the model as a kwarg. In all other cases you can simply define the desired `cache_implementation` and we take care of the rest for you. | Cache Type | Memory Efficient | Supports torch.compile() | Initialization Recommended | Latency | Long Context Generation |
6_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#generate-with-cache
.md
|------------------------|------------------|--------------------------|----------------------------|---------|-------------------------| | Dynamic Cache | No | No | No | Mid | No | | Static Cache | No | Yes | Yes | High | No |
6_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#generate-with-cache
.md
| Offloaded Cache | Yes | No | No | Low | Yes | | Offloaded Static Cache | No | Yes | Yes | High | Yes | | Quantized Cache | Yes | No | No | Low | Yes |
6_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#generate-with-cache
.md
| Sliding Window Cache | No | Yes | Yes | High | No | | Sink Cache | Yes | No | Yes | Mid | Yes |
6_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#generate-with-cache
.md
These cache classes can be set with a `cache_implementation` argument when generating. To learn about the available options for the `cache_implementation` flag, please refer to the [API Documentation](./main_classes/text_generation#transformers.GenerationConfig). Now, let's explore each cache type in detail and see how to use them. Note that the examples below are for decoder-only Transformer-based models. We also support ["Model-Specific Cache"] classes for models such as Mamba or Jamba, keep reading for
6_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#generate-with-cache
.md
Transformer-based models. We also support ["Model-Specific Cache"] classes for models such as Mamba or Jamba, keep reading for more details.
6_4_6
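As a sketch of both approaches (the checkpoint and cache sizes are illustrative, and the exact set of supported implementations depends on your `transformers` version), you can either pass a string via `cache_implementation` or initialize a cache object yourself and hand it to `generate()` as the `past_key_values` kwarg:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, SinkCache

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="cuda:0")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)

# Option 1: let generate() build the cache from a string identifier
out = model.generate(**inputs, max_new_tokens=20, cache_implementation="static")

# Option 2: initialize a cache yourself and pass it in as the past_key_values kwarg
past_key_values = SinkCache(window_length=256, num_sink_tokens=4)
out = model.generate(**inputs, max_new_tokens=20, past_key_values=past_key_values)

print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```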
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache
.md
The key and value cache can occupy a large portion of memory, becoming a [bottleneck for long-context generation](https://huggingface.co/blog/llama31#inference-memory-requirements), especially for Large Language Models. Quantizing the cache when using `generate()` can significantly reduce memory requirements at the cost of speed.
6_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache
.md
Quantizing the cache when using `generate()` can significantly reduce memory requirements at the cost of speed. KV Cache quantization in `transformers` is largely inspired by the paper ["KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache"](https://arxiv.org/abs/2402.02750) and currently supports [`~QuantoQuantizedCache`] and [`~HQQQuantizedCache`] classes. For more information on the inner workings see the paper.
6_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache
.md
To enable quantization of the key-value cache, one needs to indicate `cache_implementation="quantized"` in the `generation_config`. Quantization related arguments should be passed to the `generation_config` either as a `dict` or an instance of a [`~QuantizedCacheConfig`] class. One has to indicate which quantization backend to use in the [`~QuantizedCacheConfig`], the default is `quanto`.
6_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache
.md
One has to indicate which quantization backend to use in the [`~QuantizedCacheConfig`], the default is `quanto`. It is recommended to set `axis-key/axis-value` parameters in the cache config to `0` if you're using the `quanto` backend and to `1` if you're using the `HQQ` backend. For other config values, please use the defaults unless you're running out of memory. In that case, you may consider decreasing the residual length. <Tip warning={true}>
6_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache
.md
<Tip warning={true}> Cache quantization can be detrimental in terms of latency if the context length is short and there is enough GPU VRAM available to run without cache quantization. It is recommended to seek a balance between memory efficiency and latency. </Tip> ```python >>> import torch >>> from transformers import AutoTokenizer, AutoModelForCausalLM
6_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md
https://huggingface.co/docs/transformers/en/kv_cache/#quantized-cache
.md
>>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16).to("cuda:0") >>> inputs = tokenizer("I like rock music because", return_tensors="pt").to(model.device)
6_5_5
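The excerpt above stops right after preparing `inputs`. As a hedged sketch of how the call typically continues (argument values are illustrative), the quantized cache is requested through `cache_implementation` together with a backend-specific `cache_config`:

```python
# Continues from the model, tokenizer, and inputs defined in the snippet above (a sketch, not the guide's exact text)
out = model.generate(
    **inputs,
    do_sample=False,
    max_new_tokens=20,
    cache_implementation="quantized",
    cache_config={"backend": "quanto", "nbits": 4},
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```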