| Notebook | Description | Author | |
|:----------|:-------------|:-------------|------:|
|[Classify text with DistilBERT and Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | How to fine-tune DistilBERT for text classification in TensorFlow | [Peter Bayerle](https://github.com/peterbayerle) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)|
|[Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | How to warm-start an *EncoderDecoderModel* with a *google-bert/bert-base-uncased* checkpoint for summarization on CNN/Dailymail | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)|
|[Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | How to warm-start a shared *EncoderDecoderModel* with a *FacebookAI/roberta-base* checkpoint for summarization on BBC/XSum | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)|
|[Fine-tune TAPAS on Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | How to fine-tune *TapasForQuestionAnswering* with a *tapas-base* checkpoint on the Sequential Question Answering (SQA) dataset | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)|
|[Evaluate TAPAS on Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | How to evaluate a fine-tuned *TapasForSequenceClassification* with a *tapas-base-finetuned-tabfact* checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)|
|[Fine-tuning mBART for translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)|
|[Fine-tune LayoutLM on FUNSD (a form understanding dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | How to fine-tune *LayoutLMForTokenClassification* on the FUNSD dataset for information extraction from scanned documents | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)|
|[Fine-Tune DistilGPT2 and Generate Text](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | How to fine-tune DistilGPT2 and generate text | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)|
|[Fine-Tune LED on up to 8K tokens](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | How to fine-tune LED on PubMed for long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)|
|[Evaluate LED on Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | How to effectively evaluate LED on long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)|
|[Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | How to fine-tune *LayoutLMForSequenceClassification* on the RVL-CDIP dataset for scanned document classification | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)|
|[Wav2Vec2 CTC decoding with GPT2 adjustment](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | How to decode a CTC sequence with language model adjustment | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)|
|[Fine-tune BART for summarization in two languages with Trainer class](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | How to fine-tune BART for summarization in two languages with the Trainer class | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)|
|[Evaluate Big Bird on Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | How to evaluate BigBird on long document question answering on Trivia QA | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)|
|[Create video captions using Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | How to create YouTube captions from any video by transcribing the audio with Wav2Vec | [Niklas Muennighoff](https://github.com/Muennighoff) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb)|
|[Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb)|
|[Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb)|
|[Evaluate LUKE on Open Entity, an entity typing dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | How to evaluate *LukeForEntityClassification* on the Open Entity dataset | [Ikuya Yamada](https://github.com/ikuyamada) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb)|
|[Evaluate LUKE on TACRED, a relation extraction dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | How to evaluate *LukeForEntityPairClassification* on the TACRED dataset | [Ikuya Yamada](https://github.com/ikuyamada) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb)|
|[Evaluate LUKE on CoNLL-2003, an important NER benchmark](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | How to evaluate *LukeForEntitySpanClassification* on the CoNLL-2003 dataset | [Ikuya Yamada](https://github.com/ikuyamada) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb)|
|[Evaluate BigBird-Pegasus on PubMed dataset](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | How to evaluate *BigBirdPegasusForConditionalGeneration* on the PubMed dataset | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb)|
|[Speech Emotion Classification with Wav2Vec2](https://github.com/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb)|
|[Detect objects in an image with DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | How to use a trained *DetrForObjectDetection* model to detect objects in an image and visualize attention | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb)|
|[Fine-tune DETR on a custom object detection dataset](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | How to fine-tune *DetrForObjectDetection* on a custom object detection dataset | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb)|
|[Finetune T5 for Named Entity Recognition](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | How to fine-tune *T5* on a Named Entity Recognition task | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing)|
|[Fine-Tuning Open-Source LLM using QLoRA with MLflow and PEFT](https://github.com/mlflow/mlflow/blob/master/docs/source/llms/transformers/tutorials/fine-tuning/transformers-peft.ipynb) | How to use [QLoRA](https://github.com/artidoro/qlora) and [PEFT](https://huggingface.co/docs/peft/en/index) to fine-tune an LLM in a memory-efficient way, while using [MLflow](https://mlflow.org/docs/latest/llms/transformers/index.html) to manage experiment tracking | [Yuki Watanabe](https://github.com/B-Step62) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mlflow/mlflow/blob/master/docs/source/llms/transformers/tutorials/fine-tuning/transformers-peft.ipynb)|
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Troubleshoot

Sometimes errors occur, but we are here to help! This guide covers some of the most common issues we've seen and how you can resolve them. However, this guide isn't meant to be a comprehensive collection of every 🤗 Transformers issue. For more help with troubleshooting your issue, try:

<Youtube id="S2EEG3JIt2A"/>

1. Ask for help on the [forums](https://discuss.huggingface.co/). There are specific categories you can post your question to, like [Beginners](https://discuss.huggingface.co/c/beginners/5) or [🤗 Transformers](https://discuss.huggingface.co/c/transformers/9). Make sure you write a good descriptive forum post with some reproducible code to maximize the likelihood that your problem is solved!

<Youtube id="_PAli-V4wj0"/>

2. Create an [Issue](https://github.com/huggingface/transformers/issues/new/choose) on the 🤗 Transformers repository if it is a bug related to the library. Try to include as much information describing the bug as possible to help us better figure out what's wrong and how we can fix it.

3. Check the [Migration](migration) guide if you use an older version of 🤗 Transformers, since some important changes have been introduced between versions.

For more details about troubleshooting and getting help, take a look at [Chapter 8](https://huggingface.co/course/chapter8/1?fw=pt) of the Hugging Face course.
## Firewalled environments

Some GPU instances on cloud and intranet setups are firewalled to external connections, resulting in a connection error. When your script attempts to download model weights or datasets, the download will hang and then timeout with the following message:

```
ValueError: Connection error, and we cannot find the requested files in the cached path.
Please try again or make sure your Internet connection is on.
```

In this case, you should try to run 🤗 Transformers on [offline mode](installation#offline-mode) to avoid the connection error.
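
As a quick illustration, and assuming the model and dataset files are already in your local cache, offline mode can be enabled through environment variables before the script starts (`my_training_script.py` below is a placeholder for your own script):

```bash
# Run entirely from the local cache; no network calls are attempted.
# TRANSFORMERS_OFFLINE covers model/tokenizer files, HF_DATASETS_OFFLINE covers 🤗 Datasets.
TRANSFORMERS_OFFLINE=1 HF_DATASETS_OFFLINE=1 python my_training_script.py
```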
## CUDA out of memory

Training large models with millions of parameters can be challenging without the appropriate hardware. A common error you may encounter when the GPU runs out of memory is:

```
CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch)
```

Here are some potential solutions you can try to lessen memory use:

- Reduce the [`per_device_train_batch_size`](main_classes/trainer#transformers.TrainingArguments.per_device_train_batch_size) value in [`TrainingArguments`].
- Try using [`gradient_accumulation_steps`](main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps) in [`TrainingArguments`] to effectively increase overall batch size (a sketch combining both options follows the tip below).

<Tip>

Refer to the Performance [guide](performance) for more details about memory-saving techniques.

</Tip>
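
For illustration, here is a minimal sketch of both options combined; the output directory and the step values are placeholders, not tuned recommendations:

```py
>>> from transformers import TrainingArguments

>>> # A smaller per-device batch compensated by gradient accumulation:
>>> # the effective batch size is 4 * 8 = 32, while only 4 samples occupy GPU memory at once.
>>> training_args = TrainingArguments(
...     output_dir="my_model",  # placeholder path
...     per_device_train_batch_size=4,
...     gradient_accumulation_steps=8,
... )
```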
## Unable to load a saved TensorFlow model

TensorFlow's [model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) method will save the entire model - architecture, weights, training configuration - in a single file. However, when you load the model file again, you may run into an error because 🤗 Transformers may not load all the TensorFlow-related objects in the model file. To avoid issues with saving and loading TensorFlow models, we recommend you:

- Save the model weights as an `h5` file with [`model.save_weights`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) and then reload the model with [`~TFPreTrainedModel.from_pretrained`]:

```py
>>> from transformers import TFPreTrainedModel
>>> from tensorflow import keras

>>> # `model` here is an existing 🤗 Transformers TensorFlow model instance
>>> model.save_weights("some_folder/tf_model.h5")
>>> model = TFPreTrainedModel.from_pretrained("some_folder")
```

- Save the model with [`~TFPreTrainedModel.save_pretrained`] and load it again with [`~TFPreTrainedModel.from_pretrained`]:

```py
>>> from transformers import TFPreTrainedModel

>>> model.save_pretrained("path_to/model")
>>> model = TFPreTrainedModel.from_pretrained("path_to/model")
```
## ImportError

Another common error you may encounter, especially if it is a newly released model, is `ImportError`:

```
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
```

For these error types, check to make sure you have the latest version of 🤗 Transformers installed to access the most recent models:

```bash
pip install transformers --upgrade
```
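
To confirm which version you are actually running (for example, to make sure the upgrade took effect in the right environment), a quick check is:

```bash
# Print the version of transformers that Python actually imports
python -c "import transformers; print(transformers.__version__)"
```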
## CUDA error: device-side assert triggered

Sometimes you may run into a generic CUDA error about an error in the device code.

```
RuntimeError: CUDA error: device-side assert triggered
```

You should try to run the code on a CPU first to get a more descriptive error message. Add the following environment variable to the beginning of your code to switch to a CPU:

```py
>>> import os

>>> os.environ["CUDA_VISIBLE_DEVICES"] = ""
```

Another option is to get a better traceback from the GPU. Add the following environment variable to the beginning of your code to get the traceback to point to the source of the error:

```py
>>> import os

>>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```
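
Since these variables are only read when CUDA initializes, they must be set before any GPU work happens. One alternative sketch is to set them from the shell when launching the script (`my_script.py` is a placeholder):

```bash
# Equivalent effect, guaranteed to apply before Python or CUDA starts
CUDA_LAUNCH_BLOCKING=1 python my_script.py
```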
## Incorrect output when padding tokens aren't masked

In some cases, the output `hidden_state` may be incorrect if the `input_ids` include padding tokens. To demonstrate, load a model and tokenizer. You can access a model's `pad_token_id` to see its value. The `pad_token_id` may be `None` for some models, but you can always manually set it.

```py
>>> from transformers import AutoModelForSequenceClassification
>>> import torch

>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
>>> model.config.pad_token_id
0
```

The following example shows the output without masking the padding tokens:

```py
>>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
        [ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>)
```

Here is the actual output of the second sequence:

```py
>>> input_ids = torch.tensor([[7592]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
```

Most of the time, you should provide an `attention_mask` to your model to ignore the padding tokens and avoid this silent error. Now the output of the second sequence matches its actual output:

<Tip>

By default, the tokenizer creates an `attention_mask` for you based on your specific tokenizer's defaults.

</Tip>

```py
>>> # Restore the padded batch from the first example
>>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]])
>>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]])
>>> output = model(input_ids, attention_mask=attention_mask)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
        [-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
```

🤗 Transformers doesn't automatically create an `attention_mask` to mask a padding token if it is provided because:

- Some models don't have a padding token.
- For some use-cases, users want a model to attend to a padding token.
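
To illustrate the tip above: when you tokenize the raw strings with `padding=True`, the tokenizer builds the matching `attention_mask` for you. This is a sketch; the exact ids and padded lengths depend on the tokenizer you load:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> batch = tokenizer(["hello we will help you", "hello"], padding=True, return_tensors="pt")
>>> batch["attention_mask"]  # zeros mark the padding added to the shorter sequence
tensor([[1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 0, 0, 0, 0]])
```

Passing `model(**batch)` then applies the mask automatically, reproducing the corrected behavior shown above.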
## ValueError: Unrecognized configuration class XYZ for this kind of AutoModel

Generally, we recommend using the [`AutoModel`] class to load pretrained instances of models. This class can automatically infer and load the correct architecture from a given checkpoint based on the configuration. If you see this `ValueError` when loading a model from a checkpoint, this means the Auto class couldn't find a mapping from the configuration in the given checkpoint to the kind of model you are trying to load. Most commonly, this happens when a checkpoint doesn't support a given task.

For instance, you'll see this error in the following example because there is no GPT2 for question answering:

```py
>>> from transformers import AutoProcessor, AutoModelForQuestionAnswering

>>> processor = AutoProcessor.from_pretrained("openai-community/gpt2-medium")
>>> model = AutoModelForQuestionAnswering.from_pretrained("openai-community/gpt2-medium")
ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ...
```
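
The fix is to load the checkpoint with an Auto class that matches a task the architecture actually supports. GPT-2, for example, is a causal language model, so the following minimal sketch works:

```py
>>> from transformers import AutoModelForCausalLM

>>> # GPT2Config has a mapping for causal language modeling, so this succeeds
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-medium")
```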
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
## Export to ONNX

Deploying 🤗 Transformers models in production environments often requires, or can benefit from, exporting the models into a serialized format that can be loaded and executed on specialized runtimes and hardware.

🤗 Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to serialized formats such as ONNX and TFLite through its `exporters` module. 🤗 Optimum also provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.

This guide demonstrates how you can export 🤗 Transformers models to ONNX with 🤗 Optimum; for the guide on exporting models to TFLite, please refer to the [Export to TFLite page](tflite).

[ONNX (Open Neural Network eXchange)](http://onnx.ai) is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an _intermediate representation_) which represents the flow of data through the neural network.

By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa).

Once exported to ONNX format, a model can be:

- optimized for inference via techniques such as [graph optimization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization) and [quantization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization).
- run with ONNX Runtime via [`ORTModelForXXX` classes](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort), which follow the same `AutoModel` API as the one you are used to in 🤗 Transformers.
- run with [optimized inference pipelines](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines), which have the same API as the [`pipeline`] function in 🤗 Transformers.

🤗 Optimum provides support for the ONNX export by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures. For the list of ready-made configurations, please refer to the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/onnx/overview).

There are two ways to export a 🤗 Transformers model to ONNX; we show both here:

- export with 🤗 Optimum via CLI.
- export with 🤗 Optimum with `optimum.onnxruntime`.
## Exporting a 🤗 Transformers model to ONNX with CLI

To export a 🤗 Transformers model to ONNX, first install an extra dependency:

```bash
pip install optimum[exporters]
```

To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli), or view help in the command line:

```bash
optimum-cli export onnx --help
```

To export a model's checkpoint from the 🤗 Hub, for example, `distilbert/distilbert-base-uncased-distilled-squad`, run the following command:

```bash
optimum-cli export onnx --model distilbert/distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/
```

You should see logs indicating progress and showing where the resulting `model.onnx` is saved, like this:

```bash
Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx...
	-[✓] ONNX model output names match reference model (start_logits, end_logits)
	- Validating ONNX Model output "start_logits":
		-[✓] (2, 16) matches (2, 16)
		-[✓] all values close (atol: 0.0001)
	- Validating ONNX Model output "end_logits":
		-[✓] (2, 16) matches (2, 16)
		-[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx
```

The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using the CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on the 🤗 Hub and provide the `--task` argument. You can review the list of supported tasks in the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/task_manager). If the `task` argument is not provided, it will default to the model architecture without any task-specific head.

```bash
optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/
```

The resulting `model.onnx` file can then be run on one of the [many accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX standard. For example, we can load and run the model with [ONNX Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)
```
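
To turn those logits into a readable answer, a minimal post-processing sketch is to take the highest-scoring start and end positions and decode the span between them (robustness checks that a production pipeline would add, such as handling `start > end`, are omitted here):

```python
>>> import torch

>>> # Pick the most likely start/end token positions and decode the tokens in between
>>> start = torch.argmax(outputs.start_logits, dim=-1).item()
>>> end = torch.argmax(outputs.end_logits, dim=-1).item()
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```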
The process is identical for TensorFlow checkpoints on the Hub. For instance, here's how you would export a pure TensorFlow checkpoint from the [Keras organization](https://huggingface.co/keras-io):

```bash
optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/
```
## Exporting a 🤗 Transformers model to ONNX with `optimum.onnxruntime`

As an alternative to the CLI, you can export a 🤗 Transformers model to ONNX programmatically like so:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> model_checkpoint = "distilbert_base_uncased_squad"
>>> save_directory = "onnx/"

>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

>>> # Save the onnx model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
```
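
Once exported, a short sketch of reloading the ONNX model and tokenizer from `save_directory` with the same `from_pretrained` API:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> # Reload the exported ONNX model and tokenizer for inference
>>> ort_model = ORTModelForSequenceClassification.from_pretrained("onnx/")
>>> tokenizer = AutoTokenizer.from_pretrained("onnx/")
```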
## Exporting a model for an unsupported architecture

If you wish to contribute by adding support for a model that cannot be currently exported, you should first check if it is supported in [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview), and if it is not, [contribute to 🤗 Optimum](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute) directly.
## Exporting a model with `transformers.onnx`

<Tip warning={true}>

`transformers.onnx` is no longer maintained; please export models with 🤗 Optimum as described above. This section will be removed in future versions.

</Tip>

To export a 🤗 Transformers model to ONNX with `transformers.onnx`, install extra dependencies:

```bash
pip install transformers[onnx]
```

Use the `transformers.onnx` package as a Python module to export a checkpoint using a ready-made configuration:

```bash
python -m transformers.onnx --model=distilbert/distilbert-base-uncased onnx/
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. Pass any checkpoint on the 🤗 Hub or one that's stored locally. The resulting `model.onnx` file can then be run on one of the many accelerators that support the ONNX standard. For example, load and run the model with ONNX Runtime as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (like `["last_hidden_state"]`) can be obtained by taking a look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, export a pure TensorFlow checkpoint like so:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, save the model's weights and tokenizer files in the same directory (e.g. `local-pt-checkpoint`), then export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Fine-tune a pretrained model

[[open-in-colab]]

There are significant benefits to using a pretrained model. It reduces computation costs and your carbon footprint, and it allows you to use state-of-the-art models without having to train one from scratch. 🤗 Transformers provides access to thousands of pretrained models for a wide range of tasks. When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice:

* Fine-tune a pretrained model with 🤗 Transformers [`Trainer`].
* Fine-tune a pretrained model in TensorFlow with Keras.
* Fine-tune a pretrained model in native PyTorch.

<a id='data-processing'></a>

## Prepare a dataset

<Youtube id="_BZearw7f0w"/>

Before you can fine-tune a pretrained model, download a dataset and prepare it for training. The previous tutorial showed you how to process data for training, and now you get an opportunity to put those skills to the test!

Begin by loading the [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full) dataset:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("yelp_review_full")
```