<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# BertGeneration

## Overview

The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
[`EncoderDecoderModel`], as proposed in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.

The abstract from the paper is the following:

*Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing. By
warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple
benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language
Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT,
GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both
encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation,
Text Summarization, Sentence Splitting, and Sentence Fusion.*

## Usage examples and tips

- The model can be used in combination with [`EncoderDecoderModel`] to leverage two pretrained BERT checkpoints for
  subsequent fine-tuning:

```python
>>> # leverage checkpoints for Bert2Bert model...
>>> # use BERT's cls token as BOS token and sep token as EOS token
>>> encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102)
>>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
>>> decoder = BertGenerationDecoder.from_pretrained(
...     "google-bert/bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
... )
>>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)

>>> # create tokenizer...
>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")

>>> input_ids = tokenizer(
...     "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids

>>> # train...
>>> loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
>>> loss.backward()
```

- Pretrained [`EncoderDecoderModel`] checkpoints are also directly available on the Model Hub:

```python
>>> # instantiate sentence fusion model
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

>>> input_ids = tokenizer(
...     "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids

>>> outputs = sentence_fuser.generate(input_ids)

>>> print(tokenizer.decode(outputs[0]))
```

Tips:

- [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with [`EncoderDecoderModel`].
- For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the
  input. Therefore, no EOS token should be added to the end of the input.

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
found [here](https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder). A generation sketch with the
fine-tuned model is shown at the end of this page.

## BertGenerationConfig

[[autodoc]] BertGenerationConfig

## BertGenerationTokenizer

[[autodoc]] BertGenerationTokenizer
    - save_vocabulary

## BertGenerationEncoder

[[autodoc]] BertGenerationEncoder
    - forward

## BertGenerationDecoder

[[autodoc]] BertGenerationDecoder
    - forward
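The following is a minimal sketch, not part of the original examples, of how generation could be run with the
`bert2bert` model from the training example above. The token ids and `max_length` are assumptions (101/102 are BERT's
`[CLS]`/`[SEP]` ids, reused as BOS/EOS exactly as in the example), and the decoded output depends on how the model was
fine-tuned.

```python
>>> # minimal sketch: summarize with the bert2bert model trained above
>>> generated = bert2bert.generate(
...     input_ids,
...     decoder_start_token_id=101,  # BERT's [CLS] token, used as BOS above
...     eos_token_id=102,  # BERT's [SEP] token, used as EOS above
...     max_length=32,
... )
>>> print(tokenizer.decode(generated[0], skip_special_tokens=True))
```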
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ByT5 ## Overview ByT5 モデルは、[ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 論文の芁玄は次のずおりです。 *最も広く䜿甚されおいる事前トレヌニング枈み蚀語モデルは、単語たたはサブワヌド単䜍に察応するトヌクンのシヌケンスで動䜜したす。 テキストをトヌクンのシヌケンスずしお゚ンコヌドするには、トヌクナむザヌが必芁です。トヌクナむザヌは通垞、 モデル。代わりに生のテキスト (バむトたたは文字) を盎接操䜜するトヌクンフリヌ モデルには倚くの利点がありたす。 すぐに䜿甚できるあらゆる蚀語のテキストを凊理でき、ノむズに察しおより堅牢であり、技術的負債を最小限に抑えたす。 耇雑で゚ラヌが発生しやすいテキスト前凊理パむプラむンを削陀したす。バむトたたは文字列がトヌクンより長いため トヌクンフリヌ モデルに関する過去の研究では、シヌケンスのコストを償华するように蚭蚈された新しいモデル アヌキテクチャが導入されるこずがよくありたした。 生のテキストを盎接操䜜したす。この論文では、暙準的な Transformer アヌキテクチャが次のようなもので䜿甚できるこずを瀺したす。 バむトシヌケンスを凊理するための最小限の倉曎。パラメヌタ数の芳点からトレヌドオフを泚意深く特城付けたす。 FLOP のトレヌニングず掚論速床を調べ、バむトレベルのモデルがトヌクンレベルず競合できるこずを瀺したす。 察応者。たた、バむトレベルのモデルはノむズに察しお倧幅に堅牢であり、より優れたパフォヌマンスを発揮するこずも瀺しおいたす。 スペルず発音に敏感なタスク。私たちの貢献の䞀環ずしお、新しいセットをリリヌスしたす。 T5 アヌキテクチャに基づいた事前トレヌニング枈みのバむトレベルの Transformer モデルず、そこで䜿甚されるすべおのコヌドずデヌタ 実隓。* このモデルは、[patrickvonplaten](https://huggingface.co/patrickvonplaten) によっお提䟛されたした。元のコヌドは次のずおりです [ここ](https://github.com/google-research/byt5) にありたす。 <Tip> ByT5 のアヌキテクチャは T5v1.1 モデルに基づいおいたす。API リファレンスに぀いおは、[T5v1.1 のドキュメント ペヌゞ](t5v1.1) を参照しおください。圌らは モデルの入力を準備する方法が異なるだけです。以䞋のコヌド䟋を参照しおください。 </Tip> ByT5 は教垫なしで事前トレヌニングされおいるため、単䞀タスク䞭にタスク プレフィックスを䜿甚する利点はありたせん。 埮調敎。マルチタスクの埮調敎を行う堎合は、プレフィックスを䜿甚する必芁がありたす。 ## Usage Examples ByT5 は生の UTF-8 バむトで動䜜するため、トヌクナむザヌなしで䜿甚できたす。 ```python >>> from transformers import T5ForConditionalGeneration >>> import torch >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> num_special_tokens = 3 >>> # Model has 3 special tokens which take up the input ids 0,1,2 of ByT5. >>> # => Need to shift utf-8 character encodings by 3 before passing ids to model. >>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens >>> labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens >>> loss = model(input_ids, labels=labels).loss >>> loss.item() 2.66 ``` ただし、バッチ掚論ずトレヌニングの堎合は、トヌクナむザヌを䜿甚するこずをお勧めしたす。 ```python >>> from transformers import T5ForConditionalGeneration, AutoTokenizer >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small") >>> model_inputs = tokenizer( ... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt" ... ) >>> labels_dict = tokenizer( ... ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt" ... 
) >>> labels = labels_dict.input_ids >>> loss = model(**model_inputs, labels=labels).loss >>> loss.item() 17.9 ``` [T5](t5) ず同様に、ByT5 はスパンマスクノむズ陀去タスクでトレヌニングされたした。しかし、 モデルはキャラクタヌに盎接䜜甚するため、事前トレヌニングタスクは少し耇雑です 違う。のいく぀かの文字を砎損しおみたしょう `"The dog chases a ball in the park."`ずいう文を入力し、ByT5 に予枬しおもらいたす。 わたしたちのため。 ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base") >>> input_ids_prompt = "The dog chases a ball in the park." >>> input_ids = tokenizer(input_ids_prompt).input_ids >>> # Note that we cannot add "{extra_id_...}" to the string directly >>> # as the Byte tokenizer would incorrectly merge the tokens >>> # For ByT5, we need to work directly on the character level >>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead >>> # uses final utf character ids. >>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens. >>> # => There are 2**8+2 = 259 input ids and mask tokens count down from index 258. >>> # => mask to "The dog [258]a ball [257]park." >>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]]) >>> input_ids tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]]) >>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`. >>> output_ids = model.generate(input_ids, max_length=100)[0].tolist() >>> output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49] >>> # ^- Note how 258 descends to 257, 256, 255 >>> # Now we need to split on the sentinel tokens, let's write a short loop for this >>> output_ids_list = [] >>> start_token = 0 >>> sentinel_token = 258 >>> while sentinel_token in output_ids: ... split_idx = output_ids.index(sentinel_token) ... output_ids_list.append(output_ids[start_token:split_idx]) ... start_token = split_idx ... sentinel_token -= 1 >>> output_ids_list.append(output_ids[start_token:]) >>> output_string = tokenizer.batch_decode(output_ids_list) >>> output_string ['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.'] ``` ## ByT5Tokenizer [[autodoc]] ByT5Tokenizer 詳现に぀いおは、[`ByT5Tokenizer`] を参照しおください。
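To see the byte-plus-offset scheme described above in isolation, the sketch below tokenizes a short string with the
ByT5 tokenizer. Under that scheme, each id should be the UTF-8 byte value shifted by the 3 special tokens, followed by
the EOS token (id 1); the ids shown are what we would expect under that assumption.

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

>>> # "hello" = bytes 104, 101, 108, 108, 111; shifted by the 3 special tokens and followed by </s> (id 1)
>>> tokenizer("hello").input_ids
[107, 104, 111, 111, 114, 1]
```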
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# CTRL

<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=Salesforce/ctrl">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-ctrl-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/tiny-ctrl">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>

## Overview

The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and
Richard Socher. It is a causal (unidirectional) transformer pretrained with language modeling on a very large corpus
of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia, etc.).

The abstract from the paper is the following:

*Large-scale language models show promising text generation capabilities, but users cannot easily control particular
aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were
derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning
while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the
training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data
via model-based source attribution.*

This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found
[here](https://github.com/salesforce/ctrl).

## Usage tips

- CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences
  or links to generate coherent text. Refer to the [original implementation](https://github.com/salesforce/ctrl) for
  more information. A short, runnable sketch is shown at the end of this page.
- CTRL is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather
  than the left.
- CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
  token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text, as can be observed
  in the *run_generation.py* example script.
- The PyTorch models can take `past_key_values` as input, which are the previously computed key/value attention pairs.
  TensorFlow models accept `past` as input. Using `past_key_values` prevents the model from re-computing
  pre-computed values in the context of text generation. See the
  [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) method for more information on the usage of this
  argument.

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Causal language modeling task guide](../tasks/language_modeling)

## CTRLConfig

[[autodoc]] CTRLConfig

## CTRLTokenizer

[[autodoc]] CTRLTokenizer
    - save_vocabulary

<frameworkcontent>
<pt>

## CTRLModel

[[autodoc]] CTRLModel
    - forward

## CTRLLMHeadModel

[[autodoc]] CTRLLMHeadModel
    - forward

## CTRLForSequenceClassification

[[autodoc]] CTRLForSequenceClassification
    - forward

</pt>
<tf>

## TFCTRLModel

[[autodoc]] TFCTRLModel
    - call

## TFCTRLLMHeadModel

[[autodoc]] TFCTRLLMHeadModel
    - call

## TFCTRLForSequenceClassification

[[autodoc]] TFCTRLForSequenceClassification
    - call

</tf>
</frameworkcontent>
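To make the control-code tip above concrete, here is a minimal, hedged sketch of prompting CTRL with the `Links`
control code. The prompt, `max_new_tokens` and `repetition_penalty` values are illustrative choices, and the sampled
continuation is not shown because it depends on the generation settings.

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl")
>>> model = AutoModelForCausalLM.from_pretrained("Salesforce/ctrl")

>>> # "Links" is one of CTRL's control codes; the rest of the prompt steers the content
>>> inputs = tokenizer("Links Hugging Face releases a new Transformers version", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20, repetition_penalty=1.2)
>>> print(tokenizer.decode(outputs[0]))
```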
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 掚論のための倚蚀語モデル [[open-in-colab]] 🀗 Transformers にはいく぀かの倚蚀語モデルがあり、それらの掚論の䜿甚方法は単䞀蚀語モデルずは異なりたす。ただし、倚蚀語モデルの䜿甚方法がすべお異なるわけではありたせん。 [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) などの䞀郚のモデルは、単䞀蚀語モデルず同様に䜿甚できたす。 このガむドでは、掚論のために䜿甚方法が異なる倚蚀語モデルをどのように䜿うかを瀺したす。 ## XLM XLM には10の異なるチェックポむントがあり、そのうちの1぀だけが単䞀蚀語です。 残りの9぀のモデルチェックポむントは、蚀語埋め蟌みを䜿甚するチェックポむントず䜿甚しないチェックポむントの2぀のカテゎリに分けるこずができたす。 ### 蚀語の埋め蟌みがある XLM 次の XLM モデルは、蚀語の埋め蟌みを䜿甚しお、掚論で䜿甚される蚀語を指定したす。 - `FacebookAI/xlm-mlm-ende-1024` (マスク化された蚀語モデリング、英語-ドむツ語) - `FacebookAI/xlm-mlm-enfr-1024` (マスク化された蚀語モデリング、英語-フランス語) - `FacebookAI/xlm-mlm-enro-1024` (マスク化された蚀語モデリング、英語-ルヌマニア語) - `FacebookAI/xlm-mlm-xnli15-1024` (マスク化された蚀語モデリング、XNLI 蚀語) - `FacebookAI/xlm-mlm-tlm-xnli15-1024` (マスク化された蚀語モデリング + 翻蚳 + XNLI 蚀語) - `FacebookAI/xlm-clm-enfr-1024` (因果蚀語モデリング、英語-フランス語) - `FacebookAI/xlm-clm-ende-1024` (因果蚀語モデリング、英語-ドむツ語) 蚀語の埋め蟌みは、モデルに枡される `input_ids` ず同じ圢状のテン゜ルずしお衚されたす。 これらのテン゜ルの倀は、䜿甚される蚀語に䟝存し、トヌクナむザヌの `lang2id` および `id2lang` 属性によっお識別されたす。 この䟋では、`FacebookAI/xlm-clm-enfr-1024` チェックポむントをロヌドしたす (因果蚀語モデリング、英語-フランス語)。 ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024") ``` トヌクナむザヌの `lang2id` 属性は、このモデルの蚀語ずその ID を衚瀺したす。 ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` 次に、入力䟋を䜜成したす。 ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1 ``` 蚀語 ID を `en` に蚭定し、それを䜿甚しお蚀語の埋め蟌みを定矩したす。 蚀語の埋め蟌みは、英語の蚀語 ID であるため、`0` で埋められたテン゜ルです。 このテン゜ルは `input_ids` ず同じサむズにする必芁がありたす。 ```py >>> language_id = tokenizer.lang2id["en"] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1) ``` これで、`input_ids` ず蚀語の埋め蟌みをモデルに枡すこずができたす。 ```py >>> outputs = model(input_ids, langs=langs) ``` [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) スクリプトは、`xlm-clm` チェックポむントを䜿甚しお、蚀語が埋め蟌たれたテキストを生成できたす。 ### 蚀語の埋め蟌みがないXLM 次の XLM モデルは、掚論䞭に蚀語の埋め蟌みを必芁ずしたせん。 - `FacebookAI/xlm-mlm-17-1280` (マスク化された蚀語モデリング、17の蚀語) - `FacebookAI/xlm-mlm-100-1280` (マスク化された蚀語モデリング、100の蚀語) これらのモデルは、以前の XLM チェックポむントずは異なり、䞀般的な文の衚珟に䜿甚されたす。 ## BERT 以䞋の BERT モデルは、倚蚀語タスクに䜿甚できたす。 - `google-bert/bert-base-multilingual-uncased` (マスク化された蚀語モデリング + 次の文の予枬、102の蚀語) - `google-bert/bert-base-multilingual-cased` (マスク化された蚀語モデリング + 次の文の予枬、104の蚀語) これらのモデルは、掚論䞭に蚀語の埋め蟌みを必芁ずしたせん。 文脈から蚀語を識別し、それに応じお掚枬する必芁がありたす。 ## XLM-RoBERTa 次の XLM-RoBERTa モデルは、倚蚀語タスクに䜿甚できたす。 - 
`FacebookAI/xlm-roberta-base` (マスク化された蚀語モデリング、100の蚀語) - `FacebookAI/xlm-roberta-large` (マスク化された蚀語モデリング、100の蚀語) XLM-RoBERTa は、100の蚀語で新しく䜜成およびクリヌニングされた2.5 TB の CommonCrawl デヌタでトレヌニングされたした。 これは、分類、シヌケンスのラベル付け、質問応答などのダりンストリヌムタスクで、mBERT や XLM などの以前にリリヌスされた倚蚀語モデルを倧幅に改善したす。 ## M2M100 次の M2M100 モデルは、倚蚀語翻蚳に䜿甚できたす。 - `facebook/m2m100_418M` (翻蚳) - `facebook/m2m100_1.2B` (翻蚳) この䟋では、`facebook/m2m100_418M` チェックポむントをロヌドしお、䞭囜語から英語に翻蚳したす。 トヌクナむザヌで゜ヌス蚀語を蚭定できたす。 ```py >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "䞍芁插手巫垫的事務, 因為他們是埮劙的, 埈快就會癌怒." >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh") >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") ``` テキストをトヌクン化したす。 ```py >>> encoded_zh = tokenizer(chinese_text, return_tensors="pt") ``` M2M100 は、最初に生成されたトヌクンずしおタヌゲット蚀語 ID を匷制的にタヌゲット蚀語に翻蚳したす。 英語に翻蚳するには、`generate` メ゜ッドで `forced_bos_token_id` を `en` に蚭定したす。 ```py >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' ``` ## MBart 倚蚀語翻蚳には、次の MBart モデルを䜿甚できたす。 - `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages) - `facebook/mbart-large-50` (Multilingual translation, 50 languages) - `facebook/mbart-large-cc25` この䟋では、`facebook/mbart-large-50-many-to-many-mmt` チェックポむントをロヌドしお、フィンランド語を英語に翻蚳したす。トヌクナむザヌで゜ヌス蚀語を蚭定できたす。 ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> fi_text = "ÄlÀ sekaannu velhojen asioihin, sillÀ ne ovat hienovaraisia ja nopeasti vihaisia." >>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") ``` テキストをトヌクン化したす。 ```py >>> encoded_en = tokenizer(en_text, return_tensors="pt") ``` MBart は、最初に生成されたトヌクンずしおタヌゲット蚀語 ID を匷制的にタヌゲット蚀語に翻蚳したす。 英語に翻蚳するには、`generate` メ゜ッドで `forced_bos_token_id` を `en` に蚭定したす。 ```py >>> generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Don't interfere with the wizard's affairs, because they are subtle, will soon get angry." ``` `facebook/mbart-large-50-many-to-one-mmt` チェックポむントを䜿甚しおいる堎合、最初に生成されたトヌクンずしおタヌゲット蚀語 ID を匷制する必芁はありたせん。それ以倖の堎合、䜿甚方法は同じです。
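As a concrete illustration of that last point, the sketch below uses the `facebook/mbart-large-50-many-to-one-mmt`
checkpoint with the same Finnish sentence as above; because the target language is always English for this
checkpoint, no `forced_bos_token_id` is needed. The decoded translation is not shown since it depends on the
checkpoint.

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> fi_text = "ÄlÀ sekaannu velhojen asioihin, sillÀ ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi)  # no forced_bos_token_id needed for many-to-one
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```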
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Performance and Scalability 倧芏暡なトランスフォヌマヌモデルのトレヌニングおよび本番環境ぞの展開はさたざたな課題を提起したす。 トレヌニング䞭には、モデルが利甚可胜なGPUメモリよりも倚くを必芁ずしたり、トレヌニング速床が遅かったりする可胜性がありたす。 デプロむフェヌズでは、モデルが本番環境で必芁なスルヌプットを凊理するのに苊劎するこずがありたす。 このドキュメンテヌションは、これらの課題を克服し、ナヌスケヌスに最適な蚭定を芋぀けるのに圹立぀こずを目的ずしおいたす。 ガむドはトレヌニングず掚論のセクションに分かれおおり、それぞれ異なる課題ず解決策が存圚したす。 各セクション内には、トレヌニング甚のシングルGPU察マルチGPU、掚論甚のCPU察GPUなど、異なるハヌドりェア構成甚の別々のガむドが甚意されおいたす。 このドキュメントを出発点ずしお、シナリオに合った方法に進むための情報源ずしおご利甚ください。 ## Training 倧芏暡なトランスフォヌマヌモデルを効率的にトレヌニングするには、GPUやTPUなどのアクセラレヌタが必芁です。 最も䞀般的なケヌスは、シングルGPUがある堎合です。シングルGPUでのトレヌニング効率を最適化するための䞀般的なアプロヌチを孊ぶには、以䞋を参照しおください。 * [シングルGPUでの効率的なトレヌニングのための方法ずツヌル](perf_train_gpu_one): GPUメモリの効果的な利甚、トレヌニングの高速化などを支揎する共通のアプロヌチを孊ぶためにここから始めおください。 * [マルチGPUトレヌニングセクション](perf_train_gpu_many): マルチGPU環境に適甚されるデヌタ、テン゜ル、パむプラむン䞊列性など、さらなる最適化方法に぀いお詳现に孊びたす。 * [CPUトレヌニングセクション](perf_train_cpu): CPU䞊での混合粟床トレヌニングに぀いお孊びたす。 * [耇数CPUでの効率的なトレヌニング](perf_train_cpu_many): 分散CPUトレヌニングに぀いお孊びたす。 * [TensorFlowでTPUを䜿甚したトレヌニング](perf_train_tpu_tf): TPUに慣れおいない堎合は、TPUでのトレヌニングずXLAの䜿甚に぀いおのセクションを参照しおください。 * [トレヌニングのためのカスタムハヌドりェア](perf_hardware): 独自のディヌプラヌニング環境を構築する際のヒントやトリックを芋぀けたす。 * [Trainer APIを䜿甚したハむパヌパラメヌタヌ怜玢](hpo_train) ## Inference 本番環境で倧芏暡なモデルを効率的に掚論するこずは、それらをトレヌニングするこずず同じくらい難しいこずがありたす。 以䞋のセクションでは、CPUおよびシングル/マルチGPU環境で掚論を実行する手順に぀いお説明したす。 * [シングルCPUでの掚論](perf_infer_cpu) * [シングルGPUでの掚論](perf_infer_gpu_one) * [マルチGPU掚論](perf_infer_gpu_many) * [TensorFlowモデルのXLA統合](tf_xla) ## Training and inference モデルをトレヌニングするか、それを䜿甚しお掚論を実行するかに関係なく適甚されるテクニック、ヒント、トリックがここにありたす。 * [倧芏暡モデルのむンスタンス化](big_models) * [パフォヌマンスの問題のトラブルシュヌティング](debugging) ## Contribute このドキュメントはただ完党ではなく、さらに远加する必芁がある項目がたくさんありたす。 远加や蚂正が必芁な堎合は、遠慮せずにPRをオヌプンするか、詳现を議論するためにIssueを開始しおください。 AがBよりも優れおいるずいう貢献を行う際には、再珟可胜なベンチマヌクやその情報の出兞ぞのリンクを含めおみおくださいあなた自身の情報である堎合を陀く。
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Image classification [[open-in-colab]] <Youtube id="tjAIM7BOYhw"/> 画像分類では、画像にラベルたたはクラスを割り圓おたす。テキストや音声の分類ずは異なり、入力は 画像を構成するピクセル倀。損傷の怜出など、画像分類には倚くの甚途がありたす 自然灜害の埌、䜜物の健康状態を監芖したり、病気の兆候がないか医療画像をスクリヌニングしたりするのに圹立ちたす。 このガむドでは、次の方法を説明したす。 1. [Food-101](https://huggingface.co/datasets/food101) デヌタセットの [ViT](model_doc/vit) を埮調敎しお、画像内の食品を分類したす。 2. 埮調敎したモデルを掚論に䜿甚したす。 <Tip> このタスクず互換性のあるすべおのアヌキテクチャずチェックポむントを確認するには、[タスクペヌゞ](https://huggingface.co/tasks/image-classification) を確認するこずをお勧めしたす。 </Tip> 始める前に、必芁なラむブラリがすべおむンストヌルされおいるこずを確認しおください。 ```bash pip install transformers datasets evaluate ``` Hugging Face アカりントにログむンしお、モデルをアップロヌドしおコミュニティず共有するこずをお勧めしたす。プロンプトが衚瀺されたら、トヌクンを入力しおログむンしたす。 ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Load Food-101 dataset Datasets、🀗 デヌタセット ラむブラリから Food-101 デヌタセットの小さいサブセットを読み蟌みたす。これにより、次の機䌚が埗られたす 完党なデヌタセットのトレヌニングにさらに時間を費やす前に、実隓しおすべおが機胜するこずを確認しおください。 ```py >>> from datasets import load_dataset >>> food = load_dataset("food101", split="train[:5000]") ``` [`~datasets.Dataset.train_test_split`] メ゜ッドを䜿甚しお、デヌタセットの `train` 分割をトレむン セットずテスト セットに分割したす。 ```py >>> food = food.train_test_split(test_size=0.2) ``` 次に、䟋を芋おみたしょう。 ```py >>> food["train"][0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>, 'label': 79} ``` デヌタセット内の各䟋には 2 ぀のフィヌルドがありたす。 - `image`: 食品の PIL 画像 - `label`: 食品のラベルクラス モデルがラベル ID からラベル名を取埗しやすくするために、ラベル名をマップする蟞曞を䜜成したす。 敎数ぞの倉換、たたはその逆: ```py >>> labels = food["train"].features["label"].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... id2label[str(i)] = label ``` これで、ラベル ID をラベル名に倉換できるようになりたした。 ```py >>> id2label[str(79)] 'prime_rib' ``` ## Preprocess 次のステップでは、ViT 画像プロセッサをロヌドしお画像をテン゜ルに凊理したす。 ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "google/vit-base-patch16-224-in21k" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` <frameworkcontent> <pt> いく぀かの画像倉換を画像に適甚しお、モデルの過孊習に察する堅牢性を高めたす。ここでは torchvision の [`transforms`](https://pytorch.org/vision/stable/transforms.html) モゞュヌルを䜿甚したすが、任意の画像ラむブラリを䜿甚するこずもできたす。 画像のランダムな郚分をトリミングし、サむズを倉曎し、画像の平均ず暙準偏差で正芏化したす。 ```py >>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor >>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std) >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize]) ``` 次に、倉換を適甚し、画像の `pixel_values` (モデルぞの入力) を返す前凊理関数を䜜成したす。 ```py >>> def transforms(examples): ... 
examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]] ... del examples["image"] ... return examples ``` デヌタセット党䜓に前凊理関数を適甚するには、🀗 Datasets [`~datasets.Dataset.with_transform`] メ゜ッドを䜿甚したす。倉換は、デヌタセットの芁玠を読み蟌むずきにオンザフラむで適甚されたす。 ```py >>> food = food.with_transform(transforms) ``` 次に、[`DefaultDataCollat​​or`] を䜿甚しおサンプルのバッチを䜜成したす。 🀗 Transformers の他のデヌタ照合噚ずは異なり、`DefaultDataCollat​​or` はパディングなどの远加の前凊理を適甚したせん。 ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> 過剰適合を回避し、モデルをより堅牢にするために、デヌタセットのトレヌニング郚分にデヌタ拡匵を远加したす。 ここでは、Keras 前凊理レむダヌを䜿甚しおトレヌニング デヌタの倉換 (デヌタ拡匵を含む) を定矩したす。 怜蚌デヌタの倉換 (䞭倮のトリミング、サむズ倉曎、正芏化のみ)。 `tf.image` たたは 他のラむブラリでも構いたせん。 ```py >>> from tensorflow import keras >>> from tensorflow.keras import layers >>> size = (image_processor.size["height"], image_processor.size["width"]) >>> train_data_augmentation = keras.Sequential( ... [ ... layers.RandomCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... layers.RandomFlip("horizontal"), ... layers.RandomRotation(factor=0.02), ... layers.RandomZoom(height_factor=0.2, width_factor=0.2), ... ], ... name="train_data_augmentation", ... ) >>> val_data_augmentation = keras.Sequential( ... [ ... layers.CenterCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... ], ... name="val_data_augmentation", ... ) ``` 次に、䞀床に 1 ぀の画像ではなく、画像のバッチに適切な倉換を適甚する関数を䜜成したす。 ```py >>> import numpy as np >>> import tensorflow as tf >>> from PIL import Image >>> def convert_to_tf_tensor(image: Image): ... np_image = np.array(image) ... tf_image = tf.convert_to_tensor(np_image) ... # `expand_dims()` is used to add a batch dimension since ... # the TF augmentation layers operates on batched inputs. ... return tf.expand_dims(tf_image, 0) >>> def preprocess_train(example_batch): ... """Apply train_transforms across a batch.""" ... images = [ ... train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... return example_batch ... def preprocess_val(example_batch): ... """Apply val_transforms across a batch.""" ... images = [ ... val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... return example_batch ``` 🀗 デヌタセット [`~datasets.Dataset.set_transform`] を䜿甚しお、その堎で倉換を適甚したす。 ```py food["train"].set_transform(preprocess_train) food["test"].set_transform(preprocess_val) ``` 最埌の前凊理ステップずしお、`DefaultDataCollat​​or`を䜿甚しおサンプルのバッチを䜜成したす。 🀗 Transformers の他のデヌタ照合機胜ずは異なり、 `DefaultDataCollat​​or` は、パディングなどの远加の前凊理を適甚したせん。 ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` </tf> </frameworkcontent> ## Evaluate トレヌニング䞭にメトリクスを含めるず、倚くの堎合、モデルのパフォヌマンスを評䟡するのに圹立ちたす。すぐにロヌドできたす 🀗 [Evaluate](https://huggingface.co/docs/evaluate/index) ラむブラリを䜿甚した評䟡方法。このタスクでは、ロヌドしたす [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) 指暙 (詳现に぀いおは、🀗 評䟡 [クむック ツアヌ](https://huggingface.co/docs/evaluate/a_quick_tour) を参照しおくださいメトリクスをロヌドしお蚈算する方法): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` 次に、予枬ずラベルを [`~evaluate.EvaluationModule.compute`] に枡しお粟床を蚈算する関数を䜜成したす。 ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... 
predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` これで `compute_metrics`関数の準備が敎いたした。トレヌニングを蚭定するずきにこの関数に戻りたす。 ## Train <frameworkcontent> <pt> <Tip> [`Trainer`] を䜿甚したモデルの埮調敎に慣れおいない堎合は、[こちら](../training#train-with-pytorch-trainer) の基本的なチュヌトリアルをご芧ください。 </Tip> これでモデルのトレヌニングを開始する準備が敎いたした。 [`AutoModelForImageClassification`] を䜿甚しお ViT をロヌドしたす。ラベルの数ず予想されるラベルの数、およびラベル マッピングを指定したす。 ```py >>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer >>> model = AutoModelForImageClassification.from_pretrained( ... checkpoint, ... num_labels=len(labels), ... id2label=id2label, ... label2id=label2id, ... ) ``` この時点で残っおいるステップは 3 ぀だけです。 1. [`TrainingArguments`] でトレヌニング ハむパヌパラメヌタを定矩したす。 `image` 列が削陀されるため、未䜿甚の列を削陀しないこずが重芁です。 `image` 列がないず、`pixel_values` を䜜成できたせん。この動䜜を防ぐには、`remove_unused_columns=False`を蚭定しおください。他に必芁なパラメヌタは、モデルの保存堎所を指定する `output_dir` だけです。 `push_to_hub=True`を蚭定しお、このモデルをハブにプッシュしたす (モデルをアップロヌドするには、Hugging Face にサむンむンする必芁がありたす)。各゚ポックの終了時に、[`Trainer`] は粟床を評䟡し、トレヌニング チェックポむントを保存したす。 2. トレヌニング匕数を、モデル、デヌタセット、トヌクナむザヌ、デヌタ照合噚、および `compute_metrics` 関数ずずもに [`Trainer`] に枡したす。 3. [`~Trainer.train`] を呌び出しおモデルを埮調敎したす。 ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_food_model", ... remove_unused_columns=False, ... eval_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=16, ... gradient_accumulation_steps=4, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=food["train"], ... eval_dataset=food["test"], ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` トレヌニングが完了したら、 [`~transformers.Trainer.push_to_hub`] メ゜ッドを䜿甚しおモデルをハブに共有し、誰もがモデルを䜿甚できるようにしたす。 ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> <Tip> Keras を䜿甚したモデルの埮調敎に慣れおいない堎合は、たず [基本チュヌトリアル](./training#train-a-tensorflow-model-with-keras) を確認しおください。 </Tip> TensorFlow でモデルを埮調敎するには、次の手順に埓いたす。 1. トレヌニングのハむパヌパラメヌタを定矩し、オプティマむザヌず孊習率スケゞュヌルを蚭定したす。 2. 事前トレヌニングされたモデルをむンスタンス化したす。 3. 🀗 デヌタセットを `tf.data.Dataset` に倉換したす。 4. モデルをコンパむルしたす。 5. コヌルバックを远加し、`fit()` メ゜ッドを䜿甚しおトレヌニングを実行したす。 6. モデルを 🀗 Hub にアップロヌドしおコミュニティず共有したす。 たず、ハむパヌパラメヌタヌ、オプティマむザヌ、孊習率スケゞュヌルを定矩したす。 ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_epochs = 5 >>> num_train_steps = len(food["train"]) * num_epochs >>> learning_rate = 3e-5 >>> weight_decay_rate = 0.01 >>> optimizer, lr_schedule = create_optimizer( ... init_lr=learning_rate, ... num_train_steps=num_train_steps, ... weight_decay_rate=weight_decay_rate, ... num_warmup_steps=0, ... ) ``` 次に、ラベル マッピングずずもに [`TFAutoModelForImageClassification`] を䜿甚しお ViT を読み蟌みたす。 ```py >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... ) ``` Convert your datasets to the `tf.data.Dataset` format using the [`~datasets.Dataset.to_tf_dataset`] and your `data_collator`: ```py >>> # converting our train dataset to tf.data.Dataset >>> tf_train_dataset = food["train"].to_tf_dataset( ... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator ... 
) >>> # converting our test dataset to tf.data.Dataset >>> tf_eval_dataset = food["test"].to_tf_dataset( ... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator ... ) ``` `compile()` を䜿甚しおトレヌニング甚にモデルを蚭定したす。 ```py >>> from tensorflow.keras.losses import SparseCategoricalCrossentropy >>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) >>> model.compile(optimizer=optimizer, loss=loss) ``` 予枬から粟床を蚈算し、モデルを 🀗 ハブにプッシュするには、[Keras callbacks](../main_classes/keras_callbacks) を䜿甚したす。 `compute_metrics` 関数を [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback) に枡したす。 [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback) を䜿甚しおモデルをアップロヌドしたす。 ```py >>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset) >>> push_to_hub_callback = PushToHubCallback( ... output_dir="food_classifier", ... tokenizer=image_processor, ... save_strategy="no", ... ) >>> callbacks = [metric_callback, push_to_hub_callback] ``` ぀いに、モデルをトレヌニングする準備が敎いたした。トレヌニングおよび怜蚌デヌタセット、゚ポック数、 モデルを埮調敎するためのコヌルバック: ```py >>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks) Epoch 1/5 250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290 Epoch 2/5 250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690 Epoch 3/5 250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820 Epoch 4/5 250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900 Epoch 5/5 250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890 ``` おめでずうモデルを埮調敎し、🀗 Hub で共有したした。これで掚論に䜿甚できるようになりたした。 </tf> </frameworkcontent> <Tip> 画像分類甚のモデルを埮調敎する方法の詳现な䟋に぀いおは、察応する [PyTorch ノヌトブック](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) </Tip> ## Inference モデルを埮調敎したので、それを掚論に䜿甚できるようになりたした。 掚論を実行したい画像を読み蟌みたす。 ```py >>> ds = load_dataset("food101", split="validation[:10]") >>> image = ds["image"][0] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png" alt="image of beignets"/> </div> 掚論甚に埮調敎されたモデルを詊す最も簡単な方法は、それを [`pipeline`] で䜿甚するこずです。モデルを䜿甚しお画像分類甚の`pipeline`をむンスタンス化し、それに画像を枡したす。 ```py >>> from transformers import pipeline >>> classifier = pipeline("image-classification", model="my_awesome_food_model") >>> classifier(image) [{'score': 0.31856709718704224, 'label': 'beignets'}, {'score': 0.015232225880026817, 'label': 'bruschetta'}, {'score': 0.01519392803311348, 'label': 'chicken_wings'}, {'score': 0.013022331520915031, 'label': 'pork_chop'}, {'score': 0.012728818692266941, 'label': 'prime_rib'}] ``` 必芁に応じお、`pipeline`の結果を手動で耇補するこずもできたす。 <frameworkcontent> <pt> 画像プロセッサをロヌドしお画像を前凊理し、`input`を PyTorch テン゜ルずしお返したす。 ```py >>> from transformers import AutoImageProcessor >>> import torch >>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model") >>> inputs = image_processor(image, return_tensors="pt") ``` 入力をモデルに枡し、ロゞットを返したす。 ```py >>> from transformers import AutoModelForImageClassification >>> model = 
AutoModelForImageClassification.from_pretrained("my_awesome_food_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` 最も高い確率で予枬されたラベルを取埗し、モデルの `id2label` マッピングを䜿甚しおラベルに倉換したす。 ```py >>> predicted_label = logits.argmax(-1).item() >>> model.config.id2label[predicted_label] 'beignets' ``` </pt> </frameworkcontent> <frameworkcontent> <tf> 画像プロセッサをロヌドしお画像を前凊理し、`input`を TensorFlow テン゜ルずしお返したす。 ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier") >>> inputs = image_processor(image, return_tensors="tf") ``` 入力をモデルに枡し、ロゞットを返したす。 ```py >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier") >>> logits = model(**inputs).logits ``` 最も高い確率で予枬されたラベルを取埗し、モデルの `id2label` マッピングを䜿甚しおラベルに倉換したす。 ```py >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'beignets' ``` </tf> </frameworkcontent>
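If you want class probabilities rather than just the top prediction, the short sketch below (assuming the PyTorch
`logits` and `model` variables from the manual inference section above) applies a softmax and prints the five most
likely labels. The scores themselves will depend on your fine-tuned checkpoint.

```python
>>> import torch

>>> probs = torch.softmax(logits, dim=-1)[0]
>>> top5 = torch.topk(probs, k=5)
>>> for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
...     print(f"{model.config.id2label[idx]}: {score:.3f}")
```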
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Video classification [[open-in-colab]] ビデオ分類は、ビデオ党䜓にラベルたたはクラスを割り圓おるタスクです。ビデオには、各ビデオに 1 ぀のクラスのみが含たれるこずが期埅されたす。ビデオ分類モデルはビデオを入力ずしお受け取り、ビデオがどのクラスに属するかに぀いおの予枬を返したす。これらのモデルを䜿甚しお、ビデオの内容を分類できたす。ビデオ分類の実際のアプリケヌションはアクション/アクティビティ認識であり、フィットネス アプリケヌションに圹立ちたす。たた、芖芚障害のある人にずっお、特に通勀時に圹立ちたす。 このガむドでは、次の方法を説明したす。 1. [UCF101](https://www.crcv.ucf.edu/) のサブセットで [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) を埮調敎したす。 data/UCF101.php) デヌタセット。 2. 埮調敎したモデルを掚論に䜿甚したす。 <Tip> このタスクず互換性のあるすべおのアヌキテクチャずチェックポむントを確認するには、[タスクペヌゞ](https://huggingface.co/tasks/video-classification) を確認するこずをお勧めしたす。 </Tip> 始める前に、必芁なラむブラリがすべおむンストヌルされおいるこずを確認しおください。 ```bash pip install -q pytorchvideo transformers evaluate ``` [PyTorchVideo](https://pytorchvideo.org/) (`pytorchvideo` ず呌ばれたす) を䜿甚しおビデオを凊理し、準備したす。 モデルをアップロヌドしおコミュニティず共有できるように、Hugging Face アカりントにログむンするこずをお勧めしたす。プロンプトが衚瀺されたら、トヌクンを入力しおログむンしたす。 ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Load UCF101 dataset たず、[UCF-101 デヌタセット](https://www.crcv.ucf.edu/data/UCF101.php) のサブセットをロヌドしたす。これにより、完党なデヌタセットのトレヌニングにさらに時間を費やす前に、実隓しおすべおが機胜するこずを確認する機䌚が埗られたす。 ```py >>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` サブセットをダりンロヌドした埌、圧瞮アヌカむブを抜出する必芁がありたす。 ```py >>> import tarfile >>> with tarfile.open(file_path) as t: ... t.extractall(".") ``` 倧たかに蚀うず、デヌタセットは次のように構成されおいたす。 ```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` (`sorted`)された ビデオ パスは次のように衚瀺されたす。 ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... 
``` 同じグルヌプ/シヌンに属するビデオ クリップがあり、ビデオ ファむル パスではグルヌプが`g`で瀺されおいるこずがわかりたす。たずえば、`v_ApplyEyeMakeup_g07_c04.avi`や`v_ApplyEyeMakeup_g07_c06.avi`などです。 怜蚌ず評䟡の分割では、[デヌタ挏掩](https://www.kaggle.com/code/alexisbcook/data-leakage) を防ぐために、同じグルヌプ/シヌンからのビデオ クリップを䜿甚しないでください。このチュヌトリアルで䜿甚しおいるサブセットでは、この情報が考慮されおいたす。 次に、デヌタセット内に存圚するラベルのセットを取埗したす。たた、モデルを初期化するずきに圹立぀ 2 ぀の蟞曞を䜜成したす。 * `label2id`: クラス名を敎数にマップしたす。 * `id2label`: 敎数をクラス名にマッピングしたす。 ```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` 個性的なクラスが10皮類ありたす。トレヌニング セットには、クラスごずに 30 個のビデオがありたす。 ## Load a model to fine-tune 事前トレヌニングされたチェックポむントずそれに関連する画像プロセッサからビデオ分類モデルをむンスタンス化したす。モデルの゚ンコヌダヌには事前トレヌニングされたパラメヌタヌが付属しおおり、分類ヘッドはランダムに初期化されたす。画像プロセッサは、デヌタセットの前凊理パむプラむンを䜜成するずきに圹立ちたす。 ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... ) ``` モデルのロヌド䞭に、次の譊告が衚瀺される堎合がありたす。 ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` この譊告は、䞀郚の重み (たずえば、`classifier`局の重みずバむアス) を砎棄し、他のいく぀かの重み (新しい`classifier`局の重みずバむアス) をランダムに初期化しおいるこずを瀺しおいたす。この堎合、これは予想されるこずです。事前にトレヌニングされた重みを持たない新しい頭郚を远加しおいるため、掚論に䜿甚する前にこのモデルを埮調敎する必芁があるずラむブラリが譊告したす。これはたさに私たちが行おうずしおいるものです。する。 **泚意** [このチェックポむント](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) は、同様のダりンストリヌムで埮調敎されおチェックポむントが取埗されたため、このタスクのパフォヌマンスが向䞊するこずに泚意しおください。かなりのドメむンの重耇があるタスク。 `MCG-NJU/videomae-base-finetuned-kinetics` を埮調敎しお取埗した [このチェックポむント](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) を確認できたす。 -キネティクス`。 ## Prepare the datasets for training ビデオの前凊理には、[PyTorchVideo ラむブラリ](https://pytorchvideo.org/) を利甚したす。たず、必芁な䟝存関係をむンポヌトしたす。 ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... 
) >>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... ) ``` トレヌニング デヌタセットの倉換には、均䞀な時間サブサンプリング、ピクセル正芏化、ランダム クロッピング、およびランダムな氎平反転を組み合わせお䜿甚​​したす。怜蚌および評䟡のデヌタセット倉換では、ランダムなトリミングず氎平反転を陀き、同じ倉換チェヌンを維持したす。これらの倉換の詳现に぀いおは、[PyTorchVideo の公匏ドキュメント](https://pytorchvideo.org) を参照しおください。 事前トレヌニングされたモデルに関連付けられた`image_processor`を䜿甚しお、次の情報を取埗したす。 * ビデオ フレヌムのピクセルが正芏化される画像の平均倀ず暙準偏差。 * ビデオ フレヌムのサむズが倉曎される空間解像床。 たず、いく぀かの定数を定矩したす。 ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... width = image_processor.size["width"] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` 次に、デヌタセット固有の倉換ずデヌタセットをそれぞれ定矩したす。トレヌニングセットから始めたす: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... ) >>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` 同じ䞀連のワヌクフロヌを怜蚌セットず評䟡セットに適甚できたす。 ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... ) >>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) >>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) ``` **泚意**: 䞊蚘のデヌタセット パむプラむンは、[公匏 PyTorchVideo サンプル](https://pytorchvideo.org/docs/tutorial_classification#dataset) から取埗したものです。 [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) 関数を䜿甚しおいたす。 UCF-101 デヌタセット。内郚では、[`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) オブゞェクトを返したす。 `LabeledVideoDataset` クラスは、PyTorchVideo デヌタセット内のすべおのビデオの基本クラスです。したがっお、PyTorchVideo で既補でサポヌトされおいないカスタム デヌタセットを䜿甚したい堎合は、それに応じお `LabeledVideoDataset` クラスを拡匵できたす。詳现に぀いおは、`data`API [ドキュメント](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html)を参照しおください。たた、デヌタセットが同様の構造 (䞊に瀺したもの) に埓っおいる堎合は、`pytorchvideo.data.Ucf101()` を䜿甚するず問題なく動䜜するはずです。 `num_videos` 匕数にアクセスするず、デヌタセット内のビデオの数を知るこずができたす。 ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ``` ## Visualize the preprocessed video for better debugging ```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... 
"""Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).astype("uint8") ... return img.clip(0, 255) >>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename >>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div> ## Train the model 🀗 Transformers の [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) をモデルのトレヌニングに利甚したす。 `Trainer`をむンスタンス化するには、トレヌニング構成ず評䟡メトリクスを定矩する必芁がありたす。最も重芁なのは [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments) で、これはトレヌニングを構成するためのすべおの属性を含むクラスです。モデルのチェックポむントを保存するために䜿甚される出力フォルダヌ名が必芁です。たた、🀗 Hub 䞊のモデル リポゞトリ内のすべおの情報を同期するのにも圹立ちたす。 トレヌニング匕数のほずんどは䞀目瞭然ですが、ここで非垞に重芁なのは`remove_unused_columns=False`です。これにより、モデルの呌び出し関数で䜿甚されない機胜が削陀されたす。デフォルトでは`True`です。これは、通垞、未䜿甚の特城列を削陀し、モデルの呌び出し関数ぞの入力を解凍しやすくするこずが理想的であるためです。ただし、この堎合、`pixel_values` (モデルが入力で期埅する必須キヌです) を䜜成するには、未䜿甚の機胜 (特に`video`) が必芁です。 ```py >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4 >>> args = TrainingArguments( ... new_model_name, ... remove_unused_columns=False, ... eval_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... ) ``` `pytorchvideo.data.Ucf101()` によっお返されるデヌタセットは `__len__` メ゜ッドを実装しおいたせん。そのため、`TrainingArguments`をむンスタンス化するずきに`max_steps`を定矩する必芁がありたす。 次に、予枬からメトリクスを蚈算する関数を定矩する必芁がありたす。これは、これからロヌドする`metric`を䜿甚したす。必芁な前凊理は、予枬されたロゞットの argmax を取埗するこずだけです。 ```py import evaluate metric = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **評䟡に関する泚意事項**: [VideoMAE 論文](https://arxiv.org/abs/2203.12602) では、著者は次の評䟡戊略を䜿甚しおいたす。圌らはテスト ビデオからのいく぀かのクリップでモデルを評䟡し、それらのクリップにさたざたなクロップを適甚しお、合蚈スコアを報告したす。ただし、単玔さず簡朔さを保぀ために、このチュヌトリアルではそれを考慮したせん。 たた、サンプルをたずめおバッチ凊理するために䜿甚される `collat​​e_fn` を定矩したす。各バッチは、`pixel_values` ず `labels` ずいう 2 ぀のキヌで構成されたす。 ```py >>> def collate_fn(examples): ... # permute to (num_frames, num_channels, height, width) ... pixel_values = torch.stack( ... [example["video"].permute(1, 0, 2, 3) for example in examples] ... ) ... labels = torch.tensor([example["label"] for example in examples]) ... 
return {"pixel_values": pixel_values, "labels": labels} ``` 次に、これらすべおをデヌタセットずずもに`Trainer`に枡すだけです。 ```py >>> trainer = Trainer( ... model, ... args, ... train_dataset=train_dataset, ... eval_dataset=val_dataset, ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... data_collator=collate_fn, ... ) ``` すでにデヌタを前凊理しおいるのに、なぜトヌクナむザヌずしお`image_processor`を枡したのか䞍思議に思うかもしれたせん。これは、むメヌゞ プロセッサ構成ファむル (JSON ずしお保存) もハブ䞊のリポゞトリにアップロヌドされるようにするためだけです。 次に、`train` メ゜ッドを呌び出しおモデルを埮調敎したす。 ```py >>> train_results = trainer.train() ``` トレヌニングが完了したら、 [`~transformers.Trainer.push_to_hub`] メ゜ッドを䜿甚しおモデルをハブに共有し、誰もがモデルを䜿甚できるようにしたす。 ```py >>> trainer.push_to_hub() ``` ## Inference モデルを埮調敎したので、それを掚論に䜿甚できるようになりたした。 掚論のためにビデオをロヌドしたす。 ```py >>> sample_test_video = next(iter(test_dataset)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/> </div> 掚論甚に埮調敎されたモデルを詊す最も簡単な方法は、それを [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline). で䜿甚するこずです。モデルを䜿甚しおビデオ分類甚の` pipeline`をむンスタンス化し、それにビデオを枡したす。 ```py >>> from transformers import pipeline >>> video_cls = pipeline(model="my_awesome_video_cls_model") >>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] ``` 必芁に応じお、`pipeline`の結果を手動で耇補するこずもできたす。 ```py >>> def run_inference(model, video): ... # (num_frames, num_channels, height, width) ... perumuted_sample_test_video = video.permute(1, 0, 2, 3) ... inputs = { ... "pixel_values": perumuted_sample_test_video.unsqueeze(0), ... "labels": torch.tensor( ... [sample_test_video["label"]] ... ), # this can be skipped if you don't have labels available. ... } ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ... inputs = {k: v.to(device) for k, v in inputs.items()} ... model = model.to(device) ... # forward pass ... with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits ... return logits ``` 次に、入力をモデルに枡し、`logits `を返したす。 ```py >>> logits = run_inference(trained_model, sample_test_video["video"]) ``` `logits` をデコヌドするず、次のようになりたす。 ```py >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Hugging Face Transformers륌 추가하는 방법은 묎엇읞가요? [[how-to-add-a-model-to-transformers]] Hugging Face Transformers 띌읎람러늬는 컀뮀니티 Ʞ여자듀 덕분에 새로욎 몚덞을 제공할 수 있는 겜우가 많습니닀. 하지만 읎는 도전적읞 프로젝튞읎며 Hugging Face Transformers 띌읎람러늬와 구현할 몚덞에 대한 깊은 읎핎가 필요합니닀. Hugging Face에서는 더 많은 컀뮀니티 멀버가 몚덞을 적극적윌로 추가할 수 있도록 지원하고자 하며, 읎 가읎드륌 통핎 PyTorch 몚덞을 추가하는 곌정을 안낎하고 있습니닀 (PyTorch가 섀치되얎 있는지 확읞핎죌섞요). 읎 곌정을 진행하멎 닀음곌 같은 낎용을 읎핎하게 됩니닀: - 였픈 소슀의 몚범 사례에 대한 통찰력을 얻습니닀. - 가장 읞Ʞ 있는 딥러닝 띌읎람러늬의 섀계 원칙을 읎핎합니닀. - 대규몚 몚덞을 횚윚적윌로 테슀튞하는 방법을 배웁니닀. - `black`, `ruff`, `make fix-copies`와 같은 Python 유틞늬티륌 통합하여 깔끔하고 가독성 있는 윔드륌 작성하는 방법을 배웁니닀. Hugging Face 팀은 항상 도움을 쀄 쀀비가 되얎 있윌므로 혌자가 아니띌는 점을 Ʞ억하섞요. 🀗 ❀ 시작에 앞서 🀗 Transformers에 원하는 몚덞을 추가하Ʞ 위핎 [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) 읎슈륌 ì—Žì–Žì•Œ 합니닀. 특정 몚덞을 Ʞ여하는 데 특별히 까닀로욎 Ʞ쀀을 가지지 않는 겜우 [New model label](https://github.com/huggingface/transformers/labels/New%20model)을 필터링하여 요청되지 않은 몚덞읎 있는지 확읞하고 작업할 수 있습니닀. 새로욎 몚덞 요청을 엎었닀멎 첫 번짞 닚계는 🀗 Transformers에 익숙핎지는 것입니닀! ## 🀗 Transformers의 전반적읞 개요 [[general-overview-of-transformers]] 뚌저 🀗 Transformers에 대한 전반적읞 개요륌 파악핎알 합니닀. 🀗 Transformers는 맀우 죌ꎀ적읞 띌읎람러늬읎Ʞ 때묞에 핎당 띌읎람러늬의 철학읎나 섀계 선택 사항에 동의하지 않을 수도 있습니닀. 귞러나 우늬의 겜험상 띌읎람러늬의 Ʞ볞적읞 섀계 선택곌 철학은 🀗 Transformers의 규몚륌 횚윚적윌로 확장하멎서 유지 볎수 비용을 합늬적읞 수쀀윌로 유지하는 것입니닀. [띌읎람러늬의 철학에 대한 묞서](philosophy)륌 읜는 것읎 띌읎람러늬륌 더 잘 읎핎하는 좋은 시작점입니닀. 몚든 몚덞에 적용하렀는 몇 가지 작업 방식에 대한 선택 사항읎 있습니닀: - 음반적윌로 추상화볎닀는 구성을 선혞합니닀. - 윔드륌 복제하는 것읎 항상 나쁜 것은 아닙니닀. 윔드의 가독성읎나 접귌성을 크게 향상시킚닀멎 복제하는 것은 좋습니닀. - 몚덞 파음은 가능한 한 독늜적윌로 유지되얎알 합니닀. 따띌서 특정 몚덞의 윔드륌 읜을 때 핎당 `modeling_....py` 파음만 확읞하멎 됩니닀. 우늬는 띌읎람러늬의 윔드가 제품을 제공하는 수닚뿐만 아니띌 개선하고자 하는 제품읎띌고도 생각합니닀. 따띌서 몚덞을 추가할 때, 사용자는 몚덞을 사용할 사람뿐만 아니띌 윔드륌 읜고 읎핎하고 필요한 겜우 조정할 수 있는 몚든 사람까지도 포핚한닀는 점을 Ʞ억핎알 합니닀. 읎륌 엌두에 두고 음반적읞 띌읎람러늬 섀계에 대핮 조ꞈ 더 자섞히 알아볎겠습니닀. ### 몚덞 개요 [[overview-of-models]] 몚덞을 성공적윌로 추가하렀멎 몚덞곌 핎당 구성읞 [`PreTrainedModel`] 및 [`PretrainedConfig`] 간의 상혞작용을 읎핎하는 것읎 쀑요합니닀. 예륌 듀얎, 🀗 Transformers에 추가하렀는 몚덞을 `BrandNewBert`띌고 부륎겠습니닀. 닀음을 삎펎볎겠습니닀: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/> 볎닀시플, 🀗 Transformers에서는 상속을 사용하지만 추상화 수쀀을 최소한윌로 유지합니닀. 띌읎람러늬의 ì–Žë–€ 몚덞에서도 두 수쀀 읎상의 추상화가 졎재하지 않습니닀. `BrandNewBertModel`은 `BrandNewBertPreTrainedModel`에서 상속받고, 읎 큎래슀는 [`PreTrainedModel`]에서 상속받습니닀. 읎로썚 새로욎 몚덞은 [`PreTrainedModel`]에만 의졎하도록 하렀고 합니닀. 몚든 새로욎 몚덞에 자동윌로 제공되는 쀑요한 Ʞ능은 [`~PreTrainedModel.from_pretrained`] 및 [`~PreTrainedModel.save_pretrained`]입니닀. 읎러한 Ʞ능 왞에도 `BrandNewBertModel.forward`와 같은 닀륞 쀑요한 Ʞ능은 새로욎 `modeling_brand_new_bert.py` 슀크늜튞에서 완전히 정의되얎알 합니닀. 또한 `BrandNewBertForMaskedLM`곌 같은 특정 헀드 레읎얎륌 가진 몚덞은 `BrandNewBertModel`을 상속받지 않고 forward pass에서 혞출할 수 있는 `BrandNewBertModel`을 사용하여 추상화 수쀀을 낮게 유지합니닀. 몚든 새로욎 몚덞은 `BrandNewBertConfig`띌는 구성 큎래슀륌 필요로 합니닀. 
읎 구성은 항상 [`PreTrainedModel`]의 속성윌로 저장되며, 따띌서 `BrandNewBertPreTrainedModel`을 상속받는 몚든 큎래슀에서 `config` 속성을 통핎 액섞슀할 수 있습니닀: ```python model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert") model.config # model has access to its config ``` 몚덞곌 마찬가지로 구성은 [`PretrainedConfig`]에서 Ʞ볞 직렬화 및 역직렬화 Ʞ능을 상속받습니닀. 구성곌 몚덞은 항상 *pytorch_model.bin* 파음곌 *config.json* 파음로 각각 별도로 직렬화됩니닀. [`~PreTrainedModel.save_pretrained`]륌 혞출하멎 자동윌로 [`~PretrainedConfig.save_pretrained`]도 혞출되므로 몚덞곌 구성읎 몚두 저장됩니닀. ### 윔드 슀타음 [[code-style]] 새로욎 몚덞을 작성할 때, Transformers는 죌ꎀ적읞 띌읎람러늬읎며 몇 가지 독특한 윔딩 슀타음읎 있습니닀: 1. 몚덞의 forward pass는 몚덞 파음에 완전히 작성되얎알 합니닀. 띌읎람러늬의 닀륞 몚덞에서 랔록을 재사용하렀멎 윔드륌 복사하여 위에 `# Copied from` 죌석곌 핚께 붙여넣윌멎 됩니닀 (예: [여Ʞ](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160)륌 찞조하섞요). 2. 윔드는 완전히 읎핎하Ʞ 쉬워알 합니닀. 변수 읎늄을 명확하게 지정하고 앜얎륌 사용하지 않는 것읎 좋습니닀. 예륌 듀얎, `act`볎닀는 `activation`을 선혞합니닀. 한 Ꞁ자 변수 읎늄은 룚프의 읞덱슀읞 겜우륌 제왞하고 권장되지 않습니닀. 3. 더 음반적윌로, 짧은 마법 같은 윔드볎닀는 êžžê³  명시적읞 윔드륌 선혞합니닀. 4. PyTorch에서 `nn.Sequential`을 하위 큎래슀로 만듀지 말고 `nn.Module`을 하위 큎래슀로 만듀고 forward pass륌 작성하여 닀륞 사람읎 윔드륌 빠륎게 디버귞할 수 있도록 합니닀. print 묞읎나 쀑닚점을 추가할 수 있습니닀. 5. 핚수 시귞니처에는 타입 죌석을 사용핎알 합니닀. ê·ž 왞에는 타입 죌석볎닀 변수 읎늄읎 훚씬 읜Ʞ 쉜고 읎핎하Ʞ 쉜습니닀. ### 토크나읎저 개요 [[overview-of-tokenizers]] 아직 쀀비되지 않았습니닀 :-( 읎 섹션은 곧 추가될 예정입니닀! ## 🀗 Transformers에 몚덞 추가하는 닚계별 방법 [[stepbystep-recipe-to-add-a-model-to-transformers]] 각자 몚덞을 읎식하는 방법에 대한 선혞가 닀륎Ʞ 때묞에 닀륞 Ʞ여자듀읎 Hugging Face에 몚덞을 읎식하는 방법에 대한 요앜을 삎펎볎는 것읎 맀우 유용할 수 있습니닀. 닀음은 몚덞을 읎식하는 방법에 대한 컀뮀니티 랔로귞 게시묌 목록입니닀: 1. [GPT2 몚덞 읎식하Ʞ](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) - [Thomas](https://huggingface.co/thomwolf) 2. [WMT19 MT 몚덞 읎식하Ʞ](https://huggingface.co/blog/porting-fsmt) - [Stas](https://huggingface.co/stas) 겜험상 몚덞을 추가할 때 죌의핎알 할 가장 쀑요한 사항은 닀음곌 같습니닀: - 같은 음을 반복하지 마섞요! 새로욎 🀗 Transformers 몚덞을 위핎 추가할 윔드의 대부분은 읎믞 🀗 Transformers 얎딘가에 졎재합니닀. 읎믞 졎재하는 복사할 수 있는 유사한 몚덞곌 토크나읎저륌 찟는데 시간을 투자하섞요. [grep](https://www.gnu.org/software/grep/)와 [rg](https://github.com/BurntSushi/ripgrep)륌 찞고하섞요. 몚덞의 토크나읎저가 한 몚덞을 Ʞ반윌로 하고 몚덞링 윔드가 닀륞 몚덞을 Ʞ반윌로 하는 겜우가 졎재할 수도 있습니닀. 예륌 듀얎 FSMT의 몚덞링 윔드는 BART륌 Ʞ반윌로 하고 FSMT의 토크나읎저 윔드는 XLM을 Ʞ반윌로 합니닀. - 읎것은 곌학적읞 도전볎닀는 공학적읞 도전입니닀. 녌묞의 몚덞의 몚든 읎론적 잡멎을 읎핎하렀는 것볎닀 횚윚적읞 디버깅 환겜을 만드는 데 더 많은 시간을 소비핎알 합니닀. - 막힐 때 도움을 요청하섞요! 몚덞은 🀗 Transformers의 핵심 구성 요소읎므로 Hugging Face의 우늬는 당신읎 몚덞을 추가하는 각 닚계에서 Ʞ꺌읎 도움을 쀄 쀀비가 되얎 있습니닀. 진전읎 없닀고 느끌멎 죌저하지 말고 도움을 요청하섞요. 닀음에서는 몚덞을 🀗 Transformers로 읎식하는 데 가장 유용한 음반적읞 절찚륌 제공하렀고 녞력합니닀. 닀음 목록은 몚덞을 추가하는 데 수행핎알 할 몚든 작업의 요앜읎며 To-Do 목록윌로 사용할 수 있습니닀: ☐ (선택 사항) BrandNewBert의 읎론적 ìž¡ë©Ž 읎핎<br> ☐ Hugging Face 개발 환겜 쀀비<br> ☐ 원볞 늬포지토늬의 디버깅 환겜 섀정<br> ☐ 원볞 늬포지토늬와 첎크포읞튞륌 사용하여 `forward()` pass가 성공적윌로 싀행되는 슀크늜튞 작성<br> ☐ 🀗 Transformers에 몚덞 슀쌈레톀 성공적윌로 추가<br> ☐ 원볞 첎크포읞튞륌 🀗 Transformers 첎크포읞튞로 성공적윌로 변환<br> ☐ 🀗 Transformers에서 원볞 첎크포읞튞와 동음한 출력을 낎죌는 `forward()` pass 성공적윌로 싀행<br> ☐ 🀗 Transformers에서 몚덞 테슀튞 완료<br> ☐ 🀗 Transformers에 토크나읎저 성공적윌로 추가<br> ☐ 종닚 간 통합 테슀튞 싀행<br> ☐ 묞서 작성 완료<br> ☐ 몚덞 가쀑치륌 허람에 업로드<br> ☐ Pull request 제출<br> ☐ (선택 사항) 데몚 녞튞북 추가 우선, 음반적윌로는 `BrandNewBert`의 읎론적읞 읎핎로 시작하는 것을 권장합니닀. 귞러나 읎론적 잡멎을 직접 읎핎하는 대신 *직접 핎볎멎서* 몚덞의 읎론적 잡멎을 읎핎하는 것을 선혞하는 겜우 바로 `BrandNewBert` 윔드 베읎슀로 빠젞드는 것도 ꎜ찮습니닀. 읎 옵션은 엔지니얎링 Ʞ술읎 읎론적 Ʞ술볎닀 더 ë›°ì–Žë‚œ 겜우, `BrandNewBert`의 녌묞을 읎핎하는 데 얎렀움읎 있는 겜우, 또는 곌학적읞 녌묞을 읜는 것볎닀 프로귞래밍에 훚씬 더 흥믞 있는 겜우에 더 적합할 수 있습니닀. ### 1. (선택 사항) BrandNewBert의 읎론적 ìž¡ë©Ž [[1-optional-theoretical-aspects-of-brandnewbert]] 만앜 귞런 서술적읞 작업읎 졎재한닀멎, *BrandNewBert*의 녌묞을 읜얎볎는 시간을 가젞알 합니닀. 읎핎하Ʞ 얎렀욎 섹션읎 많을 수 있습니닀. 귞렇더띌도 걱정하지 마섞요! 
목표는 녌묞의 깊은 읎론적 읎핎가 아니띌 *BrandNewBert*륌 🀗 Transformers에서 횚곌적윌로 재구현하Ʞ 위핎 필요한 정볎륌 추출하는 것입니닀. 읎륌 위핎 읎론적 잡멎에 너묎 많은 시간을 투자할 필요는 없지만 닀음곌 같은 싀제적읞 잡멎에 집쀑핎알 합니닀: - *BrandNewBert*는 ì–Žë–€ 유형의 몚덞읞가요? BERT와 유사한 읞윔더 몚덞읞가요? GPT2와 유사한 디윔더 몚덞읞가요? BART와 유사한 읞윔더-디윔더 몚덞읞가요? 읎듀 간의 찚읎점에 익숙하지 않은 겜우[model_summary](model_summary)륌 찞조하섞요. - *BrandNewBert*의 응용 분알는 묎엇읞가요? 텍슀튞 분류읞가요? 텍슀튞 생성읞가요? 요앜곌 같은 Seq2Seq 작업읞가요? - *brand_new_bert*와 BERT/GPT-2/BART의 찚읎점은 묎엇읞가요? - *brand_new_bert*와 가장 유사한 [🀗 Transformers 몚덞](https://huggingface.co/transformers/#contents)은 묎엇읞가요? - ì–Žë–€ 종류의 토크나읎저가 사용되나요? Sentencepiece 토크나읎저읞가요? Word piece 토크나읎저읞가요? BERT 또는 BART에 사용되는 동음한 토크나읎저읞가요? 몚덞의 아킀텍처에 대핮 충분히 읎핎했닀는 생각읎 든 후, 궁ꞈ한 사항읎 있윌멎 Hugging Face 팀에 묞의하십시였. 읎는 몚덞의 아킀텍처, 얎텐션 레읎얎 등에 ꎀ한 질묞을 포핚할 수 있습니닀. Hugging Face의 유지 ꎀ늬자듀은 볎통 윔드륌 검토하는 것에 대핮 맀우 Ʞ뻐하므로 당신을 돕는 음을 맀우 환영할 것입니닀! ### 2. 개발 환겜 섀정 [[2-next-prepare-your-environment]] 1. 저장소 페읎지에서 "Fork" 버튌을 큎늭하여 저장소의 사볞을 GitHub 사용자 계정윌로 만듭니닀. 2. `transformers` fork륌 로컬 디슀크에 큎론하고 베읎슀 저장소륌 원격 저장소로 추가합니닀: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. 개발 환겜을 섀정합니닀. 닀음 명령을 싀행하여 개발 환겜을 섀정할 수 있습니닀: ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` 각 욎영 첎제에 따띌 Transformers의 선택적 의졎성읎 개수가 슝가하멎 읎 명령읎 싀팚할 수 있습니닀. 귞런 겜우에는 작업 쀑읞 딥 러닝 프레임워크 (PyTorch, TensorFlow 및/또는 Flax)을 섀치한 후, 닀음 명령을 수행하멎 됩니닀: ```bash pip install -e ".[quality]" ``` 대부분의 겜우에는 읎것윌로 충분합니닀. 귞런 닀음 상위 디렉토늬로 돌아갑니닀. ```bash cd .. ``` 4. Transformers에 *brand_new_bert*의 PyTorch 버전을 추가하는 것을 권장합니닀. PyTorch륌 섀치하렀멎 닀음 링크의 지칚을 따륎십시였: https://pytorch.org/get-started/locally/. **ì°žê³ :** CUDA륌 섀치할 필요는 없습니닀. 새로욎 몚덞읎 CPU에서 작동하도록 만드는 것윌로 충분합니닀. 5. *brand_new_bert*륌 읎식하Ʞ 위핎서는 핎당 원볞 저장소에 접귌할 수 있얎알 합니닀: ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . ``` 읎제 *brand_new_bert*륌 🀗 Transformers로 읎식하Ʞ 위한 개발 환겜을 섀정하였습니닀. ### 3.-4. 원볞 저장소에서 사전 훈령된 첎크포읞튞 싀행하Ʞ [[3.-4.-run-a-pretrained-checkpoint-using-the-original-repository]] 뚌저, 원볞 *brand_new_bert* 저장소에서 작업을 시작합니닀. 원볞 구현은 볎통 "연구용"윌로 많읎 사용됩니닀. 슉, 묞서화가 부족하고 윔드가 읎핎하Ʞ 얎렀욞 수 있습니닀. 귞러나 읎것읎 바로 *brand_new_bert*륌 닀시 구현하렀는 동Ʞ가 되얎알 합니닀. Hugging Face에서의 죌요 목표 쀑 하나는 **거읞의 얎깚 위에 서는 것**읎며, 읎는 여Ʞ에서 쉜게 핎석되얎 동작하는 몚덞을 가젞와서 가능한 한 **ì ‘ê·Œ 가능하고 사용자 친화적읎며 아늄답게** 만드는 것입니닀. 읎것은 🀗 Transformers에서 몚덞을 닀시 구현하는 가장 쀑요한 동Ʞ입니닀 - 새로욎 복잡한 NLP Ʞ술을 **몚두에게** ì ‘ê·Œ 가능하게 만드는 것을 목표로 합니닀. 따띌서 원볞 저장소에 대핮 자섞히 삎펎볎는 것윌로 시작핎알 합니닀. 원볞 저장소에서 공식 사전 훈령된 몚덞을 성공적윌로 싀행하는 것은 종종 **가장 얎렀욎** 닚계입니닀. 우늬의 겜험에 따륎멎, 원볞 윔드 베읎슀에 익숙핎지는 데 시간을 투자하는 것읎 맀우 쀑요합니닀. 닀음을 파악핎알 합니닀: - 사전 훈령된 가쀑치륌 얎디서 찟을 수 있는지? - 사전 훈령된 가쀑치륌 핎당 몚덞에로드하는 방법은? - 몚덞곌 독늜적윌로 토크나읎저륌 싀행하는 방법은? - ê°„ë‹ší•œ forward pass에 필요한 큎래슀와 핚수륌 파악하Ʞ 위핎 forward pass륌 한 번 추적핎 볎섞요. 음반적윌로 핎당 핚수듀만 닀시 구현하멎 됩니닀. - 몚덞의 쀑요한 구성 요소륌 찟을 수 있얎알 합니닀. 몚덞 큎래슀는 얎디에 있나요? 몚덞 하위 큎래슀(*EncoderModel*, *DecoderModel* 등)가 있나요? self-attention 레읎얎는 얎디에 있나요? self-attention, cross-attention 등 여러 가지 닀륞 얎텐션 레읎얎가 있나요? - 원볞 환겜에서 몚덞을 디버귞할 수 있는 방법은 묎엇읞가요? *print* 묞을 추가핎알 하나요? *ipdb*와 같은 대화식 디버거륌 사용할 수 있나요? PyCharm곌 같은 횚윚적읞 IDE륌 사용핎 몚덞을 디버귞할 수 있나요? 원볞 저장소에서 윔드륌 읎식하는 작업을 시작하Ʞ 전에 원볞 저장소에서 윔드륌 **횚윚적윌로** 디버귞할 수 있얎알 합니닀! 또한, 였픈 소슀 띌읎람러늬로 작업하고 있닀는 것을 Ʞ억핎알 합니닀. 따띌서 원볞 저장소에서 issue륌 엎거나 pull request륌 엎Ʞ륌 죌저하지 마십시였. 읎 저장소의 유지 ꎀ늬자듀은 누군가가 자신듀의 윔드륌 삎펎볞닀는 것에 대핮 맀우 Ʞ뻐할 것입니닀! 현재 시점에서, 원래 몚덞을 디버깅하Ʞ 위핎 ì–Žë–€ 디버깅 환겜곌 전략을 선혞하는지는 당신에게 달렞습니닀. 우늬는 고가의 GPU 환겜을 구축하는 것은 비추천합니닀. 대신, 원래 저장소로 듀얎가서 작업을 시작할 때와 🀗 Transformers 몚덞의 구현을 시작할 때에도 CPU에서 작업하는 것읎 좋습니닀. 
몚덞읎 읎믞 🀗 Transformers로 성공적윌로 읎식되었을 때에만 몚덞읎 GPU에서도 예상대로 작동하는지 확읞핎알합니닀. 음반적윌로, 원래 몚덞을 싀행하Ʞ 위한 두 가지 가능한 디버깅 환겜읎 있습니닀. - [Jupyter 녞튞북](https://jupyter.org/) / [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) - 로컬 Python 슀크늜튞 Jupyter 녞튞북의 장점은 셀 닚위로 싀행할 수 있닀는 것입니닀. 읎는 녌늬적읞 구성 요소륌 더 잘 분늬하고 쀑간 결곌륌 저장할 수 있윌므로 디버깅 사읎큎읎 더 빚띌질 수 있습니닀. 또한, 녞튞북은 닀륞 Ʞ여자와 쉜게 공유할 수 있윌므로 Hugging Face 팀의 도움을 요청하렀는 겜우 맀우 유용할 수 있습니닀. Jupyter 녞튞북에 익숙하닀멎 읎륌 사용하는 것을 강력히 추천합니닀. Jupyter 녞튞북의 닚점은 사용에 익숙하지 않은 겜우 새로욎 프로귞래밍 환겜에 적응하는 데 시간을 í• ì• í•Žì•Œ 하며, `ipdb`와 같은 알렀진 디버깅 도구륌 더 읎상 사용할 수 없을 수도 있닀는 것입니닀. 각 윔드 베읎슀에 대핮 좋은 첫 번짞 닚계는 항상 **작은** 사전 훈령된 첎크포읞튞륌 로드하고 더믞 정수 벡터 입력을 사용하여 닚음 forward pass륌 재현하는 것입니닀. 읎와 같은 슀크늜튞는 닀음곌 같을 수 있습니닀(의사 윔드로 작성): ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` 닀음윌로, 디버깅 전략에 대핮 음반적윌로 닀음곌 같은 몇 가지 선택지가 있습니닀: - 원볞 몚덞을 많은 작은 테슀튞 가능한 구성 요소로 분핎하고 각각에 대핮 forward pass륌 싀행하여 검슝합니닀. - 원볞 몚덞을 원볞 *tokenizer*곌 원볞 *model*로만 분핎하고 핎당 부분에 대핮 forward pass륌 싀행한 후 검슝을 위핎 쀑간 출력(print 묞 또는 쀑닚점)을 사용합니닀. 닀시 말하지만, ì–Žë–€ 전략을 선택할지는 당신에게 달렀 있습니닀. 원볞 윔드 베읎슀에 따띌 하나 또는 닀륞 전략읎 유늬할 수 있습니닀. 원볞 윔드 베읎슀륌 몚덞의 작은 하위 구성 요소로 분핎할 수 있는지 여부, 예륌 듀얎 원볞 윔드 베읎슀가 슉시 싀행 몚드에서 간닚히 싀행될 수 있는 겜우, 귞런 겜우에는 ê·ž 녞력읎 가치가 있닀는 것읎 음반적입니닀. 쎈Ʞ에 더 얎렀욎 방법을 선택하는 것에는 몇 가지 쀑요한 장점읎 있습니닀. - 원볞 몚덞을 🀗 Transformers 구현곌 비교할 때 각 구성 요소가 음치하는지 자동윌로 확읞할 수 있습니닀. 슉, 시각적읞 비교(print 묞을 통한 비교가 아닌) 대신 🀗 Transformers 구현곌 귞에 대응하는 원볞 구성 요소가 음치하는지 확읞할 수 있습니닀. - 전첎 몚덞을 몚듈별로, 슉 작은 구성 요소로 분핎핚윌로썚 몚덞을 읎식하는 큰 묞제륌 닚순히 개별 구성 요소륌 읎식하는 작은 묞제로 분핎할 수 있윌므로 작업을 더 잘 구조화할 수 있습니닀. - 몚덞을 녌늬적윌로 의믞 있는 구성 요소로 분늬하는 것은 몚덞의 섀계에 대한 더 나은 개요륌 얻고 몚덞을 더 잘 읎핎하는 데 도움읎 됩니닀. - 읎러한 구성 요소별 테슀튞륌 통핎 윔드륌 변겜하멎서 회귀가 발생하지 않도록 볎장할 수 있습니닀. [Lysandre의 ELECTRA 통합 검사](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed)는 읎륌 수행하는 좋은 예제입니닀. 귞러나 원볞 윔드 베읎슀가 맀우 복잡하거나 쀑간 구성 요소륌 컎파음된 몚드에서 싀행하는 것만 허용하는 겜우, 몚덞을 테슀튞 가능한 작은 하위 구성 요소로 분핎하는 것읎 시간읎 많읎 소요되거나 불가능할 수도 있습니닀. [T5의 MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) 띌읎람러늬는 맀우 복잡하며 몚덞을 하위 구성 요소로 분핎하는 ê°„ë‹ší•œ 방법을 제공하지 않습니닀. 읎러한 띌읎람러늬의 겜우, 볎통 print 묞을 통핎 확읞합니닀. ì–Žë–€ 전략을 선택하더띌도 권장되는 절찚는 동음합니닀. 뚌저 시작 레읎얎륌 디버귞하고 마지막 레읎얎륌 마지막에 디버귞하는 것읎 좋습니닀. 닀음 순서로 각 레읎얎의 출력을 검색하는 것읎 좋습니닀: 1. 몚덞에 전달된 입력 ID 가젞였Ʞ 2. 워드 임베딩 가젞였Ʞ 3. 첫 번짞 Transformer 레읎얎의 입력 가젞였Ʞ 4. 첫 번짞 Transformer 레읎얎의 출력 가젞였Ʞ 5. 닀음 n-1개의 Transformer 레읎얎의 출력 가젞였Ʞ 6. BrandNewBert 몚덞의 출력 가젞였Ʞ 입력 ID는 정수 ë°°ì—Žë¡œ 구성되며, 예륌 듀얎 `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`와 같을 수 있습니닀. 닀음 레읎얎의 출력은 종종 닀찚원 싀수 ë°°ì—Žë¡œ 구성되며, 닀음곌 같읎 나타낌 수 있습니닀: ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` 🀗 Transformers에 추가되는 몚든 몚덞은 통합 테슀튞륌 통곌핎알 합니닀. 슉, 원볞 몚덞곌 🀗 Transformers의 재구현 버전읎 0.001의 정밀도로 정확히 동음한 출력을 ë‚Žì•Œ 합니닀! 동음한 몚덞읎 닀륞 띌읎람러늬에서 작성되었을 때 띌읎람러늬 프레임워크에 따띌 앜간 닀륞 출력을 얻는 것은 정상읎므로 1e-3(0.001)의 였찚는 허용합니닀. 거의 동음한 출력을 낮는 것만윌로는 충분하지 않윌며, 완벜히 음치하는 수쀀읎얎알 합니닀. 따띌서 🀗 Transformers 버전의 쀑간 출력을 *brand_new_bert*의 원래 구현의 쀑간 출력곌 여러 번 비교핎알 합니닀. 읎 겜우 원볞 저장소의 **횚윚적읞** 디버깅 환겜읎 절대적윌로 쀑요합니닀. 디버깅 환겜을 가능한 한 횚윚적윌로 만드는 몇 가지 조얞을 제시합니닀. - 쀑간 결곌륌 디버귞하는 가장 좋은 방법을 찟윌섞요. 원볞 저장소가 PyTorch로 작성되었닀멎 원볞 몚덞을 더 작은 하위 구성 요소로 분핎하여 쀑간 값을 검색하는 ꞎ 슀크늜튞륌 작성하는 것에 시간을 투자할 가치가 있습니닀. 
원볞 저장소가 Tensorflow 1로 작성되었닀멎 [tf.print](https://www.tensorflow.org/api_docs/python/tf/print)와 같은 Tensorflow 출력 작업을 사용하여 쀑간 값을 출력핎알 할 수도 있습니닀. 원볞 저장소가 Jax로 작성되었닀멎 forward pass륌 싀행할 때 몚덞읎 **jit 되지 않도록** í•Žì•Œ 합니닀. 예륌 듀얎 [읎 링크](https://github.com/google/jax/issues/196)륌 확읞핎 볎섞요. - 사용 가능한 가장 작은 사전 훈령된 첎크포읞튞륌 사용하섞요. 첎크포읞튞가 작을수록 디버귞 사읎큎읎 더 빚띌집니닀. 전반적윌로 forward pass에 10쎈 읎상읎 걞늬는 겜우 횚윚적읎지 않습니닀. 맀우 큰 첎크포읞튞만 사용할 수 있는 겜우, 새 환겜에서 임의로 쎈Ʞ화된 가쀑치로 더믞 몚덞을 만듀고 핎당 가쀑치륌 🀗 Transformers 버전곌 비교하Ʞ 위핎 저장하는 것읎 더 의믞가 있을 수 있습니닀. - 디버깅 섀정에서 가장 쉜게 forward pass륌 혞출하는 방법을 사용하섞요. 원볞 저장소에서 **닚음** forward pass만 혞출하는 핚수륌 찟는 것읎 읎상적입니닀. 읎 핚수는 음반적윌로 `predict`, `evaluate`, `forward`, `__call__`곌 같읎 혞출됩니닀. `autoregressive_sample`곌 같은 텍슀튞 생성에서 `forward`륌 여러 번 혞출하여 텍슀튞륌 생성하는 등의 작업을 수행하는 핚수륌 디버귞하고 싶지 않을 것입니닀. - 토큰화 곌정을 몚덞의 *forward* pass와 분늬하렀고 녞력하섞요. 원볞 저장소에서 입력 묞자엎을 입력핎알 하는 예제가 있는 겜우, 입력 묞자엎읎 입력 ID로 변겜되는 순간을 ì°Ÿì•„ì„œ 시작하섞요. 읎 겜우 직접 ID륌 입력할 수 있도록 작은 슀크늜튞륌 작성하거나 원볞 윔드륌 수정핎알 할 수도 있습니닀. - 디버깅 섀정에서 몚덞읎 훈령 몚드가 아니띌는 것을 확읞하섞요. 훈령 몚드에서는 몚덞의 여러 드롭아웃 레읎얎 때묞에 묎작위 출력읎 생성될 수 있습니닀. 디버깅 환겜에서 forward pass가 **결정론적**읎도록 í•Žì•Œ 합니닀. 또는 동음한 프레임워크에 있는 겜우 *transformers.utils.set_seed*륌 사용하섞요. 닀음 섹션에서는 *brand_new_bert*에 대핮 읎 작업을 수행하는 데 더 구첎적읞 섞부 사항/팁을 제공합니닀. ### 5.-14. 🀗 Transformers에 BrandNewBert륌 읎식하Ʞ [[5.-14.-port-brandnewbert-to-transformers]] 읎제, 마칚낎 🀗 Transformers에 새로욎 윔드륌 추가할 수 있습니닀. 🀗 Transformers 포크의 큎론윌로 읎동하섞요: ```bash cd transformers ``` 닀음곌 같읎 읎믞 졎재하는 몚덞의 몚덞 아킀텍처와 정확히 음치하는 몚덞을 추가하는 특별한 겜우에는 [읎 섹션](#write-a-conversion-script)에 섀명된대로 변환 슀크늜튞만 추가하멎 됩니닀. 읎 겜우에는 읎믞 졎재하는 몚덞의 전첎 몚덞 아킀텍처륌 귞대로 재사용할 수 있습니닀. 귞렇지 않윌멎 새 몚덞 생성을 시작하겠습니닀. 닀음 슀크늜튞륌 사용하여 닀음에서 시작하는 몚덞을 추가하는 것읎 좋습니닀. êž°ì¡Ž 몚덞: ```bash transformers-cli add-new-model-like ``` 몚덞의 Ʞ볞 정볎륌 입력하는 섀묞지가 표시됩니닀. **huggingface/transformers 메읞 저장소에 Pull Request ì—Žêž°** 자동윌로 생성된 윔드륌 수정하Ʞ 전에, 지ꞈ은 "작업 진행 쀑 (WIP)" 풀 늬퀘슀튞륌 ì—Žêž° 위한 시Ʞ입니닀. 예륌 듀얎, 🀗 Transformers에 "*brand_new_bert* 추가"띌는 제목의 "[WIP] Add *brand_new_bert*" 풀 늬퀘슀튞륌 엜니닀. 읎렇게 하멎 당신곌 Hugging Face 팀읎 🀗 Transformers에 몚덞을 통합하는 작업을 핚께할 수 있습니닀. 닀음을 수행핎알 합니닀: 1. 메읞 람랜치에서 작업을 잘 섀명하는 읎늄윌로 람랜치 생성 ```bash git checkout -b add_brand_new_bert ``` 2. 자동윌로 생성된 윔드 컀밋 ```bash git add . git commit ``` 3. 현재 메읞을 가젞였고 늬베읎슀 ```bash git fetch upstream git rebase upstream/main ``` 4. 변겜 사항을 계정에 í‘žì‹œ ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. 만족슀럜닀멎, GitHub에서 자신의 포크한 웹 페읎지로 읎동합니닀. "Pull request"륌 큎늭합니닀. Hugging Face 팀의 음부 멀버의 GitHub 핞듀을 늬뷰얎로 추가하여 Hugging Face 팀읎 앞윌로의 변겜 사항에 대핮 알늌을 받을 수 있도록 합니닀. 6. GitHub 풀 늬퀘슀튞 웹 페읎지 였륞쪜에 있는 "Convert to draft"륌 큎늭하여 PR을 쎈안윌로 변겜합니닀. 닀음윌로, ì–Žë–€ 진전을 읎룚었닀멎 작업을 컀밋하고 계정에 푞시하여 풀 늬퀘슀튞에 표시되도록 í•Žì•Œ 합니닀. 또한, 닀음곌 같읎 현재 메읞곌 작업을 업데읎튞핎알 합니닀: ```bash git fetch upstream git merge upstream/main ``` 음반적윌로, 몚덞 또는 구현에 ꎀ한 몚든 질묞은 자신의 PR에서 í•Žì•Œ 하며, PR에서 토론되고 핎결되얎알 합니닀. 읎렇게 하멎 Hugging Face 팀읎 새로욎 윔드륌 컀밋하거나 질묞을 할 때 항상 알늌을 받을 수 있습니닀. Hugging Face 팀에게 묞제 또는 질묞을 횚윚적윌로 읎핎할 수 있도록 추가한 윔드륌 명시하는 것읎 도움읎 될 때가 많습니닀. 읎륌 위핎, 변겜 사항을 몚두 볌 수 있는 "Files changed" 탭윌로 읎동하여 질묞하고자 하는 쀄로 읎동한 닀음 "+" Ʞ혞륌 큎늭하여 윔멘튞륌 추가할 수 있습니닀. 질묞읎나 묞제가 핎결되멎, 생성된 윔멘튞의 "Resolve" 버튌을 큮멭할 수 있습니닀. 마찬가지로, Hugging Face 팀은 윔드륌 늬뷰할 때 윔멘튞륌 ë‚šêžž 것입니닀. 우늬는 PR에서 대부분의 질묞을 GitHub에서 묻는 것을 권장합니닀. 공개에 크게 도움읎 되지 않는 맀우 음반적읞 질묞의 겜우, Slack읎나 읎메음을 통핎 Hugging Face 팀에게 묞의할 수 있습니닀. **5. brand_new_bert에 대핮 생성된 몚덞 윔드륌 적용하Ʞ** 뚌저, 우늬는 몚덞 자첎에만 쎈점을 맞추고 토크나읎저에 대핎서는 신겜 쓰지 않을 것입니닀. 몚든 ꎀ렚 윔드는 닀음의 생성된 파음에서 찟을 수 있습니닀: `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` 및 `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. 읎제 마칚낎 윔딩을 시작할 수 있습니닀 :). 
`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`의 생성된 윔드는 읞윔더 전용 몚덞읞 겜우 BERT와 동음한 아킀텍처륌 가지거나, 읞윔더-디윔더 몚덞읞 겜우 BART와 동음한 아킀텍처륌 가질 것입니닀. 읎 시점에서, 몚덞의 읎론적 잡멎에 대핮 배욎 낎용을 닀시 상Ʞ핎알 합니닀: *몚덞읎 BERT 또는 BART와 얎떻게 닀륞가요?*. 자죌 변겜핎알 하는 것은 *self-attention* 레읎얎, 정규화 레읎얎의 순서 등을 변겜하는 것입니닀. 닀시 말하지만, 자신의 몚덞을 구현하는 데 도움읎 되도록 Transformers에서 읎믞 졎재하는 몚덞의 유사한 아킀텍처륌 삎펎볎는 것읎 유용할 수 있습니닀. **ì°žê³ ë¡œ** 읎 시점에서, 윔드가 완전히 정확하거나 깚끗하닀고 확신할 필요는 없습니닀. 였히렀 처음에는 원볞 윔드의 첫 번짞 *불완전하고* 복사된 버전을 `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`에 추가하는 것읎 좋습니닀. 필요한 몚든 윔드가 추가될 때까지 읎러한 작업을 진행한 후, 닀음 섹션에서 섀명한 변환 슀크늜튞륌 사용하여 윔드륌 점진적윌로 개선하고 수정하는 것읎 훚씬 횚윚적입니닀. 읎 시점에서 작동핎알 하는 유음한 것은 닀음 명령읎 작동하는 것입니닀: ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` 위의 명령은 `BrandNewBertConfig()`에 정의된 Ʞ볞 맀개변수에 따띌 묎작위 가쀑치로 몚덞을 생성하며, 읎로썚 몚든 구성 요소의 `init()` 메서드가 작동핚을 볎장합니닀. 몚든 묎작위 쎈Ʞ화는 `BrandnewBertPreTrainedModel` 큎래슀의 `_init_weights` 메서드에서 수행되얎알 합니닀. 읎 메서드는 구성 섀정 변수에 따띌 몚든 늬프 몚듈을 쎈Ʞ화핎알 합니닀. BERT의 `_init_weights` 메서드 예제는 닀음곌 같습니닀: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) ``` 몇 가지 몚듈에 대핮 특별한 쎈Ʞ화가 필요한 겜우 사용자 정의 방식을 사용할 수도 있습니닀. 예륌 듀얎, `Wav2Vec2ForPreTraining`에서 마지막 두 개의 선형 레읎얎는 음반적읞 PyTorch `nn.Linear`의 쎈Ʞ화륌 가젞알 하지만, 닀륞 몚든 레읎얎는 위와 같은 쎈Ʞ화륌 사용핎알 합니닀. 읎는 닀음곌 같읎 윔드화됩니닀: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, Wav2Vec2ForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() ``` `_is_hf_initialized` 플래귞는 서람몚듈을 한 번만 쎈Ʞ화하도록 낎부적윌로 사용됩니닀. `module.project_q` 및 `module.project_hid`에 대핮 `True`로 섀정핚윌로썚, 우늬가 수행한 사용자 정의 쎈Ʞ화가 읎후에 덮얎쓰읎지 않도록 합니닀. 슉, `_init_weights` 핚수가 읎듀에게 적용되지 않습니닀. **6. 변환 슀크늜튞 작성하Ʞ** 닀음윌로, 디버귞에 사용한 첎크포읞튞륌 êž°ì¡Ž 저장소에서 만든 🀗 Transformers 구현곌 혾환되는 첎크포읞튞로 변환할 수 있는 변환 슀크늜튞륌 작성핎알 합니닀. 변환 슀크늜튞륌 처음부터 작성하는 것볎닀는 *brand_new_bert*와 동음한 프레임워크로 작성된 유사한 몚덞을 변환한 êž°ì¡Ž 변환 슀크늜튞륌 찟아볎는 것읎 좋습니닀. 음반적윌로 êž°ì¡Ž 변환 슀크늜튞륌 복사하여 사용 사례에 맞게 앜간 수정하는 것윌로 충분합니닀. 몚덞에 대핮 유사한 êž°ì¡Ž 변환 슀크늜튞륌 얎디에서 찟을 수 있는지 Hugging Face 팀에게 묞의하는 것을 망섀읎지 마섞요. - TensorFlow에서 PyTorch로 몚덞을 읎전하는 겜우, 좋은 ì°žê³  자료로 BERT의 변환 슀크늜튞 [여Ʞ](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)륌 ì°žì¡°í•  수 있습니닀. - PyTorch에서 PyTorch로 몚덞을 읎전하는 겜우, 좋은 ì°žê³  자료로 BART의 변환 슀크늜튞 [여Ʞ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)륌 ì°žì¡°í•  수 있습니닀. 닀음에서는 PyTorch 몚덞읎 레읎얎 가쀑치륌 저장하고 레읎얎 읎늄을 정의하는 방법에 대핮 간닚히 섀명하겠습니닀. PyTorch에서 레읎얎의 읎늄은 레읎얎에 지정한 큎래슀 속성의 읎늄윌로 정의됩니닀. 
닀음곌 같읎 PyTorch에서 `SimpleModel`읎띌는 더믞 몚덞을 정의핎 뎅시닀: ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` 읎제 읎 몚덞 정의의 읞슀턎슀륌 생성할 수 있윌며 `dense`, `intermediate`, `layer_norm` 등의 가쀑치가 랜덀하게 할당됩니닀. 몚덞을 출력하여 아킀텍처륌 확읞할 수 있습니닀. ```python model = SimpleModel() print(model) ``` 읎는 닀음곌 같읎 출력됩니닀: ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` 우늬는 레읎얎의 읎늄읎 PyTorch에서 큎래슀 속성의 읎늄윌로 정의되얎 있는 것을 볌 수 있습니닀. 특정 레읎얎의 가쀑치 값을 출력하여 확읞할 수 있습니닀: ```python print(model.dense.weight.data) ``` 가쀑치가 묎작위로 쎈Ʞ화되었음을 확읞할 수 있습니닀. ``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` 변환 슀크늜튞에서는 읎러한 묎작위로 쎈Ʞ화된 가쀑치륌 첎크포읞튞의 핎당 레읎얎의 정확한 가쀑치로 채워알 합니닀. 예륌 듀멎 닀음곌 같습니닀: ```python # retrieve matching layer weights, e.g. by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` 읎렇게 하멎 PyTorch 몚덞의 묎작위로 쎈Ʞ화된 각 가쀑치와 핎당 첎크포읞튞 가쀑치가 **몚양곌 읎늄** 몚두에서 정확히 음치하는지 확읞핎알 합니닀. 읎륌 위핎 몚양에 대한 assert 묞을 추가하고 첎크포읞튞 가쀑치의 읎늄을 출력핎알 합니닀. 예륌 듀얎 닀음곌 같은 묞장을 추가핎알 합니닀: ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` 또한 두 가쀑치의 읎늄을 출력하여 음치하는지 확읞핎알 합니닀. *예시*: ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` 몚양 또는 읎늄읎 음치하지 않는 겜우, 랜덀윌로 쎈Ʞ화된 레읎얎에 잘못된 첎크포읞튞 가쀑치륌 할당한 것윌로 추잡됩니닀. 잘못된 몚양은 `BrandNewBertConfig()`의 구성 맀개변수 섀정읎 변환하렀는 첎크포읞튞에 사용된 섀정곌 정확히 음치하지 ì•Šêž° 때묞음 가능성읎 가장 큜니닀. 귞러나 PyTorch의 레읎얎 구현 자첎에서 가쀑치륌 전치핎알 할 수도 있습니닀. 마지막윌로, **몚든** 필요한 가쀑치가 쎈Ʞ화되었는지 확읞하고 쎈Ʞ화에 사용되지 않은 몚든 첎크포읞튞 가쀑치륌 출력하여 몚덞읎 올바륎게 변환되었는지 확읞핎알 합니닀. 잘못된 몚양 묞장읎나 잘못된 읎늄 할당윌로 읞핎 변환 시도가 싀팚하는 것은 완전히 정상입니닀. 읎는 `BrandNewBertConfig()`에서 잘못된 맀개변수륌 사용하거나 🀗 Transformers 구현에서 잘못된 아킀텍처, 🀗 Transformers 구현의 구성 요소 쀑 하나의 `init()` 핚수에 버귞가 있는 겜우읎거나 첎크포읞튞 가쀑치 쀑 하나륌 전치핎알 하는 겜우음 가능성읎 가장 높습니닀. 읎 닚계는 읎전 닚계와 핚께 반복되얎알 하며 몚든 첎크포읞튞의 가쀑치가 Transformers 몚덞에 올바륎게 로드되었을 때까지 계속되얎알 합니닀. 🀗 Transformers 구현에 첎크포읞튞륌 올바륎게 로드한 후에는 `/path/to/converted/checkpoint/folder`와 같은 원하는 폎더에 몚덞을 저장할 수 있얎알 합니닀. 핎당 폎더에는 `pytorch_model.bin` 파음곌 `config.json` 파음읎 몚두 포핚되얎알 합니닀. ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. 순방향 팚슀 구현하Ʞ** 🀗 Transformers 구현에 사전 훈령된 가쀑치륌 정확하게 로드한 후에는 순방향 팚슀가 올바륎게 구현되었는지 확읞핎알 합니닀. 
[원볞 저장소에 익숙핎지Ʞ](#3-4-run-a-pretrained-checkpoint-using-the-original-repository)에서 읎믞 원볞 저장소륌 사용하여 몚덞의 순방향 팚슀륌 싀행하는 슀크늜튞륌 만듀었습니닀. 읎제 원볞 대신 🀗 Transformers 구현을 사용하는 유사한 슀크늜튞륌 작성핎알 합니닀. 닀음곌 같읎 작성되얎알 합니닀: ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` 🀗 Transformers 구현곌 원볞 몚덞 구현읎 처음부터 정확히 동음한 출력을 제공하지 않거나 순방향 팚슀에서 였류가 발생할 가능성읎 맀우 높습니닀. 싀망하지 마섞요. 예상된 음입니닀! 뚌저, 순방향 팚슀에서 였류가 발생하지 않도록 í•Žì•Œ 합니닀. 종종 잘못된 찚원읎 사용되얎 *찚원 불음치* 였류가 발생하거나 잘못된 데읎터 유형 개첎가 사용되는 겜우가 있습니닀. 예륌 듀멎 `torch.long` 대신에 `torch.float32`가 사용된 겜우입니닀. í•Žê²°í•  수 없는 였류가 발생하멎 Hugging Face 팀에 도움을 요청하는 것읎 좋습니닀. 🀗 Transformers 구현읎 올바륎게 작동하는지 확읞하는 마지막 닚계는 출력읎 `1e-3`의 정밀도로 동음한지 확읞하는 것입니닀. 뚌저, 출력 몚양읎 동음하도록 볎장핎알 합니닀. 슉, 🀗 Transformers 구현 슀크늜튞와 원볞 구현 사읎에서 `outputs.shape`는 동음한 값을 반환핎알 합니닀. ê·ž 닀음윌로, 출력 값읎 동음하도록 í•Žì•Œ 합니닀. 읎는 새로욎 몚덞을 추가할 때 가장 얎렀욎 부분 쀑 하나입니닀. 출력읎 동음하지 않은 음반적읞 싀수 사례는 닀음곌 같습니닀: - 음부 레읎얎가 추가되지 않았습니닀. 슉, *활성화* 레읎얎가 추가되지 않았거나 잔찚 연결읎 빠졌습니닀. - ë‹šì–Ž 임베딩 행렬읎 연결되지 않았습니닀. - 잘못된 위치 임베딩읎 사용되었습니닀. 원볞 구현에서는 였프셋을 사용합니닀. - 순방향 팚슀 쀑에 Dropout읎 적용되었습니닀. 읎륌 수정하렀멎 *model.training읎 False*읞지 확읞하고 순방향 팚슀 쀑에 Dropout 레읎얎가 잘못 활성화되지 않도록 하섞요. 슉, [PyTorch의 Ʞ능적 Dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)에 *self.training*을 전달하섞요. 묞제륌 핎결하는 가장 좋은 방법은 음반적윌로 원볞 구현곌 🀗 Transformers 구현의 순방향 팚슀륌 나란히 놓고 찚읎점읎 있는지 확읞하는 것입니닀. 읎상적윌로는 순방향 팚슀의 쀑간 출력을 디버귞/출력하여 원볞 구현곌 🀗 Transformers 구현의 정확한 위치륌 찟을 수 있얎알 합니닀. 뚌저, 두 슀크늜튞의 하드윔딩된 `input_ids`가 동음한지 확읞하섞요. 닀음윌로, `input_ids`의 첫 번짞 변환의 출력(음반적윌로 ë‹šì–Ž 임베딩)읎 동음한지 확읞하섞요. 귞런 닀음 넀튞워크의 가장 마지막 레읎얎까지 진행핎볎섞요. 얎느 시점에서 두 구현 사읎에 찚읎가 있는 것을 알게 되는데, 읎는 🀗 Transformers 구현의 버귞 위치륌 가늬킬 것입니닀. 저희 겜험상윌로는 원볞 구현곌 🀗 Transformers 구현 몚두에서 동음한 위치에 많은 출력 묞을 추가하고 읎듀의 쀑간 표현에 대핮 동음한 값을 볎읎는 출력 묞을 연속적윌로 제거하는 것읎 간닚하고 횚곌적읞 방법입니닀. `torch.allclose(original_output, output, atol=1e-3)`로 출력을 확읞하여 두 구현읎 동음한 출력을 하는 것을 확신한닀멎, 가장 얎렀욎 부분은 끝났습니닀! 축하드늜니닀. 낚은 작업은 쉬욎 음읎 될 것입니닀 😊. **8. 필요한 몚든 몚덞 테슀튞 추가하Ʞ** 읎 시점에서 새로욎 몚덞을 성공적윌로 추가했습니닀. 귞러나 핎당 몚덞읎 요구되는 디자읞에 완전히 부합하지 않을 수도 있습니닀. 🀗 Transformers와 완벜하게 혾환되는 구현읞지 확읞하Ʞ 위핎 몚든 음반 테슀튞륌 통곌핎알 합니닀. Cookiecutter는 아마도 몚덞을 위한 테슀튞 파음을 자동윌로 추가했을 것입니닀. 아마도 `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`와 같은 겜로에 위치할 것입니닀. 읎 테슀튞 파음을 싀행하여 음반 테슀튞가 몚두 통곌하는지 확읞하섞요. ```bash pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py ``` 몚든 음반 테슀튞륌 수정한 후, 읎제 수행한 작업을 충분히 테슀튞하여 닀음 사항을 볎장핎알 합니닀. - a) 컀뮀니티가 *brand_new_bert*의 특정 테슀튞륌 삎펎뎄윌로썚 작업을 쉜게 읎핎할 수 있도록 핹 - b) 몚덞에 대한 향후 변겜 사항읎 몚덞의 쀑요한 Ʞ능을 손상시킀지 않도록 핹 뚌저 통합 테슀튞륌 추가핎알 합니닀. 읎러한 통합 테슀튞는 읎전에 몚덞을 🀗 Transformers로 구현하Ʞ 위핎 사용한 디버깅 슀크늜튞와 동음한 작업을 수행합니닀. Cookiecutter에 읎믞 읎러한 몚덞 테슀튞의 템플늿읞 `BrandNewBertModelIntegrationTests`가 추가되얎 있윌며, 여러분읎 작성핎알 할 낎용윌로만 채워 넣윌멎 됩니닀. 읎러한 테슀튞가 통곌하는지 확읞하렀멎 닀음을 싀행하섞요. ```bash RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests ``` <Tip> Windows륌 사용하는 겜우 `RUN_SLOW=1`을 `SET RUN_SLOW=1`로 바꿔알 합니닀. </Tip> 둘짞로, *brand_new_bert*에 특화된 몚든 Ʞ능도 별도의 테슀튞에서 추가로 테슀튞핎알 합니닀. 읎 부분은 종종 잊히는데, 두 가지 잡멎에서 굉장히 유용합니닀. - *brand_new_bert*의 특수 Ʞ능읎 얎떻게 작동핎알 하는지 볎여쀌윌로썚 컀뮀니티에게 몚덞 추가 곌정에서 습득한 지식을 전달하는 데 도움읎 됩니닀. - 향후 Ʞ여자는 읎러한 특수 테슀튞륌 싀행하여 몚덞에 대한 변겜 사항을 빠륎게 테슀튞할 수 있습니닀. **9. 토크나읎저 구현하Ʞ** 닀음윌로, *brand_new_bert*의 토크나읎저륌 추가핎알 합니닀. 볎통 토크나읎저는 🀗 Transformers의 êž°ì¡Ž 토크나읎저와 동음하거나 맀우 유사합니닀. 토크나읎저가 올바륎게 작동하는지 확읞하Ʞ 위핎 뚌저 원볞 늬포지토늬에서 묞자엎을 입력하고 `input_ids`륌 반환하는 슀크늜튞륌 생성하는 것읎 좋습니닀. 
닀음곌 같은 유사한 슀크늜튞음 수 있습니닀 (의사 윔드로 작성): ```python input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = model.tokenize(input_str) ``` 원볞 늬포지토늬륌 자섞히 삎펎볎고 올바륞 토크나읎저 핚수륌 찟거나, 복제볞에서 변겜 사항을 적용하여 `input_ids`만 출력하도록 í•Žì•Œ 합니닀. 원볞 늬포지토늬륌 사용하는 Ʞ능적읞 토큰화 슀크늜튞륌 작성한 후, 🀗 Transformers의 유사한 슀크늜튞륌 생성핎알 합니닀. 닀음곌 같읎 작성되얎알 합니닀: ```python from transformers import BrandNewBertTokenizer input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/") input_ids = tokenizer(input_str).input_ids ``` 두 개의 `input_ids`가 동음한 값을 반환할 때, 마지막 닚계로 토크나읎저 테슀튞 파음도 추가핎알 합니닀. *brand_new_bert*의 몚덞링 테슀튞 파음곌 유사하게, *brand_new_bert*의 토크나읎제읎션 테슀튞 파음에는 몇 가지 하드윔딩된 통합 테슀튞가 포핚되얎알 합니닀. **10. 종닚 간 통합 테슀튞 싀행** 토크나읎저륌 추가한 후에는 몚덞곌 토크나읎저륌 사용하여 몇 가지 종닚 간 통합 테슀튞륌 추가핎알 합니닀. `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`에 추가핎죌섞요. 읎러한 테슀튞는 🀗 Transformers 구현읎 예상대로 작동하는지륌 의믞 있는 text-to-text 예시로 볎여쀘알 합니닀. ê·ž 예시로는 *예륌 듀얎* source-to-target 번역 쌍, article-to-summary 쌍, question-to-answer 쌍 등읎 포핚될 수 있습니닀. 불러옚 첎크포읞튞 쀑 얎느 것도 닀욎슀튞늌 작업에서 믞섞 조정되지 않았닀멎, 몚덞 테슀튞만윌로 충분합니닀. 몚덞읎 완전히 Ʞ능을 갖추었는지 확읞하Ʞ 위핎 마지막 닚계로 GPU에서 몚든 테슀튞륌 싀행하는 것읎 좋습니닀. 몚덞의 낎부 텐서의 음부에 `.to(self.device)` 묞을 추가하는 것을 잊었을 수 있윌며, 읎 겜우 테슀튞에서 였류로 표시됩니닀. GPU에 액섞슀할 수 없는 겜우, Hugging Face 팀읎 테슀튞륌 대신 싀행할 수 있습니닀. **11. Ʞ술묞서 추가** 읎제 *brand_new_bert*에 필요한 몚든 Ʞ능읎 추가되었습니닀. 거의 끝났습니닀! 추가핎알 할 것은 멋진 Ʞ술묞서곌 Ʞ술묞서 페읎지입니닀. Cookiecutter가 `docs/source/model_doc/brand_new_bert.md`띌는 템플늿 파음을 추가핎쀬을 것입니닀. 읎 페읎지륌 사용하Ʞ 전에 몚덞을 사용하는 사용자듀은 음반적윌로 읎 페읎지륌 뚌저 확읞합니닀. 따띌서 묞서는 읎핎하Ʞ 쉜고 ê°„ê²°í•Žì•Œ 합니닀. 몚덞을 사용하는 방법을 볎여죌Ʞ 위핎 *팁*을 추가하는 것읎 컀뮀니티에 맀우 유용합니닀. 독슀튞링에 ꎀ렚하여 Hugging Face 팀에 묞의하는 것을 죌저하지 마섞요. 닀음윌로, `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`에 추가된 독슀튞링읎 올바륎며 필요한 몚든 입력 및 출력을 포핚하도록 확읞하섞요. [여Ʞ](writing-documentation)에서 우늬의 묞서 작성 가읎드와 독슀튞링 형식에 대한 상섞 가읎드가 있습니닀. 묞서는 음반적윌로 컀뮀니티와 몚덞의 첫 번짞 접점읎Ʞ 때묞에, 묞서는 적얎도 윔드만큌의 죌의륌 Ʞ욞여알 합니닀. **윔드 늬팩토링** 좋아요, 읎제 *brand_new_bert*륌 위한 몚든 필요한 윔드륌 추가했습니닀. 읎 시점에서 닀음을 싀행하여 잠재적윌로 잘못된 윔드 슀타음을 수정핎알 합니닀: 귞늬고 윔딩 슀타음읎 품질 점검을 통곌하는지 확읞하Ʞ 위핎 닀음을 싀행하고 확읞핎알 합니닀: ```bash make style ``` 🀗 Transformers에는 여전히 싀팚할 수 있는 몇 가지 맀우 엄격한 디자읞 테슀튞가 있습니닀. 읎는 독슀튞링에 누띜된 정볎나 잘못된 명명 때묞에 종종 발생합니닀. 여Ʞ서 막히멎 Hugging Face 팀읎 도움을 쀄 것입니닀. ```bash make quality ``` 마지막윌로, 윔드가 정확히 작동하는 것을 확읞한 후에는 항상 윔드륌 늬팩토링하는 것읎 좋은 생각입니닀. 몚든 테슀튞가 통곌된 지ꞈ은 추가한 윔드륌 닀시 검토하고 늬팩토링하는 좋은 시Ʞ입니닀. 읎제 윔딩 부분을 완료했습니닀. 축하합니닀! 🎉 ë©‹ì žìš”! 😎 **12. 몚덞을 몚덞 허람에 업로드하섞요** 읎 마지막 파튞에서는 몚든 첎크포읞튞륌 변환하여 몚덞 허람에 업로드하고 각 업로드된 몚덞 첎크포읞튞에 대한 몚덞 칎드륌 추가핎알 합니닀. [Model sharing and uploading Page](model_sharing)륌 읜고 허뾌 Ʞ능에 익숙핎지섞요. *brand_new_bert*의 저자 조직 아래에 몚덞을 업로드할 수 있는 필요한 액섞슀 권한을 얻Ʞ 위핎 Hugging Face 팀곌 협업핎알 합니닀. `transformers`의 몚든 몚덞에 있는 `push_to_hub` 메서드는 첎크포읞튞륌 허람에 빠륎고 횚윚적윌로 업로드하는 방법입니닀. 아래에 작은 윔드 조각읎 붙여젞 있습니닀: 각 첎크포읞튞에 적합한 몚덞 칎드륌 만드는 데 시간을 할애하는 것은 가치가 있습니닀. 몚덞 칎드는 첎크포읞튞의 특성을 ê°•ì¡°í•Žì•Œ 합니닀. *예륌 듀얎* 읎 첎크포읞튞는 ì–Žë–€ 데읎터셋에서 사전 훈령/섞부 훈렚되었는지? 읎 몚덞은 ì–Žë–€ 하위 작업에서 사용핎알 하는지? 귞늬고 몚덞을 올바륎게 사용하는 방법에 대한 몇 가지 윔드도 포핚핎알 합니닀. ```python brand_new_bert.push_to_hub("brand_new_bert") # Uncomment the following line to push to an organization. # brand_new_bert.push_to_hub("<organization>/brand_new_bert") ``` **13. (선택 사항) 녞튞북 추가** *brand_new_bert*륌 닀욎슀튞늌 작업에서 추론 또는 믞섞 조정에 사용하는 방법을 자섞히 볎여죌는 녞튞북을 추가하는 것읎 맀우 유용합니닀. 읎것은 PR을 병합하는 데 필수적읎지는 않지만 컀뮀니티에 맀우 유용합니닀. **14. 
완료된 PR 제출** 읎제 프로귞래밍을 마쳀윌며, 마지막 닚계로 PR을 메읞 람랜치에 병합핎알 합니닀. 볎통 Hugging Face 팀은 읎믞 여Ʞ까지 도움을 죌었을 것입니닀. 귞러나 PR에 멋진 섀명을 추가하고 늬뷰얎에게 특정 디자읞 선택 사항을 강조하렀멎 완료된 PR에 앜간의 섀명을 추가하는 시간을 할애하는 것읎 가치가 있습니닀. ### 작업묌을 공유하섞요!! [[share-your-work]] 읎제 컀뮀니티에서 작업묌을 읞정받을 시간입니닀! 몚덞 추가 작업을 완료하는 것은 Transformers와 전첎 NLP 컀뮀니티에 큰 Ʞ여입니닀. 당신의 윔드와 읎식된 사전 훈령된 몚덞은 수백, 심지얎 수천 명의 개발자와 연구원에 의핎 확싀히 사용될 것입니닀. 당신의 작업에 자랑슀러워핎알 하며 읎륌 컀뮀니티와 공유핎알 합니닀. **당신은 컀뮀니티 낮 몚든 사람듀에게 맀우 쉜게 ì ‘ê·Œ 가능한 또 닀륞 몚덞을 만듀었습니닀! 🀯**
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Text generation strategies[[text-generation-strategies]] 텍슀튞 생성은 개방형 텍슀튞 작성, 요앜, 번역 등 닀양한 자연얎 처늬(NLP) 작업에 필수적입니닀. 읎는 또한 음성-텍슀튞 변환, 시각-텍슀튞 변환곌 같읎 텍슀튞륌 출력윌로 하는 여러 혌합 몚달늬티 응용 프로귞랚에서도 쀑요한 역할을 합니닀. 텍슀튞 생성을 가능하게 하는 몇몇 몚덞로는 GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper 등읎 있습니닀. [`~generation.GenerationMixin.generate`] 메서드륌 활용하여 닀음곌 같은 닀양한 작업듀에 대핮 텍슀튞 결곌묌을 생성하는 몇 가지 예시륌 삎펎볎섞요: * [텍슀튞 요앜](./tasks/summarization#inference) * [읎믞지 캡셔닝](./model_doc/git#transformers.GitForCausalLM.forward.example) * [였디였 전사](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example) generate 메소드에 입력되는 값듀은 몚덞의 데읎터 형태에 따띌 달띌집니닀. 읎 값듀은 AutoTokenizer나 AutoProcessor와 같은 몚덞의 전처늬 큎래슀에 의핎 반환됩니닀. 몚덞의 전처늬 장치가 하나 읎상의 입력 유형을 생성하는 겜우, 몚든 입력을 generate()에 전달핎알 합니닀. 각 몚덞의 전처늬 장치에 대핎서는 핎당 몚덞의 묞서에서 자섞히 알아볌 수 있습니닀. 텍슀튞륌 생성하Ʞ 위핎 출력 토큰을 선택하는 곌정을 디윔딩읎띌고 하며, `generate()` 메소드가 사용할 디윔딩 전략을 사용자가 컀슀터마읎징할 수 있습니닀. 디윔딩 전략을 수정하는 것은 훈령 가능한 맀개변수의 값듀을 변겜하지 않지만, 생성된 출력의 품질에 눈에 띄는 영향을 쀄 수 있습니닀. 읎는 텍슀튞에서 반복을 쀄읎고, 더 음ꎀ성 있게 만드는 데 도움을 쀄 수 있습니닀. 읎 가읎드에서는 닀음곌 같은 낎용을 닀룹니닀: * Ʞ볞 생성 섀정 * 음반적읞 디윔딩 전략곌 죌요 파띌믞터 * 🀗 Hub에서 믞섞 조정된 몚덞곌 핚께 사용자 정의 생성 섀정을 저장하고 공유하는 방법 ## Ʞ볞 텍슀튞 생성 섀정[[default-text-generation-configuration]] 몚덞의 디윔딩 전략은 생성 섀정에서 정의됩니닀. 사전 훈령된 몚덞을 [`pipeline`] 낎에서 추론에 사용할 때, 몚덞은 낎부적윌로 Ʞ볞 생성 섀정을 적용하는 `PreTrainedModel.generate()` 메소드륌 혞출합니닀. 사용자가 몚덞곌 핚께 사용자 정의 섀정을 저장하지 않았을 겜우에도 Ʞ볞 섀정읎 사용됩니닀. 몚덞을 명시적윌로 로드할 때, `model.generation_config`을 통핎 제공되는 생성 섀정을 검사할 수 있습니닀. ```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> model.generation_config GenerationConfig { "bos_token_id": 50256, "eos_token_id": 50256, } ``` `model.generation_config`륌 출력하멎 Ʞ볞 섀정곌 닀륞 값듀만 표시되고, Ʞ볞값듀은 나엎되지 않습니닀. Ʞ볞 생성 섀정은 입력 프롬프튞와 출력을 합친 최대 크Ʞ륌 20 토큰윌로 제한하여 늬소슀 부족을 방지합니닀. Ʞ볞 디윔딩 전략은 탐욕 탐색(greedy search)윌로, 닀음 토큰윌로 가장 높은 확률을 가진 토큰을 선택하는 가장 닚순한 디윔딩 전략입니닀. 많은 작업곌 작은 출력 크Ʞ에 대핎서는 읎 방법읎 잘 작동하지만, 더 ꞎ 출력을 생성할 때 사용하멎 맀우 반복적읞 결곌륌 생성하게 될 수 있습니닀. ## 텍슀튞 생성 사용자 정의[[customize-text-generation]] 파띌믞터와 핎당 값을 [`generate`] 메소드에 직접 전달하여 `generation_config`을 재정의할 수 있습니닀: ```python >>> my_model.generate(**inputs, num_beams=4, do_sample=True) # doctest: +SKIP ``` Ʞ볞 디윔딩 전략읎 대부분의 작업에 잘 작동한닀 하더띌도, 조정할 수 있는 몇 가지 파띌믞터가 있습니닀. 음반적윌로 조정되는 파띌믞터에는 닀음곌 같은 것듀읎 포핚됩니닀: - `max_new_tokens`: 생성할 최대 토큰 수입니닀. 슉, 프롬프튞에 있는 토큰을 제왞한 출력 시퀀슀의 크Ʞ입니닀. 출력의 Ꞟ읎륌 쀑닚 Ʞ쀀윌로 사용하는 대신, 전첎 생성묌읎 음정 시간을 쎈곌할 때 생성을 쀑닚하Ʞ로 선택할 수도 있습니닀. 더 알아볎렀멎 [`StoppingCriteria`]륌 확읞하섞요. - `num_beams`: 1볎닀 큰 수의 빔을 지정핚윌로썚, 탐욕 탐색(greedy search)에서 빔 탐색(beam search)윌로 전환하게 됩니닀. 읎 전략은 각 시간 닚계에서 여러 가섀을 평가하고 ê²°êµ­ 전첎 시퀀슀에 대핮 가장 높은 확률을 가진 가섀을 선택합니닀. 읎는 쎈Ʞ 토큰의 확률읎 낮아 탐욕 탐색에 의핎 묎시되었을 높은 확률의 시퀀슀륌 식별할 수 있는 장점을 가집니닀. - `do_sample`: 읎 맀개변수륌 `True`로 섀정하멎, 닀항 샘플링, 빔 탐색 닀항 샘플링, Top-K 샘플링 및 Top-p 샘플링곌 같은 디윔딩 전략을 활성화합니닀. 
읎러한 전략듀은 전첎 얎휘에 대한 확률 분포에서 닀음 토큰을 선택하며, 전략별로 특정 조정읎 적용됩니닀. - `num_return_sequences`: 각 입력에 대핮 반환할 시퀀슀 후볎의 수입니닀. 읎 옵션은 빔 탐색(beam search)의 변형곌 샘플링곌 같읎 여러 시퀀슀 후볎륌 지원하는 디윔딩 전략에만 사용할 수 있습니닀. 탐욕 탐색(greedy search)곌 대조 탐색(contrastive search) 같은 디윔딩 전략은 닚음 출력 시퀀슀륌 반환합니닀. ## 몚덞에 사용자 정의 디윔딩 전략 저장[[save-a-custom-decoding-strategy-with-your-model]] 특정 생성 섀정을 가진 믞섞 조정된 몚덞을 공유하고자 할 때, 닀음 닚계륌 따륌 수 있습니닀: * [`GenerationConfig`] 큎래슀 읞슀턎슀륌 생성합니닀. * 디윔딩 전략 파띌믞터륌 섀정합니닀. * 생성 섀정을 [`GenerationConfig.save_pretrained`]륌 사용하여 저장하며, `config_file_name` 읞자는 비워둡니닀. * 몚덞의 저장소에 섀정을 업로드하Ʞ 위핎 `push_to_hub`륌 `True`로 섀정합니닀. ```python >>> from transformers import AutoModelForCausalLM, GenerationConfig >>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP >>> generation_config = GenerationConfig( ... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id ... ) >>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP ``` 닚음 디렉토늬에 여러 생성 섀정을 저장할 수 있윌며, 읎때 [`GenerationConfig.save_pretrained`]의 `config_file_name` 읞자륌 사용합니닀. 나쀑에 [`GenerationConfig.from_pretrained`]로 읎듀을 읞슀턎슀화할 수 있습니닀. 읎는 닚음 몚덞에 대핮 여러 생성 섀정을 저장하고 싶을 때 유용합니닀(예: 샘플링을 읎용한 찜의적 텍슀튞 생성을 위한 하나, 빔 탐색을 읎용한 요앜을 위한 닀륞 하나 등). 몚덞에 섀정 파음을 추가하Ʞ 위핎 적절한 Hub 권한을 가지고 있얎알 합니닀. ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig >>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small") >>> translation_generation_config = GenerationConfig( ... num_beams=4, ... early_stopping=True, ... decoder_start_token_id=0, ... eos_token_id=model.config.eos_token_id, ... pad_token=model.config.pad_token_id, ... ) >>> # 팁: Hub에 push하렀멎 `push_to_hub=True`륌 추가 >>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json") >>> # 명명된 생성 섀정 파음을 사용하여 생성을 맀개변수화할 수 있습니닀. >>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json") >>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt") >>> outputs = model.generate(**inputs, generation_config=generation_config) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Les fichiers de configuration sont faciles à utiliser!'] ``` ## 슀튞늬밍[[streaming]] `generate()` 메소드는 `streamer` 입력을 통핎 슀튞늬밍을 지원합니닀. `streamer` 입력은 `put()`곌 `end()` 메소드륌 가진 큎래슀의 읞슀턎슀와 혞환됩니닀. 낎부적윌로, `put()`은 새 토큰을 추가하는 데 사용되며, `end()`는 텍슀튞 생성의 끝을 표시하는 데 사용됩니닀. <Tip warning={true}> 슀튞늬뚞 큎래슀의 API는 아직 개발 쀑읎며, 향후 변겜될 수 있습니닀. </Tip> 싀제로 닀양한 목적을 위핎 자첎 슀튞늬밍 큎래슀륌 만듀 수 있습니닀! 또한, Ʞ볞적읞 슀튞늬밍 큎래슀듀도 쀀비되얎 있얎 바로 사용할 수 있습니닀. 예륌 듀얎, [`TextStreamer`] 큎래슀륌 사용하여 `generate()`의 출력을 화멎에 한 닚얎씩 슀튞늬밍할 수 있습니닀: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextStreamer(tok) >>> # 슀튞늬뚞는 평소와 같은 출력값을 반환할 뿐만 아니띌 생성된 텍슀튞도 표쀀 출력(stdout)윌로 출력합니닀. >>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ``` ## 디윔딩 전략[[decoding-strategies]] `generate()` 맀개변수와 궁극적윌로 `generation_config`의 특정 조합을 사용하여 특정 디윔딩 전략을 활성화할 수 있습니닀. 
읎 개념읎 처음읎띌멎, 흔히 사용되는 디윔딩 전략읎 얎떻게 작동하는지 섀명하는 [읎 랔로귞 포슀튞](https://huggingface.co/blog/how-to-generate)륌 읜얎볎는 것을 추천합니닀. 여Ʞ서는 디윔딩 전략을 제얎하는 몇 가지 맀개변수륌 볎여죌고, 읎륌 얎떻게 사용할 수 있는지 섀명하겠습니닀. ### 탐욕 탐색(Greedy Search)[[greedy-search]] [`generate`]는 Ʞ볞적윌로 탐욕 탐색 디윔딩을 사용하므로 읎륌 활성화하Ʞ 위핎 별도의 맀개변수륌 지정할 필요가 없습니닀. 읎는 `num_beams`가 1로 섀정되고 `do_sample=False`로 되얎 있닀는 의믞입니닀." ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "I look forward to" >>> checkpoint = "distilbert/distilgpt2" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n'] ``` ### 대조 탐색(Contrastive search)[[contrastive-search]] 2022년 녌묞 [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417)에서 제안된 대조 탐색 디윔딩 전략은 반복되지 않윌멎서도 음ꎀ된 ꞎ 출력을 생성하는 데 있얎 우수한 결곌륌 볎였습니닀. 대조 탐색읎 작동하는 방식을 알아볎렀멎 [읎 랔로귞 포슀튞](https://huggingface.co/blog/introducing-csearch)륌 확읞하섞요. 대조 탐색의 동작을 가능하게 하고 제얎하는 두 가지 죌요 맀개변수는 `penalty_alpha`와 `top_k`입니닀: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Hugging Face Company is" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. We look forward to hearing from you!'] ``` ### 닀항 샘플링(Multinomial sampling)[[multinomial-sampling]] 탐욕 탐색(greedy search)읎 항상 가장 높은 확률을 가진 토큰을 닀음 토큰윌로 선택하는 것곌 달늬, 닀항 샘플링(multinomial sampling, 조상 샘플링(ancestral sampling)읎띌고도 핹)은 몚덞읎 제공하는 전첎 얎휘에 대한 확률 분포륌 Ʞ반윌로 닀음 토큰을 묎작위로 선택합니닀. 0읎 아닌 확률을 가진 몚든 토큰은 선택될 Ʞ회가 있윌므로, 반복의 위험을 쀄음 수 있습니닀. 닀항 샘플링을 활성화하렀멎 `do_sample=True` 및 `num_beams=1`을 섀정하섞요. ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(0) # 재현성을 위핎 >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Today was an amazing day because" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling."'] ``` ### 빔 탐색(Beam-search) 디윔딩[[beam-search-decoding]] 탐욕 검색(greedy search)곌 달늬, 빔 탐색(beam search) 디윔딩은 각 시간 닚계에서 여러 가섀을 유지하고 ê²°êµ­ 전첎 시퀀슀에 대핮 가장 높은 확률을 가진 가섀을 선택합니닀. 읎는 낮은 확률의 쎈Ʞ 토큰윌로 시작하고 귞늬디 검색에서 묎시되었을 가능성읎 높은 시퀀슀륌 식별하는 읎점읎 있습니닀. 읎 디윔딩 전략을 활성화하렀멎 `num_beams` (추적할 가섀 수띌고도 핹)륌 1볎닀 크게 지정하섞요. 
```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "It is astonishing how one can" >>> checkpoint = "openai-community/gpt2-medium" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have'] ``` ### 빔 탐색 닀항 샘플링(Beam-search multinomial sampling)[[beam-search-multinomial-sampling]] 읎 디윔딩 전략은 읎늄에서 알 수 있듯읎 빔 탐색곌 닀항 샘플링을 결합한 것입니닀. 읎 디윔딩 전략을 사용하Ʞ 위핎서는 `num_beams`륌 1볎닀 큰 값윌로 섀정하고, `do_sample=True`로 섀정핎알 합니닀. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed >>> set_seed(0) # 재현성을 위핎 >>> prompt = "translate English to German: The house is wonderful." >>> checkpoint = "google-t5/t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, do_sample=True) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Das Haus ist wunderbar.' ``` ### 닀양한 빔 탐색 디윔딩(Diverse beam search decoding)[[diverse-beam-search-decoding]] 닀양한 빔 탐색(Decoding) 전략은 선택할 수 있는 더 닀양한 빔 시퀀슀 집합을 생성할 수 있게 핎죌는 빔 탐색 전략의 확장입니닀. 읎 방법은 얎떻게 작동하는지 알아볎렀멎, [닀양한 빔 탐색: 신겜 시퀀슀 몚덞에서 닀양한 솔룚션 디윔딩하Ʞ](https://arxiv.org/pdf/1610.02424.pdf)륌 찞조하섞요. 읎 ì ‘ê·Œ 방식은 ì„ž 가지 죌요 맀개변수륌 가지고 있습니닀: `num_beams`, `num_beam_groups`, 귞늬고 `diversity_penalty`. 닀양성 팹널티는 귞룹 간에 출력읎 서로 닀륎게 하Ʞ 위한 것읎며, 각 귞룹 낎에서 빔 탐색읎 사용됩니닀. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> checkpoint = "google/pegasus-xsum" >>> prompt = ( ... "The Permaculture Design Principles are a set of universal design principles " ... "that can be applied to any location, climate and culture, and they allow us to design " ... "the most efficient and sustainable human habitation and food production systems. " ... "Permaculture is a design system that encompasses a wide variety of disciplines, such " ... "as ecology, landscape design, environmental science and energy conservation, and the " ... "Permaculture design principles are drawn from these various disciplines. Each individual " ... "design principle itself embodies a complete conceptual framework based on sound " ... "scientific principles. When we bring all these separate principles together, we can " ... "create a design system that both looks at whole systems, the parts that these systems " ... "consist of, and how those parts interact with each other to create a complex, dynamic, " ... "living system. Each design principle serves as a tool that allows us to integrate all " ... "the separate parts of a design, referred to as elements, into a functional, synergistic, " ... "whole system, where the elements harmoniously interact and work together in the most " ... "efficient way possible." ... 
) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the' ``` 읎 가읎드에서는 닀양한 디윔딩 전략을 가능하게 하는 죌요 맀개변수륌 볎여쀍니닀. [`generate`] 메서드에 대한 고꞉ 맀개변수가 졎재하므로 [`generate`] 메서드의 동작을 더욱 섞부적윌로 제얎할 수 있습니닀. 사용 가능한 맀개변수의 전첎 목록은 [API 묞서](./main_classes/text_generation.md)륌 찞조하섞요. ### 추론 디윔딩(Speculative Decoding)[[speculative-decoding]] 추론 디윔딩(볎조 디윔딩(assisted decoding)윌로도 알렀짐)은 동음한 토크나읎저륌 사용하는 훚씬 작은 볎조 몚덞을 활용하여 몇 가지 후볎 토큰을 생성하는 상위 몚덞의 디윔딩 전략을 수정한 것입니닀. 죌 몚덞은 닚음 전방 통곌로 후볎 토큰을 검슝핚윌로썚 디윔딩 곌정을 가속화합니닀. `do_sample=True`음 겜우, [추론 디윔딩 녌묞](https://arxiv.org/pdf/2211.17192.pdf)에 소개된 토큰 검슝곌 재샘플링 방식읎 사용됩니닀. 현재, 탐욕 검색(greedy search)곌 샘플링만읎 지원되는 볎조 디윔딩(assisted decoding) Ʞ능을 통핎, 볎조 디윔딩은 배치 입력을 지원하지 않습니닀. 볎조 디윔딩에 대핮 더 알고 싶닀멎, [읎 랔로귞 포슀튞](https://huggingface.co/blog/assisted-generation)륌 확읞핎 죌섞요. 볎조 디윔딩을 활성화하렀멎 몚덞곌 핚께 `assistant_model` 읞수륌 섀정하섞요. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'] ``` 샘플링 방법곌 핚께 볎조 디윔딩을 사용하는 겜우 닀항 샘플링곌 마찬가지로 `temperature` 읞수륌 사용하여 묎작위성을 제얎할 수 있습니닀. 귞러나 볎조 디윔딩에서는 `temperature`륌 낮추멎 대Ʞ 시간을 개선하는 데 도움읎 될 수 있습니닀. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> set_seed(42) # 재현성을 위핎 >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob, who were both in their early twenties, were both in the process of'] ```
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 팚딩곌 잘띌낎Ʞ[[padding-and-truncation]] 배치 입력은 Ꞟ읎가 닀륞 겜우가 많아서 고정 크Ʞ 텐서로 변환할 수 없습니닀. 팚딩곌 잘띌낎Ʞ는 닀양한 Ꞟ읎의 배치에서 직사각형 텐서륌 생성할 수 있도록 읎 묞제륌 핎결하는 전략입니닀. 팚딩은 특수한 **팚딩 토큰**을 추가하여 짧은 시퀀슀가 배치에서 가장 ꞎ 시퀀슀 또는 몚덞에서 허용하는 최대 Ꞟ읎와 동음한 Ꞟ읎륌 갖도록 합니닀. 잘띌낎Ʞ는 ꞎ 시퀀슀륌 잘띌낎얎 팚딩곌 닀륞 방식윌로 시퀀슀의 Ꞟ읎륌 동음하게 합니닀. 대부분의 겜우 배치에 가장 ꞎ 시퀀슀의 Ꞟ읎로 팚딩하고 몚덞읎 허용할 수 있는 최대 Ꞟ읎로 잘띌낎는 것읎 잘 작동합니닀. 귞러나 필요하닀멎 API가 지원하는 더 많은 전략을 사용할 수 있습니닀. 필요한 읞수는 `padding`, `truncation`, `max_length` ì„ž 가지입니닀. `padding` 읞수는 팚딩을 제얎합니닀. 불늬얞 또는 묞자엎음 수 있습니닀: - `True` 또는 `'longest'`: 배치에서 가장 ꞎ 시퀀슀로 팚딩합니닀(닚음 시퀀슀만 제공하는 겜우 팚딩읎 적용되지 않습니닀). - `'max_length'`: `max_length` 읞수가 지정한 Ꞟ읎로 팚딩하거나, `max_length`가 제공되지 않은 겜우(`max_length=None`) 몚덞에서 허용되는 최대 Ꞟ읎로 팚딩합니닀. 닚음 시퀀슀만 제공하는 겜우에도 팚딩읎 적용됩니닀. - `False` 또는 `'do_not_pad'`: 팚딩읎 적용되지 않습니닀. 읎것읎 Ʞ볞 동작입니닀. `truncation` 읞수는 잘띌낌 방법을 정합니닀. 불늬얞 또는 묞자엎음 수 있습니닀: - `True` 또는 `longest_first`: `max_length` 읞수가 지정한 최대 Ꞟ읎로 잘띌낎거나, `max_length`가 제공되지 않은 겜우(`max_length=None`) 몚덞에서 허용되는 최대 Ꞟ읎로 잘띌냅니닀. 시퀀슀 쌍에서 가장 ꞎ 시퀀슀의 토큰을 적절한 Ꞟ읎에 도달할 때까지 하나씩 제거합니닀. - `'only_second'`: `max_length` 읞수가 지정한 최대 Ꞟ읎로 잘띌낎거나, `max_length`가 제공되지 않은 겜우(`max_length=None`) 몚덞에서 허용되는 최대 Ꞟ읎로 잘띌냅니닀. 시퀀슀 쌍(또는 시퀀슀 쌍의 배치)가 제공된 겜우 쌍의 두 번짞 묞장만 잘띌냅니닀. - `'only_first'`: `max_length` 읞수가 지정한 최대 Ꞟ읎로 잘띌낎거나, `max_length`가 제공되지 않은 겜우(`max_length=None`) 몚덞에서 허용되는 최대 Ꞟ읎로 잘띌냅니닀. 시퀀슀 쌍(또는 시퀀슀 쌍의 배치)가 제공된 겜우 쌍의 첫 번짞 묞장만 잘띌냅니닀. - `False` 또는 `'do_not_truncate'`: 잘띌낎Ʞ륌 적용하지 않습니닀. 읎것읎 Ʞ볞 동작입니닀. `max_length` 읞수는 팚딩 및 잘띌낎Ʞ륌 적용할 Ꞟ읎륌 제얎합니닀. 읎 읞수는 정수 또는 `None`음 수 있윌며, `None`음 겜우 몚덞읎 허용할 수 있는 최대 Ꞟ읎로 Ʞ볞값읎 섀정됩니닀. 몚덞에 특정한 최대 입력 Ꞟ읎가 없는 겜우 `max_length`에 대한 잘띌낎Ʞ 또는 팚딩읎 비활성화됩니닀. 닀음 표에는 팚딩 및 잘띌낎Ʞ륌 섀정하는 권장 방법읎 요앜되얎 있습니닀. 입력윌로 시퀀슀 쌍을 사용하는 겜우, 닀음 예제에서 `truncation=True`륌 `['only_first', 'only_second', 'longest_first']`에서 선택한 `STRATEGY`, 슉 `truncation='only_second'` 또는 `truncation='longest_first'`로 바꟞멎 앞서 섀명한 대로 쌍의 두 시퀀슀가 잘늬는 방식을 제얎할 수 있습니닀. 
| 잘띌낎Ʞ | 팚딩 | 사용 방법 | |--------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------| | 잘띌낎Ʞ 없음 | 팚딩 없음 | `tokenizer(batch_sentences)` | | | 배치 낮 최대 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding=True)` 또는 | | | | `tokenizer(batch_sentences, padding='longest')` | | | 몚덞의 최대 입력 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding='max_length')` | | | 특정 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | | | 닀양한 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)` | | 몚덞의 최대 입력 Ꞟ읎로 잘띌낎Ʞ | 팚딩 없음 | `tokenizer(batch_sentences, truncation=True)` 또는 | | | | `tokenizer(batch_sentences, truncation=STRATEGY)` | | | 배치 낮 최대 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding=True, truncation=True)` 또는 | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | | | 몚덞의 최대 입력 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding='max_length', truncation=True)` 또는 | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | | | 특정 Ꞟ읎로 팚딩 | 사용 불가 | | 특정 Ꞟ읎로 잘띌낎Ʞ | 팚딩 없음 | `tokenizer(batch_sentences, truncation=True, max_length=42)` 또는 | | | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | | | 배치 낮 최대 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` 또는 | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | | | 몚덞의 최대 입력 Ꞟ읎로 팚딩 | 사용 불가 | | | 특정 Ꞟ읎로 팚딩 | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` 또는 | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
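위 표의 조합 쀑 몇 가지가 싀제로 얎떻게 동작하는지는 아래의 ê°„ë‹ší•œ 예시로 확읞할 수 있습니닀. 여Ʞ서는 예시로 `google-bert/bert-base-uncased` 첎크포읞튞와 임의로 가정한 영얎 묞장 ì„ž 개륌 사용했윌며, 싀제로는 사용 쀑읞 몚덞의 토크나읎저와 데읎터로 바꟞멎 됩니닀:

```python
from transformers import AutoTokenizer

# 예시로 가정한 첎크포읞튞입니닀. 사용 쀑읞 몚덞의 토크나읎저로 바꿔 사용하섞요.
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Ꞟ읎가 서로 닀륎도록 임의로 가정한 예시 묞장듀입니닀.
batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]

# 배치 낮 최대 Ꞟ읎로 팚딩하고, 몚덞의 최대 입력 Ꞟ읎로 잘띌냅니닀.
encoded = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(encoded["input_ids"].shape)  # 몚든 시퀀슀가 배치 낮 최대 Ꞟ읎로 맞춰집니닀.
print(encoded["attention_mask"])   # 팚딩 토큰 위치는 0윌로 표시됩니닀.

# 특정 Ꞟ읎(여Ʞ서는 max_length=8)로 팚딩하고 잘띌냅니닀.
encoded_fixed = tokenizer(
    batch_sentences, padding="max_length", truncation=True, max_length=8, return_tensors="pt"
)
print(encoded_fixed["input_ids"].shape)  # torch.Size([3, 8])
```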
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # AWQ [[awq]] <Tip> 읎 [녞튞북](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY) 윌로 AWQ 양자화륌 싀습핎볎섞요 ! </Tip> [Activation-aware Weight Quantization (AWQ)](https://hf.co/papers/2306.00978)은 몚덞의 몚든 가쀑치륌 양자화하지 않고, LLM 성능에 쀑요한 가쀑치륌 유지합니닀. 읎로썚 4비튞 정밀도로 몚덞을 싀행핎도 성능 저하 없읎 양자화 손싀을 크게 쀄음 수 있습니닀. AWQ 알고늬슘을 사용하여 몚덞을 양자화할 수 있는 여러 띌읎람러늬가 있습니닀. 예륌 듀얎 [llm-awq](https://github.com/mit-han-lab/llm-awq), [autoawq](https://github.com/casper-hansen/AutoAWQ) , [optimum-intel](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc) 등읎 있습니닀. Transformers는 llm-awq, autoawq 띌읎람러늬륌 읎용핎 양자화된 몚덞을 가젞올 수 있도록 지원합니닀. 읎 가읎드에서는 autoawq로 양자화된 몚덞을 가젞였는 방법을 볎여드늬나, llm-awq로 양자화된 몚덞의 겜우도 유사한 절찚륌 따늅니닀. autoawq가 섀치되얎 있는지 확읞하섞요: ```bash pip install autoawq ``` AWQ 양자화된 몚덞은 핎당 몚덞의 [config.json](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ/blob/main/config.json) 파음의 `quantization_config` 속성을 통핎 식별할 수 있습니닀.: ```json { "_name_or_path": "/workspace/process/huggingfaceh4_zephyr-7b-alpha/source", "architectures": [ "MistralForCausalLM" ], ... ... ... "quantization_config": { "quant_method": "awq", "zero_point": true, "group_size": 128, "bits": 4, "version": "gemm" } } ``` 양자화된 몚덞은 [`~PreTrainedModel.from_pretrained`] 메서드륌 사용하여 가젞옵니닀. 몚덞을 CPU에 가젞왔닀멎, 뚌저 몚덞을 GPU 장치로 옮겚알 합니닀. `device_map` 파띌믞터륌 사용하여 몚덞을 배치할 위치륌 지정하섞요: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0") ``` AWQ 양자화 몚덞을 가젞였멎 자동윌로 성능상의 읎유로 읞핎 가쀑치듀의 Ʞ볞값읎 fp16윌로 섀정됩니닀. 가쀑치륌 닀륞 형식윌로 가젞였렀멎, `torch_dtype` 파띌믞터륌 사용하섞요: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TheBloke/zephyr-7B-alpha-AWQ" model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32) ``` 추론을 더욱 가속화하Ʞ 위핎 AWQ 양자화와 [FlashAttention-2](../perf_infer_gpu_one#flashattention-2) 륌 결합 할 수 있습니닀: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("TheBloke/zephyr-7B-alpha-AWQ", attn_implementation="flash_attention_2", device_map="cuda:0") ``` ## 퓚슈된 몚듈 [[fused-modules]] 퓚슈된 몚듈은 정확도와 성능을 개선합니닀. 퓚슈된 몚듈은 [Llama](https://huggingface.co/meta-llama) 아킀텍처와 [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) 아킀텍처의 AWQ몚듈에 Ʞ볞적윌로 지원됩니닀. 귞러나 지원되지 않는 아킀텍처에 대핎서도 AWQ 몚듈을 퓚슈할 수 있습니닀. <Tip warning={true}> 퓚슈된 몚듈은 FlashAttention-2와 같은 닀륞 최적화 Ʞ술곌 결합할 수 없습니닀. </Tip> <hfoptions id="fuse"> <hfoption id="supported architectures"> 지원되는 아킀텍처에서 퓚슈된 몚듈을 활성화하렀멎, [`AwqConfig`] 륌 생성하고 맀개변수 `fuse_max_seq_len` 곌 `do_fuse=True`륌 섀정핎알 합니닀. `fuse_max_seq_len` 맀개변수는 전첎 시퀀슀 Ꞟ읎로, 컚텍슀튞 Ꞟ읎와 예상 생성 Ꞟ읎륌 포핚핎알 합니닀. 안전하게 사용하Ʞ 위핎 더 큰 값윌로 섀정할 수 있습니닀. 
예륌 듀얎, [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) 몚덞의 AWQ 몚듈을 퓚슈핎볎겠습니닀. ```python import torch from transformers import AwqConfig, AutoModelForCausalLM model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ" quantization_config = AwqConfig( bits=4, fuse_max_seq_len=512, do_fuse=True, ) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0) ``` [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) 몚덞은 퓚슈된 몚듈읎 있는 겜우와 없는 겜우 몚두 `batch_size=1` 로 성능 평가되었습니닀. <figcaption class="text-center text-gray-500 text-lg">퓚슈되지 않은 몚듈</figcaption> | 배치 크Ʞ | 프늬필 Ꞟ읎 | 디윔드 Ꞟ읎 | 프늬필 토큰/쎈 | 디윔드 토큰/쎈 | 메몚늬 (VRAM) | |-------------:|-----------------:|----------------:|-------------------:|------------------:|:----------------| | 1 | 32 | 32 | 60.0984 | 38.4537 | 4.50 GB (5.68%) | | 1 | 64 | 64 | 1333.67 | 31.6604 | 4.50 GB (5.68%) | | 1 | 128 | 128 | 2434.06 | 31.6272 | 4.50 GB (5.68%) | | 1 | 256 | 256 | 3072.26 | 38.1731 | 4.50 GB (5.68%) | | 1 | 512 | 512 | 3184.74 | 31.6819 | 4.59 GB (5.80%) | | 1 | 1024 | 1024 | 3148.18 | 36.8031 | 4.81 GB (6.07%) | | 1 | 2048 | 2048 | 2927.33 | 35.2676 | 5.73 GB (7.23%) | <figcaption class="text-center text-gray-500 text-lg">퓚슈된 몚듈</figcaption> | 배치 크Ʞ | 프늬필 Ꞟ읎 | 디윔드 Ꞟ읎 | 프늬필 토큰/쎈 | 디윔드 토큰/쎈 | 메몚늬 (VRAM) | |-------------:|-----------------:|----------------:|-------------------:|------------------:|:----------------| | 1 | 32 | 32 | 81.4899 | 80.2569 | 4.00 GB (5.05%) | | 1 | 64 | 64 | 1756.1 | 106.26 | 4.00 GB (5.05%) | | 1 | 128 | 128 | 2479.32 | 105.631 | 4.00 GB (5.06%) | | 1 | 256 | 256 | 1813.6 | 85.7485 | 4.01 GB (5.06%) | | 1 | 512 | 512 | 2848.9 | 97.701 | 4.11 GB (5.19%) | | 1 | 1024 | 1024 | 3044.35 | 87.7323 | 4.41 GB (5.57%) | | 1 | 2048 | 2048 | 2715.11 | 89.4709 | 5.57 GB (7.04%) | 퓚슈된 몚듈 및 퓚슈되지 않은 몚듈의 속도와 처늬량은 [optimum-benchmark](https://github.com/huggingface/optimum-benchmark)띌읎람러늬륌 사용하여 테슀튞 되었습니닀. <div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_forward_memory_plot.png" alt="generate throughput per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500">포워드 플크 메몚늬 (forward peak memory)/배치 크Ʞ</figcaption> </div> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_generate_throughput_plot.png" alt="forward latency per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500"> 생성 처늬량/배치크Ʞ</figcaption> </div> </div> </hfoption> <hfoption id="unsupported architectures"> 퓚슈된 몚듈을 지원하지 않는 아킀텍처의 겜우, `modules_to_fuse` 맀개변수륌 사용핎 직접 퓚슈 맀핑을 만듀얎 ì–Žë–€ 몚듈을 퓚슈할지 정의핎알합니닀. 예로, [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) 몚덞의 AWQ 몚듈을 퓚슈하는 방법입니닀. 
```python import torch from transformers import AwqConfig, AutoModelForCausalLM model_id = "TheBloke/Yi-34B-AWQ" quantization_config = AwqConfig( bits=4, fuse_max_seq_len=512, modules_to_fuse={ "attention": ["q_proj", "k_proj", "v_proj", "o_proj"], "layernorm": ["ln1", "ln2", "norm"], "mlp": ["gate_proj", "up_proj", "down_proj"], "use_alibi": False, "num_attention_heads": 56, "num_key_value_heads": 8, "hidden_size": 7168 } ) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0) ``` `modules_to_fuse` 맀개변수는 닀음을 포핚핎알 합니닀: - `"attention"`: 얎텐션 레읎얎는 닀음 순서로 퓚슈하섞요 : 쿌늬 (query), í‚€ (key), 값 (value) , 출력 프로젝션 계잵 (output projection layer). 핎당 레읎얎륌 퓚슈하지 않윌렀멎 빈 늬슀튞륌 전달하섞요. - `"layernorm"`: 사용자 정의 퓚슈 레읎얎 정규화로 교할 레읎얎 정규화 레읎얎명. 핎당 레읎얎륌 퓚슈하지 않윌렀멎 빈 늬슀튞륌 전달하섞요. - `"mlp"`: 닚음 MLP 레읎얎로 퓚슈할 MLP 레읎얎 순서 : (게읎튞 (gate) (덎슀(dense), 레읎얎(layer), 포슀튞 얎텐션(post-attention)) / 위 / 아래 레읎얎). - `"use_alibi"`: 몚덞읎 ALiBi positional embedding을 사용할 겜우 섀정합니닀. - `"num_attention_heads"`: 얎텐션 헀드 (attention heads)의 수륌 섀정합니닀. - `"num_key_value_heads"`: 귞룹화 쿌늬 얎텐션 (GQA)을 구현하는데 사용되는 í‚€ 값 헀드의 수륌 섀정합니닀. `num_key_value_heads=num_attention_heads`로 섀정할 겜우, 몚덞은 닀쀑 헀드 얎텐션 (MHA)가 사용되며, `num_key_value_heads=1` 는 닀쀑 쿌늬 얎텐션 (MQA)가, 나뚞지는 GQA가 사용됩니닀. - `"hidden_size"`: 숚겚진 표현(hidden representations)의 찚원을 섀정합니닀. </hfoption> </hfoptions> ## ExLlama-v2 서포튞 [[exllama-v2-support]] 최신 버전 `autoawq`는 빠륞 프늬필곌 디윔딩을 위핎 ExLlama-v2 컀널을 지원합니닀. 시작하Ʞ 위핎 뚌저 최신 버전 `autoawq` 륌 섀치하섞요 : ```bash pip install git+https://github.com/casper-hansen/AutoAWQ.git ``` 맀개변수륌 `version="exllama"`로 섀정핎 `AwqConfig()`륌 생성하고 몚덞에 넘겚죌섞요. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig quantization_config = AwqConfig(version="exllama") model = AutoModelForCausalLM.from_pretrained( "TheBloke/Mistral-7B-Instruct-v0.1-AWQ", quantization_config=quantization_config, device_map="auto", ) input_ids = torch.randint(0, 100, (1, 128), dtype=torch.long, device="cuda") output = model(input_ids) print(output.logits) tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-AWQ") input_ids = tokenizer.encode("How to make a cake", return_tensors="pt").to(model.device) output = model.generate(input_ids, do_sample=True, max_length=50, pad_token_id=50256) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` <Tip warning={true}> 읎 Ʞ능은 AMD GPUs에서 지원됩니닀. </Tip>
transformers/docs/source/ko/quantization/awq.md/0
{ "file_path": "transformers/docs/source/ko/quantization/awq.md", "repo_id": "transformers", "token_count": 7298 }
290
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 읎믞지 특징 추출[[image-feature-extraction]] [[open-in-colab]] 읎믞지 특징 추출은 죌얎진 읎믞지에서 의믞론적윌로 의믞 있는 특징을 추출하는 작업입니닀. 읎는 읎믞지 유사성 및 읎믞지 검색 등 닀양한 사용 사례가 있습니닀. 게닀가 대부분의 컎퓚터 비전 몚덞은 읎믞지 특징 추출에 사용할 수 있윌며, 여Ʞ서 작업 특화 헀드(읎믞지 분류, 묌첎 감지 등)륌 제거하고 특징을 얻을 수 있습니닀. 읎러한 특징은 가장자늬 감지, 몚서늬 감지 등 고찚원 수쀀에서 맀우 유용합니닀. 또한 몚덞의 깊읎에 따띌 싀제 섞계에 대한 정볎(예: 고양읎가 얎떻게 생게는지)륌 포핚할 수도 있습니닀. 따띌서 읎러한 출력은 특정 데읎터 섞튞에 대한 새로욎 분류Ʞ륌 훈령하는 데 사용할 수 있습니닀. 읎 가읎드에서는: - `image-feature-extraction` 파읎프띌읞을 활용하여 ê°„ë‹ší•œ 읎믞지 유사성 시슀템을 구축하는 방법을 배웁니닀. - Ʞ볞 몚덞 추론윌로 동음한 작업을 수행합니닀. ## `image-feature-extraction` 파읎프띌읞을 읎용한 읎믞지 유사성[[image-similarity-using-image-feature-extraction-pipeline]] 묌고Ʞ 귞묌 위에 앉아 있는 두 장의 고양읎 사진읎 있습니닀. 읎 쀑 하나는 생성된 읎믞지입니닀. ```python from PIL import Image import requests img_urls = ["https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png", "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.jpeg"] image_real = Image.open(requests.get(img_urls[0], stream=True).raw).convert("RGB") image_gen = Image.open(requests.get(img_urls[1], stream=True).raw).convert("RGB") ``` 파읎프띌읞을 싀행핎 뎅시닀. 뚌저 파읎프띌읞을 쎈Ʞ화하섞요. 몚덞을 지정하지 않윌멎, 파읎프띌읞은 자동윌로 [google/vit-base-patch16-224](google/vit-base-patch16-224) 몚덞로 쎈Ʞ화됩니닀. 유사도륌 계산하렀멎 `pool`을 True로 섀정하섞요. ```python import torch from transformers import pipeline DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') pipe = pipeline(task="image-feature-extraction", model_name="google/vit-base-patch16-384", device=DEVICE, pool=True) ``` `pipe`륌 사용하여 추론하렀멎 두 읎믞지륌 몚두 전달하섞요. ```python outputs = pipe([image_real, image_gen]) ``` 출력에는 두 읎믞지의 풀링된(pooled) 임베딩읎 포핚되얎 있습니닀. ```python # 닚음 출력의 Ꞟ읎 구하Ʞ print(len(outputs[0][0])) # 출력 결곌 표시하Ʞ print(outputs) # 768 # [[[-0.03909236937761307, 0.43381670117378235, -0.06913255900144577, ``` 유사도 점수륌 얻윌렀멎, 읎듀을 유사도 핚수에 전달핎알 합니닀. ```python from torch.nn.functional import cosine_similarity similarity_score = cosine_similarity(torch.Tensor(outputs[0]), torch.Tensor(outputs[1]), dim=1) print(similarity_score) # tensor([0.6043]) ``` 풀링 읎전의 마지막 은닉 상태륌 얻고 싶닀멎, `pool` 맀개변수에 아묎 값도 전달하지 마섞요. 또한, Ʞ볞값은 `False`로 섀정되얎 있습니닀. 읎 은닉 상태는 몚덞의 특징을 Ʞ반윌로 새로욎 분류Ʞ나 몚덞을 훈렚시킀는 데 유용합니닀. ```python pipe = pipeline(task="image-feature-extraction", model_name="google/vit-base-patch16-224", device=DEVICE) output = pipe(image_real) ``` 아직 출력읎 풀링되지 않았Ʞ 때묞에, 첫 번짞 찚원은 배치 크Ʞ읎고 마지막 두 찚원은 임베딩 형태읞 마지막 은닉 상태륌 얻을 수 있습니닀. ```python import numpy as np print(np.array(outputs).shape) # (1, 197, 768) ``` ## `AutoModel`을 사용하여 특징곌 유사성 얻Ʞ[[getting-features-and-similarities-using-automodel]] transformers의 `AutoModel` 큎래슀륌 사용하여 특징을 얻을 수도 있습니닀. `AutoModel`은 작업 특화 헀드 없읎 몚든 transformers 몚덞을 로드할 수 있윌며, 읎륌 통핎 특징을 추출할 수 있습니닀. 
```python from transformers import AutoImageProcessor, AutoModel processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") model = AutoModel.from_pretrained("google/vit-base-patch16-224").to(DEVICE) ``` 추론을 위한 ê°„ë‹ší•œ 핚수륌 작성핎 볎겠습니닀. 뚌저 입력값을 `processor`에 전달한 닀음, ê·ž 출력값을 `model`에 전달할 것입니닀. ```python def infer(image): inputs = processor(image, return_tensors="pt").to(DEVICE) outputs = model(**inputs) return outputs.pooler_output ``` 읎 핚수에 읎믞지륌 직접 전달하여 임베딩을 얻을 수 있습니닀. ```python embed_real = infer(image_real) embed_gen = infer(image_gen) ``` 귞늬고 읎 임베딩을 사용하여 닀시 유사도륌 계산할 수 있습니닀. ```python from torch.nn.functional import cosine_similarity similarity_score = cosine_similarity(embed_real, embed_gen, dim=1) print(similarity_score) # tensor([0.6061], device='cuda:0', grad_fn=<SumBackward1>) ```
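읎렇게 얻은 임베딩은 묞서 앞부분에서 얞꞉한 읎믞지 검색에도 ê·žëŒ€ëĄœ 활용할 수 있습니닀. 아래는 위에서 정의한 `infer` 핚수륌 사용하는 ê°„ë‹ší•œ 슀쌀치입니닀. `candidate_images`는 검색 대상 읎믞지 목록을 가정한 예시음 뿐입니닀.

```python
import torch
from torch.nn.functional import cosine_similarity

# candidate_images는 PIL.Image 객첎 목록읎띌고 가정합니닀
candidate_images = [image_real, image_gen]

# 각 후볎 읎믞지의 임베딩을 믞늬 계산핎 둡니닀
candidate_embeds = torch.cat([infer(img) for img in candidate_images], dim=0)


def search(query_image, top_k=1):
    query_embed = infer(query_image)  # (1, hidden_size)
    scores = cosine_similarity(query_embed, candidate_embeds, dim=1)
    top = scores.topk(k=min(top_k, len(candidate_images)))
    return [(i, scores[i].item()) for i in top.indices.tolist()]


print(search(image_gen))  # (읞덱슀, 유사도 점수) 목록
```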
transformers/docs/source/ko/tasks/image_feature_extraction.md/0
{ "file_path": "transformers/docs/source/ko/tasks/image_feature_extraction.md", "repo_id": "transformers", "token_count": 3518 }
291
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 영상 분류 [[video-classification]] [[open-in-colab]] 영상 분류는 영상 전첎에 레읎랔 또는 큎래슀륌 지정하는 작업입니닀. 각 영상에는 하나의 큎래슀가 있을 것윌로 예상됩니닀. 영상 분류 몚덞은 영상을 입력윌로 받아 얎느 큎래슀에 속하는지에 대한 예잡을 반환합니닀. 읎러한 몚덞은 영상읎 ì–Žë–€ 낎용읞지 분류하는 데 사용될 수 있습니닀. 영상 분류의 싀제 응용 예는 플튞니슀 앱에서 유용한 동작 / 욎동 읞식 서비슀가 있습니닀. 읎는 또한 시각 장애읞읎 읎동할 때 볎조하는데 사용될 수 있습니닀 읎 가읎드에서는 닀음을 수행하는 방법을 볎여쀍니닀: 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) 데읎터 섞튞의 하위 집합을 통핎 [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) 몚덞을 믞섞 조정하Ʞ. 2. 믞섞 조정한 몚덞을 추론에 사용하Ʞ. <Tip> 읎 작업곌 혾환되는 몚든 아킀텍처와 첎크포읞튞륌 볎렀멎 [작업 페읎지](https://huggingface.co/tasks/video-classification)륌 확읞하는 것읎 좋습니닀. </Tip> 시작하Ʞ 전에 필요한 몚든 띌읎람러늬가 섀치되었는지 확읞하섞요: ```bash pip install -q pytorchvideo transformers evaluate ``` 영상을 처늬하고 쀀비하Ʞ 위핎 [PyTorchVideo](https://pytorchvideo.org/)(읎하 `pytorchvideo`)륌 사용합니닀. 컀뮀니티에 몚덞을 업로드하고 공유할 수 있도록 Hugging Face 계정에 로귞읞하는 것을 권장합니닀. 프롬프튞가 나타나멎 토큰을 입력하여 로귞읞하섞요: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## UCF101 데읎터셋 불러였Ʞ [[load-ufc101-dataset]] [UCF-101](https://www.crcv.ucf.edu/data/UCF101.php) 데읎터 섞튞의 하위 집합(subset)을 불러였는 것윌로 시작할 수 있습니닀. 전첎 데읎터 섞튞륌 학습하는데 더 많은 시간을 할애하Ʞ 전에 데읎터의 하위 집합을 불러와 몚든 것읎 잘 작동하는지 싀험하고 확읞할 수 있습니닀. ```py >>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` 데읎터 섞튞의 하위 집합읎 닀욎로드 되멎, 압축된 파음의 압축을 핎제핎알 합니닀: ```py >>> import tarfile >>> with tarfile.open(file_path) as t: ... t.extractall(".") ``` 전첎 데읎터 섞튞는 닀음곌 같읎 구성되얎 있습니닀. ```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` 정렬된 영상의 겜로는 닀음곌 같습니닀: ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ``` 동음한 귞룹/장멎에 속하는 영상 큎늜은 파음 겜로에서 `g`로 표시되얎 있습니닀. 예륌 듀멎, `v_ApplyEyeMakeup_g07_c04.avi`와 `v_ApplyEyeMakeup_g07_c06.avi` 읎 있습니닀. 읎 둘은 같은 귞룹입니닀. 검슝 및 평가 데읎터 분할을 할 때, [데읎터 누출(data leakage)](https://www.kaggle.com/code/alexisbcook/data-leakage)을 방지하Ʞ 위핎 동음한 귞룹 / 장멎의 영상 큎늜을 사용하지 ì•Šì•„ì•Œ 합니닀. 읎 튜토늬얌에서 사용하는 하위 집합은 읎러한 정볎륌 고렀하고 있습니닀. ê·ž 닀음윌로, 데읎터 섞튞에 졎재하는 띌벚을 추출합니닀. 또한, 몚덞을 쎈Ʞ화할 때 도움읎 될 딕셔너늬(dictionary data type)륌 생성합니닀. * `label2id`: 큎래슀 읎늄을 정수에 맀핑합니닀. * `id2label`: 정수륌 큎래슀 읎늄에 맀핑합니닀. 
```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` 읎 데읎터 섞튞에는 쎝 10개의 고유한 큎래슀가 있습니닀. 각 큎래슀마닀 30개의 영상읎 훈령 섞튞에 있습니닀 ## 믞섞 조정하Ʞ 위핎 몚덞 가젞였Ʞ [[load-a-model-to-fine-tune]] 사전 훈령된 첎크포읞튞와 첎크포읞튞에 연ꎀ된 읎믞지 프로섞서륌 사용하여 영상 분류 몚덞을 읞슀턎슀화합니닀. 몚덞의 읞윔더에는 믞늬 학습된 맀개변수가 제공되며, 분류 헀드(데읎터륌 분류하는 마지막 레읎얎)는 묎작위로 쎈Ʞ화됩니닀. 데읎터 섞튞의 전처늬 파읎프띌읞을 작성할 때는 읎믞지 프로섞서가 유용합니닀. ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... ) ``` 몚덞을 가젞였는 동안, 닀음곌 같은 겜고륌 마죌칠 수 있습니닀: ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` 위 겜고는 우늬가 음부 가쀑치(예: `classifier` 잵의 가쀑치와 펞향)륌 버늬고 새로욎 `classifier` 잵의 가쀑치와 펞향을 묎작위로 쎈Ʞ화하고 있닀는 것을 알렀쀍니닀. 읎 겜우에는 믞늬 학습된 가쀑치가 없는 새로욎 헀드륌 추가하고 있윌므로, 띌읎람러늬가 몚덞을 추론에 사용하Ʞ 전에 믞섞 조정하띌고 겜고륌 볎낎는 것은 당연합니닀. 귞늬고 읎제 우늬는 읎 몚덞을 믞섞 조정할 예정입니닀. **ì°žê³ ** 읎 [첎크포읞튞](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics)는 도메읞읎 많읎 쀑첩된 유사한 닀욎슀튞늌 작업에 대핮 믞섞 조정하여 얻은 첎크포읞튞읎므로 읎 작업에서 더 나은 성능을 볎음 수 있습니닀. `MCG-NJU/videomae-base-finetuned-kinetics` 데읎터 섞튞륌 믞섞 조정하여 얻은 [첎크포읞튞](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset)도 있습니닀. ## 훈렚을 위한 데읎터 섞튞 쀀비하Ʞ[[prepare-the-datasets-for-training]] 영상 전처늬륌 위핎 [PyTorchVideo 띌읎람러늬](https://pytorchvideo.org/)륌 활용할 것입니닀. 필요한 종속성을 가젞였는 것윌로 시작하섞요. ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... ) >>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... ) ``` 학습 데읎터 섞튞 변환에는 '균음한 시간 샘플링(uniform temporal subsampling)', '픜셀 정규화(pixel normalization)', '랜덀 잘띌낎Ʞ(random cropping)' 및 '랜덀 수평 뒀집Ʞ(random horizontal flipping)'의 조합을 사용합니닀. 
검슝 및 평가 데읎터 섞튞 변환에는 '랜덀 잘띌낎Ʞ'와 '랜덀 뒀집Ʞ'륌 제왞한 동음한 변환 첎읞을 유지합니닀. 읎러한 변환에 대핮 자섞히 알아볎렀멎 [PyTorchVideo 공식 묞서](https://pytorchvideo.org)륌 확읞하섞요. 사전 훈령된 몚덞곌 ꎀ렚된 읎믞지 프로섞서륌 사용하여 닀음 정볎륌 얻을 수 있습니닀: * 영상 프레임 픜셀을 정규화하는 데 사용되는 읎믞지 평균곌 표쀀 펞찚 * 영상 프레임읎 조정될 공간 핎상도 뚌저, 몇 가지 상수륌 정의합니닀. ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... width = image_processor.size["width"] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` 읎제 데읎터 섞튞에 특화된 전처늬(transform)곌 데읎터 섞튞 자첎륌 정의합니닀. 뚌저 훈령 데읎터 섞튞로 시작합니닀: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... ) >>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` 같은 방식의 작업 흐늄을 검슝곌 평가 섞튞에도 적용할 수 있습니닀. ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... ) >>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) >>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) ``` **ì°žê³ **: 위의 데읎터 섞튞의 파읎프띌읞은 [공식 파읎토치 예제](https://pytorchvideo.org/docs/tutorial_classification#dataset)에서 가젞옚 것입니닀. 우늬는 UCF-101 데읎터셋에 맞게 [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) 핚수륌 사용하고 있습니닀. 낎부적윌로 읎 핚수는 [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) 객첎륌 반환합니닀. `LabeledVideoDataset` 큎래슀는 PyTorchVideo 데읎터셋에서 몚든 영상 ꎀ렚 작업의 Ʞ볞 큎래슀입니닀. 따띌서 PyTorchVideo에서 믞늬 제공하지 않는 사용자 지정 데읎터 섞튞륌 사용하렀멎, 읎 큎래슀륌 적절하게 확장하멎 됩니닀. 더 자섞한 사항읎 알고 싶닀멎 `data` API [묞서](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) 륌 찞고하섞요. 또한 위의 예시와 유사한 구조륌 갖는 데읎터 섞튞륌 사용하고 있닀멎, `pytorchvideo.data.Ucf101()` 핚수륌 사용하는 데 묞제가 없을 것입니닀. 데읎터 섞튞에 영상의 개수륌 알Ʞ 위핎 `num_videos` 읞수에 접귌할 수 있습니닀. ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ``` ## 더 나은 디버깅을 위핎 전처늬 영상 시각화하Ʞ[[visualize-the-preprocessed-video-for-better-debugging]] ```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... """Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).astype("uint8") ... 
return img.clip(0, 255) >>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename >>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div> ## 몚덞 훈렚하Ʞ[[train-the-model]] 🀗 Transformers의 [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer)륌 사용하여 몚덞을 훈렚시쌜볎섞요. `Trainer`륌 읞슀턎슀화하렀멎 훈령 섀정곌 평가 지표륌 정의핎알 합니닀. 가장 쀑요한 것은 [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments)입니닀. 읎 큎래슀는 훈렚을 구성하는 몚든 속성을 포핚하며, 훈령 쀑 첎크포읞튞륌 저장할 출력 폮더 읎늄을 필요로 합니닀. 또한 🀗 Hub의 몚덞 저장소의 몚든 정볎륌 동Ʞ화하는 데 도움읎 됩니닀. 대부분의 훈령 읞수는 따로 섀명할 필요는 없습니닀. 하지만 여Ʞ에서 쀑요한 읞수는 `remove_unused_columns=False` 입니닀. 읎 읞자는 몚덞의 혞출 핚수에서 사용되지 않는 몚든 속성 ì—Ž(columns)을 삭제합니닀. Ʞ볞값은 음반적윌로 True입니닀. 읎는 사용되지 않는 Ʞ능 엎을 삭제하는 것읎 읎상적읎며, 입력을 몚덞의 혞출 핚수로 풀Ʞ(unpack)가 쉬워지Ʞ 때묞입니닀. 하지만 읎 겜우에는 `pixel_values`(몚덞의 입력윌로 필수적읞 í‚€)륌 생성하Ʞ 위핎 사용되지 않는 Ʞ능('video'가 특히 귞렇습니닀)읎 필요합니닀. 따띌서 remove_unused_columns을 False로 섀정핎알 합니닀. ```py >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4 >>> args = TrainingArguments( ... new_model_name, ... remove_unused_columns=False, ... eval_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... ) ``` `pytorchvideo.data.Ucf101()` 핚수로 반환되는 데읎터 섞튞는 `__len__` 메소드가 읎식되얎 있지 않습니닀. 따띌서, `TrainingArguments`륌 읞슀턎슀화할 때 `max_steps`륌 정의핎알 합니닀. 닀음윌로, 평가지표륌 불러였고, 예잡값에서 평가지표륌 계산할 핚수륌 정의합니닀. 필요한 전처늬 작업은 예잡된 로짓(logits)에 argmax 값을 췚하는 것뿐입니닀: ```py import evaluate metric = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **평가에 대한 찞고사항**: [VideoMAE 녌묞](https://arxiv.org/abs/2203.12602)에서 저자는 닀음곌 같은 평가 전략을 사용합니닀. 테슀튞 영상에서 여러 큎늜을 선택하고 ê·ž 큎늜에 닀양한 크롭을 적용하여 집계 점수륌 볎고합니닀. 귞러나 읎번 튜토늬얌에서는 간닚핚곌 간결핚을 위핎 핎당 전략을 고렀하지 않습니닀. 또한, 예제륌 묶얎서 배치륌 형성하는 `collate_fn`을 정의핎알합니닀. 각 배치는 `pixel_values`와 `labels`띌는 2개의 킀로 구성됩니닀. ```py >>> def collate_fn(examples): ... # permute to (num_frames, num_channels, height, width) ... pixel_values = torch.stack( ... [example["video"].permute(1, 0, 2, 3) for example in examples] ... ) ... labels = torch.tensor([example["label"] for example in examples]) ... 
return {"pixel_values": pixel_values, "labels": labels} ``` 귞런 닀음 읎 몚든 것을 데읎터 섞튞와 핚께 `Trainer`에 전달하Ʞ만 하멎 됩니닀: ```py >>> trainer = Trainer( ... model, ... args, ... train_dataset=train_dataset, ... eval_dataset=val_dataset, ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... data_collator=collate_fn, ... ) ``` 데읎터륌 읎믞 처늬했는데도 불구하고 `image_processor`륌 토크나읎저 읞수로 넣은 읎유는 JSON윌로 저장되는 읎믞지 프로섞서 구성 파음읎 Hub의 저장소에 업로드되도록 하Ʞ 위핚입니닀. `train` 메소드륌 혞출하여 몚덞을 믞섞 조정하섞요: ```py >>> train_results = trainer.train() ``` 학습읎 완료되멎, 몚덞을 [`~transformers.Trainer.push_to_hub`] 메소드륌 사용하여 허람에 공유하여 누구나 몚덞을 사용할 수 있도록 합니닀: ```py >>> trainer.push_to_hub() ``` ## 추론하Ʞ[[inference]] 좋습니닀. 읎제 믞섞 조정된 몚덞을 추론하는 데 사용할 수 있습니닀. 추론에 사용할 영상을 불러였섞요: ```py >>> sample_test_video = next(iter(test_dataset)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/> </div> 믞섞 조정된 몚덞을 추론에 사용하는 가장 ê°„ë‹ší•œ 방법은 [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline)에서 몚덞을 사용하는 것입니닀. 몚덞로 영상 분류륌 하Ʞ 위핎 `pipeline`을 읞슀턎슀화하고 영상을 전달하섞요: ```py >>> from transformers import pipeline >>> video_cls = pipeline(model="my_awesome_video_cls_model") >>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] ``` 만앜 원한닀멎 수동윌로 `pipeline`의 결곌륌 재현할 수 있습니닀: ```py >>> def run_inference(model, video): ... # (num_frames, num_channels, height, width) ... perumuted_sample_test_video = video.permute(1, 0, 2, 3) ... inputs = { ... "pixel_values": perumuted_sample_test_video.unsqueeze(0), ... "labels": torch.tensor( ... [sample_test_video["label"]] ... ), # this can be skipped if you don't have labels available. ... } ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ... inputs = {k: v.to(device) for k, v in inputs.items()} ... model = model.to(device) ... # forward pass ... with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits ... return logits ``` 몚덞에 입력값을 넣고 `logits`을 반환받윌섞요: ```py >>> logits = run_inference(trained_model, sample_test_video["video"]) ``` `logits`을 디윔딩하멎, 우늬는 닀음 결곌륌 얻을 수 있습니닀: ```py >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```
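argmax 대신 각 큎래슀에 대한 확률읎 필요하닀멎, 소프튞맥슀륌 적용핎 상위 큎래슀 몇 개륌 핚께 확읞할 수도 있습니닀. 아래는 위에서 얻은 `logits`륌 ê·žëŒ€ëĄœ 사용하는 ê°„ë‹ší•œ 슀쌀치입니닀.

```py
>>> import torch

>>> probs = torch.softmax(logits, dim=-1)[0]
>>> top5 = probs.topk(5)
>>> for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
...     print(f"{model.config.id2label[idx]}: {score:.4f}")
# 가장 높은 확률의 큎래슀는 위의 Predicted class 출력곌 동음합니닀.
```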
transformers/docs/source/ko/tasks/video_classification.md/0
{ "file_path": "transformers/docs/source/ko/tasks/video_classification.md", "repo_id": "transformers", "token_count": 13644 }
292
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 䜿甚 🀗 Tokenizers 䞭的分词噚 [`PreTrainedTokenizerFast`] 䟝赖于 [🀗 Tokenizers](https://huggingface.co/docs/tokenizers) 库。从 🀗 Tokenizers 库获埗的分词噚可以被蜻束地加蜜到 🀗 Transformers 䞭。 圚了解具䜓内容之前让我们先甚几行代码创建䞀䞪虚拟的分词噚 ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` 现圚我们拥有了䞀䞪针对我们定义的文件进行训练的分词噚。我们可以圚圓前运行时䞭继续䜿甚它或者将其保存到䞀䞪 JSON 文件以䟛将来重倍䜿甚。 ## 盎接从分词噚对象加蜜 让我们看看劂䜕利甚 🀗 Transformers 库䞭的这䞪分词噚对象。[`PreTrainedTokenizerFast`] 类允讞通过接受已实䟋化的 *tokenizer* 对象䜜䞺参数进行蜻束实䟋化 ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` 现圚可以䜿甚这䞪对象䜿甚 🀗 Transformers 分词噚共享的所有方法前埀[分词噚页面](main_classes/tokenizer)了解曎倚信息。 ## 从 JSON 文件加蜜 䞺了从 JSON 文件䞭加蜜分词噚让我们先保存我们的分词噚 ```python >>> tokenizer.save("tokenizer.json") ``` 我们保存歀文件的路埄可以通过 `tokenizer_file` 参数䌠递给 [`PreTrainedTokenizerFast`] 初始化方法 ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` 现圚可以䜿甚这䞪对象䜿甚 🀗 Transformers 分词噚共享的所有方法前埀[分词噚页面](main_classes/tokenizer)了解曎倚信息。
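加蜜完成后这䞪分词噚的甚法和其他 🀗 Transformers 分词噚完党䞀样。䞋面是䞀䞪简短的瀺意性瀺䟋假讟 `fast_tokenizer` 已按䞊文创建完成瀺䟋句子仅䜜挔瀺

```python
>>> encoding = fast_tokenizer("Hello, how are you?")
>>> print(encoding.tokens())
>>> print(encoding["input_ids"])

>>> # 也可以甚 save_pretrained 保存䞺标准的 🀗 Transformers 栌匏之后可甚 from_pretrained 重新加蜜
>>> fast_tokenizer.save_pretrained("my-tokenizer")
```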
transformers/docs/source/zh/fast_tokenizers.md/0
{ "file_path": "transformers/docs/source/zh/fast_tokenizers.md", "repo_id": "transformers", "token_count": 1249 }
293
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Callbacks Callbacks可以甚来自定义PyTorch [Trainer]䞭训练埪环行䞺的对象歀功胜尚未圚TensorFlow䞭实现该对象可以检查训练埪环状态甚于进床报告、圚TensorBoard或其他ML平台䞊记圕日志等并做出决策䟋劂提前停止。 Callbacks是“只读”的代码片段陀了它们返回的[TrainerControl]对象倖它们䞍胜曎改训练埪环䞭的任䜕内容。对于需芁曎改训练埪环的自定义悚应该继承[Trainer]并重蜜悚需芁的方法有关瀺䟋请参见[trainer](trainer)。 默讀情况䞋`TrainingArguments.report_to` 讟眮䞺"all"然后[Trainer]将䜿甚以䞋callbacks。 - [`DefaultFlowCallback`]它倄理默讀的日志记圕、保存和评䌰行䞺 - [`PrinterCallback`] 或 [`ProgressCallback`]甚于星瀺进床和打印日志劂果通过[`TrainingArguments`]停甚tqdm则䜿甚第䞀䞪凜数吊则䜿甚第二䞪。 - [`~integrations.TensorBoardCallback`]劂果TensorBoard可访问通过PyTorch版本 >= 1.4 或者 tensorboardX。 - [`~integrations.WandbCallback`]劂果安装了[wandb](https://www.wandb.com/)。 - [`~integrations.CometCallback`]劂果安装了[comet_ml](https://www.comet.com/site/)。 - [`~integrations.MLflowCallback`]劂果安装了[mlflow](https://www.mlflow.org/)。 - [`~integrations.NeptuneCallback`]劂果安装了[neptune](https://neptune.ai/)。 - [`~integrations.AzureMLCallback`]劂果安装了[azureml-sdk](https://pypi.org/project/azureml-sdk/)。 - [`~integrations.CodeCarbonCallback`]劂果安装了[codecarbon](https://pypi.org/project/codecarbon/)。 - [`~integrations.ClearMLCallback`]劂果安装了[clearml](https://github.com/allegroai/clearml)。 - [`~integrations.DagsHubCallback`]劂果安装了[dagshub](https://dagshub.com/)。 - [`~integrations.FlyteCallback`]劂果安装了[flyte](https://flyte.org/)。 - [`~integrations.DVCLiveCallback`]劂果安装了[dvclive](https://dvc.org/doc/dvclive)。 劂果安装了䞀䞪蜯件包䜆悚䞍垌望䜿甚盞关的集成悚可以将 `TrainingArguments.report_to` 曎改䞺仅包含悚想芁䜿甚的集成的列衚䟋劂 `["azure_ml", "wandb"]`。 实现callbacks的䞻芁类是[`TrainerCallback`]。它获取甚于实䟋化[`Trainer`]的[`TrainingArguments`]可以通过[`TrainerState`]访问该Trainer的内郚状态并可以通过[`TrainerControl`]对训练埪环执行䞀些操䜜。 ## 可甚的Callbacks 这里是库里可甚[`TrainerCallback`]的列衚 [[autodoc]] integrations.CometCallback - setup [[autodoc]] DefaultFlowCallback [[autodoc]] PrinterCallback [[autodoc]] ProgressCallback [[autodoc]] EarlyStoppingCallback [[autodoc]] integrations.TensorBoardCallback [[autodoc]] integrations.WandbCallback - setup [[autodoc]] integrations.MLflowCallback - setup [[autodoc]] integrations.AzureMLCallback [[autodoc]] integrations.CodeCarbonCallback [[autodoc]] integrations.NeptuneCallback [[autodoc]] integrations.ClearMLCallback [[autodoc]] integrations.DagsHubCallback [[autodoc]] integrations.FlyteCallback [[autodoc]] integrations.DVCLiveCallback - setup ## TrainerCallback [[autodoc]] TrainerCallback 以䞋是劂䜕䜿甚PyTorch泚册自定义callback的瀺䟋 [`Trainer`]: ```python class MyCallback(TrainerCallback): "A callback that prints a message at the beginning of training" def on_train_begin(self, args, state, control, **kwargs): print("Starting training") trainer = Trainer( model, args, train_dataset=train_dataset, eval_dataset=eval_dataset, callbacks=[MyCallback], # We can either pass the callback class this way or an instance of it (MyCallback()) ) ``` 
泚册callback的及䞀种方匏是调甚 `trainer.add_callback()`劂䞋所瀺 ```python trainer = Trainer(...) trainer.add_callback(MyCallback) # Alternatively, we can pass an instance of the callback class trainer.add_callback(MyCallback()) ``` ## TrainerState [[autodoc]] TrainerState ## TrainerControl [[autodoc]] TrainerControl
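䞋面是䞀䞪简短的瀺意性瀺䟋展瀺 callback 劂䜕通过修改 [`TrainerControl`] 来圱响训练埪环这里圓日志䞭的训练损倱䜎于某䞪阈倌时提前停止训练。阈倌 `0.1` 只是䞀䞪假讟的数倌`model`、`args` 等沿甚䞊文瀺䟋䞭的变量

```python
class StopOnLowLossCallback(TrainerCallback):
    "A callback that stops training once the logged loss drops below a threshold"

    def __init__(self, threshold=0.1):
        self.threshold = threshold

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is not None and logs.get("loss", float("inf")) < self.threshold:
            control.should_training_stop = True
        return control


trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[StopOnLowLossCallback(threshold=0.1)],
)
```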
transformers/docs/source/zh/main_classes/callback.md/0
{ "file_path": "transformers/docs/source/zh/main_classes/callback.md", "repo_id": "transformers", "token_count": 2183 }
294
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Tokenizer tokenizer莟莣准倇蟓入以䟛暡型䜿甚。该库包含所有暡型的tokenizer。倧倚数tokenizer郜有䞀种版本䞀䞪是完党的 Python 实现及䞀䞪是基于 Rust 库 [🀗 Tokenizers](https://github.com/huggingface/tokenizers) 的“Fast”实现。"Fast" 实现允讞 1. 圚批量分词时星著提速 2. 圚原始字笊䞲字笊和单词和token空闎之闎进行映射的其他方法䟋劂获取包含给定字笊的token的玢匕或䞎给定token对应的字笊范囎。 基类 [PreTrainedTokenizer] 和 [PreTrained TokenizerFast] 实现了圚暡型蟓入䞭猖码字笊䞲蟓入的垞甚方法见䞋文并从本地文件或目圕或从库提䟛的预训练的 tokenizer从 HuggingFace 的 AWS S3 存傚库䞋蜜实䟋化/保存 python 和“Fast” tokenizer。它们郜䟝赖于包含垞甚方法的 [`~tokenization_utils_base.PreTrainedTokenizerBase`]和[`~tokenization_utils_base.SpecialTokensMixin`]。 因歀[`PreTrainedTokenizer`] 和 [`PreTrainedTokenizerFast`] 实现了䜿甚所有tokenizers的䞻芁方法 - 分词将字笊䞲拆分䞺子词标记字笊䞲将tokens字笊䞲蜬换䞺id并蜬换回来以及猖码/解码即标记化并蜬换䞺敎数。 - 以独立于底层结构BPE、SentencePiece  的方匏向词汇衚䞭添加新tokens。 - 管理特殊tokens劂mask、句銖等添加它们将它们分配给tokenizer䞭的属性以䟿于访问并确保它们圚标记过皋䞭䞍䌚被分割。 [`BatchEncoding`] 包含 [`~tokenization_utils_base.PreTrainedTokenizerBase`] 的猖码方法`__call__`、`encode_plus` 和 `batch_encode_plus`的蟓出并䞔是从 Python 字兞掟生的。圓tokenizer是纯 Python tokenizer时歀类的行䞺就像标准的 Python 字兞䞀样并保存这些方法计算的各种暡型蟓入`input_ids`、`attention_mask` 等。圓分词噚是“Fast”分词噚时即由 HuggingFace 的 [tokenizers 库](https://github.com/huggingface/tokenizers) 支持歀类还提䟛了几种高级对霐方法可甚于圚原始字笊䞲字笊和单词䞎token空闎之闎进行映射䟋劂获取包含给定字笊的token的玢匕或䞎给定token对应的字笊范囎。 ## PreTrainedTokenizer [[autodoc]] PreTrainedTokenizer - __call__ - add_tokens - add_special_tokens - apply_chat_template - batch_decode - decode - encode - push_to_hub - all ## PreTrainedTokenizerFast [`PreTrainedTokenizerFast`] 䟝赖于 [tokenizers](https://huggingface.co/docs/tokenizers) 库。可以非垞简单地将从 🀗 tokenizers 库获取的tokenizers加蜜到 🀗 transformers 䞭。查看 [䜿甚 🀗 tokenizers 的分词噚](../fast_tokenizers) 页面以了解劂䜕执行歀操䜜。 [[autodoc]] PreTrainedTokenizerFast - __call__ - add_tokens - add_special_tokens - apply_chat_template - batch_decode - decode - encode - push_to_hub - all ## BatchEncoding [[autodoc]] BatchEncoding
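䞋面是䞀䞪简短的瀺意性瀺䟋展瀺圓 [`BatchEncoding`] 由“Fast”分词噚生成时䞊文提到的对霐方法的甚法瀺䟋䞭的具䜓暡型和句子均䞺假讟

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
encoding = tokenizer("Hello world!")

print(encoding.tokens())  # token 字笊䞲列衚
print(encoding.word_ids())  # 每䞪 token 对应的单词玢匕特殊 token 䞺 None
print(encoding.char_to_token(6))  # 原始字笊䞲䞭第 6 䞪字笊'w'所圚的 token 玢匕
print(encoding.token_to_chars(2))  # 第 2 䞪 token 对应的原始字笊范囎
```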
transformers/docs/source/zh/main_classes/tokenizer.md/0
{ "file_path": "transformers/docs/source/zh/main_classes/tokenizer.md", "repo_id": "transformers", "token_count": 1932 }
295
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 甚于 TensorFlow æš¡åž‹çš„ XLA 集成 [[open-in-colab]] 加速线性代数也称䞺XLA是䞀䞪甚于加速TensorFlow暡型运行时闎的猖译噚。从[官方文档](https://www.tensorflow.org/xla)䞭可以看到 XLA加速线性代数是䞀种针对线性代数的特定领域猖译噚可以圚可胜䞍需芁曎改源代码的情况䞋加速TensorFlow暡型。 圚TensorFlow䞭䜿甚XLA非垞简单——它包含圚`tensorflow`库䞭并䞔可以䜿甚任䜕囟创建凜数䞭的`jit_compile`参数来觊发䟋劂[`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs)。圚䜿甚Keras方法劂`fit()`和`predict()`时只需将`jit_compile`参数䌠递给`model.compile()`即可启甚XLA。然而XLA䞍仅限于这些方法 - 它还可以甚于加速任䜕任意的`tf.function`。 圚🀗 Transformers䞭几䞪TensorFlow方法已经被重写䞺䞎XLA兌容包括[GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)、[T5](https://huggingface.co/docs/transformers/model_doc/t5)和[OPT](https://huggingface.co/docs/transformers/model_doc/opt)等文本生成暡型以及[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)等语音倄理暡型。 虜然确切的加速倍数埈倧皋床䞊取决于暡型䜆对于🀗 Transformers侭的TensorFlow文本生成暡型我们泚意到速床提高了纊100倍。本文档将解释劂䜕圚这些暡型䞊䜿甚XLA获埗最倧的性胜。劂果悚有兎趣了解曎倚关于基准测试和我们圚XLA集成背后的讟计哲孊的信息我们还将提䟛额倖的资源铟接。 ## 䜿甚 XLA 运行 TensorFlow 凜数 让我们考虑以䞋TensorFlow 䞭的暡型 ```py import tensorflow as tf model = tf.keras.Sequential( [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")] ) ``` 䞊述暡型接受绎床䞺 `(10,)` 的蟓入。我们可以像䞋面这样䜿甚暡型进行前向䌠播 ```py # Generate random inputs for the model. batch_size = 16 input_vector_dim = 10 random_inputs = tf.random.normal((batch_size, input_vector_dim)) # Run a forward pass. _ = model(random_inputs) ``` 䞺了䜿甚 XLA 猖译的凜数运行前向䌠播我们需芁执行以䞋操䜜 ```py xla_fn = tf.function(model, jit_compile=True) _ = xla_fn(random_inputs) ``` `model`的默讀`call()`凜数甚于猖译XLA囟。䜆劂果䜠想将其他暡型凜数猖译成XLA也是可以的劂䞋所瀺 ```py my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True) ``` ## 圚🀗 Transformers库䞭䜿甚XLA运行TensorFlow文本生成暡型 芁圚🀗 Transformers䞭启甚XLA加速生成悚需芁安装最新版本的`transformers`。悚可以通过运行以䞋呜什来安装它 ```bash pip install transformers --upgrade ``` 然后悚可以运行以䞋代码 ```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM # Will error if the minimal version of Transformers is not installed. 
from transformers.utils import check_min_version check_min_version("4.21.0") tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2") input_string = ["TensorFlow is"] # One line to create an XLA generation function xla_generate = tf.function(model.generate, jit_compile=True) tokenized_input = tokenizer(input_string, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") # Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the ``` 正劂悚所泚意到的圚`generate()`䞊启甚XLA只需芁䞀行代码。其䜙郚分代码保持䞍变。然而䞊面的代码片段䞭有䞀些䞎XLA盞关的泚意事项。悚需芁了解这些泚意事项以充分利甚XLA可胜垊来的性胜提升。我们将圚䞋面的郚分讚论这些内容。 ## 需芁关泚的泚意事项 圓悚銖次执行启甚XLA的凜数劂䞊面的`xla_generate()`时它将圚内郚尝试掚断计算囟这是䞀䞪耗时的过皋。这䞪过皋被称䞺[“tracing”](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing)。 悚可胜䌚泚意到生成时闎并䞍快。连续调甚`xla_generate()`或任䜕其他启甚了XLA的凜数䞍需芁再次掚断计算囟只芁凜数的蟓入䞎最初构建计算囟时的圢状盞匹配。对于具有固定蟓入圢状的暡态䟋劂囟像这䞍是问题䜆劂果悚正圚倄理具有可变蟓入圢状的暡态䟋劂文本则必须泚意。 䞺了确保`xla_generate()`始终䜿甚盞同的蟓入圢状悚可以圚调甚`tokenizer`时指定`padding`参数。 ```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2") input_string = ["TensorFlow is"] xla_generate = tf.function(model.generate, jit_compile=True) # Here, we call the tokenizer with padding options. tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") ``` 通过这种方匏悚可以确保`xla_generate()`的蟓入始终具有它跟螪的圢状从而加速生成时闎。悚可以䜿甚以䞋代码来验证这䞀点 ```py import time import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2") xla_generate = tf.function(model.generate, jit_compile=True) for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]: tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") start = time.time_ns() generated_tokens = xla_generate(**tokenized_input, num_beams=2) end = time.time_ns() print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n") ``` 圚Tesla T4 GPU䞊悚可以期望劂䞋的蟓出 ```bash Execution time -- 30819.6 ms Execution time -- 79.0 ms Execution time -- 78.9 ms ``` 第䞀次调甚`xla_generate()`䌚因䞺`tracing`而耗时䜆后续的调甚䌚快埗倚。请泚意任䜕时候对生成选项的曎改郜䌚觊发重新`tracing`从而富臎生成时闎减慢。 圚本文档䞭我们没有涵盖🀗 Transformers提䟛的所有文本生成选项。我们錓励悚阅读文档以了解高级甚䟋。 ## 附加资源 以䞋是䞀些附加资源劂果悚想深入了解圚🀗 Transformers和其他库䞋䜿甚XLA * [这䞪Colab Notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb) 提䟛了䞀䞪互劚挔瀺让悚可以尝试䜿甚XLA兌容的猖码噚-解码噚䟋劂[T5](https://huggingface.co/docs/transformers/model_doc/t5)和仅解码噚䟋劂[GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)文本生成暡型。 * [这篇博客文章](https://huggingface.co/blog/tf-xla-generate) 提䟛了XLA兌容暡型的比蟃基准抂述以及关于圚TensorFlow䞭䜿甚XLA的友奜介绍。 * [这篇博客文章](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html) 
讚论了我们圚🀗 Transformers䞭䞺TensorFlow暡型添加XLA支持的讟计理念。 * 掚荐甚于曎倚孊习XLA和TensorFlow囟的资源 * [XLA面向机噚孊习的䌘化猖译噚](https://www.tensorflow.org/xla) * [囟和tf.function简介](https://www.tensorflow.org/guide/intro_to_graphs) * [䜿甚tf.function获埗曎奜的性胜](https://www.tensorflow.org/guide/function)
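最后再补充䞀䞪和䞊文“需芁关泚的泚意事项”盞关的小瀺䟋仅䜜瀺意沿甚前面创建的 `xla_generate` 和 `tokenizer`曎改生成选项䟋劂 `num_beams`同样䌚觊发重新 tracing从而䜿那䞀次调甚明星变慢

```py
import time

tokenized_input = tokenizer(["TensorFlow is"], pad_to_multiple_of=8, padding=True, return_tensors="tf")

for num_beams in [2, 2, 4]:
    start = time.time_ns()
    _ = xla_generate(**tokenized_input, num_beams=num_beams)
    end = time.time_ns()
    print(f"num_beams={num_beams} -- {(end - start) / 1e6:.1f} ms")
# 第䞀次调甚以及把 num_beams 从 2 改䞺 4 的那次调甚郜䌚因䞺重新 tracing 而慢埗倚
```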
transformers/docs/source/zh/tf_xla.md/0
{ "file_path": "transformers/docs/source/zh/tf_xla.md", "repo_id": "transformers", "token_count": 4549 }
296
#!/usr/bin/env python
import torch

from transformers import CamembertForMaskedLM, CamembertTokenizer


def fill_mask(masked_input, model, tokenizer, topk=5):
    """Return the top-k completions for the single <mask> token in `masked_input`
    as (filled_sentence, probability, predicted_token) tuples."""
    # Adapted from https://github.com/pytorch/fairseq/blob/master/fairseq/models/roberta/hub_interface.py
    assert masked_input.count("<mask>") == 1
    input_ids = torch.tensor(tokenizer.encode(masked_input, add_special_tokens=True)).unsqueeze(0)  # Batch size 1
    logits = model(input_ids)[0]  # The prediction scores (logits) are the first element of the output
    # Locate the position of the <mask> token and keep only its logits
    masked_index = (input_ids.squeeze() == tokenizer.mask_token_id).nonzero().item()
    logits = logits[0, masked_index, :]
    prob = logits.softmax(dim=0)
    # Keep the top-k candidate tokens and their probabilities
    values, indices = prob.topk(k=topk, dim=0)
    topk_predicted_token_bpe = " ".join(
        [tokenizer.convert_ids_to_tokens(indices[i].item()) for i in range(len(indices))]
    )
    masked_token = tokenizer.mask_token
    topk_filled_outputs = []
    for index, predicted_token_bpe in enumerate(topk_predicted_token_bpe.split(" ")):
        # "\u2581" is the SentencePiece word-boundary marker; map it back to a plain space
        predicted_token = predicted_token_bpe.replace("\u2581", " ")
        if " {0}".format(masked_token) in masked_input:
            topk_filled_outputs.append(
                (
                    masked_input.replace(" {0}".format(masked_token), predicted_token),
                    values[index].item(),
                    predicted_token,
                )
            )
        else:
            topk_filled_outputs.append(
                (
                    masked_input.replace(masked_token, predicted_token),
                    values[index].item(),
                    predicted_token,
                )
            )
    return topk_filled_outputs


tokenizer = CamembertTokenizer.from_pretrained("almanach/camembert-base")
model = CamembertForMaskedLM.from_pretrained("almanach/camembert-base")
model.eval()

masked_input = "Le camembert est <mask> :)"
print(fill_mask(masked_input, model, tokenizer, topk=3))
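# Note: the manual fill_mask above mirrors what the high-level fill-mask pipeline does.
# As a quick cross-check (not part of the original script, shown here only for illustration),
# the same query can also be run through the pipeline API:
from transformers import pipeline

fill_masker = pipeline("fill-mask", model="almanach/camembert-base", top_k=3)
print(fill_masker(masked_input))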
transformers/examples/legacy/run_camembert.py/0
{ "file_path": "transformers/examples/legacy/run_camembert.py", "repo_id": "transformers", "token_count": 888 }
297
# coding=utf-8 # Copyright 2020 Huggingface # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import io import json import unittest from parameterized import parameterized from transformers import FSMTForConditionalGeneration, FSMTTokenizer from transformers.testing_utils import get_tests_dir, require_torch, slow, torch_device from utils import calculate_bleu filename = get_tests_dir() + "/test_data/fsmt/fsmt_val_data.json" with io.open(filename, "r", encoding="utf-8") as f: bleu_data = json.load(f) @require_torch class ModelEvalTester(unittest.TestCase): def get_tokenizer(self, mname): return FSMTTokenizer.from_pretrained(mname) def get_model(self, mname): model = FSMTForConditionalGeneration.from_pretrained(mname).to(torch_device) if torch_device == "cuda": model.half() return model @parameterized.expand( [ ["en-ru", 26.0], ["ru-en", 22.0], ["en-de", 22.0], ["de-en", 29.0], ] ) @slow def test_bleu_scores(self, pair, min_bleu_score): # note: this test is not testing the best performance since it only evals a small batch # but it should be enough to detect a regression in the output quality mname = f"facebook/wmt19-{pair}" tokenizer = self.get_tokenizer(mname) model = self.get_model(mname) src_sentences = bleu_data[pair]["src"] tgt_sentences = bleu_data[pair]["tgt"] batch = tokenizer(src_sentences, return_tensors="pt", truncation=True, padding="longest").to(torch_device) outputs = model.generate( input_ids=batch.input_ids, num_beams=8, ) decoded_sentences = tokenizer.batch_decode( outputs, skip_special_tokens=True, clean_up_tokenization_spaces=False ) scores = calculate_bleu(decoded_sentences, tgt_sentences) print(scores) self.assertGreaterEqual(scores["bleu"], min_bleu_score)
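# For reference, `calculate_bleu` above comes from this example folder's local `utils`
# module (not shown in this file). A minimal sacrebleu-based stand-in -- an assumption,
# not necessarily the exact upstream implementation -- would look roughly like this:
def _calculate_bleu_sketch(output_lns, refs_lns):
    from sacrebleu import corpus_bleu

    return {"bleu": round(corpus_bleu(output_lns, [refs_lns]).score, 4)}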
transformers/examples/legacy/seq2seq/old_test_fsmt_bleu_score.py/0
{ "file_path": "transformers/examples/legacy/seq2seq/old_test_fsmt_bleu_score.py", "repo_id": "transformers", "token_count": 997 }
298
#!/usr/bin/env python
"""Build a small WMT19 sample (fsmt_val_data.json) for the FSMT BLEU evaluation test."""
import io
import json
import subprocess


pairs = [
    ["en", "ru"],
    ["ru", "en"],
    ["en", "de"],
    ["de", "en"],
]

# number of sentence pairs to keep per language pair
n_objs = 8


def get_all_data(pairs, n_objs):
    text = {}
    for src, tgt in pairs:
        pair = f"{src}-{tgt}"
        # fetch the WMT19 source and reference sentences via the sacrebleu CLI
        cmd = f"sacrebleu -t wmt19 -l {pair} --echo src".split()
        src_lines = subprocess.run(cmd, stdout=subprocess.PIPE).stdout.decode("utf-8").splitlines()
        cmd = f"sacrebleu -t wmt19 -l {pair} --echo ref".split()
        tgt_lines = subprocess.run(cmd, stdout=subprocess.PIPE).stdout.decode("utf-8").splitlines()
        text[pair] = {"src": src_lines[:n_objs], "tgt": tgt_lines[:n_objs]}
    return text


text = get_all_data(pairs, n_objs)
filename = "./fsmt_val_data.json"
with io.open(filename, "w", encoding="utf-8") as f:
    # json.dump returns None, so there is nothing useful to assign here
    json.dump(text, f, indent=2, ensure_ascii=False)
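# A quick way to inspect the file that was just written (it is the same file that
# old_test_fsmt_bleu_score.py loads as its evaluation sample):
with io.open(filename, "r", encoding="utf-8") as f:
    data = json.load(f)
for pair, sentences in data.items():
    print(pair, len(sentences["src"]), len(sentences["tgt"]))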
transformers/examples/legacy/seq2seq/test_data/fsmt/build-eval-data.py/0
{ "file_path": "transformers/examples/legacy/seq2seq/test_data/fsmt/build-eval-data.py", "repo_id": "transformers", "token_count": 410 }
299
## Token classification Based on the scripts [`run_ner.py`](https://github.com/huggingface/transformers/blob/main/examples/legacy/token-classification/run_ner.py). The following examples are covered in this section: * NER on the GermEval 2014 (German NER) dataset * Emerging and Rare Entities task: WNUT’17 (English NER) dataset Details and results for the fine-tuning provided by @stefan-it. ### GermEval 2014 (German NER) dataset #### Data (Download and pre-processing steps) Data can be obtained from the [GermEval 2014](https://sites.google.com/site/germeval2014ner/data) shared task page. Here are the commands for downloading and pre-processing train, dev and test datasets. The original data format has four (tab-separated) columns, in a pre-processing step only the two relevant columns (token and outer span NER annotation) are extracted: ```bash curl -L 'https://drive.google.com/uc?export=download&id=1Jjhbal535VVz2ap4v4r_rN1UEHTdLK5P' \ | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp curl -L 'https://drive.google.com/uc?export=download&id=1ZfRcQThdtAR5PPRjIDtrVP7BtXSCUBbm' \ | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp curl -L 'https://drive.google.com/uc?export=download&id=1u9mb7kNJHWQCWyweMDRMuTFoOHOfeBTH' \ | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp ``` The GermEval 2014 dataset contains some strange "control character" tokens like `'\x96', '\u200e', '\x95', '\xad' or '\x80'`. One problem with these tokens is, that `BertTokenizer` returns an empty token for them, resulting in misaligned `InputExample`s. The `preprocess.py` script located in the `scripts` folder a) filters these tokens and b) splits longer sentences into smaller ones (once the max. subtoken length is reached). Let's define some variables that we need for further pre-processing steps and training the model: ```bash export MAX_LENGTH=128 export BERT_MODEL=google-bert/bert-base-multilingual-cased ``` Run the pre-processing script on training, dev and test datasets: ```bash python3 scripts/preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt python3 scripts/preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt python3 scripts/preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt ``` The GermEval 2014 dataset has much more labels than CoNLL-2002/2003 datasets, so an own set of labels must be used: ```bash cat train.txt dev.txt test.txt | cut -d " " -f 2 | grep -v "^$"| sort | uniq > labels.txt ``` #### Prepare the run Additional environment variables must be set: ```bash export OUTPUT_DIR=germeval-model export BATCH_SIZE=32 export NUM_EPOCHS=3 export SAVE_STEPS=750 export SEED=1 ``` #### Run the Pytorch version To start training, just run: ```bash python3 run_ner.py --data_dir ./ \ --labels ./labels.txt \ --model_name_or_path $BERT_MODEL \ --output_dir $OUTPUT_DIR \ --max_seq_length $MAX_LENGTH \ --num_train_epochs $NUM_EPOCHS \ --per_device_train_batch_size $BATCH_SIZE \ --save_steps $SAVE_STEPS \ --seed $SEED \ --do_train \ --do_eval \ --do_predict ``` If your GPU supports half-precision training, just add the `--fp16` flag. After training, the model will be both evaluated on development and test datasets. 
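Once training has finished, the checkpoint written to `$OUTPUT_DIR` can be loaded like any other 🀗 Transformers model. As a quick sanity check (a sketch assuming the `germeval-model` output directory defined above), you can run inference with the token-classification pipeline:

```python
from transformers import pipeline

ner = pipeline("token-classification", model="germeval-model", aggregation_strategy="simple")
print(ner("Dirk Nowitzki spielte fÃŒr die Dallas Mavericks."))
```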
#### JSON-based configuration file Instead of passing all parameters via commandline arguments, the `run_ner.py` script also supports reading parameters from a json-based configuration file: ```json { "data_dir": ".", "labels": "./labels.txt", "model_name_or_path": "google-bert/bert-base-multilingual-cased", "output_dir": "germeval-model", "max_seq_length": 128, "num_train_epochs": 3, "per_device_train_batch_size": 32, "save_steps": 750, "seed": 1, "do_train": true, "do_eval": true, "do_predict": true } ``` It must be saved with a `.json` extension and can be used by running `python3 run_ner.py config.json`. #### Evaluation Evaluation on development dataset outputs the following for our example: ```bash 10/04/2019 00:42:06 - INFO - __main__ - ***** Eval results ***** 10/04/2019 00:42:06 - INFO - __main__ - f1 = 0.8623348017621146 10/04/2019 00:42:06 - INFO - __main__ - loss = 0.07183869666975543 10/04/2019 00:42:06 - INFO - __main__ - precision = 0.8467916366258111 10/04/2019 00:42:06 - INFO - __main__ - recall = 0.8784592370979806 ``` On the test dataset the following results could be achieved: ```bash 10/04/2019 00:42:42 - INFO - __main__ - ***** Eval results ***** 10/04/2019 00:42:42 - INFO - __main__ - f1 = 0.8614389652384803 10/04/2019 00:42:42 - INFO - __main__ - loss = 0.07064602487454782 10/04/2019 00:42:42 - INFO - __main__ - precision = 0.8604651162790697 10/04/2019 00:42:42 - INFO - __main__ - recall = 0.8624150210424085 ``` #### Run the Tensorflow 2 version To start training, just run: ```bash python3 run_tf_ner.py --data_dir ./ \ --labels ./labels.txt \ --model_name_or_path $BERT_MODEL \ --output_dir $OUTPUT_DIR \ --max_seq_length $MAX_LENGTH \ --num_train_epochs $NUM_EPOCHS \ --per_device_train_batch_size $BATCH_SIZE \ --save_steps $SAVE_STEPS \ --seed $SEED \ --do_train \ --do_eval \ --do_predict ``` Such as the Pytorch version, if your GPU supports half-precision training, just add the `--fp16` flag. After training, the model will be both evaluated on development and test datasets. #### Evaluation Evaluation on development dataset outputs the following for our example: ```bash precision recall f1-score support LOCderiv 0.7619 0.6154 0.6809 52 PERpart 0.8724 0.8997 0.8858 4057 OTHpart 0.9360 0.9466 0.9413 711 ORGpart 0.7015 0.6989 0.7002 269 LOCpart 0.7668 0.8488 0.8057 496 LOC 0.8745 0.9191 0.8963 235 ORGderiv 0.7723 0.8571 0.8125 91 OTHderiv 0.4800 0.6667 0.5581 18 OTH 0.5789 0.6875 0.6286 16 PERderiv 0.5385 0.3889 0.4516 18 PER 0.5000 0.5000 0.5000 2 ORG 0.0000 0.0000 0.0000 3 micro avg 0.8574 0.8862 0.8715 5968 macro avg 0.8575 0.8862 0.8713 5968 ``` On the test dataset the following results could be achieved: ```bash precision recall f1-score support PERpart 0.8847 0.8944 0.8896 9397 OTHpart 0.9376 0.9353 0.9365 1639 ORGpart 0.7307 0.7044 0.7173 697 LOC 0.9133 0.9394 0.9262 561 LOCpart 0.8058 0.8157 0.8107 1150 ORG 0.0000 0.0000 0.0000 8 OTHderiv 0.5882 0.4762 0.5263 42 PERderiv 0.6571 0.5227 0.5823 44 OTH 0.4906 0.6667 0.5652 39 ORGderiv 0.7016 0.7791 0.7383 172 LOCderiv 0.8256 0.6514 0.7282 109 PER 0.0000 0.0000 0.0000 11 micro avg 0.8722 0.8774 0.8748 13869 macro avg 0.8712 0.8774 0.8740 13869 ``` ### Emerging and Rare Entities task: WNUT’17 (English NER) dataset Description of the WNUT’17 task from the [shared task website](http://noisy-text.github.io/2017/index.html): > The WNUT’17 shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. 
> Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on > them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Six labels are available in the dataset. An overview can be found on this [page](http://noisy-text.github.io/2017/files/). #### Data (Download and pre-processing steps) The dataset can be downloaded from the [official GitHub](https://github.com/leondz/emerging_entities_17) repository. The following commands show how to prepare the dataset for fine-tuning: ```bash mkdir -p data_wnut_17 curl -L 'https://github.com/leondz/emerging_entities_17/raw/master/wnut17train.conll' | tr '\t' ' ' > data_wnut_17/train.txt.tmp curl -L 'https://github.com/leondz/emerging_entities_17/raw/master/emerging.dev.conll' | tr '\t' ' ' > data_wnut_17/dev.txt.tmp curl -L 'https://raw.githubusercontent.com/leondz/emerging_entities_17/master/emerging.test.annotated' | tr '\t' ' ' > data_wnut_17/test.txt.tmp ``` Let's define some variables that we need for further pre-processing steps: ```bash export MAX_LENGTH=128 export BERT_MODEL=google-bert/bert-large-cased ``` Here we use the English BERT large model for fine-tuning. The `preprocess.py` scripts splits longer sentences into smaller ones (once the max. subtoken length is reached): ```bash python3 scripts/preprocess.py data_wnut_17/train.txt.tmp $BERT_MODEL $MAX_LENGTH > data_wnut_17/train.txt python3 scripts/preprocess.py data_wnut_17/dev.txt.tmp $BERT_MODEL $MAX_LENGTH > data_wnut_17/dev.txt python3 scripts/preprocess.py data_wnut_17/test.txt.tmp $BERT_MODEL $MAX_LENGTH > data_wnut_17/test.txt ``` In the last pre-processing step, the `labels.txt` file needs to be generated. This file contains all available labels: ```bash cat data_wnut_17/train.txt data_wnut_17/dev.txt data_wnut_17/test.txt | cut -d " " -f 2 | grep -v "^$"| sort | uniq > data_wnut_17/labels.txt ``` #### Run the Pytorch version Fine-tuning with the PyTorch version can be started using the `run_ner.py` script. In this example we use a JSON-based configuration file. This configuration file looks like: ```json { "data_dir": "./data_wnut_17", "labels": "./data_wnut_17/labels.txt", "model_name_or_path": "google-bert/bert-large-cased", "output_dir": "wnut-17-model-1", "max_seq_length": 128, "num_train_epochs": 3, "per_device_train_batch_size": 32, "save_steps": 425, "seed": 1, "do_train": true, "do_eval": true, "do_predict": true, "fp16": false } ``` If your GPU supports half-precision training, please set `fp16` to `true`. Save this JSON-based configuration under `wnut_17.json`. The fine-tuning can be started with `python3 run_ner_old.py wnut_17.json`. 
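For reference, the `labels.txt` file generated above is just a plain list of BIO tags, one tag per line. A minimal sketch (a hypothetical helper, not part of `run_ner.py`) of turning it into the label mappings used when the model is initialized:

```python
with open("data_wnut_17/labels.txt", encoding="utf-8") as f:
    labels = [line.strip() for line in f if line.strip()]

label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}
print(id2label)
```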
#### Evaluation Evaluation on development dataset outputs the following: ```bash 05/29/2020 23:33:44 - INFO - __main__ - ***** Eval results ***** 05/29/2020 23:33:44 - INFO - __main__ - eval_loss = 0.26505235286212275 05/29/2020 23:33:44 - INFO - __main__ - eval_precision = 0.7008264462809918 05/29/2020 23:33:44 - INFO - __main__ - eval_recall = 0.507177033492823 05/29/2020 23:33:44 - INFO - __main__ - eval_f1 = 0.5884802220680084 05/29/2020 23:33:44 - INFO - __main__ - epoch = 3.0 ``` On the test dataset the following results could be achieved: ```bash 05/29/2020 23:33:44 - INFO - transformers.trainer - ***** Running Prediction ***** 05/29/2020 23:34:02 - INFO - __main__ - eval_loss = 0.30948806500973547 05/29/2020 23:34:02 - INFO - __main__ - eval_precision = 0.5840108401084011 05/29/2020 23:34:02 - INFO - __main__ - eval_recall = 0.3994439295644115 05/29/2020 23:34:02 - INFO - __main__ - eval_f1 = 0.47440836543753434 ``` WNUT’17 is a very difficult task. Current state-of-the-art results on this dataset can be found [here](https://nlpprogress.com/english/named_entity_recognition.html).
transformers/examples/legacy/token-classification/README.md/0
{ "file_path": "transformers/examples/legacy/token-classification/README.md", "repo_id": "transformers", "token_count": 4566 }
300
#!/usr/bin/env python # coding=utf-8 # Copyright 2022 The HuggingFace Team All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Training a CLIP like dual encoder models using text and vision encoders in the library. The script can be used to train CLIP like models for languages other than English by using a text encoder pre-trained in the desired language. Currently this script supports the following vision and text models: Vision models: ViT(https://huggingface.co/models?filter=vit), CLIP (https://huggingface.co/models?filter=clip) Text models: BERT, ROBERTa (https://huggingface.co/models?filter=fill-mask) """ import logging import os import sys from dataclasses import dataclass, field from typing import Optional import torch from datasets import load_dataset from PIL import Image from torchvision.io import ImageReadMode, read_image from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize from torchvision.transforms.functional import InterpolationMode import transformers from transformers import ( AutoImageProcessor, AutoModel, AutoTokenizer, HfArgumentParser, Trainer, TrainingArguments, set_seed, ) from transformers.trainer_utils import get_last_checkpoint from transformers.utils import check_min_version, send_example_telemetry from transformers.utils.versions import require_version logger = logging.getLogger(__name__) # Will error if the minimal version of Transformers is not installed. Remove at your own risks. check_min_version("4.45.0.dev0") require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/contrastive-image-text/requirements.txt") @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. """ model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}, ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) image_processor_name: str = field(default=None, metadata={"help": "Name or path of preprocessor config."}) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"} ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) token: str = field( default=None, metadata={ "help": ( "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token " "generated when running `huggingface-cli login` (stored in `~/.huggingface`)." 
) }, ) trust_remote_code: bool = field( default=False, metadata={ "help": ( "Whether to trust the execution of code from datasets/models defined on the Hub." " This option should only be set to `True` for repositories you trust and in which you have read the" " code, as it will execute code present on the Hub on your local machine." ) }, ) freeze_vision_model: bool = field( default=False, metadata={"help": "Whether to freeze the vision model parameters or not."} ) freeze_text_model: bool = field( default=False, metadata={"help": "Whether to freeze the text model parameters or not."} ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ dataset_name: Optional[str] = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) data_dir: Optional[str] = field(default=None, metadata={"help": "The data directory containing input files."}) image_column: Optional[str] = field( default="image_path", metadata={"help": "The name of the column in the datasets containing the full image file paths."}, ) caption_column: Optional[str] = field( default="caption", metadata={"help": "The name of the column in the datasets containing the image captions."}, ) train_file: Optional[str] = field( default=None, metadata={"help": "The input training data file (a jsonlines file)."} ) validation_file: Optional[str] = field( default=None, metadata={"help": "An optional input evaluation data file (a jsonlines file)."}, ) test_file: Optional[str] = field( default=None, metadata={"help": "An optional input testing data file (a jsonlines file)."}, ) max_seq_length: Optional[int] = field( default=128, metadata={ "help": ( "The maximum total input sequence length after tokenization. Sequences longer " "than this will be truncated, sequences shorter will be padded." ) }, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." ) }, ) max_eval_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of evaluation examples to this " "value if set." ) }, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) def __post_init__(self): if self.dataset_name is None and self.train_file is None and self.validation_file is None: raise ValueError("Need either a dataset name or a training/validation file.") else: if self.train_file is not None: extension = self.train_file.split(".")[-1] assert extension in ["csv", "json"], "`train_file` should be a csv or a json file." if self.validation_file is not None: extension = self.validation_file.split(".")[-1] assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file." if self.test_file is not None: extension = self.test_file.split(".")[-1] assert extension in ["csv", "json"], "`test_file` should be a csv or a json file." dataset_name_mapping = { "image_caption_dataset.py": ("image_path", "caption"), } # We use torchvision for faster image pre-processing. 
The transforms are implemented as nn.Module, # so we jit it to be faster. class Transform(torch.nn.Module): def __init__(self, image_size, mean, std): super().__init__() self.transforms = torch.nn.Sequential( Resize([image_size], interpolation=InterpolationMode.BICUBIC), CenterCrop(image_size), ConvertImageDtype(torch.float), Normalize(mean, std), ) def forward(self, x) -> torch.Tensor: """`x` should be an instance of `PIL.Image.Image`""" with torch.no_grad(): x = self.transforms(x) return x def collate_fn(examples): pixel_values = torch.stack([example["pixel_values"] for example in examples]) input_ids = torch.tensor([example["input_ids"] for example in examples], dtype=torch.long) attention_mask = torch.tensor([example["attention_mask"] for example in examples], dtype=torch.long) return { "pixel_values": pixel_values, "input_ids": input_ids, "attention_mask": attention_mask, "return_loss": True, } def main(): # 1. Parse input arguments # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The # information sent is the one passed as arguments along with your Python/PyTorch versions. send_example_telemetry("run_clip", model_args, data_args) # 2. Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) if training_args.should_log: # The default of training_args.log_level is passive, so we set log level at info here to have that default. transformers.utils.logging.set_verbosity_info() log_level = training_args.get_process_log_level() logger.setLevel(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, " + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") # 3. Detecting last checkpoint and eventually continue from last checkpoint last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. 
To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # 4. Load dataset # Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below) # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ # (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files this script will use the first column for the full image path and the second column for the # captions (unless you specify column names for this with the `image_column` and `caption_column` arguments). # if data_args.dataset_name is not None: # Downloading and loading a dataset from the hub. dataset = load_dataset( data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir, keep_in_memory=False, data_dir=data_args.data_dir, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) else: data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file extension = data_args.train_file.split(".")[-1] if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file extension = data_args.validation_file.split(".")[-1] if data_args.test_file is not None: data_files["test"] = data_args.test_file extension = data_args.test_file.split(".")[-1] dataset = load_dataset( extension, data_files=data_files, cache_dir=model_args.cache_dir, token=model_args.token, ) # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at # https://huggingface.co/docs/datasets/loading_datasets. # 5. Load pretrained model, tokenizer, and image processor if model_args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) elif model_args.model_name_or_path: tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) else: raise ValueError( "You are instantiating a new tokenizer from scratch. This is not supported by this script. " "You can do it from another script, save it, and load it from here, using --tokenizer_name." ) # Load image_processor, in this script we only use this to get the mean and std for normalization. image_processor = AutoImageProcessor.from_pretrained( model_args.image_processor_name or model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) model = AutoModel.from_pretrained( model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) config = model.config def _freeze_params(module): for param in module.parameters(): param.requires_grad = False if model_args.freeze_vision_model: _freeze_params(model.vision_model) if model_args.freeze_text_model: _freeze_params(model.text_model) # set seed for torch dataloaders set_seed(training_args.seed) # Preprocessing the datasets. # We need to tokenize inputs and targets. 
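    # Concretely: captions are run through the text tokenizer, while images are decoded with
    # torchvision's `read_image` and passed through the `Transform` module defined above
    # (jit-scripted below for speed).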
if training_args.do_train: column_names = dataset["train"].column_names elif training_args.do_eval: column_names = dataset["validation"].column_names elif training_args.do_predict: column_names = dataset["test"].column_names else: logger.info("There is nothing to do. Please pass `do_train`, `do_eval` and/or `do_predict`.") return # 6. Get the column names for input/target. dataset_columns = dataset_name_mapping.get(data_args.dataset_name, None) if data_args.image_column is None: image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] else: image_column = data_args.image_column if image_column not in column_names: raise ValueError( f"--image_column' value '{data_args.image_column}' needs to be one of: {', '.join(column_names)}" ) if data_args.caption_column is None: caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] else: caption_column = data_args.caption_column if caption_column not in column_names: raise ValueError( f"--caption_column' value '{data_args.caption_column}' needs to be one of: {', '.join(column_names)}" ) # 7. Preprocessing the datasets. # Initialize torchvision transforms and jit it for faster processing. image_transformations = Transform( config.vision_config.image_size, image_processor.image_mean, image_processor.image_std ) image_transformations = torch.jit.script(image_transformations) # Preprocessing the datasets. # We need to tokenize input captions and transform the images. def tokenize_captions(examples): captions = list(examples[caption_column]) text_inputs = tokenizer(captions, max_length=data_args.max_seq_length, padding="max_length", truncation=True) examples["input_ids"] = text_inputs.input_ids examples["attention_mask"] = text_inputs.attention_mask return examples def transform_images(examples): images = [read_image(image_file, mode=ImageReadMode.RGB) for image_file in examples[image_column]] examples["pixel_values"] = [image_transformations(image) for image in images] return examples def filter_corrupt_images(examples): """remove problematic images""" valid_images = [] for image_file in examples[image_column]: try: Image.open(image_file) valid_images.append(True) except Exception: valid_images.append(False) return valid_images if training_args.do_train: if "train" not in dataset: raise ValueError("--do_train requires a train dataset") train_dataset = dataset["train"] if data_args.max_train_samples is not None: max_train_samples = min(len(train_dataset), data_args.max_train_samples) train_dataset = train_dataset.select(range(max_train_samples)) train_dataset = train_dataset.filter( filter_corrupt_images, batched=True, num_proc=data_args.preprocessing_num_workers ) train_dataset = train_dataset.map( function=tokenize_captions, batched=True, remove_columns=[col for col in column_names if col != image_column], num_proc=data_args.preprocessing_num_workers, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on train dataset", ) # Transform images on the fly as doing it on the whole dataset takes too much time. 
train_dataset.set_transform(transform_images) if training_args.do_eval: if "validation" not in dataset: raise ValueError("--do_eval requires a train validation") eval_dataset = dataset["validation"] if data_args.max_eval_samples is not None: max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) eval_dataset = eval_dataset.select(range(max_eval_samples)) eval_dataset = eval_dataset.filter( filter_corrupt_images, batched=True, num_proc=data_args.preprocessing_num_workers ) eval_dataset = eval_dataset.map( function=tokenize_captions, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=[col for col in column_names if col != image_column], load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on validation dataset", ) # Transform images on the fly as doing it on the whole dataset takes too much time. eval_dataset.set_transform(transform_images) if training_args.do_predict: if "test" not in dataset: raise ValueError("--do_predict requires a test dataset") test_dataset = dataset["test"] if data_args.max_eval_samples is not None: max_eval_samples = min(len(test_dataset), data_args.max_eval_samples) test_dataset = test_dataset.select(range(max_eval_samples)) test_dataset = test_dataset.filter( filter_corrupt_images, batched=True, num_proc=data_args.preprocessing_num_workers ) test_dataset = test_dataset.map( function=tokenize_captions, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=[col for col in column_names if col != image_column], load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on test dataset", ) # Transform images on the fly as doing it on the whole dataset takes too much time. test_dataset.set_transform(transform_images) # 8. Initialize our trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset if training_args.do_train else None, eval_dataset=eval_dataset if training_args.do_eval else None, data_collator=collate_fn, ) # 9. Training if training_args.do_train: checkpoint = None if training_args.resume_from_checkpoint is not None: checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model() tokenizer.save_pretrained(training_args.output_dir) image_processor.save_pretrained(training_args.output_dir) trainer.log_metrics("train", train_result.metrics) trainer.save_metrics("train", train_result.metrics) trainer.save_state() # 10. Evaluation if training_args.do_eval: metrics = trainer.evaluate() trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) # 11. Write Training Stats and push to hub. finetuned_from = model_args.model_name_or_path # If from a local directory, don't set `finetuned_from` as this is required to be a valid repo. id on the Hub. if os.path.isdir(finetuned_from): finetuned_from = None kwargs = {"finetuned_from": finetuned_from, "tasks": "contrastive-image-text-modeling"} if data_args.dataset_name is not None: kwargs["dataset_tags"] = data_args.dataset_name if data_args.dataset_config_name is not None: kwargs["dataset_args"] = data_args.dataset_config_name kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}" else: kwargs["dataset"] = data_args.dataset_name if training_args.push_to_hub: trainer.push_to_hub(**kwargs) else: trainer.create_model_card(**kwargs) if __name__ == "__main__": main()
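# Usage sketch: one plausible command line for this script, wiring together the
# ModelArguments, DataTrainingArguments and TrainingArguments defined above. The model path
# and dataset name are placeholders; substitute a real dual-encoder checkpoint and an
# image-caption dataset whose columns match --image_column / --caption_column.
#
#   python run_clip.py \
#       --model_name_or_path ./clip-roberta \
#       --dataset_name your_image_caption_dataset \
#       --image_column image_path \
#       --caption_column caption \
#       --do_train \
#       --do_eval \
#       --per_device_train_batch_size 64 \
#       --learning_rate 5e-5 \
#       --output_dir ./clip-roberta-finetuned \
#       --overwrite_output_dir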
transformers/examples/pytorch/contrastive-image-text/run_clip.py/0
{ "file_path": "transformers/examples/pytorch/contrastive-image-text/run_clip.py", "repo_id": "transformers", "token_count": 9512 }
301
#!/usr/bin/env python # coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset. Here is the full list of checkpoints on the hub that can be fine-tuned by this script: https://huggingface.co/models?filter=text-generation """ # You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments. import logging import math import os import sys from dataclasses import dataclass, field from itertools import chain from typing import Optional import datasets import evaluate import torch from datasets import load_dataset import transformers from transformers import ( CONFIG_MAPPING, MODEL_FOR_CAUSAL_LM_MAPPING, AutoConfig, AutoModelForCausalLM, AutoTokenizer, HfArgumentParser, Trainer, TrainingArguments, default_data_collator, is_torch_xla_available, set_seed, ) from transformers.testing_utils import CaptureLogger from transformers.trainer_utils import get_last_checkpoint from transformers.utils import check_min_version, send_example_telemetry from transformers.utils.versions import require_version # Will error if the minimal version of Transformers is not installed. Remove at your own risks. check_min_version("4.45.0.dev0") require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt") logger = logging.getLogger(__name__) MODEL_CONFIG_CLASSES = list(MODEL_FOR_CAUSAL_LM_MAPPING.keys()) MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. """ model_name_or_path: Optional[str] = field( default=None, metadata={ "help": ( "The model checkpoint for weights initialization. Don't set if you want to train a model from scratch." ) }, ) model_type: Optional[str] = field( default=None, metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)}, ) config_overrides: Optional[str] = field( default=None, metadata={ "help": ( "Override some existing default config settings when a model is trained from scratch. 
Example: " "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index" ) }, ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}, ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) token: str = field( default=None, metadata={ "help": ( "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token " "generated when running `huggingface-cli login` (stored in `~/.huggingface`)." ) }, ) trust_remote_code: bool = field( default=False, metadata={ "help": ( "Whether to trust the execution of code from datasets/models defined on the Hub." " This option should only be set to `True` for repositories you trust and in which you have read the" " code, as it will execute code present on the Hub on your local machine." ) }, ) torch_dtype: Optional[str] = field( default=None, metadata={ "help": ( "Override the default `torch.dtype` and load the model under this dtype. If `auto` is passed, the " "dtype will be automatically derived from the model's weights." ), "choices": ["auto", "bfloat16", "float16", "float32"], }, ) low_cpu_mem_usage: bool = field( default=False, metadata={ "help": ( "It is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded. " "set True will benefit LLM loading time and RAM consumption." ) }, ) def __post_init__(self): if self.config_overrides is not None and (self.config_name is not None or self.model_name_or_path is not None): raise ValueError( "--config_overrides can't be used in combination with --config_name or --model_name_or_path" ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ dataset_name: Optional[str] = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."}) validation_file: Optional[str] = field( default=None, metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."}, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." ) }, ) max_eval_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of evaluation examples to this " "value if set." ) }, ) streaming: bool = field(default=False, metadata={"help": "Enable streaming mode"}) block_size: Optional[int] = field( default=None, metadata={ "help": ( "Optional input sequence length after tokenization. 
" "The training dataset will be truncated in block of this size for training. " "Default to the model max input length for single sentence inputs (take into account special tokens)." ) }, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) validation_split_percentage: Optional[int] = field( default=5, metadata={ "help": "The percentage of the train set used as validation set in case there's no validation split" }, ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) keep_linebreaks: bool = field( default=True, metadata={"help": "Whether to keep line breaks when using TXT files or not."} ) def __post_init__(self): if self.streaming: require_version("datasets>=2.0.0", "The streaming feature requires `datasets>=2.0.0`") if self.dataset_name is None and self.train_file is None and self.validation_file is None: raise ValueError("Need either a dataset name or a training/validation file.") else: if self.train_file is not None: extension = self.train_file.split(".")[-1] assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file." if self.validation_file is not None: extension = self.validation_file.split(".")[-1] assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file." def main(): # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The # information sent is the one passed as arguments along with your Python/PyTorch versions. send_example_telemetry("run_clm", model_args, data_args) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) if training_args.should_log: # The default of training_args.log_level is passive, so we set log level at info here to have that default. transformers.utils.logging.set_verbosity_info() log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, " + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") # Detecting last checkpoint. 
last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # Set seed before initializing model. set_seed(training_args.seed) # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ # (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called # 'text' is found. You can easily tweak this behavior (see below). # # In distributed training, the load_dataset function guarantee that only one local process can concurrently # download the dataset. if data_args.dataset_name is not None: # Downloading and loading a dataset from the hub. raw_datasets = load_dataset( data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir, token=model_args.token, streaming=data_args.streaming, trust_remote_code=model_args.trust_remote_code, ) if "validation" not in raw_datasets.keys(): raw_datasets["validation"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=f"train[:{data_args.validation_split_percentage}%]", cache_dir=model_args.cache_dir, token=model_args.token, streaming=data_args.streaming, trust_remote_code=model_args.trust_remote_code, ) raw_datasets["train"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=f"train[{data_args.validation_split_percentage}%:]", cache_dir=model_args.cache_dir, token=model_args.token, streaming=data_args.streaming, trust_remote_code=model_args.trust_remote_code, ) else: data_files = {} dataset_args = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file extension = ( data_args.train_file.split(".")[-1] if data_args.train_file is not None else data_args.validation_file.split(".")[-1] ) if extension == "txt": extension = "text" dataset_args["keep_linebreaks"] = data_args.keep_linebreaks raw_datasets = load_dataset( extension, data_files=data_files, cache_dir=model_args.cache_dir, token=model_args.token, **dataset_args, ) # If no validation data is there, validation_split_percentage will be used to divide the dataset. 
if "validation" not in raw_datasets.keys(): raw_datasets["validation"] = load_dataset( extension, data_files=data_files, split=f"train[:{data_args.validation_split_percentage}%]", cache_dir=model_args.cache_dir, token=model_args.token, **dataset_args, ) raw_datasets["train"] = load_dataset( extension, data_files=data_files, split=f"train[{data_args.validation_split_percentage}%:]", cache_dir=model_args.cache_dir, token=model_args.token, **dataset_args, ) # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at # https://huggingface.co/docs/datasets/loading_datasets. # Load pretrained model and tokenizer # # Distributed training: # The .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. config_kwargs = { "cache_dir": model_args.cache_dir, "revision": model_args.model_revision, "token": model_args.token, "trust_remote_code": model_args.trust_remote_code, } if model_args.config_name: config = AutoConfig.from_pretrained(model_args.config_name, **config_kwargs) elif model_args.model_name_or_path: config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs) else: config = CONFIG_MAPPING[model_args.model_type]() logger.warning("You are instantiating a new config instance from scratch.") if model_args.config_overrides is not None: logger.info(f"Overriding config: {model_args.config_overrides}") config.update_from_string(model_args.config_overrides) logger.info(f"New config: {config}") tokenizer_kwargs = { "cache_dir": model_args.cache_dir, "use_fast": model_args.use_fast_tokenizer, "revision": model_args.model_revision, "token": model_args.token, "trust_remote_code": model_args.trust_remote_code, } if model_args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs) elif model_args.model_name_or_path: tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs) else: raise ValueError( "You are instantiating a new tokenizer from scratch. This is not supported by this script. " "You can do it from another script, save it, and load it from here, using --tokenizer_name." ) if model_args.model_name_or_path: torch_dtype = ( model_args.torch_dtype if model_args.torch_dtype in ["auto", None] else getattr(torch, model_args.torch_dtype) ) model = AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, token=model_args.token, trust_remote_code=model_args.trust_remote_code, torch_dtype=torch_dtype, low_cpu_mem_usage=model_args.low_cpu_mem_usage, ) else: model = AutoModelForCausalLM.from_config(config, trust_remote_code=model_args.trust_remote_code) n_params = sum({p.data_ptr(): p.numel() for p in model.parameters()}.values()) logger.info(f"Training new model from scratch - Total size={n_params/2**20:.2f}M params") # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch # on a small vocab and want a smaller embedding size, remove this test. embedding_size = model.get_input_embeddings().weight.shape[0] if len(tokenizer) > embedding_size: model.resize_token_embeddings(len(tokenizer)) # Preprocessing the datasets. # First we tokenize all the texts. 
if training_args.do_train: column_names = list(raw_datasets["train"].features) else: column_names = list(raw_datasets["validation"].features) text_column_name = "text" if "text" in column_names else column_names[0] # since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base") def tokenize_function(examples): with CaptureLogger(tok_logger) as cl: output = tokenizer(examples[text_column_name]) # clm input could be much much longer than block_size if "Token indices sequence length is longer than the" in cl.out: tok_logger.warning( "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits" " before being passed to the model." ) return output with training_args.main_process_first(desc="dataset map tokenization"): if not data_args.streaming: tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on dataset", ) else: tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, remove_columns=column_names, ) if hasattr(config, "max_position_embeddings"): max_pos_embeddings = config.max_position_embeddings else: # Define a default value if the attribute is missing in the config. max_pos_embeddings = 1024 if data_args.block_size is None: block_size = tokenizer.model_max_length if block_size > max_pos_embeddings: logger.warning( f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). " f"Using block_size={min(1024, max_pos_embeddings)} instead. You can change that default value by passing --block_size xxx." ) if max_pos_embeddings > 0: block_size = min(1024, max_pos_embeddings) else: block_size = 1024 else: if data_args.block_size > tokenizer.model_max_length: logger.warning( f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model " f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}." ) block_size = min(data_args.block_size, tokenizer.model_max_length) # Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size. def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, and if the total_length < block_size we exclude this batch and return an empty dict. # We could add padding if the model supported it instead of this drop, you can customize this part to your needs. total_length = (total_length // block_size) * block_size # Split by chunks of max_len. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder # for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower # to preprocess. # # To speed up this part, we use multiprocessing. 
See the documentation of the map method for more information: # https://huggingface.co/docs/datasets/process#map with training_args.main_process_first(desc="grouping texts together"): if not data_args.streaming: lm_datasets = tokenized_datasets.map( group_texts, batched=True, num_proc=data_args.preprocessing_num_workers, load_from_cache_file=not data_args.overwrite_cache, desc=f"Grouping texts in chunks of {block_size}", ) else: lm_datasets = tokenized_datasets.map( group_texts, batched=True, ) if training_args.do_train: if "train" not in tokenized_datasets: raise ValueError("--do_train requires a train dataset") train_dataset = lm_datasets["train"] if data_args.max_train_samples is not None: max_train_samples = min(len(train_dataset), data_args.max_train_samples) train_dataset = train_dataset.select(range(max_train_samples)) if training_args.do_eval: if "validation" not in tokenized_datasets: raise ValueError("--do_eval requires a validation dataset") eval_dataset = lm_datasets["validation"] if data_args.max_eval_samples is not None: max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) eval_dataset = eval_dataset.select(range(max_eval_samples)) def preprocess_logits_for_metrics(logits, labels): if isinstance(logits, tuple): # Depending on the model and config, logits may contain extra tensors, # like past_key_values, but logits always come first logits = logits[0] return logits.argmax(dim=-1) metric = evaluate.load("accuracy", cache_dir=model_args.cache_dir) def compute_metrics(eval_preds): preds, labels = eval_preds # preds have the same shape as the labels, after the argmax(-1) has been calculated # by preprocess_logits_for_metrics but we need to shift the labels labels = labels[:, 1:].reshape(-1) preds = preds[:, :-1].reshape(-1) return metric.compute(predictions=preds, references=labels) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset if training_args.do_train else None, eval_dataset=eval_dataset if training_args.do_eval else None, tokenizer=tokenizer, # Data collator will default to DataCollatorWithPadding, so we change it. 
data_collator=default_data_collator, compute_metrics=compute_metrics if training_args.do_eval and not is_torch_xla_available() else None, preprocess_logits_for_metrics=preprocess_logits_for_metrics if training_args.do_eval and not is_torch_xla_available() else None, ) # Training if training_args.do_train: checkpoint = None if training_args.resume_from_checkpoint is not None: checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model() # Saves the tokenizer too for easy upload metrics = train_result.metrics max_train_samples = ( data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset) ) metrics["train_samples"] = min(max_train_samples, len(train_dataset)) trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() # Evaluation if training_args.do_eval: logger.info("*** Evaluate ***") metrics = trainer.evaluate() max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset) metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset)) try: perplexity = math.exp(metrics["eval_loss"]) except OverflowError: perplexity = float("inf") metrics["perplexity"] = perplexity trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-generation"} if data_args.dataset_name is not None: kwargs["dataset_tags"] = data_args.dataset_name if data_args.dataset_config_name is not None: kwargs["dataset_args"] = data_args.dataset_config_name kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}" else: kwargs["dataset"] = data_args.dataset_name if training_args.push_to_hub: trainer.push_to_hub(**kwargs) else: trainer.create_model_card(**kwargs) def _mp_fn(index): # For xla_spawn (TPUs) main() if __name__ == "__main__": main()
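# Usage sketch: a typical invocation of this script; the model and dataset identifiers are
# illustrative and can be swapped for any causal LM checkpoint and text dataset.
#
#   python run_clm.py \
#       --model_name_or_path openai-community/gpt2 \
#       --dataset_name wikitext \
#       --dataset_config_name wikitext-2-raw-v1 \
#       --do_train \
#       --do_eval \
#       --block_size 512 \
#       --output_dir ./clm-finetuned \
#       --overwrite_output_dir
#
# The heart of the preprocessing is `group_texts`: tokenized documents are concatenated and
# re-split into fixed-size blocks, with `labels` set to a copy of `input_ids` (the causal
# shift happens inside the model). Below is a minimal, standalone sketch of that chunking on
# toy inputs; `_group_texts_sketch` is a hypothetical helper and is not used by the script.

from itertools import chain  # already imported at the top of this script


def _group_texts_sketch(examples, block_size=4):
    # Concatenate every list in the batch, drop the remainder, then cut into equal blocks.
    concatenated = {k: list(chain(*examples[k])) for k in examples}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result


# _group_texts_sketch({"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9]]})
# -> {"input_ids": [[1, 2, 3, 4], [5, 6, 7, 8]], "labels": [[1, 2, 3, 4], [5, 6, 7, 8]]}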
transformers/examples/pytorch/language-modeling/run_clm.py/0
{ "file_path": "transformers/examples/pytorch/language-modeling/run_clm.py", "repo_id": "transformers", "token_count": 11823 }
302
# coding=utf-8 # Copyright 2018 HuggingFace Inc.. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import logging import os import sys from time import time from unittest.mock import patch from transformers.testing_utils import TestCasePlus, require_torch_xla logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() def get_results(output_dir): results = {} path = os.path.join(output_dir, "all_results.json") if os.path.exists(path): with open(path, "r") as f: results = json.load(f) else: raise ValueError(f"can't find {path}") return results stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) @require_torch_xla class TorchXLAExamplesTests(TestCasePlus): def test_run_glue(self): import xla_spawn tmp_dir = self.get_auto_remove_tmp_dir() testargs = f""" ./examples/pytorch/text-classification/run_glue.py --num_cores=8 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path distilbert/distilbert-base-uncased --output_dir {tmp_dir} --overwrite_output_dir --train_file ./tests/fixtures/tests_samples/MRPC/train.csv --validation_file ./tests/fixtures/tests_samples/MRPC/dev.csv --do_train --do_eval --debug tpu_metrics_debug --per_device_train_batch_size=2 --per_device_eval_batch_size=1 --learning_rate=1e-4 --max_steps=10 --warmup_steps=2 --seed=42 --max_seq_length=128 """.split() with patch.object(sys, "argv", testargs): start = time() xla_spawn.main() end = time() result = get_results(tmp_dir) self.assertGreaterEqual(result["eval_accuracy"], 0.75) # Assert that the script takes less than 500 seconds to make sure it doesn't hang. self.assertLess(end - start, 500) def test_trainer_tpu(self): import xla_spawn testargs = """ ./tests/test_trainer_tpu.py --num_cores=8 ./tests/test_trainer_tpu.py """.split() with patch.object(sys, "argv", testargs): xla_spawn.main()
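# Usage sketch: these tests only run on a TPU host with torch_xla installed (the
# @require_torch_xla decorator skips them otherwise). One plausible way to launch them,
# assuming xla_spawn.py is importable from the working directory (it sits in
# examples/pytorch/), is:
#
#   python -m pytest examples/pytorch/old_test_xla_examples.py -v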
transformers/examples/pytorch/old_test_xla_examples.py/0
{ "file_path": "transformers/examples/pytorch/old_test_xla_examples.py", "repo_id": "transformers", "token_count": 1249 }
303
#!/usr/bin/env python # coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Finetuning the library models for sequence classification on GLUE.""" # You can also adapt this script on your own text classification task. Pointers for this are left as comments. import logging import os import random import sys from dataclasses import dataclass, field from typing import Optional import datasets import evaluate import numpy as np from datasets import load_dataset import transformers from transformers import ( AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, EvalPrediction, HfArgumentParser, PretrainedConfig, Trainer, TrainingArguments, default_data_collator, set_seed, ) from transformers.trainer_utils import get_last_checkpoint from transformers.utils import check_min_version, send_example_telemetry from transformers.utils.versions import require_version # Will error if the minimal version of Transformers is not installed. Remove at your own risks. check_min_version("4.45.0.dev0") require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/text-classification/requirements.txt") task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } logger = logging.getLogger(__name__) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. Using `HfArgumentParser` we can turn this class into argparse arguments to be able to specify them on the command line. """ task_name: Optional[str] = field( default=None, metadata={"help": "The name of the task to train on: " + ", ".join(task_to_keys.keys())}, ) dataset_name: Optional[str] = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) max_seq_length: int = field( default=128, metadata={ "help": ( "The maximum total input sequence length after tokenization. Sequences longer " "than this will be truncated, sequences shorter will be padded." ) }, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."} ) pad_to_max_length: bool = field( default=True, metadata={ "help": ( "Whether to pad all samples to `max_seq_length`. " "If False, will pad the samples dynamically when batching to the maximum length in the batch." ) }, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." 
) }, ) max_eval_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of evaluation examples to this " "value if set." ) }, ) max_predict_samples: Optional[int] = field( default=None, metadata={ "help": ( "For debugging purposes or quicker training, truncate the number of prediction examples to this " "value if set." ) }, ) train_file: Optional[str] = field( default=None, metadata={"help": "A csv or a json file containing the training data."} ) validation_file: Optional[str] = field( default=None, metadata={"help": "A csv or a json file containing the validation data."} ) test_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the test data."}) def __post_init__(self): if self.task_name is not None: self.task_name = self.task_name.lower() if self.task_name not in task_to_keys.keys(): raise ValueError("Unknown task, you should pick one in " + ",".join(task_to_keys.keys())) elif self.dataset_name is not None: pass elif self.train_file is None or self.validation_file is None: raise ValueError("Need either a GLUE task, a training/validation file or a dataset name.") else: train_extension = self.train_file.split(".")[-1] assert train_extension in ["csv", "json"], "`train_file` should be a csv or a json file." validation_extension = self.validation_file.split(".")[-1] assert ( validation_extension == train_extension ), "`validation_file` should have the same extension (csv or json) as `train_file`." @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. """ model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}, ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) token: str = field( default=None, metadata={ "help": ( "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token " "generated when running `huggingface-cli login` (stored in `~/.huggingface`)." ) }, ) trust_remote_code: bool = field( default=False, metadata={ "help": ( "Whether to trust the execution of code from datasets/models defined on the Hub." " This option should only be set to `True` for repositories you trust and in which you have read the" " code, as it will execute code present on the Hub on your local machine." ) }, ) ignore_mismatched_sizes: bool = field( default=False, metadata={"help": "Will enable to load a pretrained model whose head dimensions are different."}, ) def main(): # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. 
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The # information sent is the one passed as arguments along with your Python/PyTorch versions. send_example_telemetry("run_glue", model_args, data_args) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) if training_args.should_log: # The default of training_args.log_level is passive, so we set log level at info here to have that default. transformers.utils.logging.set_verbosity_info() log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, " + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") # Detecting last checkpoint. last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # Set seed before initializing model. set_seed(training_args.seed) # Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below) # or specify a GLUE benchmark task (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files, this script will use as labels the column called 'label' and as pair of sentences the # sentences in columns called 'sentence1' and 'sentence2' if such column exists or the first two columns not named # label if at least two columns are provided. # # If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this # single column. You can easily tweak this behavior (see below) # # In distributed training, the load_dataset function guarantee that only one local process can concurrently # download the dataset. if data_args.task_name is not None: # Downloading and loading a dataset from the hub. 
raw_datasets = load_dataset( "nyu-mll/glue", data_args.task_name, cache_dir=model_args.cache_dir, token=model_args.token, ) elif data_args.dataset_name is not None: # Downloading and loading a dataset from the hub. raw_datasets = load_dataset( data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) else: # Loading a dataset from your local files. # CSV/JSON training and evaluation files are needed. data_files = {"train": data_args.train_file, "validation": data_args.validation_file} # Get the test dataset: you can provide your own CSV/JSON test file (see below) # when you use `do_predict` without specifying a GLUE benchmark task. if training_args.do_predict: if data_args.test_file is not None: train_extension = data_args.train_file.split(".")[-1] test_extension = data_args.test_file.split(".")[-1] assert ( test_extension == train_extension ), "`test_file` should have the same extension (csv or json) as `train_file`." data_files["test"] = data_args.test_file else: raise ValueError("Need either a GLUE task or a test file for `do_predict`.") for key in data_files.keys(): logger.info(f"load a local file for {key}: {data_files[key]}") if data_args.train_file.endswith(".csv"): # Loading a dataset from local csv files raw_datasets = load_dataset( "csv", data_files=data_files, cache_dir=model_args.cache_dir, token=model_args.token, ) else: # Loading a dataset from local json files raw_datasets = load_dataset( "json", data_files=data_files, cache_dir=model_args.cache_dir, token=model_args.token, ) # See more about loading any type of standard or custom dataset at # https://huggingface.co/docs/datasets/loading_datasets. # Labels if data_args.task_name is not None: is_regression = data_args.task_name == "stsb" if not is_regression: label_list = raw_datasets["train"].features["label"].names num_labels = len(label_list) else: num_labels = 1 else: # Trying to have good defaults here, don't hesitate to tweak to your needs. is_regression = raw_datasets["train"].features["label"].dtype in ["float32", "float64"] if is_regression: num_labels = 1 else: # A useful fast method: # https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.unique label_list = raw_datasets["train"].unique("label") label_list.sort() # Let's sort it for determinism num_labels = len(label_list) # Load pretrained model and tokenizer # # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. 
config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, num_labels=num_labels, finetuning_task=data_args.task_name, cache_dir=model_args.cache_dir, revision=model_args.model_revision, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, revision=model_args.model_revision, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ) model = AutoModelForSequenceClassification.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, token=model_args.token, trust_remote_code=model_args.trust_remote_code, ignore_mismatched_sizes=model_args.ignore_mismatched_sizes, ) # Preprocessing the raw_datasets if data_args.task_name is not None: sentence1_key, sentence2_key = task_to_keys[data_args.task_name] else: # Again, we try to have some nice defaults but don't hesitate to tweak to your use case. non_label_column_names = [name for name in raw_datasets["train"].column_names if name != "label"] if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names: sentence1_key, sentence2_key = "sentence1", "sentence2" else: if len(non_label_column_names) >= 2: sentence1_key, sentence2_key = non_label_column_names[:2] else: sentence1_key, sentence2_key = non_label_column_names[0], None # Padding strategy if data_args.pad_to_max_length: padding = "max_length" else: # We will pad later, dynamically at batch creation, to the max sequence length in each batch padding = False # Some models have set the order of the labels to use, so let's make sure we do use it. label_to_id = None if ( model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id and data_args.task_name is not None and not is_regression ): # Some have all caps in their config, some don't. label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()} if sorted(label_name_to_id.keys()) == sorted(label_list): label_to_id = {i: int(label_name_to_id[label_list[i]]) for i in range(num_labels)} else: logger.warning( "Your model seems to have been trained with labels, but they don't match the dataset: " f"model labels: {sorted(label_name_to_id.keys())}, dataset labels: {sorted(label_list)}." "\nIgnoring the model labels as a result.", ) elif data_args.task_name is None and not is_regression: label_to_id = {v: i for i, v in enumerate(label_list)} if label_to_id is not None: model.config.label2id = label_to_id model.config.id2label = {id: label for label, id in config.label2id.items()} elif data_args.task_name is not None and not is_regression: model.config.label2id = {l: i for i, l in enumerate(label_list)} model.config.id2label = {id: label for label, id in config.label2id.items()} if data_args.max_seq_length > tokenizer.model_max_length: logger.warning( f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the " f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}." 
) max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length) def preprocess_function(examples): # Tokenize the texts args = ( (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key]) ) result = tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True) # Map labels to IDs (not necessary for GLUE tasks) if label_to_id is not None and "label" in examples: result["label"] = [(label_to_id[l] if l != -1 else -1) for l in examples["label"]] return result with training_args.main_process_first(desc="dataset map pre-processing"): raw_datasets = raw_datasets.map( preprocess_function, batched=True, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on dataset", ) if training_args.do_train: if "train" not in raw_datasets: raise ValueError("--do_train requires a train dataset") train_dataset = raw_datasets["train"] if data_args.max_train_samples is not None: max_train_samples = min(len(train_dataset), data_args.max_train_samples) train_dataset = train_dataset.select(range(max_train_samples)) if training_args.do_eval: if "validation" not in raw_datasets and "validation_matched" not in raw_datasets: raise ValueError("--do_eval requires a validation dataset") eval_dataset = raw_datasets["validation_matched" if data_args.task_name == "mnli" else "validation"] if data_args.max_eval_samples is not None: max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) eval_dataset = eval_dataset.select(range(max_eval_samples)) if training_args.do_predict or data_args.task_name is not None or data_args.test_file is not None: if "test" not in raw_datasets and "test_matched" not in raw_datasets: raise ValueError("--do_predict requires a test dataset") predict_dataset = raw_datasets["test_matched" if data_args.task_name == "mnli" else "test"] if data_args.max_predict_samples is not None: max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples) predict_dataset = predict_dataset.select(range(max_predict_samples)) # Log a few random samples from the training set: if training_args.do_train: for index in random.sample(range(len(train_dataset)), 3): logger.info(f"Sample {index} of the training set: {train_dataset[index]}.") # Get the metric function if data_args.task_name is not None: metric = evaluate.load("glue", data_args.task_name, cache_dir=model_args.cache_dir) elif is_regression: metric = evaluate.load("mse", cache_dir=model_args.cache_dir) else: metric = evaluate.load("accuracy", cache_dir=model_args.cache_dir) # You can define your custom compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with a # predictions and label_ids field) and has to return a dictionary string to float. def compute_metrics(p: EvalPrediction): preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1) result = metric.compute(predictions=preds, references=p.label_ids) if len(result) > 1: result["combined_score"] = np.mean(list(result.values())).item() return result # Data collator will default to DataCollatorWithPadding when the tokenizer is passed to Trainer, so we change it if # we already did the padding. 
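    # Under --fp16, padding to a multiple of 8 keeps sequence lengths in Tensor Core friendly
    # shapes; with neither flag, `data_collator=None` lets the Trainer fall back to dynamic
    # padding via DataCollatorWithPadding.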
if data_args.pad_to_max_length: data_collator = default_data_collator elif training_args.fp16: data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8) else: data_collator = None # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset if training_args.do_train else None, eval_dataset=eval_dataset if training_args.do_eval else None, compute_metrics=compute_metrics, tokenizer=tokenizer, data_collator=data_collator, ) # Training if training_args.do_train: checkpoint = None if training_args.resume_from_checkpoint is not None: checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) metrics = train_result.metrics max_train_samples = ( data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset) ) metrics["train_samples"] = min(max_train_samples, len(train_dataset)) trainer.save_model() # Saves the tokenizer too for easy upload trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() # Evaluation if training_args.do_eval: logger.info("*** Evaluate ***") # Loop to handle MNLI double evaluation (matched, mis-matched) tasks = [data_args.task_name] eval_datasets = [eval_dataset] if data_args.task_name == "mnli": tasks.append("mnli-mm") valid_mm_dataset = raw_datasets["validation_mismatched"] if data_args.max_eval_samples is not None: max_eval_samples = min(len(valid_mm_dataset), data_args.max_eval_samples) valid_mm_dataset = valid_mm_dataset.select(range(max_eval_samples)) eval_datasets.append(valid_mm_dataset) combined = {} for eval_dataset, task in zip(eval_datasets, tasks): metrics = trainer.evaluate(eval_dataset=eval_dataset) max_eval_samples = ( data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset) ) metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset)) if task == "mnli-mm": metrics = {k + "_mm": v for k, v in metrics.items()} if task is not None and "mnli" in task: combined.update(metrics) trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", combined if task is not None and "mnli" in task else metrics) if training_args.do_predict: logger.info("*** Predict ***") # Loop to handle MNLI double evaluation (matched, mis-matched) tasks = [data_args.task_name] predict_datasets = [predict_dataset] if data_args.task_name == "mnli": tasks.append("mnli-mm") predict_datasets.append(raw_datasets["test_mismatched"]) for predict_dataset, task in zip(predict_datasets, tasks): # Removing the `label` columns because it contains -1 and Trainer won't like that. 
predict_dataset = predict_dataset.remove_columns("label") predictions = trainer.predict(predict_dataset, metric_key_prefix="predict").predictions predictions = np.squeeze(predictions) if is_regression else np.argmax(predictions, axis=1) output_predict_file = os.path.join(training_args.output_dir, f"predict_results_{task}.txt") if trainer.is_world_process_zero(): with open(output_predict_file, "w") as writer: logger.info(f"***** Predict results {task} *****") writer.write("index\tprediction\n") for index, item in enumerate(predictions): if is_regression: writer.write(f"{index}\t{item:3.3f}\n") else: item = label_list[item] writer.write(f"{index}\t{item}\n") kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-classification"} if data_args.task_name is not None: kwargs["language"] = "en" kwargs["dataset_tags"] = "glue" kwargs["dataset_args"] = data_args.task_name kwargs["dataset"] = f"GLUE {data_args.task_name.upper()}" if training_args.push_to_hub: trainer.push_to_hub(**kwargs) else: trainer.create_model_card(**kwargs) def _mp_fn(index): # For xla_spawn (TPUs) main() if __name__ == "__main__": main()
transformers/examples/pytorch/text-classification/run_glue.py/0
{ "file_path": "transformers/examples/pytorch/text-classification/run_glue.py", "repo_id": "transformers", "token_count": 11491 }
304
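The `compute_metrics` hook in `run_glue.py` above relies on `evaluate.load("glue", ...)`. A minimal, self-contained sketch of the same pattern, using a plain accuracy metric and hypothetical toy logits so nothing needs to be downloaded:

```python
import numpy as np

from transformers import EvalPrediction


def compute_metrics(p: EvalPrediction):
    # mirrors the hook above: take the argmax over logits and compare to labels
    preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
    preds = np.argmax(preds, axis=1)
    return {"accuracy": float((preds == p.label_ids).mean())}


# toy logits for two examples and two classes, with gold labels [1, 0]
fake_eval = EvalPrediction(predictions=np.array([[0.1, 0.9], [0.8, 0.2]]), label_ids=np.array([1, 0]))
print(compute_metrics(fake_eval))  # {'accuracy': 1.0}
```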
# coding=utf-8
# Copyright 2019 The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION.  All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BertAbs configuration"""

import logging

from transformers import PretrainedConfig


logger = logging.getLogger(__name__)


BERTABS_FINETUNED_CONFIG_MAP = {
    "bertabs-finetuned-cnndm": "https://huggingface.co/remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization/resolve/main/config.json",
}


class BertAbsConfig(PretrainedConfig):
    r"""Class to store the configuration of the BertAbs model.

    Arguments:
        vocab_size: int
            Number of tokens in the vocabulary.
        max_pos: int
            The maximum sequence length that this model will be used with.
        enc_layers: int
            The number of hidden layers in the Transformer encoder.
        enc_hidden_size: int
            The size of the encoder's layers.
        enc_heads: int
            The number of attention heads for each attention layer in the encoder.
        enc_ff_size: int
            The size of the encoder's feed-forward layers.
        enc_dropout: float
            The dropout probability for all fully connected layers in the
            embeddings, layers, pooler and also the attention probabilities in
            the encoder.
        dec_layers: int
            The number of hidden layers in the decoder.
        dec_hidden_size: int
            The size of the decoder's layers.
        dec_heads: int
            The number of attention heads for each attention layer in the decoder.
        dec_ff_size: int
            The size of the decoder's feed-forward layers.
        dec_dropout: float
            The dropout probability for all fully connected layers in the
            embeddings, layers, pooler and also the attention probabilities in
            the decoder.
    """

    model_type = "bertabs"

    def __init__(
        self,
        vocab_size=30522,
        max_pos=512,
        enc_layers=6,
        enc_hidden_size=512,
        enc_heads=8,
        enc_ff_size=512,
        enc_dropout=0.2,
        dec_layers=6,
        dec_hidden_size=768,
        dec_heads=8,
        dec_ff_size=2048,
        dec_dropout=0.2,
        **kwargs,
    ):
        super().__init__(**kwargs)

        self.vocab_size = vocab_size
        self.max_pos = max_pos

        self.enc_layers = enc_layers
        self.enc_hidden_size = enc_hidden_size
        self.enc_heads = enc_heads
        self.enc_ff_size = enc_ff_size
        self.enc_dropout = enc_dropout

        self.dec_layers = dec_layers
        self.dec_hidden_size = dec_hidden_size
        self.dec_heads = dec_heads
        self.dec_ff_size = dec_ff_size
        self.dec_dropout = dec_dropout
transformers/examples/research_projects/bertabs/configuration_bertabs.py/0
{ "file_path": "transformers/examples/research_projects/bertabs/configuration_bertabs.py", "repo_id": "transformers", "token_count": 1347 }
305
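A minimal sketch of how `BertAbsConfig` might be instantiated and round-tripped through the `save_pretrained`/`from_pretrained` machinery it inherits from `PretrainedConfig`. It assumes the file above is importable as `configuration_bertabs`:

```python
from configuration_bertabs import BertAbsConfig  # assumes the file above is on the Python path

config = BertAbsConfig(enc_layers=6, dec_layers=6, dec_hidden_size=768, dec_ff_size=2048)
config.save_pretrained("./bertabs-config")  # writes config.json

reloaded = BertAbsConfig.from_pretrained("./bertabs-config")
print(reloaded.model_type, reloaded.dec_hidden_size)  # bertabs 768
```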
from arguments import TokenizerTrainingArguments from datasets import load_dataset from tqdm import tqdm from transformers import AutoTokenizer, HfArgumentParser from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode # Iterator for Training def batch_iterator(batch_size=10): for _ in tqdm(range(0, args.n_examples, batch_size)): yield [next(iter_dataset)[args.text_column] for _ in range(batch_size)] # Configuration parser = HfArgumentParser(TokenizerTrainingArguments) args = parser.parse_args() # Base tokenizer tokenizer = AutoTokenizer.from_pretrained(args.base_tokenizer) base_vocab = list(bytes_to_unicode().values()) # Load dataset dataset = load_dataset(args.dataset_name, split="train", streaming=True) iter_dataset = iter(dataset) # Training and saving new_tokenizer = tokenizer.train_new_from_iterator( batch_iterator(), vocab_size=args.vocab_size, initial_alphabet=base_vocab ) new_tokenizer.save_pretrained(args.tokenizer_name, push_to_hub=args.push_to_hub)
transformers/examples/research_projects/codeparrot/scripts/bpe_training.py/0
{ "file_path": "transformers/examples/research_projects/codeparrot/scripts/bpe_training.py", "repo_id": "transformers", "token_count": 347 }
306
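The script above streams a dataset and passes `bytes_to_unicode()` as the initial alphabet. A smaller sketch of the same `train_new_from_iterator` call on an in-memory corpus (the texts and vocabulary size below are purely illustrative):

```python
from transformers import AutoTokenizer

# a tiny in-memory corpus standing in for the streamed dataset
corpus = ["def add(a, b):\n    return a + b", "print('hello world')"] * 50


def batch_iterator(batch_size=16):
    for i in range(0, len(corpus), batch_size):
        yield corpus[i : i + batch_size]


base_tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
new_tokenizer = base_tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=500)
print(new_tokenizer.tokenize("def add(a, b):"))
```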
# coding=utf-8 # Copyright 2019-present, the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Training the distilled model. Supported architectures include: BERT -> DistilBERT, RoBERTa -> DistilRoBERTa, GPT2 -> DistilGPT2. """ import argparse import json import os import pickle import shutil import numpy as np import torch from distiller import Distiller from lm_seqs_dataset import LmSeqsDataset from transformers import ( BertConfig, BertForMaskedLM, BertTokenizer, DistilBertConfig, DistilBertForMaskedLM, DistilBertTokenizer, GPT2Config, GPT2LMHeadModel, GPT2Tokenizer, RobertaConfig, RobertaForMaskedLM, RobertaTokenizer, ) from utils import git_log, init_gpu_params, logger, set_seed MODEL_CLASSES = { "distilbert": (DistilBertConfig, DistilBertForMaskedLM, DistilBertTokenizer), "roberta": (RobertaConfig, RobertaForMaskedLM, RobertaTokenizer), "bert": (BertConfig, BertForMaskedLM, BertTokenizer), "gpt2": (GPT2Config, GPT2LMHeadModel, GPT2Tokenizer), } def sanity_checks(args): """ A bunch of args sanity checks to perform even starting... """ assert (args.mlm and args.alpha_mlm > 0.0) or (not args.mlm and args.alpha_mlm == 0.0) assert (args.alpha_mlm > 0.0 and args.alpha_clm == 0.0) or (args.alpha_mlm == 0.0 and args.alpha_clm > 0.0) if args.mlm: assert os.path.isfile(args.token_counts) assert (args.student_type in ["roberta", "distilbert"]) and (args.teacher_type in ["roberta", "bert"]) else: assert (args.student_type in ["gpt2"]) and (args.teacher_type in ["gpt2"]) assert args.teacher_type == args.student_type or ( args.student_type == "distilbert" and args.teacher_type == "bert" ) assert os.path.isfile(args.student_config) if args.student_pretrained_weights is not None: assert os.path.isfile(args.student_pretrained_weights) if args.freeze_token_type_embds: assert args.student_type in ["roberta"] assert args.alpha_ce >= 0.0 assert args.alpha_mlm >= 0.0 assert args.alpha_clm >= 0.0 assert args.alpha_mse >= 0.0 assert args.alpha_cos >= 0.0 assert args.alpha_ce + args.alpha_mlm + args.alpha_clm + args.alpha_mse + args.alpha_cos > 0.0 def freeze_pos_embeddings(student, args): if args.student_type == "roberta": student.roberta.embeddings.position_embeddings.weight.requires_grad = False elif args.student_type == "gpt2": student.transformer.wpe.weight.requires_grad = False def freeze_token_type_embeddings(student, args): if args.student_type == "roberta": student.roberta.embeddings.token_type_embeddings.weight.requires_grad = False def main(): parser = argparse.ArgumentParser(description="Training") parser.add_argument("--force", action="store_true", help="Overwrite dump_path if it already exists.") parser.add_argument( "--dump_path", type=str, required=True, help="The output directory (log, checkpoints, parameters, etc.)" ) parser.add_argument( "--data_file", type=str, required=True, help="The binarized file (tokenized + tokens_to_ids) and grouped by sequence.", ) parser.add_argument( "--student_type", type=str, choices=["distilbert", "roberta", "gpt2"], required=True, help="The student 
type (DistilBERT, RoBERTa).", ) parser.add_argument("--student_config", type=str, required=True, help="Path to the student configuration.") parser.add_argument( "--student_pretrained_weights", default=None, type=str, help="Load student initialization checkpoint." ) parser.add_argument( "--teacher_type", choices=["bert", "roberta", "gpt2"], required=True, help="Teacher type (BERT, RoBERTa)." ) parser.add_argument("--teacher_name", type=str, required=True, help="The teacher model.") parser.add_argument("--temperature", default=2.0, type=float, help="Temperature for the softmax temperature.") parser.add_argument( "--alpha_ce", default=0.5, type=float, help="Linear weight for the distillation loss. Must be >=0." ) parser.add_argument( "--alpha_mlm", default=0.0, type=float, help="Linear weight for the MLM loss. Must be >=0. Should be used in conjunction with `mlm` flag.", ) parser.add_argument("--alpha_clm", default=0.5, type=float, help="Linear weight for the CLM loss. Must be >=0.") parser.add_argument("--alpha_mse", default=0.0, type=float, help="Linear weight of the MSE loss. Must be >=0.") parser.add_argument( "--alpha_cos", default=0.0, type=float, help="Linear weight of the cosine embedding loss. Must be >=0." ) parser.add_argument( "--mlm", action="store_true", help="The LM step: MLM or CLM. If `mlm` is True, the MLM is used over CLM." ) parser.add_argument( "--mlm_mask_prop", default=0.15, type=float, help="Proportion of tokens for which we need to make a prediction.", ) parser.add_argument("--word_mask", default=0.8, type=float, help="Proportion of tokens to mask out.") parser.add_argument("--word_keep", default=0.1, type=float, help="Proportion of tokens to keep.") parser.add_argument("--word_rand", default=0.1, type=float, help="Proportion of tokens to randomly replace.") parser.add_argument( "--mlm_smoothing", default=0.7, type=float, help="Smoothing parameter to emphasize more rare tokens (see XLM, similar to word2vec).", ) parser.add_argument("--token_counts", type=str, help="The token counts in the data_file for MLM.") parser.add_argument( "--restrict_ce_to_mask", action="store_true", help="If true, compute the distillation loss only the [MLM] prediction distribution.", ) parser.add_argument( "--freeze_pos_embs", action="store_true", help="Freeze positional embeddings during distillation. For student_type in ['roberta', 'gpt2'] only.", ) parser.add_argument( "--freeze_token_type_embds", action="store_true", help="Freeze token type embeddings during distillation if existent. For student_type in ['roberta'] only.", ) parser.add_argument("--n_epoch", type=int, default=3, help="Number of pass on the whole dataset.") parser.add_argument("--batch_size", type=int, default=5, help="Batch size (for each process).") parser.add_argument( "--group_by_size", action="store_false", help="If true, group sequences that have similar length into the same batch. 
Default is true.", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=50, help="Gradient accumulation for larger training batches.", ) parser.add_argument("--warmup_prop", default=0.05, type=float, help="Linear warmup proportion.") parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.") parser.add_argument("--learning_rate", default=5e-4, type=float, help="The initial learning rate for Adam.") parser.add_argument("--adam_epsilon", default=1e-6, type=float, help="Epsilon for Adam optimizer.") parser.add_argument("--max_grad_norm", default=5.0, type=float, help="Max gradient norm.") parser.add_argument("--initializer_range", default=0.02, type=float, help="Random initialization range.") parser.add_argument( "--fp16", action="store_true", help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit", ) parser.add_argument( "--fp16_opt_level", type=str, default="O1", help=( "For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']. " "See details at https://nvidia.github.io/apex/amp.html" ), ) parser.add_argument("--n_gpu", type=int, default=1, help="Number of GPUs in the node.") parser.add_argument("--local_rank", type=int, default=-1, help="Distributed training - Local rank") parser.add_argument("--seed", type=int, default=56, help="Random seed") parser.add_argument("--log_interval", type=int, default=500, help="Tensorboard logging interval.") parser.add_argument("--checkpoint_interval", type=int, default=4000, help="Checkpoint interval.") args = parser.parse_args() sanity_checks(args) # ARGS # init_gpu_params(args) set_seed(args) if args.is_master: if os.path.exists(args.dump_path): if not args.force: raise ValueError( f"Serialization dir {args.dump_path} already exists, but you have not precised wheter to overwrite" " itUse `--force` if you want to overwrite it" ) else: shutil.rmtree(args.dump_path) if not os.path.exists(args.dump_path): os.makedirs(args.dump_path) logger.info(f"Experiment will be dumped and logged in {args.dump_path}") # SAVE PARAMS # logger.info(f"Param: {args}") with open(os.path.join(args.dump_path, "parameters.json"), "w") as f: json.dump(vars(args), f, indent=4) git_log(args.dump_path) student_config_class, student_model_class, _ = MODEL_CLASSES[args.student_type] teacher_config_class, teacher_model_class, teacher_tokenizer_class = MODEL_CLASSES[args.teacher_type] # TOKENIZER # tokenizer = teacher_tokenizer_class.from_pretrained(args.teacher_name) special_tok_ids = {} for tok_name, tok_symbol in tokenizer.special_tokens_map.items(): idx = tokenizer.all_special_tokens.index(tok_symbol) special_tok_ids[tok_name] = tokenizer.all_special_ids[idx] logger.info(f"Special tokens {special_tok_ids}") args.special_tok_ids = special_tok_ids args.max_model_input_size = tokenizer.max_model_input_sizes[args.teacher_name] # DATA LOADER # logger.info(f"Loading data from {args.data_file}") with open(args.data_file, "rb") as fp: data = pickle.load(fp) if args.mlm: logger.info(f"Loading token counts from {args.token_counts} (already pre-computed)") with open(args.token_counts, "rb") as fp: counts = pickle.load(fp) token_probs = np.maximum(counts, 1) ** -args.mlm_smoothing for idx in special_tok_ids.values(): token_probs[idx] = 0.0 # do not predict special tokens token_probs = torch.from_numpy(token_probs) else: token_probs = None train_lm_seq_dataset = LmSeqsDataset(params=args, data=data) logger.info("Data loader created.") # STUDENT # logger.info(f"Loading student config 
from {args.student_config}") stu_architecture_config = student_config_class.from_pretrained(args.student_config) stu_architecture_config.output_hidden_states = True if args.student_pretrained_weights is not None: logger.info(f"Loading pretrained weights from {args.student_pretrained_weights}") student = student_model_class.from_pretrained(args.student_pretrained_weights, config=stu_architecture_config) else: student = student_model_class(stu_architecture_config) if args.n_gpu > 0: student.to(f"cuda:{args.local_rank}") logger.info("Student loaded.") # TEACHER # teacher = teacher_model_class.from_pretrained(args.teacher_name, output_hidden_states=True) if args.n_gpu > 0: teacher.to(f"cuda:{args.local_rank}") logger.info(f"Teacher loaded from {args.teacher_name}.") # FREEZING # if args.freeze_pos_embs: freeze_pos_embeddings(student, args) if args.freeze_token_type_embds: freeze_token_type_embeddings(student, args) # SANITY CHECKS # assert student.config.vocab_size == teacher.config.vocab_size assert student.config.hidden_size == teacher.config.hidden_size assert student.config.max_position_embeddings == teacher.config.max_position_embeddings if args.mlm: assert token_probs.size(0) == stu_architecture_config.vocab_size # DISTILLER # torch.cuda.empty_cache() distiller = Distiller( params=args, dataset=train_lm_seq_dataset, token_probs=token_probs, student=student, teacher=teacher ) distiller.train() logger.info("Let's go get some drinks.") if __name__ == "__main__": main()
transformers/examples/research_projects/distillation/train.py/0
{ "file_path": "transformers/examples/research_projects/distillation/train.py", "repo_id": "transformers", "token_count": 5148 }
307
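Two of the building blocks above, the special-token lookup in `main()` and `freeze_pos_embeddings()`, can be exercised in isolation. A small sketch for a RoBERTa student; the checkpoint name is chosen for illustration:

```python
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# same lookup as in main(): map special-token names to their ids
special_tok_ids = {}
for tok_name, tok_symbol in tokenizer.special_tokens_map.items():
    idx = tokenizer.all_special_tokens.index(tok_symbol)
    special_tok_ids[tok_name] = tokenizer.all_special_ids[idx]
print(special_tok_ids)

# same effect as freeze_pos_embeddings(student, args) for student_type == "roberta"
student = RobertaForMaskedLM.from_pretrained("roberta-base")
student.roberta.embeddings.position_embeddings.weight.requires_grad = False
print([n for n, p in student.named_parameters() if not p.requires_grad])
```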
# Copyright 2022 - Intel Corp. All rights reserved. # Authors: Mayank Kumar Raunak, Javier Turek, Nicole Backage import copy import logging import random import joblib import numpy as np import torch import torch.nn as nn from torch.utils.data import DataLoader from tqdm import tqdm from transformers import AdamW, GPT2LMHeadModel, get_linear_schedule_with_warmup logger = logging.getLogger(__name__) def set_seed(seed): """ For reproducible training Args: seed: A seed for reproducible training """ random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) def compute_perplexity(model, test_data, context_len): """ Computes perplexity of the transformer model on data in test_data Args: model: Pre-trained GPT2 model test_data: Data on which perplexity calculation is required context_len: The maximum total input sequence length after tokenization. Sequences longer than this will be truncated, sequences shorter will be padded Returns: Perplexity on input test data """ model.eval() device = next(model.parameters()).device eval_batch_size = 1 context = torch.zeros((eval_batch_size, context_len), dtype=torch.long, device=device) eval_dataloader = DataLoader(test_data, shuffle=False, batch_size=eval_batch_size) eval_loss = torch.zeros(1, device=device) nb_eval_examples = 0 for batch in eval_dataloader: batch.to(device) # pad context.zero_() for i in range(eval_batch_size): context[i, :] = batch[i] outputs = model(context, labels=context) eval_loss += outputs[0].sum().item() nb_eval_examples += batch.size(0) eval_loss = eval_loss / nb_eval_examples perplexity = torch.exp(eval_loss) model.train() return perplexity def load_gpt2(model_name="openai-community/gpt2"): """ load original openai-community/gpt2 and save off for quicker loading Args: model_name: GPT-2 Returns: GPT-2 model """ model = GPT2LMHeadModel.from_pretrained(model_name, output_hidden_states=True) torch.save(model.state_dict(), model_name + "local.pt") return model def recopy_gpt2(orig_model, device, max_steps): """ Reset the model to the original pretrained GPT-2 weights after each iteration Args: orig_model: Original pretrained GPT-2 model imported from Transformers library device: CPU/GPU max_steps: number of training steps Returns: Original PreTrained GPT-2 model, lm_optimizer: Adam optimizer with Decoupled weight decay lm_scheduler: linear scheduler with the appropriate schedule """ model = copy.deepcopy(orig_model) model.to(device) no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0}, ] lm_optimizer = AdamW(optimizer_grouped_parameters, lr=5e-5, eps=1e-8) lm_scheduler = get_linear_schedule_with_warmup(lm_optimizer, 0, max_steps) torch.cuda.empty_cache() return model, lm_optimizer, lm_scheduler def intermittent_save(contexts, real_perps, past_perps, filename): """ save the perplexity differences to filename Args: contexts: Example on which the perplexity is calculated real_perps: Perplexity after back-propagating on the selected context past_perps: Perplexity of model before training on the context filename: File to store perplexity differences Returns: file with perplexity differences """ # save the perplexity differences to filename avg = np.array(real_perps).mean() std = np.array(real_perps).std() perp_diff = (real_perps - avg) / std data_final = 
list(zip(contexts, perp_diff, past_perps)) joblib.dump(data_final, filename) def collect_objective_set( model, orig_perp, context_len, train_data, objective_set, max_steps, device, filename="dev.jbl", recopy_model=recopy_gpt2, ): """ Collect individual IGF values from pre-trained transformer model max_steps samples of training data to train secondary model Args: model: Pre-trained GPT2 model orig_perp: Perplexity of original pretrained GPT-2 model context_len: The maximum total input sequence length after tokenization. Sequences longer than this will be truncated, sequences shorter will be padded train_data: Data to train model objective_set: Contexts used to create (X,IG(X)) pairs which is the training data for secondary learner max_steps: To calculate training epochs of model device: GPU/CPU filename: To store intermediate perplexity differences recopy_model: Reset the model to the original pretrained GPT-2 weights after each iteration Returns: file stored intermediate perplexity differences in intermediate stages """ # initialize variables to record relevant information contexts = [] real_perps = [] past_perps = [] # Initialize the transformer model orig_model = copy.deepcopy(model) orig_model.to(device="cpu") torch.cuda.empty_cache() # Compute perplexity of initial transformer model for comparison model.train() model, lm_optimizer, lm_scheduler = recopy_model(orig_model, device, max_steps) for step in tqdm(range(max_steps)): context = torch.zeros((1, context_len), dtype=torch.long, device=device) story = random.choice(train_data) start = random.randint(0, len(story[0]) - context_len - 1) context[0, :] = story[0][start : start + context_len] lm_optimizer.zero_grad() outputs = model(context, labels=context) lm_loss = outputs[0] past_perp = compute_perplexity(model, context, context_len) model.train() lm_loss.backward() # Do LM backprop torch.nn.utils.clip_grad_norm_(model.parameters(), 3.0) lm_optimizer.step() lm_scheduler.step() # Update learning rate schedule # Compute perplexity after back-propagating on the selected context real_perp = compute_perplexity(model, objective_set, context_len) # Periodically save the stored (X, IG(X)) pairs if step % 1000 == 0 and step > 1: intermittent_save(contexts, real_perps, past_perps, filename) # Reset the pretrained model to the original pretrained GPT-2 weights after each iteration model, lm_optimizer, lm_scheduler = recopy_model(orig_model, device, max_steps) past_perps.append(past_perp.item()) real_perps.append(orig_perp - real_perp.item()) contexts.append(np.array(context.cpu())) intermittent_save(contexts, real_perps, past_perps, filename) def generate_datasets( context_len, file="data/tokenized_stories_train_wikitext103.jbl", number=100, min_len=1026, trim=True ): """ Generate objective set and training set Args: context_len: The maximum total input sequence length after tokenization. 
Sequences longer than this will be truncated, sequences shorter will be padded file: Tokenized data split into training set and objective set number: size of objective dataset min_len: minimum length of a context in objective set trim: If True truncate the context if it exceeds context length Returns: Generated objective set and training data """ # Generate objective set and training set # Designate the first number (100) articles that are long enough to be used # as our objective set, rest (that are long enough) are training data for # secondary learner data = joblib.load(file) print("data loaded") objective_set = [] if trim: for i, example in enumerate(data): if len(example[0]) > min_len: start = random.randint(0, len(example[0]) - context_len - 1) objective_set.append(example[0, start : start + context_len]) if len(objective_set) >= number: break train_data = [] for j in range(i + 1, len(data)): if len(data[j][0]) > min_len: train_data.append(data[j]) else: objective_set = data[0:number] train_data = data[number:] joblib.dump(objective_set, "objective_set.jbl") print("objective set saved") return train_data, objective_set def train_secondary_learner( secondary_learner, train_dataset, max_epochs, batch_size, eval_freq=50, igf_model_path="secondary_learner.pt" ): """ Train the secondary learner (igf_model) Args: secondary_learner: secondary learner train_dataset: data to train secondary learner max_epochs: number of epochs to train secondary learner batch_size: batch size of training data of secondary learner eval_freq: secondary model evaluation can be triggered at eval_freq igf_model_path: path to store trained secondary learner Returns: Trained secondary learner """ device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # We will use the first 512 pairs from our dataset as a test set for # our secondary learner and the rest to train test_dataset = train_dataset[:512] train_dataset = train_dataset[512:] train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size) test_dataloader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size) # secondary learner model set up loss = nn.MSELoss() test_loss = nn.MSELoss(reduction="sum") secondary_learner.to(device) q_optimizer = torch.optim.Adam(secondary_learner.parameters(), lr=0.00001) secondary_learner.train() # TODO in original code this is written as number of actual batches seen # not number of items seen but other places it is number of items instead. # improve consistency! 
changed this to epochs for clarity best_test_loss = float("inf") # Iterate through batches until we've used max_steps batches for epoch in range(int(max_epochs)): tr_q_loss = 0.0 secondary_learner.train() for step, batch in enumerate(train_dataloader): context = batch[0].to(device) real_q = batch[1].to(device) predicted_q = secondary_learner(context) q_optimizer.zero_grad() q_loss = loss(predicted_q, real_q.float()) q_loss.backward() q_optimizer.step() tr_q_loss += q_loss.item() # model trains fairly quickly so we won't wait for a full epoch # eval is triggered at eval_freq and end of epochs if (step % eval_freq == 0 and step > 0) or ((step + 1) == len(train_dataloader)): tr_loss = tr_q_loss / (step + 1) secondary_learner.eval() q_loss2 = 0.0 sum_q2 = 0.0 predicted = [] actual = [] # Compute performance of the secondary learner after this batch for step2, batch2 in enumerate(test_dataloader): features2 = batch2[0].to(device) real_q2 = batch2[1].to(device) predicted_q2 = secondary_learner(features2) q_loss2 += test_loss(predicted_q2, real_q2).item() sum_q2 += torch.sum(predicted_q2).item() for ei, i in enumerate(predicted_q2.cpu().detach().numpy()): predicted.append(i.item()) for ei, i in enumerate(real_q2.cpu().detach().numpy()): actual.append(i.item()) q_loss2 /= len(test_dataset) print( "Epoch: ", epoch, "step: ", step, "Avg. q:", sum_q2 / len(test_dataset), "Train Loss: ", tr_loss, "Test Loss: ", q_loss2, ) if q_loss2 < best_test_loss: joblib.dump((predicted, actual), "pred_vs_actual.jbl") torch.save(secondary_learner.state_dict(), igf_model_path) best_test_loss = q_loss2 secondary_learner.train() return secondary_learner class SecondaryLearner(nn.Module): """ Our secondary learner """ def __init__(self, model): """ We use a simple convolutional network as our secondary learner Args: model: Pre-trained GPT2 model """ # embeddings are from the pretrained model super(SecondaryLearner, self).__init__() self.embeddings = model.transformer.wte self.embeddings.weight = copy.deepcopy(model.transformer.wte.weight) self.conv = nn.Conv1d(self.embeddings.weight.size(1), 256, 3, padding=1) self.fc = nn.Sequential(nn.Linear(256, 32), nn.Dropout(p=0.1), nn.Linear(32, 32), nn.Linear(32, 1)) def forward(self, context): """ Forward pass through the secondary learner Args: context: Context input to the secondary learner Returns: tensor after squeeze operation """ pooled = torch.max(self.conv(self.embeddings(context).squeeze(1).transpose(1, 2)), 2)[0] qs = self.fc(pooled) return qs.squeeze(1) @classmethod def from_pretrained(cls, state_path, model): """ Load the secondary learner Args: state_path: Path to save secondary learner model: Pretrained GPT-2 Returns: secondary learner """ secondary_learner = cls(model) # this calls __init__ state_dict = torch.load(state_path) secondary_learner.load_state_dict(state_dict) secondary_learner.embeddings = model.transformer.wte secondary_learner.embeddings.weight = copy.deepcopy(model.transformer.wte.weight) return secondary_learner
transformers/examples/research_projects/information-gain-filtration/igf/igf.py/0
{ "file_path": "transformers/examples/research_projects/information-gain-filtration/igf/igf.py", "repo_id": "transformers", "token_count": 6117 }
308
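`compute_perplexity` above averages the language-modeling loss over fixed-length contexts and exponentiates it. The same idea on a single sentence, as a self-contained sketch rather than a drop-in replacement:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2").eval()

enc = tokenizer("Informative samples change the model's perplexity the most.", return_tensors="pt")
with torch.no_grad():
    # with labels provided, the model returns the mean token-level negative log-likelihood
    loss = model(enc.input_ids, labels=enc.input_ids).loss
print(torch.exp(loss).item())  # perplexity of the sentence under GPT-2
```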
import copy

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)


class HybridCLIPConfig(PretrainedConfig):
    r"""
    :class:`HybridCLIPConfig` is the configuration class to store the configuration of a :class:`~HybridCLIPModel`.
    It is used to instantiate a HybridCLIP model according to the specified arguments, defining the text model and
    vision model configs.

    Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model
    outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information.

    Args:
        text_config (:obj:`dict`):
            Dictionary of configuration options that defines the text model config.
        vision_config (:obj:`dict`):
            Dictionary of configuration options that defines the vision model config.
        projection_dim (:obj:`int`, `optional`, defaults to 512):
            Dimensionality of the text and vision projection layers.
        kwargs (`optional`):
            Dictionary of keyword arguments.

    Examples::

        >>> from transformers import BertConfig, CLIPConfig, HybridCLIPConfig, FlaxHybridCLIP

        >>> # Initializing a BERT and CLIP configuration
        >>> config_text = BertConfig()
        >>> config_vision = CLIPConfig()

        >>> config = HybridCLIPConfig.from_text_vision_configs(config_text, config_vision, projection_dim=512)

        >>> # Initializing a BERT and CLIPVision model
        >>> model = FlaxHybridCLIP(config=config)

        >>> # Accessing the model configuration
        >>> config_text = model.config.text_config
        >>> config_vision = model.config.vision_config

        >>> # Saving the model, including its configuration
        >>> model.save_pretrained('my-model')

        >>> # loading model and config from pretrained folder
        >>> hybrid_clip_config = HybridCLIPConfig.from_pretrained('my-model')
        >>> model = FlaxHybridCLIP.from_pretrained('my-model', config=hybrid_clip_config)
    """

    model_type = "hybrid-clip"
    is_composition = True

    def __init__(self, projection_dim=512, **kwargs):
        super().__init__(**kwargs)

        if "text_config" not in kwargs:
            raise ValueError("`text_config` can not be `None`.")

        if "vision_config" not in kwargs:
            raise ValueError("`vision_config` can not be `None`.")

        text_config = kwargs.pop("text_config")
        vision_config = kwargs.pop("vision_config")

        text_model_type = text_config.pop("model_type")
        vision_model_type = vision_config.pop("model_type")

        from transformers import AutoConfig

        self.text_config = AutoConfig.for_model(text_model_type, **text_config)

        if vision_model_type == "clip":
            self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config).vision_config
        elif vision_model_type == "clip_vision_model":
            from transformers import CLIPVisionConfig

            self.vision_config = CLIPVisionConfig(**vision_config)
        else:
            self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config)

        self.projection_dim = projection_dim
        self.initializer_factor = 1.0

    @classmethod
    def from_text_vision_configs(cls, text_config: PretrainedConfig, vision_config: PretrainedConfig, **kwargs):
        r"""
        Instantiate a :class:`HybridCLIPConfig` (or a derived class) from a text model configuration and a vision
        model configuration.

        Returns:
            :class:`HybridCLIPConfig`: An instance of a configuration object
        """

        return cls(text_config=text_config.to_dict(), vision_config=vision_config.to_dict(), **kwargs)

    def to_dict(self):
        """
        Serializes this instance to a Python dictionary. Override the default
        :meth:`~transformers.PretrainedConfig.to_dict`.

        Returns:
            :obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance.
        """
        output = copy.deepcopy(self.__dict__)
        output["text_config"] = self.text_config.to_dict()
        output["vision_config"] = self.vision_config.to_dict()
        output["model_type"] = self.__class__.model_type
        return output
transformers/examples/research_projects/jax-projects/hybrid_clip/configuration_hybrid_clip.py/0
{ "file_path": "transformers/examples/research_projects/jax-projects/hybrid_clip/configuration_hybrid_clip.py", "repo_id": "transformers", "token_count": 1634 }
309
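A quick sketch of building a `HybridCLIPConfig` from a BERT text config and a CLIP vision config and serializing it. It assumes the file above is importable as `configuration_hybrid_clip`:

```python
from transformers import BertConfig, CLIPConfig

from configuration_hybrid_clip import HybridCLIPConfig  # assumes the file above is on the Python path

config = HybridCLIPConfig.from_text_vision_configs(BertConfig(), CLIPConfig(), projection_dim=512)
as_dict = config.to_dict()
print(as_dict["model_type"])                 # hybrid-clip
print(as_dict["text_config"]["model_type"])  # bert
print(config.projection_dim)                 # 512
```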
# Token classification

## PyTorch version, no Trainer

Fine-tuning (m)LUKE for token classification tasks such as Named Entity Recognition (NER), part-of-speech
tagging (POS) or phrase extraction (CHUNKS). You can easily
customize the script to your needs if you need extra processing on your datasets.

It will either run on a dataset hosted on our [hub](https://huggingface.co/datasets) or on your own text files for
training and validation; you might just need to add some tweaks to the data preprocessing.

The script can be run in a distributed setup, on TPU, and supports mixed precision by
means of the [🀗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally
after installing the library:

```bash
pip install git+https://github.com/huggingface/accelerate
```

then to train English LUKE on CoNLL2003:

```bash
export TASK_NAME=ner

python run_luke_ner_no_trainer.py \
  --model_name_or_path studio-ousia/luke-base \
  --dataset_name conll2003 \
  --task_name $TASK_NAME \
  --max_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
```

You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

```bash
accelerate config
```

and reply to the questions asked. Then

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
export TASK_NAME=ner

accelerate launch run_luke_ner_no_trainer.py \
  --model_name_or_path studio-ousia/luke-base \
  --dataset_name conll2003 \
  --task_name $TASK_NAME \
  --max_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
```

This command is the same and will work for:

- a CPU-only setup
- a setup with one GPU
- distributed training with several GPUs (single or multi node)
- training on TPUs

Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using
it.
transformers/examples/research_projects/luke/README.md/0
{ "file_path": "transformers/examples/research_projects/luke/README.md", "repo_id": "transformers", "token_count": 667 }
310
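After fine-tuning (or with an already fine-tuned checkpoint from the Hub), inference follows LUKE's span-classification API. A sketch, assuming the `studio-ousia/luke-large-finetuned-conll-2003` checkpoint is available on the Hub:

```python
import torch
from transformers import LukeForEntitySpanClassification, LukeTokenizer

model_name = "studio-ousia/luke-large-finetuned-conll-2003"
tokenizer = LukeTokenizer.from_pretrained(model_name)
model = LukeForEntitySpanClassification.from_pretrained(model_name)

text = "Beyoncé lives in Los Angeles"
entity_spans = [(0, 7), (17, 28)]  # character spans of the candidate entities
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(-1)
print([model.config.id2label[int(p)] for p in predictions[0]])  # e.g. ['PER', 'LOC']
```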
import copy import itertools from typing import List, Optional, Tuple import torch import torch.nn.functional as F from transformers import BartConfig from transformers.generation import GenerationMixin def _convert_past_list_to_tuple(past_key_values): """ In Bart model, the type of past_key_values is tuple(tuple(torch.FloatTensor)) which is not TorchScript-compatible. To support this, we have to convert it during the export process. This function will convert past values from a list to tuple(tuple(torch.FloatTensor)) for the inner decoder. According to the definition of past_key_values, each inner tuple(torch.FloatTensor) has 4 tensors, so we convert every 4 elements in the list as a tuple(torch.FloatTensor). """ count_of_each_inner_tuple = 4 results = () temp_result = () count_n = len(past_key_values) // count_of_each_inner_tuple for idx in range(count_n): real_idx = idx * count_of_each_inner_tuple temp_result = tuple(past_key_values[real_idx : real_idx + count_of_each_inner_tuple]) results += ((temp_result),) return results class EncoderForONNX(torch.nn.Module): def __init__(self, encoder): super().__init__() self.encoder = encoder def forward(self, input_ids, attention_mask): return self.encoder( input_ids=input_ids, attention_mask=attention_mask, return_dict=False, ) class DecoderForONNX(torch.nn.Module): def __init__(self, decoder): super().__init__() self.decoder = decoder def forward(self, input_ids, encoder_state, attention_mask, past=None): all_results = None if past is not None: all_results = _convert_past_list_to_tuple(past) input_ids = input_ids[:, -1:] last_hidden_state, past_key_values = self.decoder( input_ids=input_ids, encoder_hidden_states=encoder_state, encoder_attention_mask=attention_mask, past_key_values=all_results, return_dict=False, ) past_values = [] for past in past_key_values: past_values = past_values + list(past) return last_hidden_state, past_values def _create_traced_encoder(encoder, input_ids, attention_mask): encoder_c = copy.deepcopy(encoder) encoder_for_onnx = EncoderForONNX(encoder_c) return torch.jit.trace(encoder_for_onnx, (input_ids, attention_mask)) def _create_traced_decoder(decoder, input_ids, encoder_state, attention_mask, past=None): decoder_c = copy.deepcopy(decoder) decoder_for_onnx = DecoderForONNX(decoder_c) past_values = list(itertools.chain.from_iterable(past or ())) # Do this twice so we got 2 different decoders for further work. if past_values: return torch.jit.trace(decoder_for_onnx, (input_ids, encoder_state, attention_mask, past_values)) else: return torch.jit.trace(decoder_for_onnx, (input_ids, encoder_state, attention_mask)) class BartConfigTS(BartConfig, torch.nn.Module): """ BartConfigTS is a TorchScript-compatible transformers.models.bart.configuration_bart.BartConfig. TorchScript only supports sub-classes of torch.nn.Module. """ def __init__(self, config): BartConfig.__init__(self, config) torch.nn.Module.__init__(self) class MinLengthLogitsProcessorTS(torch.nn.Module): r""" :class:`transformers.LogitsProcessor` enforcing a min-length by setting EOS probability to 0. Args: min_length (:obj:`int`): The minimum length below which the score of :obj:`eos_token_id` is set to :obj:`-float("Inf")`. eos_token_id (:obj:`int`): The id of the `end-of-sequence` token. 
""" def __init__(self, min_length: int, eos_token_id: int): super().__init__() if not isinstance(min_length, int) or min_length < 0: raise ValueError(f"`min_length` has to be a positive integer, but is {min_length}") if not isinstance(eos_token_id, int) or eos_token_id < 0: raise ValueError(f"`eos_token_id` has to be a positive integer, but is {eos_token_id}") self.min_length = min_length self.eos_token_id = eos_token_id def forward(self, input_ids, scores) -> torch.Tensor: cur_len = input_ids.shape[-1] if cur_len < self.min_length: scores[:, self.eos_token_id] = -float("inf") return scores class BARTGenerator(torch.nn.Module, GenerationMixin): def __init__(self, model): super().__init__() self.config = BartConfigTS(model.config) self.config.force_bos_token_to_be_generated = False self._trace_modules(model) self.logits_processor = MinLengthLogitsProcessorTS(self.config.min_length, self.config.eos_token_id) self.final_logits_weight = model.model.shared.weight self.final_logits_bias = model.final_logits_bias self.decoder_layers = model.config.decoder_layers def _trace_modules(self, model): input_ids = torch.tensor( [ [ 19, 669, 18, 420, 8, 664, 57, 42, 8, 664, 21, 3028, 195, 4445, 331, 1293, 34, 21, 10, 6174, 1100, 6, 69, 104, 42, 32, 2621, 1638, 144, 4, 6174, 558, 108, 4419, 1091, 28, 4, 1668, 9, 1509, 1621, 279, 35, 867, 2734, 85, 11, 2216, 2734, 85, 203, 2244, 7, 6, 15, 8102, 7, 57, 8629, 5, model.config.eos_token_id, ] ], device=model.device, dtype=torch.long, ) attention_mask = torch.tensor( [[True] * input_ids.shape[-1]], device=model.device, dtype=torch.bool, ) self.encoder = _create_traced_encoder(model.get_encoder(), input_ids, attention_mask) encoder_outputs = model.get_encoder()(input_ids, attention_mask=attention_mask, return_dict=True) decoder = model.model.decoder decoder_outputs = decoder(input_ids, attention_mask, encoder_outputs["last_hidden_state"], None, None, None) self.decoder_no_past = _create_traced_decoder( model.model.decoder, input_ids, encoder_outputs["last_hidden_state"], attention_mask ) self.decoder_with_past = _create_traced_decoder( model.model.decoder, input_ids, encoder_outputs["last_hidden_state"], attention_mask, decoder_outputs[1] ) def _encoder_forward(self, input_ids, attention_mask): return self.encoder(input_ids, attention_mask)[0] @staticmethod def _init_sequence_length_for_generation( input_ids: torch.LongTensor, max_length: int ) -> Tuple[torch.Tensor, torch.Tensor, int]: unfinished_sequences = torch.zeros(input_ids.shape[0], dtype=torch.long, device=input_ids.device) + 1 sequence_lengths = torch.zeros(input_ids.shape[0], dtype=torch.long, device=input_ids.device) + max_length cur_len = input_ids.shape[-1] return sequence_lengths, unfinished_sequences, cur_len def _decoder_forward(self, input_ids, encoder_output, attention_mask, past: List[torch.Tensor]): # Update here to use different decoder for different values of past. 
if past is None or len(past) == 0: decoder_output, past = self.decoder_no_past( input_ids=input_ids, encoder_state=encoder_output, attention_mask=attention_mask ) else: decoder_output, past = self.decoder_with_past( input_ids=input_ids, encoder_state=encoder_output, attention_mask=attention_mask, past=past ) lm_logits = F.linear(decoder_output, self.final_logits_weight, bias=self.final_logits_bias) return lm_logits, past def greedy_search( self, input_ids, encoder_output, attention_mask, max_length, pad_token_id: int, eos_token_id: int ): # init sequence length tensors sequence_lengths, unfinished_sequences, cur_len = self._init_sequence_length_for_generation( input_ids, max_length ) past: List[torch.Tensor] = [] while cur_len < max_length: logits, past = self._decoder_forward(input_ids, encoder_output, attention_mask, past) next_token_logits = logits[:, -1, :] # pre-process distribution scores = self.logits_processor(input_ids, next_token_logits) # argmax next_tokens = torch.argmax(scores, dim=-1) # add code that transfomers next_tokens to tokens_to_add if eos_token_id is not None: assert pad_token_id is not None, "If eos_token_id is defined, make sure that pad_token_id is defined." next_tokens = next_tokens * unfinished_sequences + (pad_token_id) * (1 - unfinished_sequences) # add token and increase length by one input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) # update sequence length if eos_token_id is not None: sequence_lengths, unfinished_sequences = self._update_seq_length_for_generation( sequence_lengths, unfinished_sequences, cur_len, next_tokens == eos_token_id ) # stop when there is a </s> in each sentence, or if we exceed the maximul length if unfinished_sequences.max() == 0: break # increase cur_len cur_len = cur_len + 1 return input_ids def _prepare_decoder_input_ids_for_generation( self, input_ids: torch.LongTensor, decoder_start_token_id, bos_token_id: Optional[int] = None, ) -> torch.LongTensor: decoder_input_ids = ( torch.ones((input_ids.shape[0], 1), dtype=input_ids.dtype, device=input_ids.device) * decoder_start_token_id ) return decoder_input_ids def forward(self, input_ids, attention_mask, max_length, decoder_start_token_id): pad_token_id = self.config.pad_token_id bos_token_id = self.config.bos_token_id eos_token_id = self.config.eos_token_id # special case if pad_token_id is not defined if pad_token_id is None and eos_token_id is not None: # Setting `pad_token_id` to `eos_token_id`:{eos_token_id} for open-end generation. 
pad_token_id = eos_token_id encoder_output = self._encoder_forward(input_ids, attention_mask) input_ids = self._prepare_decoder_input_ids_for_generation( input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id, ) return self.greedy_search( input_ids, encoder_output, attention_mask, max_length=max_length, pad_token_id=pad_token_id, eos_token_id=eos_token_id, ) # TorchScript compatible BeamSearchScorer class BeamSearchScorerTS(torch.nn.Module): def __init__(self): super().__init__() self.max_length: int = 200 self.num_beams: int = 3 self.batch_size: int = 1 self.length_penalty: float = 1.0 self.do_early_stopping: bool = True self.num_beam_hyps_to_keep: int = 1 self.num_beam_groups: int = 1 self.group_size: int = self.num_beams // self.num_beam_groups self._done = torch.zeros(self.batch_size, dtype=torch.bool) self._beam_hyps_count = torch.zeros(self.batch_size, dtype=torch.long) self._beam_hyps_worst_scores = torch.zeros(self.batch_size) + 1e9 self._beam_hyps_max_length: int = self.max_length - 1 self._beam_hyps: List[torch.Tensor] = [torch.zeros(2)] # placeholder for TorchScript compatibility self._beam_scores: List[torch.Tensor] = [torch.zeros(2)] # placeholder for TorchScript compatibility def is_done(self) -> torch.Tensor: return self._done.all() def init( self, batch_size: int, max_length: int, num_beams: int, device: torch.device, length_penalty: float = 1.0, do_early_stopping: bool = False, num_beam_hyps_to_keep: int = 1, num_beam_groups: int = 1, ): self.max_length = max_length self.num_beams = num_beams self.batch_size = batch_size self.length_penalty = length_penalty self.do_early_stopping = do_early_stopping self.num_beam_hyps_to_keep = num_beam_hyps_to_keep self.num_beam_groups = num_beam_groups self.group_size = self.num_beams // self.num_beam_groups # NOTE: TorchScript does not support List of Modules # Rewritten BeamHypotheses with tensors and list of tensors. self._done = torch.zeros(batch_size, dtype=torch.bool, device=device) self._beam_hyps_count = torch.zeros(batch_size, dtype=torch.long, device=device) self._beam_hyps_worst_scores = torch.zeros(batch_size, device=device) + 1e9 self._beam_hyps = [] self._beam_scores = [] self._beam_hyps_max_length = max_length - 1 # ignoring bos_token if not isinstance(num_beams, int) or num_beams <= 1: raise ValueError( f"`num_beams` has to be an integer strictly greater than 1, but is {num_beams}. For `num_beams` == 1," " one should make use of `greedy_search` instead." ) if not isinstance(num_beam_groups, int) or (num_beam_groups > num_beams) or (num_beams % num_beam_groups != 0): raise ValueError( "`num_beam_groups` has to be an integer smaller or equal than `num_beams` and `num_beams` has to be" f" divisible by `num_beam_groups`, but is {num_beam_groups} with `num_beams` being {num_beams}." ) def hypo_len(self, hypo_idx: int): """ Number of hypotheses in the list. """ return self._beam_hyps_count[hypo_idx] def hypo_add(self, hyp: torch.Tensor, sum_logprobs: float, hypo_idx: int): """ Add a new hypothesis to the list. """ score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty) hyps_count = self.hypo_len(hypo_idx) if hyps_count < self.num_beams or score > self._beam_hyps_worst_scores[hypo_idx]: # NOTE: work around difference of torch.sum(empty_tensor) == 0, while error in onnx. 
# Bug: https://msdata.visualstudio.com/Vienna/_workitems/edit/1486599 beam_idx = ( torch.sum(self._beam_hyps_count[:hypo_idx]) if hypo_idx != 0 else torch.tensor(0, dtype=torch.long) ) self._beam_scores.insert(beam_idx, torch.tensor([score])) self._beam_hyps.insert(beam_idx, hyp) if hyps_count + 1 > self.num_beams: sorted_next_scores, sorted_indices = torch.topk( torch.cat(self._beam_scores)[beam_idx : beam_idx + hyps_count + 1], hyps_count + 1, largest=False ) del self._beam_hyps[int((sorted_indices[0] + beam_idx))] del self._beam_scores[int((sorted_indices[0] + beam_idx))] self._beam_hyps_worst_scores[hypo_idx] = sorted_next_scores[1] else: self._beam_hyps_worst_scores[hypo_idx] = min(score, self._beam_hyps_worst_scores[hypo_idx]) self._beam_hyps_count[hypo_idx] = hyps_count + 1 def hypo_is_done(self, hypo_idx: int, best_sum_logprobs: float, cur_len: int) -> bool: """ If there are enough hypotheses and that none of the hypotheses being generated can become better than the worst one in the heap, then we are done with this sentence. """ if self.hypo_len(hypo_idx) < self.num_beams: return False elif self.do_early_stopping: return True else: cur_score = best_sum_logprobs / cur_len**self.length_penalty ret = self._beam_hyps_worst_scores[hypo_idx].item() >= cur_score return ret def process( self, input_ids: torch.Tensor, next_scores: torch.Tensor, next_tokens: torch.Tensor, next_indices: torch.Tensor, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: cur_len = input_ids.shape[-1] batch_size = len(self._beam_hyps_count) assert batch_size == (input_ids.shape[0] // self.group_size) device = input_ids.device next_beam_scores = torch.zeros((batch_size, self.group_size), dtype=next_scores.dtype, device=device) next_beam_tokens = torch.zeros((batch_size, self.group_size), dtype=next_tokens.dtype, device=device) next_beam_indices = torch.zeros((batch_size, self.group_size), dtype=next_indices.dtype, device=device) for batch_idx in range(batch_size): if self._done[batch_idx]: assert ( self.hypo_len(batch_idx) >= self.num_beams ), "Batch can only be done if at least {} beams have been generated".format(self.num_beams) assert ( eos_token_id is not None and pad_token_id is not None ), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined" # pad the batch next_beam_scores[batch_idx, :] = 0 next_beam_tokens[batch_idx, :] = pad_token_id next_beam_indices[batch_idx, :] = 0 continue # next tokens for this sentence beam_idx = 0 for beam_token_rank, (next_token, next_score, next_index) in enumerate( zip(next_tokens[batch_idx], next_scores[batch_idx], next_indices[batch_idx]) ): batch_beam_idx = batch_idx * self.group_size + next_index # add to generated hypotheses if end of sentence if (eos_token_id is not None) and (next_token == eos_token_id): # if beam_token does not belong to top num_beams tokens, it should not be added is_beam_token_worse_than_top_num_beams = beam_token_rank >= self.group_size if is_beam_token_worse_than_top_num_beams: continue self.hypo_add( input_ids[batch_beam_idx].clone(), next_score.item(), batch_idx, ) else: # add next predicted token since it is not eos_token next_beam_scores[batch_idx, beam_idx] = next_score next_beam_tokens[batch_idx, beam_idx] = next_token next_beam_indices[batch_idx, beam_idx] = batch_beam_idx beam_idx += 1 # once the beam for next step is full, don't add more tokens to it. 
if beam_idx == self.group_size: break if beam_idx < self.group_size: raise ValueError( f"At most {self.group_size} tokens in {next_tokens[batch_idx]} can be equal to `eos_token_id:" f" {eos_token_id}`. Make sure {next_tokens[batch_idx]} are corrected." ) # Check if we are done so that we can save a pad step if all(done) self._done[batch_idx] = self._done[batch_idx] or self.hypo_is_done( batch_idx, next_scores[batch_idx].max().item(), cur_len, ) return next_beam_scores.view(-1), next_beam_tokens.view(-1), next_beam_indices.view(-1) def finalize( self, input_ids: torch.Tensor, final_beam_scores: torch.Tensor, final_beam_tokens: torch.Tensor, final_beam_indices: torch.Tensor, pad_token_id: int, eos_token_id: int, ) -> Tuple[torch.Tensor, torch.Tensor]: batch_size = len(self._beam_hyps_count) # finalize all open beam hypotheses and add to generated hypotheses for batch_idx in range(batch_size): if self._done[batch_idx]: continue # all open beam hypotheses are added to the beam hypothesis # beam hypothesis class automatically keeps the best beams for beam_id in range(self.num_beams): batch_beam_idx = batch_idx * self.num_beams + beam_id final_score = final_beam_scores[batch_beam_idx].item() final_tokens = input_ids[batch_beam_idx] self.hypo_add(final_tokens, final_score, batch_idx) # select the best hypotheses # NOTE: torch.Tensor.new_zeros() is not scriptable sent_lengths = torch.zeros(batch_size * self.num_beam_hyps_to_keep, dtype=torch.long) best = [] best_scores = torch.zeros( batch_size * self.num_beam_hyps_to_keep, device=input_ids.device, dtype=torch.float32 ) # retrieve best hypotheses for i in range(batch_size): # NOTE: lambda is not scriptable batch_hypo_start = torch.sum(self._beam_hyps_count[:i]) if i > 0 else torch.tensor(0, dtype=torch.long) batch_hypo_end = torch.sum(self._beam_hyps_count[: i + 1]) beam_scores = torch.cat(self._beam_scores)[batch_hypo_start:batch_hypo_end] sorted_next_scores, sorted_indices = torch.topk(beam_scores, len(beam_scores), largest=True) for j in range(self.num_beam_hyps_to_keep): best_score = beam_scores[sorted_indices[j]] best_hyp = self._beam_hyps[batch_hypo_start + sorted_indices[j]] sent_lengths[self.num_beam_hyps_to_keep * i + j] = len(best_hyp) # append to lists best.append(best_hyp) best_scores[i * self.num_beam_hyps_to_keep + j] = best_score # prepare for adding eos sent_max_len = min(sent_lengths.max() + 1, self.max_length) decoded = torch.zeros(batch_size * self.num_beam_hyps_to_keep, sent_max_len, dtype=torch.long) # shorter batches are padded if needed if sent_lengths.min() != sent_lengths.max(): assert pad_token_id is not None, "`pad_token_id` has to be defined" decoded.fill_(pad_token_id) # fill with hypotheses and eos_token_id if the latter fits in for i, hypo in enumerate(best): decoded[i, : sent_lengths[i]] = hypo if sent_lengths[i] < self.max_length: decoded[i, sent_lengths[i]] = eos_token_id return decoded, best_scores class BARTBeamSearchGenerator(BARTGenerator): def __init__(self, model): super().__init__(model) self.beam_scorer = BeamSearchScorerTS() self.device = model.device @staticmethod def _expand_inputs_for_generation( input_ids: torch.Tensor, attention_mask: torch.Tensor, last_hidden_state: torch.Tensor, expand_size: int = 1, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: expanded_return_idx = ( torch.arange(input_ids.shape[0]).view(-1, 1).repeat(1, expand_size).view(-1).to(input_ids.device) ) input_ids = input_ids.index_select(0, expanded_return_idx) attention_mask = attention_mask.index_select(0, 
expanded_return_idx) last_hidden_state = last_hidden_state.index_select(0, expanded_return_idx.to(last_hidden_state.device)) return input_ids, attention_mask, last_hidden_state def adjust_logits_during_generation(self, logits, cur_len: int, max_length: int): if cur_len == 1 and self.config.force_bos_token_to_be_generated: logits = self._force_token_id_to_be_generated(logits, self.config.bos_token_id) elif cur_len == max_length - 1 and self.config.eos_token_id is not None: logits = self._force_token_id_to_be_generated(logits, self.config.eos_token_id) return logits @staticmethod def _force_token_id_to_be_generated(scores, token_id: int): """force one of token_ids to be generated by setting prob of all other tokens to 0 (logprob=-float("inf"))""" mask = torch.full_like(scores, 1, dtype=torch.bool) mask[:, token_id] = False return scores.masked_fill(mask, -float("inf")) def _reorder_cache(self, past: List[torch.Tensor], beam_idx): # if decoder past is not included in output # speedy decoding is disabled and no need to reorder reordered_decoder_past = [] for state in past: reordered_decoder_past.append(state.index_select(0, beam_idx)) return reordered_decoder_past def beam_search( self, input_ids, encoder_output, attention_mask, num_beams, max_length, pad_token_id: int, eos_token_id: int ): batch_size = self.beam_scorer.batch_size num_beams = self.beam_scorer.num_beams batch_beam_size, cur_len = input_ids.shape assert ( num_beams * batch_size == batch_beam_size ), f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}." beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device) beam_scores[:, 1:] = -1e9 beam_scores = beam_scores.view((batch_size * num_beams,)) next_tokens = torch.zeros((batch_size, num_beams), dtype=torch.long, device=input_ids.device) next_indices = torch.zeros((batch_size, num_beams), dtype=torch.long, device=input_ids.device) past: List[torch.Tensor] = [] while cur_len < max_length: logits, past = self._decoder_forward(input_ids, encoder_output, attention_mask, past) next_token_logits = logits[:, -1, :] # adjust tokens for Bart, *e.g.* next_token_logits = self.adjust_logits_during_generation( next_token_logits, cur_len=cur_len, max_length=max_length ) next_token_scores = F.log_softmax(next_token_logits, dim=-1) # (batch_size * num_beams, vocab_size) # pre-process distribution next_token_scores = self.logits_processor(input_ids, next_token_scores) next_token_scores = next_token_scores + beam_scores[:, None].expand_as(next_token_scores) # reshape for beam search vocab_size = next_token_scores.shape[-1] next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size) next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True ) next_indices = next_tokens // vocab_size next_tokens = next_tokens % vocab_size beam_scores, beam_next_tokens, beam_idx = self.beam_scorer.process( input_ids, next_token_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id, ) input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) cur_len = cur_len + 1 if len(past) > 0: past = self._reorder_cache(past, beam_idx) if self.beam_scorer.is_done(): break sequences, sequence_scores = self.beam_scorer.finalize( input_ids, beam_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id, ) return sequences def forward(self, input_ids, attention_mask, num_beams, max_length, 
decoder_start_token_id): pad_token_id = self.config.pad_token_id bos_token_id = self.config.bos_token_id eos_token_id = self.config.eos_token_id # special case if pad_token_id is not defined if pad_token_id is None and eos_token_id is not None: # logger.warning(f"Setting `pad_token_id` to `eos_token_id`:{eos_token_id} for open-end generation.") pad_token_id = eos_token_id encoder_output = self._encoder_forward(input_ids, attention_mask) input_ids = self._prepare_decoder_input_ids_for_generation( input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id, ) batch_size = input_ids.shape[0] length_penalty = self.config.length_penalty num_return_sequences = self.config.num_return_sequences early_stopping = True self.beam_scorer.init( batch_size=batch_size, max_length=max_length, num_beams=num_beams, device=self.device, length_penalty=length_penalty, do_early_stopping=early_stopping, num_beam_hyps_to_keep=num_return_sequences, ) input_ids, attention_mask, encoder_output = self._expand_inputs_for_generation( input_ids, attention_mask, encoder_output, expand_size=num_beams, ) return self.beam_search( input_ids=input_ids, encoder_output=encoder_output, attention_mask=attention_mask, num_beams=num_beams, max_length=max_length, pad_token_id=pad_token_id, eos_token_id=eos_token_id, )
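# Illustrative usage sketch: the classes above are written to be TorchScript-friendly, so, assuming
# `model` is a loaded BART model and `input_ids`/`attention_mask` come from its tokenizer, they could
# plausibly be driven like this (argument names follow the `forward` signature above; the concrete
# values are examples only, not values mandated by this file):
#   generator = BARTBeamSearchGenerator(model)
#   summaries = generator(input_ids, attention_mask, num_beams=4, max_length=142,
#                         decoder_start_token_id=model.config.decoder_start_token_id)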
transformers/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py/0
{ "file_path": "transformers/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py", "repo_id": "transformers", "token_count": 15163 }
311
#! /usr/bin/env python3 # coding=utf-8 # Copyright (c) 2019 Uber Technologies, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import csv import json import math import time import numpy as np import torch import torch.optim as optim import torch.utils.data as data from nltk.tokenize.treebank import TreebankWordDetokenizer from pplm_classification_head import ClassificationHead from torch import nn from torchtext import data as torchtext_data from torchtext import datasets from tqdm import tqdm, trange from transformers import GPT2LMHeadModel, GPT2Tokenizer torch.manual_seed(0) np.random.seed(0) EPSILON = 1e-10 example_sentence = "This is incredible! I love it, this is the best chicken I have ever had." max_length_seq = 100 class Discriminator(nn.Module): """Transformer encoder followed by a Classification Head""" def __init__(self, class_size, pretrained_model="openai-community/gpt2-medium", cached_mode=False, device="cpu"): super().__init__() self.tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model) self.encoder = GPT2LMHeadModel.from_pretrained(pretrained_model) self.embed_size = self.encoder.transformer.config.hidden_size self.classifier_head = ClassificationHead(class_size=class_size, embed_size=self.embed_size) self.cached_mode = cached_mode self.device = device def get_classifier(self): return self.classifier_head def train_custom(self): for param in self.encoder.parameters(): param.requires_grad = False self.classifier_head.train() def avg_representation(self, x): mask = x.ne(0).unsqueeze(2).repeat(1, 1, self.embed_size).float().to(self.device).detach() hidden = self.encoder.transformer(x)["last_hidden_state"] masked_hidden = hidden * mask avg_hidden = torch.sum(masked_hidden, dim=1) / (torch.sum(mask, dim=1).detach() + EPSILON) return avg_hidden def forward(self, x): if self.cached_mode: avg_hidden = x.to(self.device) else: avg_hidden = self.avg_representation(x.to(self.device)) logits = self.classifier_head(avg_hidden) probs = nn.functional.log_softmax(logits, dim=-1) return probs class Dataset(data.Dataset): def __init__(self, X, y): """Reads source and target sequences from txt files.""" self.X = X self.y = y def __len__(self): return len(self.X) def __getitem__(self, index): """Returns one data pair (source and target).""" data = {} data["X"] = self.X[index] data["y"] = self.y[index] return data def collate_fn(data): def pad_sequences(sequences): lengths = [len(seq) for seq in sequences] padded_sequences = torch.zeros(len(sequences), max(lengths)).long() # padding value = 0 for i, seq in enumerate(sequences): end = lengths[i] padded_sequences[i, :end] = seq[:end] return padded_sequences, lengths item_info = {} for key in data[0].keys(): item_info[key] = [d[key] for d in data] x_batch, _ = pad_sequences(item_info["X"]) y_batch = torch.tensor(item_info["y"], dtype=torch.long) return x_batch, y_batch def cached_collate_fn(data): item_info = {} for key in data[0].keys(): item_info[key] = [d[key] for d in data] x_batch = torch.cat(item_info["X"], 0) y_batch = 
torch.tensor(item_info["y"], dtype=torch.long) return x_batch, y_batch def train_epoch(data_loader, discriminator, optimizer, epoch=0, log_interval=10, device="cpu"): samples_so_far = 0 discriminator.train_custom() for batch_idx, (input_t, target_t) in enumerate(data_loader): input_t, target_t = input_t.to(device), target_t.to(device) optimizer.zero_grad() output_t = discriminator(input_t) loss = nn.functional.nll_loss(output_t, target_t) loss.backward(retain_graph=True) optimizer.step() samples_so_far += len(input_t) if batch_idx % log_interval == 0: print( "Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format( epoch + 1, samples_so_far, len(data_loader.dataset), 100 * samples_so_far / len(data_loader.dataset), loss.item(), ) ) def evaluate_performance(data_loader, discriminator, device="cpu"): discriminator.eval() test_loss = 0 correct = 0 with torch.no_grad(): for input_t, target_t in data_loader: input_t, target_t = input_t.to(device), target_t.to(device) output_t = discriminator(input_t) # sum up batch loss test_loss += nn.functional.nll_loss(output_t, target_t, reduction="sum").item() # get the index of the max log-probability pred_t = output_t.argmax(dim=1, keepdim=True) correct += pred_t.eq(target_t.view_as(pred_t)).sum().item() test_loss /= len(data_loader.dataset) print( "Performance on test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)".format( test_loss, correct, len(data_loader.dataset), 100.0 * correct / len(data_loader.dataset) ) ) def predict(input_sentence, model, classes, cached=False, device="cpu"): input_t = model.tokenizer.encode(input_sentence) input_t = torch.tensor([input_t], dtype=torch.long, device=device) if cached: input_t = model.avg_representation(input_t) log_probs = model(input_t).data.cpu().numpy().flatten().tolist() print("Input sentence:", input_sentence) print( "Predictions:", ", ".join("{}: {:.4f}".format(c, math.exp(log_prob)) for c, log_prob in zip(classes, log_probs)), ) def get_cached_data_loader(dataset, batch_size, discriminator, shuffle=False, device="cpu"): data_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=batch_size, collate_fn=collate_fn) xs = [] ys = [] for batch_idx, (x, y) in enumerate(tqdm(data_loader, ascii=True)): with torch.no_grad(): x = x.to(device) avg_rep = discriminator.avg_representation(x).cpu().detach() avg_rep_list = torch.unbind(avg_rep.unsqueeze(1)) xs += avg_rep_list ys += y.cpu().numpy().tolist() data_loader = torch.utils.data.DataLoader( dataset=Dataset(xs, ys), batch_size=batch_size, shuffle=shuffle, collate_fn=cached_collate_fn ) return data_loader def train_discriminator( dataset, dataset_fp=None, pretrained_model="openai-community/gpt2-medium", epochs=10, batch_size=64, log_interval=10, save_model=False, cached=False, no_cuda=False, ): device = "cuda" if torch.cuda.is_available() and not no_cuda else "cpu" print("Preprocessing {} dataset...".format(dataset)) start = time.time() if dataset == "SST": idx2class = ["positive", "negative", "very positive", "very negative", "neutral"] class2idx = {c: i for i, c in enumerate(idx2class)} discriminator = Discriminator( class_size=len(idx2class), pretrained_model=pretrained_model, cached_mode=cached, device=device ).to(device) text = torchtext_data.Field() label = torchtext_data.Field(sequential=False) train_data, val_data, test_data = datasets.SST.splits( text, label, fine_grained=True, train_subtrees=True, ) x = [] y = [] for i in trange(len(train_data), ascii=True): seq = TreebankWordDetokenizer().detokenize(vars(train_data[i])["text"]) seq = 
discriminator.tokenizer.encode(seq) seq = torch.tensor([50256] + seq, device=device, dtype=torch.long) x.append(seq) y.append(class2idx[vars(train_data[i])["label"]]) train_dataset = Dataset(x, y) test_x = [] test_y = [] for i in trange(len(test_data), ascii=True): seq = TreebankWordDetokenizer().detokenize(vars(test_data[i])["text"]) seq = discriminator.tokenizer.encode(seq) seq = torch.tensor([50256] + seq, device=device, dtype=torch.long) test_x.append(seq) test_y.append(class2idx[vars(test_data[i])["label"]]) test_dataset = Dataset(test_x, test_y) discriminator_meta = { "class_size": len(idx2class), "embed_size": discriminator.embed_size, "pretrained_model": pretrained_model, "class_vocab": class2idx, "default_class": 2, } elif dataset == "clickbait": idx2class = ["non_clickbait", "clickbait"] class2idx = {c: i for i, c in enumerate(idx2class)} discriminator = Discriminator( class_size=len(idx2class), pretrained_model=pretrained_model, cached_mode=cached, device=device ).to(device) with open("datasets/clickbait/clickbait_train_prefix.txt") as f: data = [] for i, line in enumerate(f): try: data.append(eval(line)) except Exception: print("Error evaluating line {}: {}".format(i, line)) continue x = [] y = [] with open("datasets/clickbait/clickbait_train_prefix.txt") as f: for i, line in enumerate(tqdm(f, ascii=True)): try: d = eval(line) seq = discriminator.tokenizer.encode(d["text"]) if len(seq) < max_length_seq: seq = torch.tensor([50256] + seq, device=device, dtype=torch.long) else: print("Line {} is longer than maximum length {}".format(i, max_length_seq)) continue x.append(seq) y.append(d["label"]) except Exception: print("Error evaluating / tokenizing line {}, skipping it".format(i)) pass full_dataset = Dataset(x, y) train_size = int(0.9 * len(full_dataset)) test_size = len(full_dataset) - train_size train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size]) discriminator_meta = { "class_size": len(idx2class), "embed_size": discriminator.embed_size, "pretrained_model": pretrained_model, "class_vocab": class2idx, "default_class": 1, } elif dataset == "toxic": idx2class = ["non_toxic", "toxic"] class2idx = {c: i for i, c in enumerate(idx2class)} discriminator = Discriminator( class_size=len(idx2class), pretrained_model=pretrained_model, cached_mode=cached, device=device ).to(device) x = [] y = [] with open("datasets/toxic/toxic_train.txt") as f: for i, line in enumerate(tqdm(f, ascii=True)): try: d = eval(line) seq = discriminator.tokenizer.encode(d["text"]) if len(seq) < max_length_seq: seq = torch.tensor([50256] + seq, device=device, dtype=torch.long) else: print("Line {} is longer than maximum length {}".format(i, max_length_seq)) continue x.append(seq) y.append(int(np.sum(d["label"]) > 0)) except Exception: print("Error evaluating / tokenizing line {}, skipping it".format(i)) pass full_dataset = Dataset(x, y) train_size = int(0.9 * len(full_dataset)) test_size = len(full_dataset) - train_size train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size]) discriminator_meta = { "class_size": len(idx2class), "embed_size": discriminator.embed_size, "pretrained_model": pretrained_model, "class_vocab": class2idx, "default_class": 0, } else: # if dataset == "generic": # This assumes the input dataset is a TSV with the following structure: # class \t text if dataset_fp is None: raise ValueError("When generic dataset is selected, dataset_fp needs to be specified aswell.") classes = set() with open(dataset_fp) as 
f: csv_reader = csv.reader(f, delimiter="\t") for row in tqdm(csv_reader, ascii=True): if row: classes.add(row[0]) idx2class = sorted(classes) class2idx = {c: i for i, c in enumerate(idx2class)} discriminator = Discriminator( class_size=len(idx2class), pretrained_model=pretrained_model, cached_mode=cached, device=device ).to(device) x = [] y = [] with open(dataset_fp) as f: csv_reader = csv.reader(f, delimiter="\t") for i, row in enumerate(tqdm(csv_reader, ascii=True)): if row: label = row[0] text = row[1] try: seq = discriminator.tokenizer.encode(text) if len(seq) < max_length_seq: seq = torch.tensor([50256] + seq, device=device, dtype=torch.long) else: print("Line {} is longer than maximum length {}".format(i, max_length_seq)) continue x.append(seq) y.append(class2idx[label]) except Exception: print("Error tokenizing line {}, skipping it".format(i)) pass full_dataset = Dataset(x, y) train_size = int(0.9 * len(full_dataset)) test_size = len(full_dataset) - train_size train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size]) discriminator_meta = { "class_size": len(idx2class), "embed_size": discriminator.embed_size, "pretrained_model": pretrained_model, "class_vocab": class2idx, "default_class": 0, } end = time.time() print("Preprocessed {} data points".format(len(train_dataset) + len(test_dataset))) print("Data preprocessing took: {:.3f}s".format(end - start)) if cached: print("Building representation cache...") start = time.time() train_loader = get_cached_data_loader(train_dataset, batch_size, discriminator, shuffle=True, device=device) test_loader = get_cached_data_loader(test_dataset, batch_size, discriminator, device=device) end = time.time() print("Building representation cache took: {:.3f}s".format(end - start)) else: train_loader = torch.utils.data.DataLoader( dataset=train_dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn ) test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, collate_fn=collate_fn) if save_model: with open("{}_classifier_head_meta.json".format(dataset), "w") as meta_file: json.dump(discriminator_meta, meta_file) optimizer = optim.Adam(discriminator.parameters(), lr=0.0001) for epoch in range(epochs): start = time.time() print("\nEpoch", epoch + 1) train_epoch( discriminator=discriminator, data_loader=train_loader, optimizer=optimizer, epoch=epoch, log_interval=log_interval, device=device, ) evaluate_performance(data_loader=test_loader, discriminator=discriminator, device=device) end = time.time() print("Epoch took: {:.3f}s".format(end - start)) print("\nExample prediction") predict(example_sentence, discriminator, idx2class, cached=cached, device=device) if save_model: # torch.save(discriminator.state_dict(), # "{}_discriminator_{}.pt".format( # args.dataset, epoch + 1 # )) torch.save( discriminator.get_classifier().state_dict(), "{}_classifier_head_epoch_{}.pt".format(dataset, epoch + 1), ) if __name__ == "__main__": parser = argparse.ArgumentParser(description="Train a discriminator on top of GPT-2 representations") parser.add_argument( "--dataset", type=str, default="SST", choices=("SST", "clickbait", "toxic", "generic"), help=( "dataset to train the discriminator on. " "In case of generic, the dataset is expected " "to be a TSBV file with structure: class \\t text" ), ) parser.add_argument( "--dataset_fp", type=str, default="", help="File path of the dataset to use. 
Needed only in case of generic dataset", ) parser.add_argument( "--pretrained_model", type=str, default="openai-community/gpt2-medium", help="Pretrained model to use as encoder", ) parser.add_argument("--epochs", type=int, default=10, metavar="N", help="Number of training epochs") parser.add_argument( "--batch_size", type=int, default=64, metavar="N", help="input batch size for training (default: 64)" ) parser.add_argument( "--log_interval", type=int, default=10, metavar="N", help="how many batches to wait before logging training status", ) parser.add_argument("--save_model", action="store_true", help="whether to save the model") parser.add_argument("--cached", action="store_true", help="whether to cache the input representations") parser.add_argument("--no_cuda", action="store_true", help="use to turn off cuda") args = parser.parse_args() train_discriminator(**(vars(args)))
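# Example invocation (illustrative only: the dataset choice and hyperparameter values are assumptions,
# not requirements; every flag shown is defined in the argparse setup above):
#   python run_pplm_discrim_train.py --dataset SST --pretrained_model openai-community/gpt2-medium \
#       --epochs 10 --batch_size 64 --cached --save_model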
transformers/examples/research_projects/pplm/run_pplm_discrim_train.py/0
{ "file_path": "transformers/examples/research_projects/pplm/run_pplm_discrim_train.py", "repo_id": "transformers", "token_count": 8849 }
312
#!/usr/bin/env python3 import argparse import re from typing import Dict import torch from datasets import Audio, Dataset, load_dataset, load_metric from transformers import AutoFeatureExtractor, pipeline def log_results(result: Dataset, args: Dict[str, str]): """DO NOT CHANGE. This function computes and logs the result metrics.""" log_outputs = args.log_outputs dataset_id = "_".join(args.dataset.split("/") + [args.config, args.split]) # load metric wer = load_metric("wer") cer = load_metric("cer") # compute metrics wer_result = wer.compute(references=result["target"], predictions=result["prediction"]) cer_result = cer.compute(references=result["target"], predictions=result["prediction"]) # print & log results result_str = f"WER: {wer_result}\nCER: {cer_result}" print(result_str) with open(f"{dataset_id}_eval_results.txt", "w") as f: f.write(result_str) # log all results in text file. Possibly interesting for analysis if log_outputs is not None: pred_file = f"log_{dataset_id}_predictions.txt" target_file = f"log_{dataset_id}_targets.txt" with open(pred_file, "w") as p, open(target_file, "w") as t: # mapping function to write output def write_to_file(batch, i): p.write(f"{i}" + "\n") p.write(batch["prediction"] + "\n") t.write(f"{i}" + "\n") t.write(batch["target"] + "\n") result.map(write_to_file, with_indices=True) def normalize_text(text: str) -> str: """DO ADAPT FOR YOUR USE CASE. this function normalizes the target text.""" chars_to_ignore_regex = '[,?.!\-\;\:"“%‘”ᅵ—’ –]' # noqa: W605 IMPORTANT: this should correspond to the chars that were ignored during training text = re.sub(chars_to_ignore_regex, "", text.lower()) # In addition, we can normalize the target text, e.g. removing new lines characters etc... # note that order is important here! token_sequences_to_ignore = ["\n\n", "\n", " ", " "] for t in token_sequences_to_ignore: text = " ".join(text.split(t)) return text def main(args): # load dataset dataset = load_dataset(args.dataset, args.config, split=args.split, token=True) # for testing: only process the first two examples as a test # dataset = dataset.select(range(10)) # load processor feature_extractor = AutoFeatureExtractor.from_pretrained(args.model_id) sampling_rate = feature_extractor.sampling_rate # resample audio dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate)) # load eval pipeline if args.device is None: args.device = 0 if torch.cuda.is_available() else -1 asr = pipeline("automatic-speech-recognition", model=args.model_id, device=args.device) # map function to decode audio def map_to_pred(batch): prediction = asr( batch["audio"]["array"], chunk_length_s=args.chunk_length_s, stride_length_s=args.stride_length_s ) batch["prediction"] = prediction["text"] batch["target"] = normalize_text(batch["sentence"]) return batch # run inference on all examples result = dataset.map(map_to_pred, remove_columns=dataset.column_names) # compute and log_results # do not change function below log_results(result, args) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--model_id", type=str, required=True, help="Model identifier. Should be loadable with 🀗 Transformers" ) parser.add_argument( "--dataset", type=str, required=True, help="Dataset name to evaluate the `model_id`. Should be loadable with 🀗 Datasets", ) parser.add_argument( "--config", type=str, required=True, help="Config of the dataset. *E.g.* `'en'` for Common Voice" ) parser.add_argument("--split", type=str, required=True, help="Split of the dataset. 
*E.g.* `'test'`") parser.add_argument( "--chunk_length_s", type=float, default=None, help="Chunk length in seconds. Defaults to 5 seconds." ) parser.add_argument( "--stride_length_s", type=float, default=None, help="Stride of the audio chunks. Defaults to 1 second." ) parser.add_argument( "--log_outputs", action="store_true", help="If defined, write outputs to log file for analysis." ) parser.add_argument( "--device", type=int, default=None, help="The device to run the pipeline on. -1 for CPU (default), 0 for the first GPU and so on.", ) args = parser.parse_args() main(args)
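# Example usage (illustrative; the angle-bracketed identifiers are placeholders, not real model or
# dataset names, and every flag shown is defined in the argparse setup above):
#   python eval.py --model_id <your-finetuned-model> --dataset <dataset-name> --config <config> \
#       --split test --chunk_length_s 5.0 --stride_length_s 1.0 --log_outputs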
transformers/examples/research_projects/robust-speech-event/eval.py/0
{ "file_path": "transformers/examples/research_projects/robust-speech-event/eval.py", "repo_id": "transformers", "token_count": 1852 }
313
#!/usr/bin/env bash export PYTHONPATH="../":"${PYTHONPATH}" export WANDB_PROJECT=dmar export MAX_LEN=128 python finetune.py \ --learning_rate=3e-4 \ --do_train \ --do_predict \ --fp16 \ --val_check_interval 0.25 \ --data_dir $ENRO_DIR \ --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \ --freeze_encoder --freeze_embeds \ --train_batch_size=$BS --eval_batch_size=$BS \ --tokenizer_name $m --model_name_or_path $m \ --warmup_steps 500 --sortish_sampler --logger_name wandb \ --gpus 1 --fp16_opt_level=O1 --task translation --num_sanity_val_steps=0 \ "$@"
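# NOTE: this script assumes $ENRO_DIR (dataset directory), $BS (batch size) and $m (model name or
# path) have been set in the environment by the caller before it is run.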
transformers/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh/0
{ "file_path": "transformers/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh", "repo_id": "transformers", "token_count": 274 }
314
#!/usr/bin/env bash export PYTHONPATH="../":"${PYTHONPATH}" python finetune.py \ --learning_rate=3e-5 \ --fp16 \ --do_train \ --val_check_interval=0.25 \ --adam_eps 1e-06 \ --num_train_epochs 6 --src_lang en_XX --tgt_lang ro_RO \ --data_dir $ENRO_DIR \ --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \ --train_batch_size=$BS --eval_batch_size=$BS \ --task translation \ --warmup_steps 500 \ --freeze_embeds \ --model_name_or_path=facebook/mbart-large-cc25 \ "$@"
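# NOTE: this script assumes $ENRO_DIR (dataset directory), $MAX_LEN (max sequence length) and
# $BS (batch size) have been set in the environment by the caller before it is run.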
transformers/examples/research_projects/seq2seq-distillation/train_mbart_cc25_enro.sh/0
{ "file_path": "transformers/examples/research_projects/seq2seq-distillation/train_mbart_cc25_enro.sh", "repo_id": "transformers", "token_count": 273 }
315
#!/usr/bin/env bash python run_asr.py \ --output_dir="./wav2vec2-large-lv60-timit-asr" \ --num_train_epochs="30" \ --per_device_train_batch_size="2" \ --per_device_eval_batch_size="2" \ --gradient_accumulation_steps="4" \ --eval_strategy="steps" \ --save_steps="500" \ --eval_steps="100" \ --logging_steps="50" \ --learning_rate="5e-4" \ --warmup_steps="3000" \ --model_name_or_path="facebook/wav2vec2-large-lv60" \ --fp16 \ --dataset_name="timit_asr" \ --train_split_name="train" \ --validation_split_name="test" \ --orthography="timit" \ --preprocessing_num_workers="$(nproc)" \ --group_by_length \ --freeze_feature_extractor \ --verbose_logging \
transformers/examples/research_projects/wav2vec2/finetune_large_lv60_timit_asr.sh/0
{ "file_path": "transformers/examples/research_projects/wav2vec2/finetune_large_lv60_timit_asr.sh", "repo_id": "transformers", "token_count": 275 }
316
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Examples This folder contains actively maintained examples of the use of 🀗 Transformers organized into different ML tasks. All examples in this folder are **TensorFlow** examples and are written using native Keras. If you've previously only used 🀗 Transformers via `TFTrainer`, we highly recommend taking a look at the new style - we think it's a big improvement! In addition, all scripts here now support the [🀗 Datasets](https://github.com/huggingface/datasets) library - you can grab entire datasets just by changing one command-line argument! ## A note on code folding Most of these examples have been formatted with #region blocks. In IDEs such as PyCharm and VSCode, these blocks mark named regions of code that can be folded for easier viewing. If you find any of these scripts overwhelming or difficult to follow, we highly recommend beginning with all regions folded and then examining regions one at a time! ## The Big Table of Tasks Here is the list of all our examples: | Task | Example datasets | |---|---| | [**`language-modeling`**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling) | WikiText-2 | [**`multiple-choice`**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) | SWAG | [**`question-answering`**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) | SQuAD | [**`summarization`**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) | XSum | [**`text-classification`**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) | GLUE | [**`token-classification`**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) | CoNLL NER | [**`translation`**](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) | WMT ## Coming soon - **Colab notebooks** to easily run through these scripts!
transformers/examples/tensorflow/README.md/0
{ "file_path": "transformers/examples/tensorflow/README.md", "repo_id": "transformers", "token_count": 731 }
317
#!/usr/bin/env python # coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Script for training a Unigram tokenizer.""" import argparse import logging import datasets from tokenizers import Tokenizer, decoders, normalizers, pre_tokenizers, processors from tokenizers.models import Unigram from tokenizers.trainers import UnigramTrainer from transformers import AlbertTokenizerFast logger = logging.getLogger(__name__) def parse_args(): parser = argparse.ArgumentParser(description="Train a unigram tokenizer on the wikitext dataset.") parser.add_argument( "--dataset_name", type=str, default="wikitext", help="Name of the training. Explore datasets at: hf.co/datasets.", ) parser.add_argument( "--dataset_config", type=str, default="wikitext-103-raw-v1", help="Configuration name of the dataset." ) parser.add_argument( "--trust_remote_code", action="store_true", help=( "Whether to trust the execution of code from datasets/models defined on the Hub." " This option should only be set to `True` for repositories you trust and in which you have read the" " code, as it will execute code present on the Hub on your local machine." ), ) parser.add_argument( "--batch_size", type=int, default=1000, help="Batch size during training.", ) parser.add_argument( "--vocab_size", type=int, default=10048, help="Size of the desired vocabulary.", ) parser.add_argument( "--limit", default=None, type=int, help="Limit the number of shards (used for debugging).", ) parser.add_argument( "--export_to_hub", action="store_true", ) args = parser.parse_args() return args def main(args): dataset = datasets.load_dataset( args.dataset_name, args.dataset_config, split="train", trust_remote_code=args.trust_remote_code ) if args.limit is not None: max_train_samples = min(len(dataset), args.limit) dataset = dataset.select(range(max_train_samples)) logger.info(f"Limiting the dataset to {args.limit} entries.") def batch_iterator(): for i in range(0, len(dataset), args.batch_size): yield dataset[i : i + args.batch_size]["text"] # Prepare the tokenizer. tokenizer = Tokenizer(Unigram()) tokenizer.normalizer = normalizers.Sequence([normalizers.Replace("``", '"'), normalizers.Replace("''", '"')]) tokenizer.pre_tokenizer = pre_tokenizers.Metaspace() # Prepare the trainer. 
trainer = UnigramTrainer( unk_token="<unk>", special_tokens=["[CLS]", "[SEP]", "<unk>", "<pad>", "[MASK]"], vocab_size=args.vocab_size, ) logger.info("Training the tokenizer.") tokenizer.train_from_iterator(batch_iterator(), trainer=trainer) logger.info("Tokenizer training complete!") cls_token_id = tokenizer.token_to_id("[CLS]") sep_token_id = tokenizer.token_to_id("[SEP]") tokenizer.post_processor = processors.TemplateProcessing( single="[CLS]:0 $A:0 [SEP]:0", pair="[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1", special_tokens=[ ("[CLS]", cls_token_id), ("[SEP]", sep_token_id), ], ) tokenizer.decoder = decoders.Metaspace() if args.export_to_hub: logger.info("Exporting the trained tokenizer to Hub.") new_tokenizer = AlbertTokenizerFast(tokenizer_object=tokenizer) new_tokenizer.push_to_hub("unigram-tokenizer-dataset") if __name__ == "__main__": args = parse_args() main(args)
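# Example invocation (illustrative; the dataset and size values simply restate the argparse defaults
# above, and --export_to_hub is optional):
#   python train_unigram.py --dataset_name wikitext --dataset_config wikitext-103-raw-v1 \
#       --batch_size 1000 --vocab_size 10048 --export_to_hub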
transformers/examples/tensorflow/language-modeling-tpu/train_unigram.py/0
{ "file_path": "transformers/examples/tensorflow/language-modeling-tpu/train_unigram.py", "repo_id": "transformers", "token_count": 1672 }
318
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Text classification examples This folder contains some scripts showing examples of *text classification* with the 🀗 Transformers library. For straightforward use-cases you may be able to use these scripts without modification, although we have also included comments in the code to indicate areas that you may need to adapt to your own projects. ## run_text_classification.py This script handles perhaps the single most common use-case for this entire library: Training an NLP classifier on your own training data. This can be whatever you want - you could classify text as abusive/hateful or allowable, or forum posts as spam or not-spam, or classify the genre of a headline as politics, sports or any number of other categories. Any task that involves classifying natural language into two or more different categories can work with this! You can even do regression, such as predicting the score on a 1-10 scale that a user gave, given the text of their review. The preferred input format is either a CSV or newline-delimited JSON file that contains a `sentence1` and `label` field. If your task involves comparing two texts (for example, if your classifier is deciding whether two sentences are paraphrases of each other, or were written by the same author) then you should also include a `sentence2` field in each example. If you do not have a `sentence1` field then the script will assume the non-label fields are the input text, which may not always be what you want, especially if you have more than two fields! Here is a snippet of a valid input JSON file, though note that your texts can be much longer than these, and are not constrained (despite the field name) to being single grammatical sentences: ```json {"sentence1": "COVID-19 vaccine updates: How is the rollout proceeding?", "label": "news"} {"sentence1": "Manchester United celebrates Europa League success", "label": "sports"} ``` ### Usage notes If your inputs are long (more than ~60-70 words), you may wish to increase the `--max_seq_length` argument beyond the default value of 128. The maximum supported value for most models is 512 (about 200-300 words), and some can handle even longer. This will come at a cost in runtime and memory use, however. We assume that your labels represent *categories*, even if they are integers, since text classification is a much more common task than text regression. If your labels are floats, however, the script will assume you want to do regression. This is something you can edit yourself if your use-case requires it! After training, the model will be saved to `--output_dir`. Once your model is trained, you can get predictions by calling the script without a `--train_file` or `--validation_file`; simply pass it the output_dir containing the trained model and a `--test_file` and it will write its predictions to a text file for you. 
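For example, a prediction-only run might look like the following sketch (the paths are placeholders, and pointing `--model_name_or_path` at your training output directory is an assumption you should adapt to your own setup):

```bash
python run_text_classification.py \
  --model_name_or_path output/ \
  --output_dir output/ \
  --test_file data_to_predict.json \
  --do_predict
```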
### Multi-GPU and TPU usage By default, the script uses a `MirroredStrategy` and will use multiple GPUs effectively if they are available. TPUs can also be used by passing the name of the TPU resource with the `--tpu` argument. ### Memory usage and data loading One thing to note is that all data is loaded into memory in this script. Most text classification datasets are small enough that this is not an issue, but if you have a very large dataset you will need to modify the script to handle data streaming. This is particularly challenging for TPUs, given the stricter requirements and the sheer volume of data required to keep them fed. A full explanation of all the possible pitfalls is a bit beyond this example script and README, but for more information you can see the 'Input Datasets' section of [this document](https://www.tensorflow.org/guide/tpu). ### Example command ```bash python run_text_classification.py \ --model_name_or_path distilbert/distilbert-base-cased \ --train_file training_data.json \ --validation_file validation_data.json \ --output_dir output/ \ --test_file data_to_predict.json \ --do_train \ --do_eval \ --do_predict ``` ## run_glue.py This script handles training on the GLUE dataset for various text classification and regression tasks. The GLUE datasets will be loaded automatically, so you only need to specify the task you want (with the `--task_name` argument). You can also supply your own files for prediction with the `--predict_file` argument, for example if you want to train a model on GLUE for e.g. paraphrase detection and then predict whether your own data contains paraphrases or not. Please ensure the names of your input fields match the names of the features in the relevant GLUE dataset - you can see a list of the column names in the `task_to_keys` dict in the `run_glue.py` file. ### Usage notes The `--do_train`, `--do_eval` and `--do_predict` arguments control whether training, evaluations or predictions are performed. After training, the model will be saved to `--output_dir`. Once your model is trained, you can call the script without the `--do_train` or `--do_eval` arguments to quickly get predictions from your saved model. ### Multi-GPU and TPU usage By default, the script uses a `MirroredStrategy` and will use multiple GPUs effectively if they are available. TPUs can also be used by passing the name of the TPU resource with the `--tpu` argument. ### Memory usage and data loading One thing to note is that all data is loaded into memory in this script. Most text classification datasets are small enough that this is not an issue, but if you have a very large dataset you will need to modify the script to handle data streaming. This is particularly challenging for TPUs, given the stricter requirements and the sheer volume of data required to keep them fed. A full explanation of all the possible pitfalls is a bit beyond this example script and README, but for more information you can see the 'Input Datasets' section of [this document](https://www.tensorflow.org/guide/tpu). ### Example command ```bash python run_glue.py \ --model_name_or_path distilbert/distilbert-base-cased \ --task_name mnli \ --do_train \ --do_eval \ --do_predict \ --predict_file data_to_predict.json ```
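If you only want predictions from an already-trained model, a prediction-only call might look like the following sketch (the saved-model path is a placeholder; the task and prediction file simply reuse the values from the example above):

```bash
python run_glue.py \
  --model_name_or_path output/ \
  --task_name mnli \
  --do_predict \
  --predict_file data_to_predict.json
```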
transformers/examples/tensorflow/text-classification/README.md/0
{ "file_path": "transformers/examples/tensorflow/text-classification/README.md", "repo_id": "transformers", "token_count": 1693 }
319
#!/usr/bin/env bash # Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # this script acquires data and converts it to fsmt model # it covers: # - facebook/wmt19-ru-en # - facebook/wmt19-en-ru # - facebook/wmt19-de-en # - facebook/wmt19-en-de # this script needs to be run from the top level of the transformers repo if [ ! -d "src/transformers" ]; then echo "Error: This script needs to be run from the top of the transformers repo" exit 1 fi mkdir data # get data (run once) cd data wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz tar -xvzf wmt19.en-de.joined-dict.ensemble.tar.gz tar -xvzf wmt19.de-en.joined-dict.ensemble.tar.gz tar -xvzf wmt19.en-ru.ensemble.tar.gz tar -xvzf wmt19.ru-en.ensemble.tar.gz cd - # run conversions and uploads export PAIR=ru-en PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.ensemble/model4.pt --pytorch_dump_folder_path data/wmt19-$PAIR export PAIR=en-ru PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.ensemble/model4.pt --pytorch_dump_folder_path data/wmt19-$PAIR export PAIR=de-en PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.joined-dict.ensemble/model4.pt --pytorch_dump_folder_path data/wmt19-$PAIR export PAIR=en-de PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.joined-dict.ensemble/model4.pt --pytorch_dump_folder_path data/wmt19-$PAIR # upload cd data transformers-cli upload -y wmt19-ru-en transformers-cli upload -y wmt19-en-ru transformers-cli upload -y wmt19-de-en transformers-cli upload -y wmt19-en-de cd - # if updating just small files and not the large models, here is a script to generate the right commands: perl -le 'for $f (@ARGV) { print qq[transformers-cli upload -y $_/$f --filename $_/$f] for map { "wmt19-$_" } ("en-ru", "ru-en", "de-en", "en-de")}' vocab-src.json vocab-tgt.json tokenizer_config.json config.json # add/remove files as needed
transformers/scripts/fsmt/convert-facebook-wmt19.sh/0
{ "file_path": "transformers/scripts/fsmt/convert-facebook-wmt19.sh", "repo_id": "transformers", "token_count": 1121 }
320
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # When adding a new object to this init, remember to add it twice: once inside the `_import_structure` dictionary and # once inside the `if TYPE_CHECKING` branch. The `TYPE_CHECKING` should have import statements as usual, but they are # only there for type checking. The `_import_structure` is a dictionary submodule to list of object names, and is used # to defer the actual importing for when the objects are requested. This way `import transformers` provides the names # in the namespace without actually importing anything (and especially none of the backends). __version__ = "4.45.0.dev0" from typing import TYPE_CHECKING # Check the dependencies satisfy the minimal versions required. from . import dependency_versions_check from .utils import ( OptionalDependencyNotAvailable, _LazyModule, is_bitsandbytes_available, is_essentia_available, is_flax_available, is_g2p_en_available, is_keras_nlp_available, is_librosa_available, is_pretty_midi_available, is_scipy_available, is_sentencepiece_available, is_speech_available, is_tensorflow_text_available, is_tf_available, is_timm_available, is_tokenizers_available, is_torch_available, is_torchaudio_available, is_torchvision_available, is_vision_available, logging, ) logger = logging.get_logger(__name__) # pylint: disable=invalid-name # Base objects, independent of any specific backend _import_structure = { "agents": [ "Agent", "CodeAgent", "HfApiEngine", "PipelineTool", "ReactAgent", "ReactCodeAgent", "ReactJsonAgent", "Tool", "Toolbox", "ToolCollection", "TransformersEngine", "launch_gradio_demo", "load_tool", "stream_to_gradio", ], "audio_utils": [], "benchmark": [], "commands": [], "configuration_utils": ["PretrainedConfig"], "convert_graph_to_onnx": [], "convert_slow_tokenizers_checkpoints_to_fast": [], "convert_tf_hub_seq_to_seq_bert_to_pytorch": [], "data": [ "DataProcessor", "InputExample", "InputFeatures", "SingleSentenceClassificationProcessor", "SquadExample", "SquadFeatures", "SquadV1Processor", "SquadV2Processor", "glue_compute_metrics", "glue_convert_examples_to_features", "glue_output_modes", "glue_processors", "glue_tasks_num_labels", "squad_convert_examples_to_features", "xnli_compute_metrics", "xnli_output_modes", "xnli_processors", "xnli_tasks_num_labels", ], "data.data_collator": [ "DataCollator", "DataCollatorForLanguageModeling", "DataCollatorForPermutationLanguageModeling", "DataCollatorForSeq2Seq", "DataCollatorForSOP", "DataCollatorForTokenClassification", "DataCollatorForWholeWordMask", "DataCollatorWithFlattening", "DataCollatorWithPadding", "DefaultDataCollator", "default_data_collator", ], "data.metrics": [], "data.processors": [], "debug_utils": [], "deepspeed": [], "dependency_versions_check": [], "dependency_versions_table": [], "dynamic_module_utils": [], "feature_extraction_sequence_utils": ["SequenceFeatureExtractor"], "feature_extraction_utils": ["BatchFeature", "FeatureExtractionMixin"], "file_utils": [], "generation": [ 
"GenerationConfig", "TextIteratorStreamer", "TextStreamer", "WatermarkingConfig", ], "hf_argparser": ["HfArgumentParser"], "hyperparameter_search": [], "image_transforms": [], "integrations": [ "is_clearml_available", "is_comet_available", "is_dvclive_available", "is_neptune_available", "is_optuna_available", "is_ray_available", "is_ray_tune_available", "is_sigopt_available", "is_tensorboard_available", "is_wandb_available", ], "modelcard": ["ModelCard"], "modeling_tf_pytorch_utils": [ "convert_tf_weight_name_to_pt_weight_name", "load_pytorch_checkpoint_in_tf2_model", "load_pytorch_model_in_tf2_model", "load_pytorch_weights_in_tf2_model", "load_tf2_checkpoint_in_pytorch_model", "load_tf2_model_in_pytorch_model", "load_tf2_weights_in_pytorch_model", ], # Models "models": [], "models.albert": ["AlbertConfig"], "models.align": [ "AlignConfig", "AlignProcessor", "AlignTextConfig", "AlignVisionConfig", ], "models.altclip": [ "AltCLIPConfig", "AltCLIPProcessor", "AltCLIPTextConfig", "AltCLIPVisionConfig", ], "models.audio_spectrogram_transformer": [ "ASTConfig", "ASTFeatureExtractor", ], "models.auto": [ "CONFIG_MAPPING", "FEATURE_EXTRACTOR_MAPPING", "IMAGE_PROCESSOR_MAPPING", "MODEL_NAMES_MAPPING", "PROCESSOR_MAPPING", "TOKENIZER_MAPPING", "AutoConfig", "AutoFeatureExtractor", "AutoImageProcessor", "AutoProcessor", "AutoTokenizer", ], "models.autoformer": ["AutoformerConfig"], "models.bark": [ "BarkCoarseConfig", "BarkConfig", "BarkFineConfig", "BarkProcessor", "BarkSemanticConfig", ], "models.bart": ["BartConfig", "BartTokenizer"], "models.barthez": [], "models.bartpho": [], "models.beit": ["BeitConfig"], "models.bert": [ "BasicTokenizer", "BertConfig", "BertTokenizer", "WordpieceTokenizer", ], "models.bert_generation": ["BertGenerationConfig"], "models.bert_japanese": [ "BertJapaneseTokenizer", "CharacterTokenizer", "MecabTokenizer", ], "models.bertweet": ["BertweetTokenizer"], "models.big_bird": ["BigBirdConfig"], "models.bigbird_pegasus": ["BigBirdPegasusConfig"], "models.biogpt": [ "BioGptConfig", "BioGptTokenizer", ], "models.bit": ["BitConfig"], "models.blenderbot": [ "BlenderbotConfig", "BlenderbotTokenizer", ], "models.blenderbot_small": [ "BlenderbotSmallConfig", "BlenderbotSmallTokenizer", ], "models.blip": [ "BlipConfig", "BlipProcessor", "BlipTextConfig", "BlipVisionConfig", ], "models.blip_2": [ "Blip2Config", "Blip2Processor", "Blip2QFormerConfig", "Blip2VisionConfig", ], "models.bloom": ["BloomConfig"], "models.bridgetower": [ "BridgeTowerConfig", "BridgeTowerProcessor", "BridgeTowerTextConfig", "BridgeTowerVisionConfig", ], "models.bros": [ "BrosConfig", "BrosProcessor", ], "models.byt5": ["ByT5Tokenizer"], "models.camembert": ["CamembertConfig"], "models.canine": [ "CanineConfig", "CanineTokenizer", ], "models.chameleon": [ "ChameleonConfig", "ChameleonProcessor", "ChameleonVQVAEConfig", ], "models.chinese_clip": [ "ChineseCLIPConfig", "ChineseCLIPProcessor", "ChineseCLIPTextConfig", "ChineseCLIPVisionConfig", ], "models.clap": [ "ClapAudioConfig", "ClapConfig", "ClapProcessor", "ClapTextConfig", ], "models.clip": [ "CLIPConfig", "CLIPProcessor", "CLIPTextConfig", "CLIPTokenizer", "CLIPVisionConfig", ], "models.clipseg": [ "CLIPSegConfig", "CLIPSegProcessor", "CLIPSegTextConfig", "CLIPSegVisionConfig", ], "models.clvp": [ "ClvpConfig", "ClvpDecoderConfig", "ClvpEncoderConfig", "ClvpFeatureExtractor", "ClvpProcessor", "ClvpTokenizer", ], "models.code_llama": [], "models.codegen": [ "CodeGenConfig", "CodeGenTokenizer", ], "models.cohere": ["CohereConfig"], 
"models.conditional_detr": ["ConditionalDetrConfig"], "models.convbert": [ "ConvBertConfig", "ConvBertTokenizer", ], "models.convnext": ["ConvNextConfig"], "models.convnextv2": ["ConvNextV2Config"], "models.cpm": [], "models.cpmant": [ "CpmAntConfig", "CpmAntTokenizer", ], "models.ctrl": [ "CTRLConfig", "CTRLTokenizer", ], "models.cvt": ["CvtConfig"], "models.dac": ["DacConfig", "DacFeatureExtractor"], "models.data2vec": [ "Data2VecAudioConfig", "Data2VecTextConfig", "Data2VecVisionConfig", ], "models.dbrx": ["DbrxConfig"], "models.deberta": [ "DebertaConfig", "DebertaTokenizer", ], "models.deberta_v2": ["DebertaV2Config"], "models.decision_transformer": ["DecisionTransformerConfig"], "models.deformable_detr": ["DeformableDetrConfig"], "models.deit": ["DeiTConfig"], "models.deprecated": [], "models.deprecated.bort": [], "models.deprecated.deta": ["DetaConfig"], "models.deprecated.efficientformer": ["EfficientFormerConfig"], "models.deprecated.ernie_m": ["ErnieMConfig"], "models.deprecated.gptsan_japanese": [ "GPTSanJapaneseConfig", "GPTSanJapaneseTokenizer", ], "models.deprecated.graphormer": ["GraphormerConfig"], "models.deprecated.jukebox": [ "JukeboxConfig", "JukeboxPriorConfig", "JukeboxTokenizer", "JukeboxVQVAEConfig", ], "models.deprecated.mctct": [ "MCTCTConfig", "MCTCTFeatureExtractor", "MCTCTProcessor", ], "models.deprecated.mega": ["MegaConfig"], "models.deprecated.mmbt": ["MMBTConfig"], "models.deprecated.nat": ["NatConfig"], "models.deprecated.nezha": ["NezhaConfig"], "models.deprecated.open_llama": ["OpenLlamaConfig"], "models.deprecated.qdqbert": ["QDQBertConfig"], "models.deprecated.realm": [ "RealmConfig", "RealmTokenizer", ], "models.deprecated.retribert": [ "RetriBertConfig", "RetriBertTokenizer", ], "models.deprecated.speech_to_text_2": [ "Speech2Text2Config", "Speech2Text2Processor", "Speech2Text2Tokenizer", ], "models.deprecated.tapex": ["TapexTokenizer"], "models.deprecated.trajectory_transformer": ["TrajectoryTransformerConfig"], "models.deprecated.transfo_xl": [ "TransfoXLConfig", "TransfoXLCorpus", "TransfoXLTokenizer", ], "models.deprecated.tvlt": [ "TvltConfig", "TvltFeatureExtractor", "TvltProcessor", ], "models.deprecated.van": ["VanConfig"], "models.deprecated.vit_hybrid": ["ViTHybridConfig"], "models.deprecated.xlm_prophetnet": ["XLMProphetNetConfig"], "models.depth_anything": ["DepthAnythingConfig"], "models.detr": ["DetrConfig"], "models.dialogpt": [], "models.dinat": ["DinatConfig"], "models.dinov2": ["Dinov2Config"], "models.distilbert": [ "DistilBertConfig", "DistilBertTokenizer", ], "models.dit": [], "models.donut": [ "DonutProcessor", "DonutSwinConfig", ], "models.dpr": [ "DPRConfig", "DPRContextEncoderTokenizer", "DPRQuestionEncoderTokenizer", "DPRReaderOutput", "DPRReaderTokenizer", ], "models.dpt": ["DPTConfig"], "models.efficientnet": ["EfficientNetConfig"], "models.electra": [ "ElectraConfig", "ElectraTokenizer", ], "models.encodec": [ "EncodecConfig", "EncodecFeatureExtractor", ], "models.encoder_decoder": ["EncoderDecoderConfig"], "models.ernie": ["ErnieConfig"], "models.esm": ["EsmConfig", "EsmTokenizer"], "models.falcon": ["FalconConfig"], "models.falcon_mamba": ["FalconMambaConfig"], "models.fastspeech2_conformer": [ "FastSpeech2ConformerConfig", "FastSpeech2ConformerHifiGanConfig", "FastSpeech2ConformerTokenizer", "FastSpeech2ConformerWithHifiGanConfig", ], "models.flaubert": ["FlaubertConfig", "FlaubertTokenizer"], "models.flava": [ "FlavaConfig", "FlavaImageCodebookConfig", "FlavaImageConfig", "FlavaMultimodalConfig", "FlavaTextConfig", ], 
"models.fnet": ["FNetConfig"], "models.focalnet": ["FocalNetConfig"], "models.fsmt": [ "FSMTConfig", "FSMTTokenizer", ], "models.funnel": [ "FunnelConfig", "FunnelTokenizer", ], "models.fuyu": ["FuyuConfig"], "models.gemma": ["GemmaConfig"], "models.gemma2": ["Gemma2Config"], "models.git": [ "GitConfig", "GitProcessor", "GitVisionConfig", ], "models.glpn": ["GLPNConfig"], "models.gpt2": [ "GPT2Config", "GPT2Tokenizer", ], "models.gpt_bigcode": ["GPTBigCodeConfig"], "models.gpt_neo": ["GPTNeoConfig"], "models.gpt_neox": ["GPTNeoXConfig"], "models.gpt_neox_japanese": ["GPTNeoXJapaneseConfig"], "models.gpt_sw3": [], "models.gptj": ["GPTJConfig"], "models.granite": ["GraniteConfig"], "models.grounding_dino": [ "GroundingDinoConfig", "GroundingDinoProcessor", ], "models.groupvit": [ "GroupViTConfig", "GroupViTTextConfig", "GroupViTVisionConfig", ], "models.herbert": ["HerbertTokenizer"], "models.hiera": ["HieraConfig"], "models.hubert": ["HubertConfig"], "models.ibert": ["IBertConfig"], "models.idefics": ["IdeficsConfig"], "models.idefics2": ["Idefics2Config"], "models.imagegpt": ["ImageGPTConfig"], "models.informer": ["InformerConfig"], "models.instructblip": [ "InstructBlipConfig", "InstructBlipProcessor", "InstructBlipQFormerConfig", "InstructBlipVisionConfig", ], "models.instructblipvideo": [ "InstructBlipVideoConfig", "InstructBlipVideoProcessor", "InstructBlipVideoQFormerConfig", "InstructBlipVideoVisionConfig", ], "models.jamba": ["JambaConfig"], "models.jetmoe": ["JetMoeConfig"], "models.kosmos2": [ "Kosmos2Config", "Kosmos2Processor", ], "models.layoutlm": [ "LayoutLMConfig", "LayoutLMTokenizer", ], "models.layoutlmv2": [ "LayoutLMv2Config", "LayoutLMv2FeatureExtractor", "LayoutLMv2ImageProcessor", "LayoutLMv2Processor", "LayoutLMv2Tokenizer", ], "models.layoutlmv3": [ "LayoutLMv3Config", "LayoutLMv3FeatureExtractor", "LayoutLMv3ImageProcessor", "LayoutLMv3Processor", "LayoutLMv3Tokenizer", ], "models.layoutxlm": ["LayoutXLMProcessor"], "models.led": ["LEDConfig", "LEDTokenizer"], "models.levit": ["LevitConfig"], "models.lilt": ["LiltConfig"], "models.llama": ["LlamaConfig"], "models.llava": [ "LlavaConfig", "LlavaProcessor", ], "models.llava_next": [ "LlavaNextConfig", "LlavaNextProcessor", ], "models.llava_next_video": [ "LlavaNextVideoConfig", "LlavaNextVideoProcessor", ], "models.longformer": [ "LongformerConfig", "LongformerTokenizer", ], "models.longt5": ["LongT5Config"], "models.luke": [ "LukeConfig", "LukeTokenizer", ], "models.lxmert": [ "LxmertConfig", "LxmertTokenizer", ], "models.m2m_100": ["M2M100Config"], "models.mamba": ["MambaConfig"], "models.mamba2": ["Mamba2Config"], "models.marian": ["MarianConfig"], "models.markuplm": [ "MarkupLMConfig", "MarkupLMFeatureExtractor", "MarkupLMProcessor", "MarkupLMTokenizer", ], "models.mask2former": ["Mask2FormerConfig"], "models.maskformer": [ "MaskFormerConfig", "MaskFormerSwinConfig", ], "models.mbart": ["MBartConfig"], "models.mbart50": [], "models.megatron_bert": ["MegatronBertConfig"], "models.megatron_gpt2": [], "models.mgp_str": [ "MgpstrConfig", "MgpstrProcessor", "MgpstrTokenizer", ], "models.mistral": ["MistralConfig"], "models.mixtral": ["MixtralConfig"], "models.mluke": [], "models.mobilebert": [ "MobileBertConfig", "MobileBertTokenizer", ], "models.mobilenet_v1": ["MobileNetV1Config"], "models.mobilenet_v2": ["MobileNetV2Config"], "models.mobilevit": ["MobileViTConfig"], "models.mobilevitv2": ["MobileViTV2Config"], "models.mpnet": [ "MPNetConfig", "MPNetTokenizer", ], "models.mpt": ["MptConfig"], "models.mra": 
["MraConfig"], "models.mt5": ["MT5Config"], "models.musicgen": [ "MusicgenConfig", "MusicgenDecoderConfig", ], "models.musicgen_melody": [ "MusicgenMelodyConfig", "MusicgenMelodyDecoderConfig", ], "models.mvp": ["MvpConfig", "MvpTokenizer"], "models.nemotron": ["NemotronConfig"], "models.nllb": [], "models.nllb_moe": ["NllbMoeConfig"], "models.nougat": ["NougatProcessor"], "models.nystromformer": ["NystromformerConfig"], "models.olmo": ["OlmoConfig"], "models.oneformer": [ "OneFormerConfig", "OneFormerProcessor", ], "models.openai": [ "OpenAIGPTConfig", "OpenAIGPTTokenizer", ], "models.opt": ["OPTConfig"], "models.owlv2": [ "Owlv2Config", "Owlv2Processor", "Owlv2TextConfig", "Owlv2VisionConfig", ], "models.owlvit": [ "OwlViTConfig", "OwlViTProcessor", "OwlViTTextConfig", "OwlViTVisionConfig", ], "models.paligemma": ["PaliGemmaConfig"], "models.patchtsmixer": ["PatchTSMixerConfig"], "models.patchtst": ["PatchTSTConfig"], "models.pegasus": [ "PegasusConfig", "PegasusTokenizer", ], "models.pegasus_x": ["PegasusXConfig"], "models.perceiver": [ "PerceiverConfig", "PerceiverTokenizer", ], "models.persimmon": ["PersimmonConfig"], "models.phi": ["PhiConfig"], "models.phi3": ["Phi3Config"], "models.phobert": ["PhobertTokenizer"], "models.pix2struct": [ "Pix2StructConfig", "Pix2StructProcessor", "Pix2StructTextConfig", "Pix2StructVisionConfig", ], "models.plbart": ["PLBartConfig"], "models.poolformer": ["PoolFormerConfig"], "models.pop2piano": ["Pop2PianoConfig"], "models.prophetnet": [ "ProphetNetConfig", "ProphetNetTokenizer", ], "models.pvt": ["PvtConfig"], "models.pvt_v2": ["PvtV2Config"], "models.qwen2": [ "Qwen2Config", "Qwen2Tokenizer", ], "models.qwen2_audio": [ "Qwen2AudioConfig", "Qwen2AudioEncoderConfig", "Qwen2AudioProcessor", ], "models.qwen2_moe": ["Qwen2MoeConfig"], "models.qwen2_vl": [ "Qwen2VLConfig", "Qwen2VLProcessor", ], "models.rag": ["RagConfig", "RagRetriever", "RagTokenizer"], "models.recurrent_gemma": ["RecurrentGemmaConfig"], "models.reformer": ["ReformerConfig"], "models.regnet": ["RegNetConfig"], "models.rembert": ["RemBertConfig"], "models.resnet": ["ResNetConfig"], "models.roberta": [ "RobertaConfig", "RobertaTokenizer", ], "models.roberta_prelayernorm": ["RobertaPreLayerNormConfig"], "models.roc_bert": [ "RoCBertConfig", "RoCBertTokenizer", ], "models.roformer": [ "RoFormerConfig", "RoFormerTokenizer", ], "models.rt_detr": ["RTDetrConfig", "RTDetrResNetConfig"], "models.rwkv": ["RwkvConfig"], "models.sam": [ "SamConfig", "SamMaskDecoderConfig", "SamProcessor", "SamPromptEncoderConfig", "SamVisionConfig", ], "models.seamless_m4t": [ "SeamlessM4TConfig", "SeamlessM4TFeatureExtractor", "SeamlessM4TProcessor", ], "models.seamless_m4t_v2": ["SeamlessM4Tv2Config"], "models.segformer": ["SegformerConfig"], "models.seggpt": ["SegGptConfig"], "models.sew": ["SEWConfig"], "models.sew_d": ["SEWDConfig"], "models.siglip": [ "SiglipConfig", "SiglipProcessor", "SiglipTextConfig", "SiglipVisionConfig", ], "models.speech_encoder_decoder": ["SpeechEncoderDecoderConfig"], "models.speech_to_text": [ "Speech2TextConfig", "Speech2TextFeatureExtractor", "Speech2TextProcessor", ], "models.speecht5": [ "SpeechT5Config", "SpeechT5FeatureExtractor", "SpeechT5HifiGanConfig", "SpeechT5Processor", ], "models.splinter": [ "SplinterConfig", "SplinterTokenizer", ], "models.squeezebert": [ "SqueezeBertConfig", "SqueezeBertTokenizer", ], "models.stablelm": ["StableLmConfig"], "models.starcoder2": ["Starcoder2Config"], "models.superpoint": ["SuperPointConfig"], "models.swiftformer": 
["SwiftFormerConfig"], "models.swin": ["SwinConfig"], "models.swin2sr": ["Swin2SRConfig"], "models.swinv2": ["Swinv2Config"], "models.switch_transformers": ["SwitchTransformersConfig"], "models.t5": ["T5Config"], "models.table_transformer": ["TableTransformerConfig"], "models.tapas": [ "TapasConfig", "TapasTokenizer", ], "models.time_series_transformer": ["TimeSeriesTransformerConfig"], "models.timesformer": ["TimesformerConfig"], "models.timm_backbone": ["TimmBackboneConfig"], "models.trocr": [ "TrOCRConfig", "TrOCRProcessor", ], "models.tvp": [ "TvpConfig", "TvpProcessor", ], "models.udop": [ "UdopConfig", "UdopProcessor", ], "models.umt5": ["UMT5Config"], "models.unispeech": ["UniSpeechConfig"], "models.unispeech_sat": ["UniSpeechSatConfig"], "models.univnet": [ "UnivNetConfig", "UnivNetFeatureExtractor", ], "models.upernet": ["UperNetConfig"], "models.video_llava": ["VideoLlavaConfig"], "models.videomae": ["VideoMAEConfig"], "models.vilt": [ "ViltConfig", "ViltFeatureExtractor", "ViltImageProcessor", "ViltProcessor", ], "models.vipllava": ["VipLlavaConfig"], "models.vision_encoder_decoder": ["VisionEncoderDecoderConfig"], "models.vision_text_dual_encoder": [ "VisionTextDualEncoderConfig", "VisionTextDualEncoderProcessor", ], "models.visual_bert": ["VisualBertConfig"], "models.vit": ["ViTConfig"], "models.vit_mae": ["ViTMAEConfig"], "models.vit_msn": ["ViTMSNConfig"], "models.vitdet": ["VitDetConfig"], "models.vitmatte": ["VitMatteConfig"], "models.vits": [ "VitsConfig", "VitsTokenizer", ], "models.vivit": ["VivitConfig"], "models.wav2vec2": [ "Wav2Vec2Config", "Wav2Vec2CTCTokenizer", "Wav2Vec2FeatureExtractor", "Wav2Vec2Processor", "Wav2Vec2Tokenizer", ], "models.wav2vec2_bert": [ "Wav2Vec2BertConfig", "Wav2Vec2BertProcessor", ], "models.wav2vec2_conformer": ["Wav2Vec2ConformerConfig"], "models.wav2vec2_phoneme": ["Wav2Vec2PhonemeCTCTokenizer"], "models.wav2vec2_with_lm": ["Wav2Vec2ProcessorWithLM"], "models.wavlm": ["WavLMConfig"], "models.whisper": [ "WhisperConfig", "WhisperFeatureExtractor", "WhisperProcessor", "WhisperTokenizer", ], "models.x_clip": [ "XCLIPConfig", "XCLIPProcessor", "XCLIPTextConfig", "XCLIPVisionConfig", ], "models.xglm": ["XGLMConfig"], "models.xlm": ["XLMConfig", "XLMTokenizer"], "models.xlm_roberta": ["XLMRobertaConfig"], "models.xlm_roberta_xl": ["XLMRobertaXLConfig"], "models.xlnet": ["XLNetConfig"], "models.xmod": ["XmodConfig"], "models.yolos": ["YolosConfig"], "models.yoso": ["YosoConfig"], "models.zoedepth": ["ZoeDepthConfig"], "onnx": [], "pipelines": [ "AudioClassificationPipeline", "AutomaticSpeechRecognitionPipeline", "CsvPipelineDataFormat", "DepthEstimationPipeline", "DocumentQuestionAnsweringPipeline", "FeatureExtractionPipeline", "FillMaskPipeline", "ImageClassificationPipeline", "ImageFeatureExtractionPipeline", "ImageSegmentationPipeline", "ImageToImagePipeline", "ImageToTextPipeline", "JsonPipelineDataFormat", "MaskGenerationPipeline", "NerPipeline", "ObjectDetectionPipeline", "PipedPipelineDataFormat", "Pipeline", "PipelineDataFormat", "QuestionAnsweringPipeline", "SummarizationPipeline", "TableQuestionAnsweringPipeline", "Text2TextGenerationPipeline", "TextClassificationPipeline", "TextGenerationPipeline", "TextToAudioPipeline", "TokenClassificationPipeline", "TranslationPipeline", "VideoClassificationPipeline", "VisualQuestionAnsweringPipeline", "ZeroShotAudioClassificationPipeline", "ZeroShotClassificationPipeline", "ZeroShotImageClassificationPipeline", "ZeroShotObjectDetectionPipeline", "pipeline", ], "processing_utils": 
["ProcessorMixin"], "quantizers": [], "testing_utils": [], "tokenization_utils": ["PreTrainedTokenizer"], "tokenization_utils_base": [ "AddedToken", "BatchEncoding", "CharSpan", "PreTrainedTokenizerBase", "SpecialTokensMixin", "TokenSpan", ], "trainer_callback": [ "DefaultFlowCallback", "EarlyStoppingCallback", "PrinterCallback", "ProgressCallback", "TrainerCallback", "TrainerControl", "TrainerState", ], "trainer_utils": [ "EvalPrediction", "IntervalStrategy", "SchedulerType", "enable_full_determinism", "set_seed", ], "training_args": ["TrainingArguments"], "training_args_seq2seq": ["Seq2SeqTrainingArguments"], "training_args_tf": ["TFTrainingArguments"], "utils": [ "CONFIG_NAME", "MODEL_CARD_NAME", "PYTORCH_PRETRAINED_BERT_CACHE", "PYTORCH_TRANSFORMERS_CACHE", "SPIECE_UNDERLINE", "TF2_WEIGHTS_NAME", "TF_WEIGHTS_NAME", "TRANSFORMERS_CACHE", "WEIGHTS_NAME", "TensorType", "add_end_docstrings", "add_start_docstrings", "is_apex_available", "is_av_available", "is_bitsandbytes_available", "is_datasets_available", "is_decord_available", "is_faiss_available", "is_flax_available", "is_keras_nlp_available", "is_phonemizer_available", "is_psutil_available", "is_py3nvml_available", "is_pyctcdecode_available", "is_sacremoses_available", "is_safetensors_available", "is_scipy_available", "is_sentencepiece_available", "is_sklearn_available", "is_speech_available", "is_tensorflow_text_available", "is_tf_available", "is_timm_available", "is_tokenizers_available", "is_torch_available", "is_torch_mlu_available", "is_torch_musa_available", "is_torch_neuroncore_available", "is_torch_npu_available", "is_torch_tpu_available", "is_torchvision_available", "is_torch_xla_available", "is_torch_xpu_available", "is_vision_available", "logging", ], "utils.quantization_config": [ "AqlmConfig", "AwqConfig", "BitsAndBytesConfig", "EetqConfig", "FbgemmFp8Config", "GPTQConfig", "HqqConfig", "QuantoConfig", "TorchAoConfig", ], } # sentencepiece-backed objects try: if not is_sentencepiece_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_sentencepiece_objects _import_structure["utils.dummy_sentencepiece_objects"] = [ name for name in dir(dummy_sentencepiece_objects) if not name.startswith("_") ] else: _import_structure["models.albert"].append("AlbertTokenizer") _import_structure["models.barthez"].append("BarthezTokenizer") _import_structure["models.bartpho"].append("BartphoTokenizer") _import_structure["models.bert_generation"].append("BertGenerationTokenizer") _import_structure["models.big_bird"].append("BigBirdTokenizer") _import_structure["models.camembert"].append("CamembertTokenizer") _import_structure["models.code_llama"].append("CodeLlamaTokenizer") _import_structure["models.cpm"].append("CpmTokenizer") _import_structure["models.deberta_v2"].append("DebertaV2Tokenizer") _import_structure["models.deprecated.ernie_m"].append("ErnieMTokenizer") _import_structure["models.deprecated.xlm_prophetnet"].append("XLMProphetNetTokenizer") _import_structure["models.fnet"].append("FNetTokenizer") _import_structure["models.gemma"].append("GemmaTokenizer") _import_structure["models.gpt_sw3"].append("GPTSw3Tokenizer") _import_structure["models.layoutxlm"].append("LayoutXLMTokenizer") _import_structure["models.llama"].append("LlamaTokenizer") _import_structure["models.m2m_100"].append("M2M100Tokenizer") _import_structure["models.marian"].append("MarianTokenizer") _import_structure["models.mbart"].append("MBartTokenizer") 
_import_structure["models.mbart50"].append("MBart50Tokenizer") _import_structure["models.mluke"].append("MLukeTokenizer") _import_structure["models.mt5"].append("MT5Tokenizer") _import_structure["models.nllb"].append("NllbTokenizer") _import_structure["models.pegasus"].append("PegasusTokenizer") _import_structure["models.plbart"].append("PLBartTokenizer") _import_structure["models.reformer"].append("ReformerTokenizer") _import_structure["models.rembert"].append("RemBertTokenizer") _import_structure["models.seamless_m4t"].append("SeamlessM4TTokenizer") _import_structure["models.siglip"].append("SiglipTokenizer") _import_structure["models.speech_to_text"].append("Speech2TextTokenizer") _import_structure["models.speecht5"].append("SpeechT5Tokenizer") _import_structure["models.t5"].append("T5Tokenizer") _import_structure["models.udop"].append("UdopTokenizer") _import_structure["models.xglm"].append("XGLMTokenizer") _import_structure["models.xlm_roberta"].append("XLMRobertaTokenizer") _import_structure["models.xlnet"].append("XLNetTokenizer") # tokenizers-backed objects try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_tokenizers_objects _import_structure["utils.dummy_tokenizers_objects"] = [ name for name in dir(dummy_tokenizers_objects) if not name.startswith("_") ] else: # Fast tokenizers structure _import_structure["models.albert"].append("AlbertTokenizerFast") _import_structure["models.bart"].append("BartTokenizerFast") _import_structure["models.barthez"].append("BarthezTokenizerFast") _import_structure["models.bert"].append("BertTokenizerFast") _import_structure["models.big_bird"].append("BigBirdTokenizerFast") _import_structure["models.blenderbot"].append("BlenderbotTokenizerFast") _import_structure["models.blenderbot_small"].append("BlenderbotSmallTokenizerFast") _import_structure["models.bloom"].append("BloomTokenizerFast") _import_structure["models.camembert"].append("CamembertTokenizerFast") _import_structure["models.clip"].append("CLIPTokenizerFast") _import_structure["models.code_llama"].append("CodeLlamaTokenizerFast") _import_structure["models.codegen"].append("CodeGenTokenizerFast") _import_structure["models.cohere"].append("CohereTokenizerFast") _import_structure["models.convbert"].append("ConvBertTokenizerFast") _import_structure["models.cpm"].append("CpmTokenizerFast") _import_structure["models.deberta"].append("DebertaTokenizerFast") _import_structure["models.deberta_v2"].append("DebertaV2TokenizerFast") _import_structure["models.deprecated.realm"].append("RealmTokenizerFast") _import_structure["models.deprecated.retribert"].append("RetriBertTokenizerFast") _import_structure["models.distilbert"].append("DistilBertTokenizerFast") _import_structure["models.dpr"].extend( [ "DPRContextEncoderTokenizerFast", "DPRQuestionEncoderTokenizerFast", "DPRReaderTokenizerFast", ] ) _import_structure["models.electra"].append("ElectraTokenizerFast") _import_structure["models.fnet"].append("FNetTokenizerFast") _import_structure["models.funnel"].append("FunnelTokenizerFast") _import_structure["models.gemma"].append("GemmaTokenizerFast") _import_structure["models.gpt2"].append("GPT2TokenizerFast") _import_structure["models.gpt_neox"].append("GPTNeoXTokenizerFast") _import_structure["models.gpt_neox_japanese"].append("GPTNeoXJapaneseTokenizer") _import_structure["models.herbert"].append("HerbertTokenizerFast") _import_structure["models.layoutlm"].append("LayoutLMTokenizerFast") 
_import_structure["models.layoutlmv2"].append("LayoutLMv2TokenizerFast") _import_structure["models.layoutlmv3"].append("LayoutLMv3TokenizerFast") _import_structure["models.layoutxlm"].append("LayoutXLMTokenizerFast") _import_structure["models.led"].append("LEDTokenizerFast") _import_structure["models.llama"].append("LlamaTokenizerFast") _import_structure["models.longformer"].append("LongformerTokenizerFast") _import_structure["models.lxmert"].append("LxmertTokenizerFast") _import_structure["models.markuplm"].append("MarkupLMTokenizerFast") _import_structure["models.mbart"].append("MBartTokenizerFast") _import_structure["models.mbart50"].append("MBart50TokenizerFast") _import_structure["models.mobilebert"].append("MobileBertTokenizerFast") _import_structure["models.mpnet"].append("MPNetTokenizerFast") _import_structure["models.mt5"].append("MT5TokenizerFast") _import_structure["models.mvp"].append("MvpTokenizerFast") _import_structure["models.nllb"].append("NllbTokenizerFast") _import_structure["models.nougat"].append("NougatTokenizerFast") _import_structure["models.openai"].append("OpenAIGPTTokenizerFast") _import_structure["models.pegasus"].append("PegasusTokenizerFast") _import_structure["models.qwen2"].append("Qwen2TokenizerFast") _import_structure["models.reformer"].append("ReformerTokenizerFast") _import_structure["models.rembert"].append("RemBertTokenizerFast") _import_structure["models.roberta"].append("RobertaTokenizerFast") _import_structure["models.roformer"].append("RoFormerTokenizerFast") _import_structure["models.seamless_m4t"].append("SeamlessM4TTokenizerFast") _import_structure["models.splinter"].append("SplinterTokenizerFast") _import_structure["models.squeezebert"].append("SqueezeBertTokenizerFast") _import_structure["models.t5"].append("T5TokenizerFast") _import_structure["models.udop"].append("UdopTokenizerFast") _import_structure["models.whisper"].append("WhisperTokenizerFast") _import_structure["models.xglm"].append("XGLMTokenizerFast") _import_structure["models.xlm_roberta"].append("XLMRobertaTokenizerFast") _import_structure["models.xlnet"].append("XLNetTokenizerFast") _import_structure["tokenization_utils_fast"] = ["PreTrainedTokenizerFast"] try: if not (is_sentencepiece_available() and is_tokenizers_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_sentencepiece_and_tokenizers_objects _import_structure["utils.dummy_sentencepiece_and_tokenizers_objects"] = [ name for name in dir(dummy_sentencepiece_and_tokenizers_objects) if not name.startswith("_") ] else: _import_structure["convert_slow_tokenizer"] = [ "SLOW_TO_FAST_CONVERTERS", "convert_slow_tokenizer", ] # Tensorflow-text-specific objects try: if not is_tensorflow_text_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_tensorflow_text_objects _import_structure["utils.dummy_tensorflow_text_objects"] = [ name for name in dir(dummy_tensorflow_text_objects) if not name.startswith("_") ] else: _import_structure["models.bert"].append("TFBertTokenizer") # keras-nlp-specific objects try: if not is_keras_nlp_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_keras_nlp_objects _import_structure["utils.dummy_keras_nlp_objects"] = [ name for name in dir(dummy_keras_nlp_objects) if not name.startswith("_") ] else: _import_structure["models.gpt2"].append("TFGPT2Tokenizer") # Vision-specific objects try: if not is_vision_available(): 
raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_vision_objects _import_structure["utils.dummy_vision_objects"] = [ name for name in dir(dummy_vision_objects) if not name.startswith("_") ] else: _import_structure["image_processing_base"] = ["ImageProcessingMixin"] _import_structure["image_processing_utils"] = ["BaseImageProcessor"] _import_structure["image_utils"] = ["ImageFeatureExtractionMixin"] _import_structure["models.beit"].extend(["BeitFeatureExtractor", "BeitImageProcessor"]) _import_structure["models.bit"].extend(["BitImageProcessor"]) _import_structure["models.blip"].extend(["BlipImageProcessor"]) _import_structure["models.bridgetower"].append("BridgeTowerImageProcessor") _import_structure["models.chameleon"].append("ChameleonImageProcessor") _import_structure["models.chinese_clip"].extend(["ChineseCLIPFeatureExtractor", "ChineseCLIPImageProcessor"]) _import_structure["models.clip"].extend(["CLIPFeatureExtractor", "CLIPImageProcessor"]) _import_structure["models.conditional_detr"].extend( ["ConditionalDetrFeatureExtractor", "ConditionalDetrImageProcessor"] ) _import_structure["models.convnext"].extend(["ConvNextFeatureExtractor", "ConvNextImageProcessor"]) _import_structure["models.deformable_detr"].extend( ["DeformableDetrFeatureExtractor", "DeformableDetrImageProcessor"] ) _import_structure["models.deit"].extend(["DeiTFeatureExtractor", "DeiTImageProcessor"]) _import_structure["models.deprecated.deta"].append("DetaImageProcessor") _import_structure["models.deprecated.efficientformer"].append("EfficientFormerImageProcessor") _import_structure["models.deprecated.tvlt"].append("TvltImageProcessor") _import_structure["models.deprecated.vit_hybrid"].extend(["ViTHybridImageProcessor"]) _import_structure["models.detr"].extend(["DetrFeatureExtractor", "DetrImageProcessor"]) _import_structure["models.donut"].extend(["DonutFeatureExtractor", "DonutImageProcessor"]) _import_structure["models.dpt"].extend(["DPTFeatureExtractor", "DPTImageProcessor"]) _import_structure["models.efficientnet"].append("EfficientNetImageProcessor") _import_structure["models.flava"].extend(["FlavaFeatureExtractor", "FlavaImageProcessor", "FlavaProcessor"]) _import_structure["models.fuyu"].extend(["FuyuImageProcessor", "FuyuProcessor"]) _import_structure["models.glpn"].extend(["GLPNFeatureExtractor", "GLPNImageProcessor"]) _import_structure["models.grounding_dino"].extend(["GroundingDinoImageProcessor"]) _import_structure["models.idefics"].extend(["IdeficsImageProcessor"]) _import_structure["models.idefics2"].extend(["Idefics2ImageProcessor"]) _import_structure["models.imagegpt"].extend(["ImageGPTFeatureExtractor", "ImageGPTImageProcessor"]) _import_structure["models.instructblipvideo"].extend(["InstructBlipVideoImageProcessor"]) _import_structure["models.layoutlmv2"].extend(["LayoutLMv2FeatureExtractor", "LayoutLMv2ImageProcessor"]) _import_structure["models.layoutlmv3"].extend(["LayoutLMv3FeatureExtractor", "LayoutLMv3ImageProcessor"]) _import_structure["models.levit"].extend(["LevitFeatureExtractor", "LevitImageProcessor"]) _import_structure["models.llava_next"].append("LlavaNextImageProcessor") _import_structure["models.llava_next_video"].append("LlavaNextVideoImageProcessor") _import_structure["models.mask2former"].append("Mask2FormerImageProcessor") _import_structure["models.maskformer"].extend(["MaskFormerFeatureExtractor", "MaskFormerImageProcessor"]) _import_structure["models.mobilenet_v1"].extend(["MobileNetV1FeatureExtractor", 
"MobileNetV1ImageProcessor"]) _import_structure["models.mobilenet_v2"].extend(["MobileNetV2FeatureExtractor", "MobileNetV2ImageProcessor"]) _import_structure["models.mobilevit"].extend(["MobileViTFeatureExtractor", "MobileViTImageProcessor"]) _import_structure["models.nougat"].append("NougatImageProcessor") _import_structure["models.oneformer"].extend(["OneFormerImageProcessor"]) _import_structure["models.owlv2"].append("Owlv2ImageProcessor") _import_structure["models.owlvit"].extend(["OwlViTFeatureExtractor", "OwlViTImageProcessor"]) _import_structure["models.perceiver"].extend(["PerceiverFeatureExtractor", "PerceiverImageProcessor"]) _import_structure["models.pix2struct"].extend(["Pix2StructImageProcessor"]) _import_structure["models.poolformer"].extend(["PoolFormerFeatureExtractor", "PoolFormerImageProcessor"]) _import_structure["models.pvt"].extend(["PvtImageProcessor"]) _import_structure["models.qwen2_vl"].extend(["Qwen2VLImageProcessor"]) _import_structure["models.rt_detr"].extend(["RTDetrImageProcessor"]) _import_structure["models.sam"].extend(["SamImageProcessor"]) _import_structure["models.segformer"].extend(["SegformerFeatureExtractor", "SegformerImageProcessor"]) _import_structure["models.seggpt"].extend(["SegGptImageProcessor"]) _import_structure["models.siglip"].append("SiglipImageProcessor") _import_structure["models.superpoint"].extend(["SuperPointImageProcessor"]) _import_structure["models.swin2sr"].append("Swin2SRImageProcessor") _import_structure["models.tvp"].append("TvpImageProcessor") _import_structure["models.video_llava"].append("VideoLlavaImageProcessor") _import_structure["models.videomae"].extend(["VideoMAEFeatureExtractor", "VideoMAEImageProcessor"]) _import_structure["models.vilt"].extend(["ViltFeatureExtractor", "ViltImageProcessor", "ViltProcessor"]) _import_structure["models.vit"].extend(["ViTFeatureExtractor", "ViTImageProcessor"]) _import_structure["models.vitmatte"].append("VitMatteImageProcessor") _import_structure["models.vivit"].append("VivitImageProcessor") _import_structure["models.yolos"].extend(["YolosFeatureExtractor", "YolosImageProcessor"]) _import_structure["models.zoedepth"].append("ZoeDepthImageProcessor") try: if not is_torchvision_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_torchvision_objects _import_structure["utils.dummy_torchvision_objects"] = [ name for name in dir(dummy_torchvision_objects) if not name.startswith("_") ] else: _import_structure["image_processing_utils_fast"] = ["BaseImageProcessorFast"] _import_structure["models.vit"].append("ViTImageProcessorFast") # PyTorch-backed objects try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_pt_objects _import_structure["utils.dummy_pt_objects"] = [name for name in dir(dummy_pt_objects) if not name.startswith("_")] else: _import_structure["activations"] = [] _import_structure["benchmark.benchmark"] = ["PyTorchBenchmark"] _import_structure["benchmark.benchmark_args"] = ["PyTorchBenchmarkArguments"] _import_structure["cache_utils"] = [ "Cache", "CacheConfig", "DynamicCache", "EncoderDecoderCache", "HQQQuantizedCache", "HybridCache", "MambaCache", "OffloadedCache", "OffloadedStaticCache", "QuantizedCache", "QuantizedCacheConfig", "QuantoQuantizedCache", "SinkCache", "SlidingWindowCache", "StaticCache", ] _import_structure["data.datasets"] = [ "GlueDataset", "GlueDataTrainingArguments", "LineByLineTextDataset", 
"LineByLineWithRefDataset", "LineByLineWithSOPTextDataset", "SquadDataset", "SquadDataTrainingArguments", "TextDataset", "TextDatasetForNextSentencePrediction", ] _import_structure["generation"].extend( [ "AlternatingCodebooksLogitsProcessor", "BeamScorer", "BeamSearchScorer", "ClassifierFreeGuidanceLogitsProcessor", "ConstrainedBeamSearchScorer", "Constraint", "ConstraintListState", "DisjunctiveConstraint", "EncoderNoRepeatNGramLogitsProcessor", "EncoderRepetitionPenaltyLogitsProcessor", "EosTokenCriteria", "EpsilonLogitsWarper", "EtaLogitsWarper", "ExponentialDecayLengthPenalty", "ForcedBOSTokenLogitsProcessor", "ForcedEOSTokenLogitsProcessor", "GenerationMixin", "HammingDiversityLogitsProcessor", "InfNanRemoveLogitsProcessor", "LogitNormalization", "LogitsProcessor", "LogitsProcessorList", "LogitsWarper", "MaxLengthCriteria", "MaxTimeCriteria", "MinLengthLogitsProcessor", "MinNewTokensLengthLogitsProcessor", "MinPLogitsWarper", "NoBadWordsLogitsProcessor", "NoRepeatNGramLogitsProcessor", "PhrasalConstraint", "PrefixConstrainedLogitsProcessor", "RepetitionPenaltyLogitsProcessor", "SequenceBiasLogitsProcessor", "StoppingCriteria", "StoppingCriteriaList", "StopStringCriteria", "SuppressTokensAtBeginLogitsProcessor", "SuppressTokensLogitsProcessor", "TemperatureLogitsWarper", "TopKLogitsWarper", "TopPLogitsWarper", "TypicalLogitsWarper", "UnbatchedClassifierFreeGuidanceLogitsProcessor", "WatermarkDetector", "WatermarkLogitsProcessor", "WhisperTimeStampLogitsProcessor", ] ) _import_structure["modeling_flash_attention_utils"] = [] _import_structure["modeling_outputs"] = [] _import_structure["modeling_rope_utils"] = ["ROPE_INIT_FUNCTIONS"] _import_structure["modeling_utils"] = ["PreTrainedModel"] # PyTorch models structure _import_structure["models.albert"].extend( [ "AlbertForMaskedLM", "AlbertForMultipleChoice", "AlbertForPreTraining", "AlbertForQuestionAnswering", "AlbertForSequenceClassification", "AlbertForTokenClassification", "AlbertModel", "AlbertPreTrainedModel", "load_tf_weights_in_albert", ] ) _import_structure["models.align"].extend( [ "AlignModel", "AlignPreTrainedModel", "AlignTextModel", "AlignVisionModel", ] ) _import_structure["models.altclip"].extend( [ "AltCLIPModel", "AltCLIPPreTrainedModel", "AltCLIPTextModel", "AltCLIPVisionModel", ] ) _import_structure["models.audio_spectrogram_transformer"].extend( [ "ASTForAudioClassification", "ASTModel", "ASTPreTrainedModel", ] ) _import_structure["models.auto"].extend( [ "MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING", "MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING", "MODEL_FOR_AUDIO_XVECTOR_MAPPING", "MODEL_FOR_BACKBONE_MAPPING", "MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING", "MODEL_FOR_CAUSAL_LM_MAPPING", "MODEL_FOR_CTC_MAPPING", "MODEL_FOR_DEPTH_ESTIMATION_MAPPING", "MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING", "MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING", "MODEL_FOR_IMAGE_MAPPING", "MODEL_FOR_IMAGE_SEGMENTATION_MAPPING", "MODEL_FOR_IMAGE_TO_IMAGE_MAPPING", "MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING", "MODEL_FOR_KEYPOINT_DETECTION_MAPPING", "MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING", "MODEL_FOR_MASKED_LM_MAPPING", "MODEL_FOR_MASK_GENERATION_MAPPING", "MODEL_FOR_MULTIPLE_CHOICE_MAPPING", "MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING", "MODEL_FOR_OBJECT_DETECTION_MAPPING", "MODEL_FOR_PRETRAINING_MAPPING", "MODEL_FOR_QUESTION_ANSWERING_MAPPING", "MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING", "MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING", "MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING", "MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING", 
"MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING", "MODEL_FOR_TEXT_ENCODING_MAPPING", "MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING", "MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING", "MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING", "MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING", "MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING", "MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING", "MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING", "MODEL_FOR_VISION_2_SEQ_MAPPING", "MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING", "MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING", "MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING", "MODEL_MAPPING", "MODEL_WITH_LM_HEAD_MAPPING", "AutoBackbone", "AutoModel", "AutoModelForAudioClassification", "AutoModelForAudioFrameClassification", "AutoModelForAudioXVector", "AutoModelForCausalLM", "AutoModelForCTC", "AutoModelForDepthEstimation", "AutoModelForDocumentQuestionAnswering", "AutoModelForImageClassification", "AutoModelForImageSegmentation", "AutoModelForImageToImage", "AutoModelForInstanceSegmentation", "AutoModelForKeypointDetection", "AutoModelForMaskedImageModeling", "AutoModelForMaskedLM", "AutoModelForMaskGeneration", "AutoModelForMultipleChoice", "AutoModelForNextSentencePrediction", "AutoModelForObjectDetection", "AutoModelForPreTraining", "AutoModelForQuestionAnswering", "AutoModelForSemanticSegmentation", "AutoModelForSeq2SeqLM", "AutoModelForSequenceClassification", "AutoModelForSpeechSeq2Seq", "AutoModelForTableQuestionAnswering", "AutoModelForTextEncoding", "AutoModelForTextToSpectrogram", "AutoModelForTextToWaveform", "AutoModelForTokenClassification", "AutoModelForUniversalSegmentation", "AutoModelForVideoClassification", "AutoModelForVision2Seq", "AutoModelForVisualQuestionAnswering", "AutoModelForZeroShotImageClassification", "AutoModelForZeroShotObjectDetection", "AutoModelWithLMHead", ] ) _import_structure["models.autoformer"].extend( [ "AutoformerForPrediction", "AutoformerModel", "AutoformerPreTrainedModel", ] ) _import_structure["models.bark"].extend( [ "BarkCausalModel", "BarkCoarseModel", "BarkFineModel", "BarkModel", "BarkPreTrainedModel", "BarkSemanticModel", ] ) _import_structure["models.bart"].extend( [ "BartForCausalLM", "BartForConditionalGeneration", "BartForQuestionAnswering", "BartForSequenceClassification", "BartModel", "BartPretrainedModel", "BartPreTrainedModel", "PretrainedBartModel", ] ) _import_structure["models.beit"].extend( [ "BeitBackbone", "BeitForImageClassification", "BeitForMaskedImageModeling", "BeitForSemanticSegmentation", "BeitModel", "BeitPreTrainedModel", ] ) _import_structure["models.bert"].extend( [ "BertForMaskedLM", "BertForMultipleChoice", "BertForNextSentencePrediction", "BertForPreTraining", "BertForQuestionAnswering", "BertForSequenceClassification", "BertForTokenClassification", "BertLayer", "BertLMHeadModel", "BertModel", "BertPreTrainedModel", "load_tf_weights_in_bert", ] ) _import_structure["models.bert_generation"].extend( [ "BertGenerationDecoder", "BertGenerationEncoder", "BertGenerationPreTrainedModel", "load_tf_weights_in_bert_generation", ] ) _import_structure["models.big_bird"].extend( [ "BigBirdForCausalLM", "BigBirdForMaskedLM", "BigBirdForMultipleChoice", "BigBirdForPreTraining", "BigBirdForQuestionAnswering", "BigBirdForSequenceClassification", "BigBirdForTokenClassification", "BigBirdLayer", "BigBirdModel", "BigBirdPreTrainedModel", "load_tf_weights_in_big_bird", ] ) _import_structure["models.bigbird_pegasus"].extend( [ "BigBirdPegasusForCausalLM", "BigBirdPegasusForConditionalGeneration", "BigBirdPegasusForQuestionAnswering", 
"BigBirdPegasusForSequenceClassification", "BigBirdPegasusModel", "BigBirdPegasusPreTrainedModel", ] ) _import_structure["models.biogpt"].extend( [ "BioGptForCausalLM", "BioGptForSequenceClassification", "BioGptForTokenClassification", "BioGptModel", "BioGptPreTrainedModel", ] ) _import_structure["models.bit"].extend( [ "BitBackbone", "BitForImageClassification", "BitModel", "BitPreTrainedModel", ] ) _import_structure["models.blenderbot"].extend( [ "BlenderbotForCausalLM", "BlenderbotForConditionalGeneration", "BlenderbotModel", "BlenderbotPreTrainedModel", ] ) _import_structure["models.blenderbot_small"].extend( [ "BlenderbotSmallForCausalLM", "BlenderbotSmallForConditionalGeneration", "BlenderbotSmallModel", "BlenderbotSmallPreTrainedModel", ] ) _import_structure["models.blip"].extend( [ "BlipForConditionalGeneration", "BlipForImageTextRetrieval", "BlipForQuestionAnswering", "BlipModel", "BlipPreTrainedModel", "BlipTextModel", "BlipVisionModel", ] ) _import_structure["models.blip_2"].extend( [ "Blip2ForConditionalGeneration", "Blip2ForImageTextRetrieval", "Blip2Model", "Blip2PreTrainedModel", "Blip2QFormerModel", "Blip2TextModelWithProjection", "Blip2VisionModel", "Blip2VisionModelWithProjection", ] ) _import_structure["models.bloom"].extend( [ "BloomForCausalLM", "BloomForQuestionAnswering", "BloomForSequenceClassification", "BloomForTokenClassification", "BloomModel", "BloomPreTrainedModel", ] ) _import_structure["models.bridgetower"].extend( [ "BridgeTowerForContrastiveLearning", "BridgeTowerForImageAndTextRetrieval", "BridgeTowerForMaskedLM", "BridgeTowerModel", "BridgeTowerPreTrainedModel", ] ) _import_structure["models.bros"].extend( [ "BrosForTokenClassification", "BrosModel", "BrosPreTrainedModel", "BrosProcessor", "BrosSpadeEEForTokenClassification", "BrosSpadeELForTokenClassification", ] ) _import_structure["models.camembert"].extend( [ "CamembertForCausalLM", "CamembertForMaskedLM", "CamembertForMultipleChoice", "CamembertForQuestionAnswering", "CamembertForSequenceClassification", "CamembertForTokenClassification", "CamembertModel", "CamembertPreTrainedModel", ] ) _import_structure["models.canine"].extend( [ "CanineForMultipleChoice", "CanineForQuestionAnswering", "CanineForSequenceClassification", "CanineForTokenClassification", "CanineLayer", "CanineModel", "CaninePreTrainedModel", "load_tf_weights_in_canine", ] ) _import_structure["models.chameleon"].extend( [ "ChameleonForConditionalGeneration", "ChameleonModel", "ChameleonPreTrainedModel", "ChameleonProcessor", "ChameleonVQVAE", ] ) _import_structure["models.chinese_clip"].extend( [ "ChineseCLIPModel", "ChineseCLIPPreTrainedModel", "ChineseCLIPTextModel", "ChineseCLIPVisionModel", ] ) _import_structure["models.clap"].extend( [ "ClapAudioModel", "ClapAudioModelWithProjection", "ClapFeatureExtractor", "ClapModel", "ClapPreTrainedModel", "ClapTextModel", "ClapTextModelWithProjection", ] ) _import_structure["models.clip"].extend( [ "CLIPForImageClassification", "CLIPModel", "CLIPPreTrainedModel", "CLIPTextModel", "CLIPTextModelWithProjection", "CLIPVisionModel", "CLIPVisionModelWithProjection", ] ) _import_structure["models.clipseg"].extend( [ "CLIPSegForImageSegmentation", "CLIPSegModel", "CLIPSegPreTrainedModel", "CLIPSegTextModel", "CLIPSegVisionModel", ] ) _import_structure["models.clvp"].extend( [ "ClvpDecoder", "ClvpEncoder", "ClvpForCausalLM", "ClvpModel", "ClvpModelForConditionalGeneration", "ClvpPreTrainedModel", ] ) _import_structure["models.codegen"].extend( [ "CodeGenForCausalLM", "CodeGenModel", 
"CodeGenPreTrainedModel", ] ) _import_structure["models.cohere"].extend(["CohereForCausalLM", "CohereModel", "CoherePreTrainedModel"]) _import_structure["models.conditional_detr"].extend( [ "ConditionalDetrForObjectDetection", "ConditionalDetrForSegmentation", "ConditionalDetrModel", "ConditionalDetrPreTrainedModel", ] ) _import_structure["models.convbert"].extend( [ "ConvBertForMaskedLM", "ConvBertForMultipleChoice", "ConvBertForQuestionAnswering", "ConvBertForSequenceClassification", "ConvBertForTokenClassification", "ConvBertLayer", "ConvBertModel", "ConvBertPreTrainedModel", "load_tf_weights_in_convbert", ] ) _import_structure["models.convnext"].extend( [ "ConvNextBackbone", "ConvNextForImageClassification", "ConvNextModel", "ConvNextPreTrainedModel", ] ) _import_structure["models.convnextv2"].extend( [ "ConvNextV2Backbone", "ConvNextV2ForImageClassification", "ConvNextV2Model", "ConvNextV2PreTrainedModel", ] ) _import_structure["models.cpmant"].extend( [ "CpmAntForCausalLM", "CpmAntModel", "CpmAntPreTrainedModel", ] ) _import_structure["models.ctrl"].extend( [ "CTRLForSequenceClassification", "CTRLLMHeadModel", "CTRLModel", "CTRLPreTrainedModel", ] ) _import_structure["models.cvt"].extend( [ "CvtForImageClassification", "CvtModel", "CvtPreTrainedModel", ] ) _import_structure["models.dac"].extend( [ "DacModel", "DacPreTrainedModel", ] ) _import_structure["models.data2vec"].extend( [ "Data2VecAudioForAudioFrameClassification", "Data2VecAudioForCTC", "Data2VecAudioForSequenceClassification", "Data2VecAudioForXVector", "Data2VecAudioModel", "Data2VecAudioPreTrainedModel", "Data2VecTextForCausalLM", "Data2VecTextForMaskedLM", "Data2VecTextForMultipleChoice", "Data2VecTextForQuestionAnswering", "Data2VecTextForSequenceClassification", "Data2VecTextForTokenClassification", "Data2VecTextModel", "Data2VecTextPreTrainedModel", "Data2VecVisionForImageClassification", "Data2VecVisionForSemanticSegmentation", "Data2VecVisionModel", "Data2VecVisionPreTrainedModel", ] ) _import_structure["models.dbrx"].extend( [ "DbrxForCausalLM", "DbrxModel", "DbrxPreTrainedModel", ] ) _import_structure["models.deberta"].extend( [ "DebertaForMaskedLM", "DebertaForQuestionAnswering", "DebertaForSequenceClassification", "DebertaForTokenClassification", "DebertaModel", "DebertaPreTrainedModel", ] ) _import_structure["models.deberta_v2"].extend( [ "DebertaV2ForMaskedLM", "DebertaV2ForMultipleChoice", "DebertaV2ForQuestionAnswering", "DebertaV2ForSequenceClassification", "DebertaV2ForTokenClassification", "DebertaV2Model", "DebertaV2PreTrainedModel", ] ) _import_structure["models.decision_transformer"].extend( [ "DecisionTransformerGPT2Model", "DecisionTransformerGPT2PreTrainedModel", "DecisionTransformerModel", "DecisionTransformerPreTrainedModel", ] ) _import_structure["models.deformable_detr"].extend( [ "DeformableDetrForObjectDetection", "DeformableDetrModel", "DeformableDetrPreTrainedModel", ] ) _import_structure["models.deit"].extend( [ "DeiTForImageClassification", "DeiTForImageClassificationWithTeacher", "DeiTForMaskedImageModeling", "DeiTModel", "DeiTPreTrainedModel", ] ) _import_structure["models.deprecated.deta"].extend( [ "DetaForObjectDetection", "DetaModel", "DetaPreTrainedModel", ] ) _import_structure["models.deprecated.efficientformer"].extend( [ "EfficientFormerForImageClassification", "EfficientFormerForImageClassificationWithTeacher", "EfficientFormerModel", "EfficientFormerPreTrainedModel", ] ) _import_structure["models.deprecated.ernie_m"].extend( [ "ErnieMForInformationExtraction", 
"ErnieMForMultipleChoice", "ErnieMForQuestionAnswering", "ErnieMForSequenceClassification", "ErnieMForTokenClassification", "ErnieMModel", "ErnieMPreTrainedModel", ] ) _import_structure["models.deprecated.gptsan_japanese"].extend( [ "GPTSanJapaneseForConditionalGeneration", "GPTSanJapaneseModel", "GPTSanJapanesePreTrainedModel", ] ) _import_structure["models.deprecated.graphormer"].extend( [ "GraphormerForGraphClassification", "GraphormerModel", "GraphormerPreTrainedModel", ] ) _import_structure["models.deprecated.jukebox"].extend( [ "JukeboxModel", "JukeboxPreTrainedModel", "JukeboxPrior", "JukeboxVQVAE", ] ) _import_structure["models.deprecated.mctct"].extend( [ "MCTCTForCTC", "MCTCTModel", "MCTCTPreTrainedModel", ] ) _import_structure["models.deprecated.mega"].extend( [ "MegaForCausalLM", "MegaForMaskedLM", "MegaForMultipleChoice", "MegaForQuestionAnswering", "MegaForSequenceClassification", "MegaForTokenClassification", "MegaModel", "MegaPreTrainedModel", ] ) _import_structure["models.deprecated.mmbt"].extend(["MMBTForClassification", "MMBTModel", "ModalEmbeddings"]) _import_structure["models.deprecated.nat"].extend( [ "NatBackbone", "NatForImageClassification", "NatModel", "NatPreTrainedModel", ] ) _import_structure["models.deprecated.nezha"].extend( [ "NezhaForMaskedLM", "NezhaForMultipleChoice", "NezhaForNextSentencePrediction", "NezhaForPreTraining", "NezhaForQuestionAnswering", "NezhaForSequenceClassification", "NezhaForTokenClassification", "NezhaModel", "NezhaPreTrainedModel", ] ) _import_structure["models.deprecated.open_llama"].extend( [ "OpenLlamaForCausalLM", "OpenLlamaForSequenceClassification", "OpenLlamaModel", "OpenLlamaPreTrainedModel", ] ) _import_structure["models.deprecated.qdqbert"].extend( [ "QDQBertForMaskedLM", "QDQBertForMultipleChoice", "QDQBertForNextSentencePrediction", "QDQBertForQuestionAnswering", "QDQBertForSequenceClassification", "QDQBertForTokenClassification", "QDQBertLayer", "QDQBertLMHeadModel", "QDQBertModel", "QDQBertPreTrainedModel", "load_tf_weights_in_qdqbert", ] ) _import_structure["models.deprecated.realm"].extend( [ "RealmEmbedder", "RealmForOpenQA", "RealmKnowledgeAugEncoder", "RealmPreTrainedModel", "RealmReader", "RealmRetriever", "RealmScorer", "load_tf_weights_in_realm", ] ) _import_structure["models.deprecated.retribert"].extend( [ "RetriBertModel", "RetriBertPreTrainedModel", ] ) _import_structure["models.deprecated.speech_to_text_2"].extend( ["Speech2Text2ForCausalLM", "Speech2Text2PreTrainedModel"] ) _import_structure["models.deprecated.trajectory_transformer"].extend( [ "TrajectoryTransformerModel", "TrajectoryTransformerPreTrainedModel", ] ) _import_structure["models.deprecated.transfo_xl"].extend( [ "AdaptiveEmbedding", "TransfoXLForSequenceClassification", "TransfoXLLMHeadModel", "TransfoXLModel", "TransfoXLPreTrainedModel", "load_tf_weights_in_transfo_xl", ] ) _import_structure["models.deprecated.tvlt"].extend( [ "TvltForAudioVisualClassification", "TvltForPreTraining", "TvltModel", "TvltPreTrainedModel", ] ) _import_structure["models.deprecated.van"].extend( [ "VanForImageClassification", "VanModel", "VanPreTrainedModel", ] ) _import_structure["models.deprecated.vit_hybrid"].extend( [ "ViTHybridForImageClassification", "ViTHybridModel", "ViTHybridPreTrainedModel", ] ) _import_structure["models.deprecated.xlm_prophetnet"].extend( [ "XLMProphetNetDecoder", "XLMProphetNetEncoder", "XLMProphetNetForCausalLM", "XLMProphetNetForConditionalGeneration", "XLMProphetNetModel", "XLMProphetNetPreTrainedModel", ] ) 
_import_structure["models.depth_anything"].extend( [ "DepthAnythingForDepthEstimation", "DepthAnythingPreTrainedModel", ] ) _import_structure["models.detr"].extend( [ "DetrForObjectDetection", "DetrForSegmentation", "DetrModel", "DetrPreTrainedModel", ] ) _import_structure["models.dinat"].extend( [ "DinatBackbone", "DinatForImageClassification", "DinatModel", "DinatPreTrainedModel", ] ) _import_structure["models.dinov2"].extend( [ "Dinov2Backbone", "Dinov2ForImageClassification", "Dinov2Model", "Dinov2PreTrainedModel", ] ) _import_structure["models.distilbert"].extend( [ "DistilBertForMaskedLM", "DistilBertForMultipleChoice", "DistilBertForQuestionAnswering", "DistilBertForSequenceClassification", "DistilBertForTokenClassification", "DistilBertModel", "DistilBertPreTrainedModel", ] ) _import_structure["models.donut"].extend( [ "DonutSwinModel", "DonutSwinPreTrainedModel", ] ) _import_structure["models.dpr"].extend( [ "DPRContextEncoder", "DPRPretrainedContextEncoder", "DPRPreTrainedModel", "DPRPretrainedQuestionEncoder", "DPRPretrainedReader", "DPRQuestionEncoder", "DPRReader", ] ) _import_structure["models.dpt"].extend( [ "DPTForDepthEstimation", "DPTForSemanticSegmentation", "DPTModel", "DPTPreTrainedModel", ] ) _import_structure["models.efficientnet"].extend( [ "EfficientNetForImageClassification", "EfficientNetModel", "EfficientNetPreTrainedModel", ] ) _import_structure["models.electra"].extend( [ "ElectraForCausalLM", "ElectraForMaskedLM", "ElectraForMultipleChoice", "ElectraForPreTraining", "ElectraForQuestionAnswering", "ElectraForSequenceClassification", "ElectraForTokenClassification", "ElectraModel", "ElectraPreTrainedModel", "load_tf_weights_in_electra", ] ) _import_structure["models.encodec"].extend( [ "EncodecModel", "EncodecPreTrainedModel", ] ) _import_structure["models.encoder_decoder"].append("EncoderDecoderModel") _import_structure["models.ernie"].extend( [ "ErnieForCausalLM", "ErnieForMaskedLM", "ErnieForMultipleChoice", "ErnieForNextSentencePrediction", "ErnieForPreTraining", "ErnieForQuestionAnswering", "ErnieForSequenceClassification", "ErnieForTokenClassification", "ErnieModel", "ErniePreTrainedModel", ] ) _import_structure["models.esm"].extend( [ "EsmFoldPreTrainedModel", "EsmForMaskedLM", "EsmForProteinFolding", "EsmForSequenceClassification", "EsmForTokenClassification", "EsmModel", "EsmPreTrainedModel", ] ) _import_structure["models.falcon"].extend( [ "FalconForCausalLM", "FalconForQuestionAnswering", "FalconForSequenceClassification", "FalconForTokenClassification", "FalconModel", "FalconPreTrainedModel", ] ) _import_structure["models.falcon_mamba"].extend( [ "FalconMambaForCausalLM", "FalconMambaModel", "FalconMambaPreTrainedModel", ] ) _import_structure["models.fastspeech2_conformer"].extend( [ "FastSpeech2ConformerHifiGan", "FastSpeech2ConformerModel", "FastSpeech2ConformerPreTrainedModel", "FastSpeech2ConformerWithHifiGan", ] ) _import_structure["models.flaubert"].extend( [ "FlaubertForMultipleChoice", "FlaubertForQuestionAnswering", "FlaubertForQuestionAnsweringSimple", "FlaubertForSequenceClassification", "FlaubertForTokenClassification", "FlaubertModel", "FlaubertPreTrainedModel", "FlaubertWithLMHeadModel", ] ) _import_structure["models.flava"].extend( [ "FlavaForPreTraining", "FlavaImageCodebook", "FlavaImageModel", "FlavaModel", "FlavaMultimodalModel", "FlavaPreTrainedModel", "FlavaTextModel", ] ) _import_structure["models.fnet"].extend( [ "FNetForMaskedLM", "FNetForMultipleChoice", "FNetForNextSentencePrediction", "FNetForPreTraining", 
"FNetForQuestionAnswering", "FNetForSequenceClassification", "FNetForTokenClassification", "FNetLayer", "FNetModel", "FNetPreTrainedModel", ] ) _import_structure["models.focalnet"].extend( [ "FocalNetBackbone", "FocalNetForImageClassification", "FocalNetForMaskedImageModeling", "FocalNetModel", "FocalNetPreTrainedModel", ] ) _import_structure["models.fsmt"].extend(["FSMTForConditionalGeneration", "FSMTModel", "PretrainedFSMTModel"]) _import_structure["models.funnel"].extend( [ "FunnelBaseModel", "FunnelForMaskedLM", "FunnelForMultipleChoice", "FunnelForPreTraining", "FunnelForQuestionAnswering", "FunnelForSequenceClassification", "FunnelForTokenClassification", "FunnelModel", "FunnelPreTrainedModel", "load_tf_weights_in_funnel", ] ) _import_structure["models.fuyu"].extend(["FuyuForCausalLM", "FuyuPreTrainedModel"]) _import_structure["models.gemma"].extend( [ "GemmaForCausalLM", "GemmaForSequenceClassification", "GemmaForTokenClassification", "GemmaModel", "GemmaPreTrainedModel", ] ) _import_structure["models.gemma2"].extend( [ "Gemma2ForCausalLM", "Gemma2ForSequenceClassification", "Gemma2ForTokenClassification", "Gemma2Model", "Gemma2PreTrainedModel", ] ) _import_structure["models.git"].extend( [ "GitForCausalLM", "GitModel", "GitPreTrainedModel", "GitVisionModel", ] ) _import_structure["models.glpn"].extend( [ "GLPNForDepthEstimation", "GLPNModel", "GLPNPreTrainedModel", ] ) _import_structure["models.gpt2"].extend( [ "GPT2DoubleHeadsModel", "GPT2ForQuestionAnswering", "GPT2ForSequenceClassification", "GPT2ForTokenClassification", "GPT2LMHeadModel", "GPT2Model", "GPT2PreTrainedModel", "load_tf_weights_in_gpt2", ] ) _import_structure["models.gpt_bigcode"].extend( [ "GPTBigCodeForCausalLM", "GPTBigCodeForSequenceClassification", "GPTBigCodeForTokenClassification", "GPTBigCodeModel", "GPTBigCodePreTrainedModel", ] ) _import_structure["models.gpt_neo"].extend( [ "GPTNeoForCausalLM", "GPTNeoForQuestionAnswering", "GPTNeoForSequenceClassification", "GPTNeoForTokenClassification", "GPTNeoModel", "GPTNeoPreTrainedModel", "load_tf_weights_in_gpt_neo", ] ) _import_structure["models.gpt_neox"].extend( [ "GPTNeoXForCausalLM", "GPTNeoXForQuestionAnswering", "GPTNeoXForSequenceClassification", "GPTNeoXForTokenClassification", "GPTNeoXLayer", "GPTNeoXModel", "GPTNeoXPreTrainedModel", ] ) _import_structure["models.gpt_neox_japanese"].extend( [ "GPTNeoXJapaneseForCausalLM", "GPTNeoXJapaneseLayer", "GPTNeoXJapaneseModel", "GPTNeoXJapanesePreTrainedModel", ] ) _import_structure["models.gptj"].extend( [ "GPTJForCausalLM", "GPTJForQuestionAnswering", "GPTJForSequenceClassification", "GPTJModel", "GPTJPreTrainedModel", ] ) _import_structure["models.granite"].extend( [ "GraniteForCausalLM", "GraniteModel", "GranitePreTrainedModel", ] ) _import_structure["models.grounding_dino"].extend( [ "GroundingDinoForObjectDetection", "GroundingDinoModel", "GroundingDinoPreTrainedModel", ] ) _import_structure["models.groupvit"].extend( [ "GroupViTModel", "GroupViTPreTrainedModel", "GroupViTTextModel", "GroupViTVisionModel", ] ) _import_structure["models.hiera"].extend( [ "HieraBackbone", "HieraForImageClassification", "HieraForPreTraining", "HieraModel", "HieraPreTrainedModel", ] ) _import_structure["models.hubert"].extend( [ "HubertForCTC", "HubertForSequenceClassification", "HubertModel", "HubertPreTrainedModel", ] ) _import_structure["models.ibert"].extend( [ "IBertForMaskedLM", "IBertForMultipleChoice", "IBertForQuestionAnswering", "IBertForSequenceClassification", "IBertForTokenClassification", "IBertModel", 
"IBertPreTrainedModel", ] ) _import_structure["models.idefics"].extend( [ "IdeficsForVisionText2Text", "IdeficsModel", "IdeficsPreTrainedModel", "IdeficsProcessor", ] ) _import_structure["models.idefics2"].extend( [ "Idefics2ForConditionalGeneration", "Idefics2Model", "Idefics2PreTrainedModel", "Idefics2Processor", ] ) _import_structure["models.imagegpt"].extend( [ "ImageGPTForCausalImageModeling", "ImageGPTForImageClassification", "ImageGPTModel", "ImageGPTPreTrainedModel", "load_tf_weights_in_imagegpt", ] ) _import_structure["models.informer"].extend( [ "InformerForPrediction", "InformerModel", "InformerPreTrainedModel", ] ) _import_structure["models.instructblip"].extend( [ "InstructBlipForConditionalGeneration", "InstructBlipPreTrainedModel", "InstructBlipQFormerModel", "InstructBlipVisionModel", ] ) _import_structure["models.instructblipvideo"].extend( [ "InstructBlipVideoForConditionalGeneration", "InstructBlipVideoPreTrainedModel", "InstructBlipVideoQFormerModel", "InstructBlipVideoVisionModel", ] ) _import_structure["models.jamba"].extend( [ "JambaForCausalLM", "JambaForSequenceClassification", "JambaModel", "JambaPreTrainedModel", ] ) _import_structure["models.jetmoe"].extend( [ "JetMoeForCausalLM", "JetMoeForSequenceClassification", "JetMoeModel", "JetMoePreTrainedModel", ] ) _import_structure["models.kosmos2"].extend( [ "Kosmos2ForConditionalGeneration", "Kosmos2Model", "Kosmos2PreTrainedModel", ] ) _import_structure["models.layoutlm"].extend( [ "LayoutLMForMaskedLM", "LayoutLMForQuestionAnswering", "LayoutLMForSequenceClassification", "LayoutLMForTokenClassification", "LayoutLMModel", "LayoutLMPreTrainedModel", ] ) _import_structure["models.layoutlmv2"].extend( [ "LayoutLMv2ForQuestionAnswering", "LayoutLMv2ForSequenceClassification", "LayoutLMv2ForTokenClassification", "LayoutLMv2Model", "LayoutLMv2PreTrainedModel", ] ) _import_structure["models.layoutlmv3"].extend( [ "LayoutLMv3ForQuestionAnswering", "LayoutLMv3ForSequenceClassification", "LayoutLMv3ForTokenClassification", "LayoutLMv3Model", "LayoutLMv3PreTrainedModel", ] ) _import_structure["models.led"].extend( [ "LEDForConditionalGeneration", "LEDForQuestionAnswering", "LEDForSequenceClassification", "LEDModel", "LEDPreTrainedModel", ] ) _import_structure["models.levit"].extend( [ "LevitForImageClassification", "LevitForImageClassificationWithTeacher", "LevitModel", "LevitPreTrainedModel", ] ) _import_structure["models.lilt"].extend( [ "LiltForQuestionAnswering", "LiltForSequenceClassification", "LiltForTokenClassification", "LiltModel", "LiltPreTrainedModel", ] ) _import_structure["models.llama"].extend( [ "LlamaForCausalLM", "LlamaForQuestionAnswering", "LlamaForSequenceClassification", "LlamaForTokenClassification", "LlamaModel", "LlamaPreTrainedModel", ] ) _import_structure["models.llava"].extend( [ "LlavaForConditionalGeneration", "LlavaPreTrainedModel", ] ) _import_structure["models.llava_next"].extend( [ "LlavaNextForConditionalGeneration", "LlavaNextPreTrainedModel", ] ) _import_structure["models.llava_next_video"].extend( [ "LlavaNextVideoForConditionalGeneration", "LlavaNextVideoPreTrainedModel", ] ) _import_structure["models.longformer"].extend( [ "LongformerForMaskedLM", "LongformerForMultipleChoice", "LongformerForQuestionAnswering", "LongformerForSequenceClassification", "LongformerForTokenClassification", "LongformerModel", "LongformerPreTrainedModel", "LongformerSelfAttention", ] ) _import_structure["models.longt5"].extend( [ "LongT5EncoderModel", "LongT5ForConditionalGeneration", "LongT5Model", 
"LongT5PreTrainedModel", ] ) _import_structure["models.luke"].extend( [ "LukeForEntityClassification", "LukeForEntityPairClassification", "LukeForEntitySpanClassification", "LukeForMaskedLM", "LukeForMultipleChoice", "LukeForQuestionAnswering", "LukeForSequenceClassification", "LukeForTokenClassification", "LukeModel", "LukePreTrainedModel", ] ) _import_structure["models.lxmert"].extend( [ "LxmertEncoder", "LxmertForPreTraining", "LxmertForQuestionAnswering", "LxmertModel", "LxmertPreTrainedModel", "LxmertVisualFeatureEncoder", "LxmertXLayer", ] ) _import_structure["models.m2m_100"].extend( [ "M2M100ForConditionalGeneration", "M2M100Model", "M2M100PreTrainedModel", ] ) _import_structure["models.mamba"].extend( [ "MambaForCausalLM", "MambaModel", "MambaPreTrainedModel", ] ) _import_structure["models.mamba2"].extend( [ "Mamba2ForCausalLM", "Mamba2Model", "Mamba2PreTrainedModel", ] ) _import_structure["models.marian"].extend(["MarianForCausalLM", "MarianModel", "MarianMTModel"]) _import_structure["models.markuplm"].extend( [ "MarkupLMForQuestionAnswering", "MarkupLMForSequenceClassification", "MarkupLMForTokenClassification", "MarkupLMModel", "MarkupLMPreTrainedModel", ] ) _import_structure["models.mask2former"].extend( [ "Mask2FormerForUniversalSegmentation", "Mask2FormerModel", "Mask2FormerPreTrainedModel", ] ) _import_structure["models.maskformer"].extend( [ "MaskFormerForInstanceSegmentation", "MaskFormerModel", "MaskFormerPreTrainedModel", "MaskFormerSwinBackbone", ] ) _import_structure["models.mbart"].extend( [ "MBartForCausalLM", "MBartForConditionalGeneration", "MBartForQuestionAnswering", "MBartForSequenceClassification", "MBartModel", "MBartPreTrainedModel", ] ) _import_structure["models.megatron_bert"].extend( [ "MegatronBertForCausalLM", "MegatronBertForMaskedLM", "MegatronBertForMultipleChoice", "MegatronBertForNextSentencePrediction", "MegatronBertForPreTraining", "MegatronBertForQuestionAnswering", "MegatronBertForSequenceClassification", "MegatronBertForTokenClassification", "MegatronBertModel", "MegatronBertPreTrainedModel", ] ) _import_structure["models.mgp_str"].extend( [ "MgpstrForSceneTextRecognition", "MgpstrModel", "MgpstrPreTrainedModel", ] ) _import_structure["models.mistral"].extend( [ "MistralForCausalLM", "MistralForSequenceClassification", "MistralForTokenClassification", "MistralModel", "MistralPreTrainedModel", ] ) _import_structure["models.mixtral"].extend( [ "MixtralForCausalLM", "MixtralForSequenceClassification", "MixtralForTokenClassification", "MixtralModel", "MixtralPreTrainedModel", ] ) _import_structure["models.mobilebert"].extend( [ "MobileBertForMaskedLM", "MobileBertForMultipleChoice", "MobileBertForNextSentencePrediction", "MobileBertForPreTraining", "MobileBertForQuestionAnswering", "MobileBertForSequenceClassification", "MobileBertForTokenClassification", "MobileBertLayer", "MobileBertModel", "MobileBertPreTrainedModel", "load_tf_weights_in_mobilebert", ] ) _import_structure["models.mobilenet_v1"].extend( [ "MobileNetV1ForImageClassification", "MobileNetV1Model", "MobileNetV1PreTrainedModel", "load_tf_weights_in_mobilenet_v1", ] ) _import_structure["models.mobilenet_v2"].extend( [ "MobileNetV2ForImageClassification", "MobileNetV2ForSemanticSegmentation", "MobileNetV2Model", "MobileNetV2PreTrainedModel", "load_tf_weights_in_mobilenet_v2", ] ) _import_structure["models.mobilevit"].extend( [ "MobileViTForImageClassification", "MobileViTForSemanticSegmentation", "MobileViTModel", "MobileViTPreTrainedModel", ] ) 
_import_structure["models.mobilevitv2"].extend( [ "MobileViTV2ForImageClassification", "MobileViTV2ForSemanticSegmentation", "MobileViTV2Model", "MobileViTV2PreTrainedModel", ] ) _import_structure["models.mpnet"].extend( [ "MPNetForMaskedLM", "MPNetForMultipleChoice", "MPNetForQuestionAnswering", "MPNetForSequenceClassification", "MPNetForTokenClassification", "MPNetLayer", "MPNetModel", "MPNetPreTrainedModel", ] ) _import_structure["models.mpt"].extend( [ "MptForCausalLM", "MptForQuestionAnswering", "MptForSequenceClassification", "MptForTokenClassification", "MptModel", "MptPreTrainedModel", ] ) _import_structure["models.mra"].extend( [ "MraForMaskedLM", "MraForMultipleChoice", "MraForQuestionAnswering", "MraForSequenceClassification", "MraForTokenClassification", "MraModel", "MraPreTrainedModel", ] ) _import_structure["models.mt5"].extend( [ "MT5EncoderModel", "MT5ForConditionalGeneration", "MT5ForQuestionAnswering", "MT5ForSequenceClassification", "MT5ForTokenClassification", "MT5Model", "MT5PreTrainedModel", ] ) _import_structure["models.musicgen"].extend( [ "MusicgenForCausalLM", "MusicgenForConditionalGeneration", "MusicgenModel", "MusicgenPreTrainedModel", "MusicgenProcessor", ] ) _import_structure["models.musicgen_melody"].extend( [ "MusicgenMelodyForCausalLM", "MusicgenMelodyForConditionalGeneration", "MusicgenMelodyModel", "MusicgenMelodyPreTrainedModel", ] ) _import_structure["models.mvp"].extend( [ "MvpForCausalLM", "MvpForConditionalGeneration", "MvpForQuestionAnswering", "MvpForSequenceClassification", "MvpModel", "MvpPreTrainedModel", ] ) _import_structure["models.nemotron"].extend( [ "NemotronForCausalLM", "NemotronForQuestionAnswering", "NemotronForSequenceClassification", "NemotronForTokenClassification", "NemotronModel", "NemotronPreTrainedModel", ] ) _import_structure["models.nllb_moe"].extend( [ "NllbMoeForConditionalGeneration", "NllbMoeModel", "NllbMoePreTrainedModel", "NllbMoeSparseMLP", "NllbMoeTop2Router", ] ) _import_structure["models.nystromformer"].extend( [ "NystromformerForMaskedLM", "NystromformerForMultipleChoice", "NystromformerForQuestionAnswering", "NystromformerForSequenceClassification", "NystromformerForTokenClassification", "NystromformerLayer", "NystromformerModel", "NystromformerPreTrainedModel", ] ) _import_structure["models.olmo"].extend( [ "OlmoForCausalLM", "OlmoModel", "OlmoPreTrainedModel", ] ) _import_structure["models.oneformer"].extend( [ "OneFormerForUniversalSegmentation", "OneFormerModel", "OneFormerPreTrainedModel", ] ) _import_structure["models.openai"].extend( [ "OpenAIGPTDoubleHeadsModel", "OpenAIGPTForSequenceClassification", "OpenAIGPTLMHeadModel", "OpenAIGPTModel", "OpenAIGPTPreTrainedModel", "load_tf_weights_in_openai_gpt", ] ) _import_structure["models.opt"].extend( [ "OPTForCausalLM", "OPTForQuestionAnswering", "OPTForSequenceClassification", "OPTModel", "OPTPreTrainedModel", ] ) _import_structure["models.owlv2"].extend( [ "Owlv2ForObjectDetection", "Owlv2Model", "Owlv2PreTrainedModel", "Owlv2TextModel", "Owlv2VisionModel", ] ) _import_structure["models.owlvit"].extend( [ "OwlViTForObjectDetection", "OwlViTModel", "OwlViTPreTrainedModel", "OwlViTTextModel", "OwlViTVisionModel", ] ) _import_structure["models.paligemma"].extend( [ "PaliGemmaForConditionalGeneration", "PaliGemmaPreTrainedModel", "PaliGemmaProcessor", ] ) _import_structure["models.patchtsmixer"].extend( [ "PatchTSMixerForPrediction", "PatchTSMixerForPretraining", "PatchTSMixerForRegression", "PatchTSMixerForTimeSeriesClassification", "PatchTSMixerModel", 
"PatchTSMixerPreTrainedModel", ] ) _import_structure["models.patchtst"].extend( [ "PatchTSTForClassification", "PatchTSTForPrediction", "PatchTSTForPretraining", "PatchTSTForRegression", "PatchTSTModel", "PatchTSTPreTrainedModel", ] ) _import_structure["models.pegasus"].extend( [ "PegasusForCausalLM", "PegasusForConditionalGeneration", "PegasusModel", "PegasusPreTrainedModel", ] ) _import_structure["models.pegasus_x"].extend( [ "PegasusXForConditionalGeneration", "PegasusXModel", "PegasusXPreTrainedModel", ] ) _import_structure["models.perceiver"].extend( [ "PerceiverForImageClassificationConvProcessing", "PerceiverForImageClassificationFourier", "PerceiverForImageClassificationLearned", "PerceiverForMaskedLM", "PerceiverForMultimodalAutoencoding", "PerceiverForOpticalFlow", "PerceiverForSequenceClassification", "PerceiverLayer", "PerceiverModel", "PerceiverPreTrainedModel", ] ) _import_structure["models.persimmon"].extend( [ "PersimmonForCausalLM", "PersimmonForSequenceClassification", "PersimmonForTokenClassification", "PersimmonModel", "PersimmonPreTrainedModel", ] ) _import_structure["models.phi"].extend( [ "PhiForCausalLM", "PhiForSequenceClassification", "PhiForTokenClassification", "PhiModel", "PhiPreTrainedModel", ] ) _import_structure["models.phi3"].extend( [ "Phi3ForCausalLM", "Phi3ForSequenceClassification", "Phi3ForTokenClassification", "Phi3Model", "Phi3PreTrainedModel", ] ) _import_structure["models.pix2struct"].extend( [ "Pix2StructForConditionalGeneration", "Pix2StructPreTrainedModel", "Pix2StructTextModel", "Pix2StructVisionModel", ] ) _import_structure["models.plbart"].extend( [ "PLBartForCausalLM", "PLBartForConditionalGeneration", "PLBartForSequenceClassification", "PLBartModel", "PLBartPreTrainedModel", ] ) _import_structure["models.poolformer"].extend( [ "PoolFormerForImageClassification", "PoolFormerModel", "PoolFormerPreTrainedModel", ] ) _import_structure["models.pop2piano"].extend( [ "Pop2PianoForConditionalGeneration", "Pop2PianoPreTrainedModel", ] ) _import_structure["models.prophetnet"].extend( [ "ProphetNetDecoder", "ProphetNetEncoder", "ProphetNetForCausalLM", "ProphetNetForConditionalGeneration", "ProphetNetModel", "ProphetNetPreTrainedModel", ] ) _import_structure["models.pvt"].extend( [ "PvtForImageClassification", "PvtModel", "PvtPreTrainedModel", ] ) _import_structure["models.pvt_v2"].extend( [ "PvtV2Backbone", "PvtV2ForImageClassification", "PvtV2Model", "PvtV2PreTrainedModel", ] ) _import_structure["models.qwen2"].extend( [ "Qwen2ForCausalLM", "Qwen2ForSequenceClassification", "Qwen2ForTokenClassification", "Qwen2Model", "Qwen2PreTrainedModel", ] ) _import_structure["models.qwen2_audio"].extend( [ "Qwen2AudioEncoder", "Qwen2AudioForConditionalGeneration", "Qwen2AudioPreTrainedModel", ] ) _import_structure["models.qwen2_moe"].extend( [ "Qwen2MoeForCausalLM", "Qwen2MoeForSequenceClassification", "Qwen2MoeForTokenClassification", "Qwen2MoeModel", "Qwen2MoePreTrainedModel", ] ) _import_structure["models.qwen2_vl"].extend( [ "Qwen2VLForConditionalGeneration", "Qwen2VLModel", "Qwen2VLPreTrainedModel", ] ) _import_structure["models.rag"].extend( [ "RagModel", "RagPreTrainedModel", "RagSequenceForGeneration", "RagTokenForGeneration", ] ) _import_structure["models.recurrent_gemma"].extend( [ "RecurrentGemmaForCausalLM", "RecurrentGemmaModel", "RecurrentGemmaPreTrainedModel", ] ) _import_structure["models.reformer"].extend( [ "ReformerAttention", "ReformerForMaskedLM", "ReformerForQuestionAnswering", "ReformerForSequenceClassification", "ReformerLayer", 
"ReformerModel", "ReformerModelWithLMHead", "ReformerPreTrainedModel", ] ) _import_structure["models.regnet"].extend( [ "RegNetForImageClassification", "RegNetModel", "RegNetPreTrainedModel", ] ) _import_structure["models.rembert"].extend( [ "RemBertForCausalLM", "RemBertForMaskedLM", "RemBertForMultipleChoice", "RemBertForQuestionAnswering", "RemBertForSequenceClassification", "RemBertForTokenClassification", "RemBertLayer", "RemBertModel", "RemBertPreTrainedModel", "load_tf_weights_in_rembert", ] ) _import_structure["models.resnet"].extend( [ "ResNetBackbone", "ResNetForImageClassification", "ResNetModel", "ResNetPreTrainedModel", ] ) _import_structure["models.roberta"].extend( [ "RobertaForCausalLM", "RobertaForMaskedLM", "RobertaForMultipleChoice", "RobertaForQuestionAnswering", "RobertaForSequenceClassification", "RobertaForTokenClassification", "RobertaModel", "RobertaPreTrainedModel", ] ) _import_structure["models.roberta_prelayernorm"].extend( [ "RobertaPreLayerNormForCausalLM", "RobertaPreLayerNormForMaskedLM", "RobertaPreLayerNormForMultipleChoice", "RobertaPreLayerNormForQuestionAnswering", "RobertaPreLayerNormForSequenceClassification", "RobertaPreLayerNormForTokenClassification", "RobertaPreLayerNormModel", "RobertaPreLayerNormPreTrainedModel", ] ) _import_structure["models.roc_bert"].extend( [ "RoCBertForCausalLM", "RoCBertForMaskedLM", "RoCBertForMultipleChoice", "RoCBertForPreTraining", "RoCBertForQuestionAnswering", "RoCBertForSequenceClassification", "RoCBertForTokenClassification", "RoCBertLayer", "RoCBertModel", "RoCBertPreTrainedModel", "load_tf_weights_in_roc_bert", ] ) _import_structure["models.roformer"].extend( [ "RoFormerForCausalLM", "RoFormerForMaskedLM", "RoFormerForMultipleChoice", "RoFormerForQuestionAnswering", "RoFormerForSequenceClassification", "RoFormerForTokenClassification", "RoFormerLayer", "RoFormerModel", "RoFormerPreTrainedModel", "load_tf_weights_in_roformer", ] ) _import_structure["models.rt_detr"].extend( [ "RTDetrForObjectDetection", "RTDetrModel", "RTDetrPreTrainedModel", "RTDetrResNetBackbone", "RTDetrResNetPreTrainedModel", ] ) _import_structure["models.rwkv"].extend( [ "RwkvForCausalLM", "RwkvModel", "RwkvPreTrainedModel", ] ) _import_structure["models.sam"].extend( [ "SamModel", "SamPreTrainedModel", ] ) _import_structure["models.seamless_m4t"].extend( [ "SeamlessM4TCodeHifiGan", "SeamlessM4TForSpeechToSpeech", "SeamlessM4TForSpeechToText", "SeamlessM4TForTextToSpeech", "SeamlessM4TForTextToText", "SeamlessM4THifiGan", "SeamlessM4TModel", "SeamlessM4TPreTrainedModel", "SeamlessM4TTextToUnitForConditionalGeneration", "SeamlessM4TTextToUnitModel", ] ) _import_structure["models.seamless_m4t_v2"].extend( [ "SeamlessM4Tv2ForSpeechToSpeech", "SeamlessM4Tv2ForSpeechToText", "SeamlessM4Tv2ForTextToSpeech", "SeamlessM4Tv2ForTextToText", "SeamlessM4Tv2Model", "SeamlessM4Tv2PreTrainedModel", ] ) _import_structure["models.segformer"].extend( [ "SegformerDecodeHead", "SegformerForImageClassification", "SegformerForSemanticSegmentation", "SegformerLayer", "SegformerModel", "SegformerPreTrainedModel", ] ) _import_structure["models.seggpt"].extend( [ "SegGptForImageSegmentation", "SegGptModel", "SegGptPreTrainedModel", ] ) _import_structure["models.sew"].extend( [ "SEWForCTC", "SEWForSequenceClassification", "SEWModel", "SEWPreTrainedModel", ] ) _import_structure["models.sew_d"].extend( [ "SEWDForCTC", "SEWDForSequenceClassification", "SEWDModel", "SEWDPreTrainedModel", ] ) _import_structure["models.siglip"].extend( [ "SiglipForImageClassification", 
"SiglipModel", "SiglipPreTrainedModel", "SiglipTextModel", "SiglipVisionModel", ] ) _import_structure["models.speech_encoder_decoder"].extend(["SpeechEncoderDecoderModel"]) _import_structure["models.speech_to_text"].extend( [ "Speech2TextForConditionalGeneration", "Speech2TextModel", "Speech2TextPreTrainedModel", ] ) _import_structure["models.speecht5"].extend( [ "SpeechT5ForSpeechToSpeech", "SpeechT5ForSpeechToText", "SpeechT5ForTextToSpeech", "SpeechT5HifiGan", "SpeechT5Model", "SpeechT5PreTrainedModel", ] ) _import_structure["models.splinter"].extend( [ "SplinterForPreTraining", "SplinterForQuestionAnswering", "SplinterLayer", "SplinterModel", "SplinterPreTrainedModel", ] ) _import_structure["models.squeezebert"].extend( [ "SqueezeBertForMaskedLM", "SqueezeBertForMultipleChoice", "SqueezeBertForQuestionAnswering", "SqueezeBertForSequenceClassification", "SqueezeBertForTokenClassification", "SqueezeBertModel", "SqueezeBertModule", "SqueezeBertPreTrainedModel", ] ) _import_structure["models.stablelm"].extend( [ "StableLmForCausalLM", "StableLmForSequenceClassification", "StableLmForTokenClassification", "StableLmModel", "StableLmPreTrainedModel", ] ) _import_structure["models.starcoder2"].extend( [ "Starcoder2ForCausalLM", "Starcoder2ForSequenceClassification", "Starcoder2ForTokenClassification", "Starcoder2Model", "Starcoder2PreTrainedModel", ] ) _import_structure["models.superpoint"].extend( [ "SuperPointForKeypointDetection", "SuperPointPreTrainedModel", ] ) _import_structure["models.swiftformer"].extend( [ "SwiftFormerForImageClassification", "SwiftFormerModel", "SwiftFormerPreTrainedModel", ] ) _import_structure["models.swin"].extend( [ "SwinBackbone", "SwinForImageClassification", "SwinForMaskedImageModeling", "SwinModel", "SwinPreTrainedModel", ] ) _import_structure["models.swin2sr"].extend( [ "Swin2SRForImageSuperResolution", "Swin2SRModel", "Swin2SRPreTrainedModel", ] ) _import_structure["models.swinv2"].extend( [ "Swinv2Backbone", "Swinv2ForImageClassification", "Swinv2ForMaskedImageModeling", "Swinv2Model", "Swinv2PreTrainedModel", ] ) _import_structure["models.switch_transformers"].extend( [ "SwitchTransformersEncoderModel", "SwitchTransformersForConditionalGeneration", "SwitchTransformersModel", "SwitchTransformersPreTrainedModel", "SwitchTransformersSparseMLP", "SwitchTransformersTop1Router", ] ) _import_structure["models.t5"].extend( [ "T5EncoderModel", "T5ForConditionalGeneration", "T5ForQuestionAnswering", "T5ForSequenceClassification", "T5ForTokenClassification", "T5Model", "T5PreTrainedModel", "load_tf_weights_in_t5", ] ) _import_structure["models.table_transformer"].extend( [ "TableTransformerForObjectDetection", "TableTransformerModel", "TableTransformerPreTrainedModel", ] ) _import_structure["models.tapas"].extend( [ "TapasForMaskedLM", "TapasForQuestionAnswering", "TapasForSequenceClassification", "TapasModel", "TapasPreTrainedModel", "load_tf_weights_in_tapas", ] ) _import_structure["models.time_series_transformer"].extend( [ "TimeSeriesTransformerForPrediction", "TimeSeriesTransformerModel", "TimeSeriesTransformerPreTrainedModel", ] ) _import_structure["models.timesformer"].extend( [ "TimesformerForVideoClassification", "TimesformerModel", "TimesformerPreTrainedModel", ] ) _import_structure["models.timm_backbone"].extend(["TimmBackbone"]) _import_structure["models.trocr"].extend( [ "TrOCRForCausalLM", "TrOCRPreTrainedModel", ] ) _import_structure["models.tvp"].extend( [ "TvpForVideoGrounding", "TvpModel", "TvpPreTrainedModel", ] ) 
_import_structure["models.udop"].extend( [ "UdopEncoderModel", "UdopForConditionalGeneration", "UdopModel", "UdopPreTrainedModel", ], ) _import_structure["models.umt5"].extend( [ "UMT5EncoderModel", "UMT5ForConditionalGeneration", "UMT5ForQuestionAnswering", "UMT5ForSequenceClassification", "UMT5ForTokenClassification", "UMT5Model", "UMT5PreTrainedModel", ] ) _import_structure["models.unispeech"].extend( [ "UniSpeechForCTC", "UniSpeechForPreTraining", "UniSpeechForSequenceClassification", "UniSpeechModel", "UniSpeechPreTrainedModel", ] ) _import_structure["models.unispeech_sat"].extend( [ "UniSpeechSatForAudioFrameClassification", "UniSpeechSatForCTC", "UniSpeechSatForPreTraining", "UniSpeechSatForSequenceClassification", "UniSpeechSatForXVector", "UniSpeechSatModel", "UniSpeechSatPreTrainedModel", ] ) _import_structure["models.univnet"].extend( [ "UnivNetModel", ] ) _import_structure["models.upernet"].extend( [ "UperNetForSemanticSegmentation", "UperNetPreTrainedModel", ] ) _import_structure["models.video_llava"].extend( [ "VideoLlavaForConditionalGeneration", "VideoLlavaPreTrainedModel", "VideoLlavaProcessor", ] ) _import_structure["models.videomae"].extend( [ "VideoMAEForPreTraining", "VideoMAEForVideoClassification", "VideoMAEModel", "VideoMAEPreTrainedModel", ] ) _import_structure["models.vilt"].extend( [ "ViltForImageAndTextRetrieval", "ViltForImagesAndTextClassification", "ViltForMaskedLM", "ViltForQuestionAnswering", "ViltForTokenClassification", "ViltLayer", "ViltModel", "ViltPreTrainedModel", ] ) _import_structure["models.vipllava"].extend( [ "VipLlavaForConditionalGeneration", "VipLlavaPreTrainedModel", ] ) _import_structure["models.vision_encoder_decoder"].extend(["VisionEncoderDecoderModel"]) _import_structure["models.vision_text_dual_encoder"].extend(["VisionTextDualEncoderModel"]) _import_structure["models.visual_bert"].extend( [ "VisualBertForMultipleChoice", "VisualBertForPreTraining", "VisualBertForQuestionAnswering", "VisualBertForRegionToPhraseAlignment", "VisualBertForVisualReasoning", "VisualBertLayer", "VisualBertModel", "VisualBertPreTrainedModel", ] ) _import_structure["models.vit"].extend( [ "ViTForImageClassification", "ViTForMaskedImageModeling", "ViTModel", "ViTPreTrainedModel", ] ) _import_structure["models.vit_mae"].extend( [ "ViTMAEForPreTraining", "ViTMAELayer", "ViTMAEModel", "ViTMAEPreTrainedModel", ] ) _import_structure["models.vit_msn"].extend( [ "ViTMSNForImageClassification", "ViTMSNModel", "ViTMSNPreTrainedModel", ] ) _import_structure["models.vitdet"].extend( [ "VitDetBackbone", "VitDetModel", "VitDetPreTrainedModel", ] ) _import_structure["models.vitmatte"].extend( [ "VitMatteForImageMatting", "VitMattePreTrainedModel", ] ) _import_structure["models.vits"].extend( [ "VitsModel", "VitsPreTrainedModel", ] ) _import_structure["models.vivit"].extend( [ "VivitForVideoClassification", "VivitModel", "VivitPreTrainedModel", ] ) _import_structure["models.wav2vec2"].extend( [ "Wav2Vec2ForAudioFrameClassification", "Wav2Vec2ForCTC", "Wav2Vec2ForMaskedLM", "Wav2Vec2ForPreTraining", "Wav2Vec2ForSequenceClassification", "Wav2Vec2ForXVector", "Wav2Vec2Model", "Wav2Vec2PreTrainedModel", ] ) _import_structure["models.wav2vec2_bert"].extend( [ "Wav2Vec2BertForAudioFrameClassification", "Wav2Vec2BertForCTC", "Wav2Vec2BertForSequenceClassification", "Wav2Vec2BertForXVector", "Wav2Vec2BertModel", "Wav2Vec2BertPreTrainedModel", ] ) _import_structure["models.wav2vec2_conformer"].extend( [ "Wav2Vec2ConformerForAudioFrameClassification", "Wav2Vec2ConformerForCTC", 
"Wav2Vec2ConformerForPreTraining", "Wav2Vec2ConformerForSequenceClassification", "Wav2Vec2ConformerForXVector", "Wav2Vec2ConformerModel", "Wav2Vec2ConformerPreTrainedModel", ] ) _import_structure["models.wavlm"].extend( [ "WavLMForAudioFrameClassification", "WavLMForCTC", "WavLMForSequenceClassification", "WavLMForXVector", "WavLMModel", "WavLMPreTrainedModel", ] ) _import_structure["models.whisper"].extend( [ "WhisperForAudioClassification", "WhisperForCausalLM", "WhisperForConditionalGeneration", "WhisperModel", "WhisperPreTrainedModel", ] ) _import_structure["models.x_clip"].extend( [ "XCLIPModel", "XCLIPPreTrainedModel", "XCLIPTextModel", "XCLIPVisionModel", ] ) _import_structure["models.xglm"].extend( [ "XGLMForCausalLM", "XGLMModel", "XGLMPreTrainedModel", ] ) _import_structure["models.xlm"].extend( [ "XLMForMultipleChoice", "XLMForQuestionAnswering", "XLMForQuestionAnsweringSimple", "XLMForSequenceClassification", "XLMForTokenClassification", "XLMModel", "XLMPreTrainedModel", "XLMWithLMHeadModel", ] ) _import_structure["models.xlm_roberta"].extend( [ "XLMRobertaForCausalLM", "XLMRobertaForMaskedLM", "XLMRobertaForMultipleChoice", "XLMRobertaForQuestionAnswering", "XLMRobertaForSequenceClassification", "XLMRobertaForTokenClassification", "XLMRobertaModel", "XLMRobertaPreTrainedModel", ] ) _import_structure["models.xlm_roberta_xl"].extend( [ "XLMRobertaXLForCausalLM", "XLMRobertaXLForMaskedLM", "XLMRobertaXLForMultipleChoice", "XLMRobertaXLForQuestionAnswering", "XLMRobertaXLForSequenceClassification", "XLMRobertaXLForTokenClassification", "XLMRobertaXLModel", "XLMRobertaXLPreTrainedModel", ] ) _import_structure["models.xlnet"].extend( [ "XLNetForMultipleChoice", "XLNetForQuestionAnswering", "XLNetForQuestionAnsweringSimple", "XLNetForSequenceClassification", "XLNetForTokenClassification", "XLNetLMHeadModel", "XLNetModel", "XLNetPreTrainedModel", "load_tf_weights_in_xlnet", ] ) _import_structure["models.xmod"].extend( [ "XmodForCausalLM", "XmodForMaskedLM", "XmodForMultipleChoice", "XmodForQuestionAnswering", "XmodForSequenceClassification", "XmodForTokenClassification", "XmodModel", "XmodPreTrainedModel", ] ) _import_structure["models.yolos"].extend( [ "YolosForObjectDetection", "YolosModel", "YolosPreTrainedModel", ] ) _import_structure["models.yoso"].extend( [ "YosoForMaskedLM", "YosoForMultipleChoice", "YosoForQuestionAnswering", "YosoForSequenceClassification", "YosoForTokenClassification", "YosoLayer", "YosoModel", "YosoPreTrainedModel", ] ) _import_structure["models.zoedepth"].extend( [ "ZoeDepthForDepthEstimation", "ZoeDepthPreTrainedModel", ] ) _import_structure["optimization"] = [ "Adafactor", "AdamW", "get_constant_schedule", "get_constant_schedule_with_warmup", "get_cosine_schedule_with_warmup", "get_cosine_with_hard_restarts_schedule_with_warmup", "get_inverse_sqrt_schedule", "get_linear_schedule_with_warmup", "get_polynomial_decay_schedule_with_warmup", "get_scheduler", "get_wsd_schedule", ] _import_structure["pytorch_utils"] = [ "Conv1D", "apply_chunking_to_forward", "prune_layer", ] _import_structure["sagemaker"] = [] _import_structure["time_series_utils"] = [] _import_structure["trainer"] = ["Trainer"] _import_structure["trainer_pt_utils"] = ["torch_distributed_zero_first"] _import_structure["trainer_seq2seq"] = ["Seq2SeqTrainer"] # TensorFlow-backed objects try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_tf_objects _import_structure["utils.dummy_tf_objects"] = [name for name in 
dir(dummy_tf_objects) if not name.startswith("_")] else: _import_structure["activations_tf"] = [] _import_structure["benchmark.benchmark_args_tf"] = ["TensorFlowBenchmarkArguments"] _import_structure["benchmark.benchmark_tf"] = ["TensorFlowBenchmark"] _import_structure["generation"].extend( [ "TFForcedBOSTokenLogitsProcessor", "TFForcedEOSTokenLogitsProcessor", "TFForceTokensLogitsProcessor", "TFGenerationMixin", "TFLogitsProcessor", "TFLogitsProcessorList", "TFLogitsWarper", "TFMinLengthLogitsProcessor", "TFNoBadWordsLogitsProcessor", "TFNoRepeatNGramLogitsProcessor", "TFRepetitionPenaltyLogitsProcessor", "TFSuppressTokensAtBeginLogitsProcessor", "TFSuppressTokensLogitsProcessor", "TFTemperatureLogitsWarper", "TFTopKLogitsWarper", "TFTopPLogitsWarper", ] ) _import_structure["keras_callbacks"] = ["KerasMetricCallback", "PushToHubCallback"] _import_structure["modeling_tf_outputs"] = [] _import_structure["modeling_tf_utils"] = [ "TFPreTrainedModel", "TFSequenceSummary", "TFSharedEmbeddings", "shape_list", ] # TensorFlow models structure _import_structure["models.albert"].extend( [ "TFAlbertForMaskedLM", "TFAlbertForMultipleChoice", "TFAlbertForPreTraining", "TFAlbertForQuestionAnswering", "TFAlbertForSequenceClassification", "TFAlbertForTokenClassification", "TFAlbertMainLayer", "TFAlbertModel", "TFAlbertPreTrainedModel", ] ) _import_structure["models.auto"].extend( [ "TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING", "TF_MODEL_FOR_CAUSAL_LM_MAPPING", "TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING", "TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING", "TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING", "TF_MODEL_FOR_MASKED_LM_MAPPING", "TF_MODEL_FOR_MASK_GENERATION_MAPPING", "TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING", "TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING", "TF_MODEL_FOR_PRETRAINING_MAPPING", "TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING", "TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING", "TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING", "TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING", "TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING", "TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING", "TF_MODEL_FOR_TEXT_ENCODING_MAPPING", "TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING", "TF_MODEL_FOR_VISION_2_SEQ_MAPPING", "TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING", "TF_MODEL_MAPPING", "TF_MODEL_WITH_LM_HEAD_MAPPING", "TFAutoModel", "TFAutoModelForAudioClassification", "TFAutoModelForCausalLM", "TFAutoModelForDocumentQuestionAnswering", "TFAutoModelForImageClassification", "TFAutoModelForMaskedImageModeling", "TFAutoModelForMaskedLM", "TFAutoModelForMaskGeneration", "TFAutoModelForMultipleChoice", "TFAutoModelForNextSentencePrediction", "TFAutoModelForPreTraining", "TFAutoModelForQuestionAnswering", "TFAutoModelForSemanticSegmentation", "TFAutoModelForSeq2SeqLM", "TFAutoModelForSequenceClassification", "TFAutoModelForSpeechSeq2Seq", "TFAutoModelForTableQuestionAnswering", "TFAutoModelForTextEncoding", "TFAutoModelForTokenClassification", "TFAutoModelForVision2Seq", "TFAutoModelForZeroShotImageClassification", "TFAutoModelWithLMHead", ] ) _import_structure["models.bart"].extend( [ "TFBartForConditionalGeneration", "TFBartForSequenceClassification", "TFBartModel", "TFBartPretrainedModel", ] ) _import_structure["models.bert"].extend( [ "TFBertEmbeddings", "TFBertForMaskedLM", "TFBertForMultipleChoice", "TFBertForNextSentencePrediction", "TFBertForPreTraining", "TFBertForQuestionAnswering", "TFBertForSequenceClassification", "TFBertForTokenClassification", "TFBertLMHeadModel", "TFBertMainLayer", "TFBertModel", "TFBertPreTrainedModel", ] ) 
_import_structure["models.blenderbot"].extend( [ "TFBlenderbotForConditionalGeneration", "TFBlenderbotModel", "TFBlenderbotPreTrainedModel", ] ) _import_structure["models.blenderbot_small"].extend( [ "TFBlenderbotSmallForConditionalGeneration", "TFBlenderbotSmallModel", "TFBlenderbotSmallPreTrainedModel", ] ) _import_structure["models.blip"].extend( [ "TFBlipForConditionalGeneration", "TFBlipForImageTextRetrieval", "TFBlipForQuestionAnswering", "TFBlipModel", "TFBlipPreTrainedModel", "TFBlipTextModel", "TFBlipVisionModel", ] ) _import_structure["models.camembert"].extend( [ "TFCamembertForCausalLM", "TFCamembertForMaskedLM", "TFCamembertForMultipleChoice", "TFCamembertForQuestionAnswering", "TFCamembertForSequenceClassification", "TFCamembertForTokenClassification", "TFCamembertModel", "TFCamembertPreTrainedModel", ] ) _import_structure["models.clip"].extend( [ "TFCLIPModel", "TFCLIPPreTrainedModel", "TFCLIPTextModel", "TFCLIPVisionModel", ] ) _import_structure["models.convbert"].extend( [ "TFConvBertForMaskedLM", "TFConvBertForMultipleChoice", "TFConvBertForQuestionAnswering", "TFConvBertForSequenceClassification", "TFConvBertForTokenClassification", "TFConvBertLayer", "TFConvBertModel", "TFConvBertPreTrainedModel", ] ) _import_structure["models.convnext"].extend( [ "TFConvNextForImageClassification", "TFConvNextModel", "TFConvNextPreTrainedModel", ] ) _import_structure["models.convnextv2"].extend( [ "TFConvNextV2ForImageClassification", "TFConvNextV2Model", "TFConvNextV2PreTrainedModel", ] ) _import_structure["models.ctrl"].extend( [ "TFCTRLForSequenceClassification", "TFCTRLLMHeadModel", "TFCTRLModel", "TFCTRLPreTrainedModel", ] ) _import_structure["models.cvt"].extend( [ "TFCvtForImageClassification", "TFCvtModel", "TFCvtPreTrainedModel", ] ) _import_structure["models.data2vec"].extend( [ "TFData2VecVisionForImageClassification", "TFData2VecVisionForSemanticSegmentation", "TFData2VecVisionModel", "TFData2VecVisionPreTrainedModel", ] ) _import_structure["models.deberta"].extend( [ "TFDebertaForMaskedLM", "TFDebertaForQuestionAnswering", "TFDebertaForSequenceClassification", "TFDebertaForTokenClassification", "TFDebertaModel", "TFDebertaPreTrainedModel", ] ) _import_structure["models.deberta_v2"].extend( [ "TFDebertaV2ForMaskedLM", "TFDebertaV2ForMultipleChoice", "TFDebertaV2ForQuestionAnswering", "TFDebertaV2ForSequenceClassification", "TFDebertaV2ForTokenClassification", "TFDebertaV2Model", "TFDebertaV2PreTrainedModel", ] ) _import_structure["models.deit"].extend( [ "TFDeiTForImageClassification", "TFDeiTForImageClassificationWithTeacher", "TFDeiTForMaskedImageModeling", "TFDeiTModel", "TFDeiTPreTrainedModel", ] ) _import_structure["models.deprecated.efficientformer"].extend( [ "TFEfficientFormerForImageClassification", "TFEfficientFormerForImageClassificationWithTeacher", "TFEfficientFormerModel", "TFEfficientFormerPreTrainedModel", ] ) _import_structure["models.deprecated.transfo_xl"].extend( [ "TFAdaptiveEmbedding", "TFTransfoXLForSequenceClassification", "TFTransfoXLLMHeadModel", "TFTransfoXLMainLayer", "TFTransfoXLModel", "TFTransfoXLPreTrainedModel", ] ) _import_structure["models.distilbert"].extend( [ "TFDistilBertForMaskedLM", "TFDistilBertForMultipleChoice", "TFDistilBertForQuestionAnswering", "TFDistilBertForSequenceClassification", "TFDistilBertForTokenClassification", "TFDistilBertMainLayer", "TFDistilBertModel", "TFDistilBertPreTrainedModel", ] ) _import_structure["models.dpr"].extend( [ "TFDPRContextEncoder", "TFDPRPretrainedContextEncoder", 
"TFDPRPretrainedQuestionEncoder", "TFDPRPretrainedReader", "TFDPRQuestionEncoder", "TFDPRReader", ] ) _import_structure["models.electra"].extend( [ "TFElectraForMaskedLM", "TFElectraForMultipleChoice", "TFElectraForPreTraining", "TFElectraForQuestionAnswering", "TFElectraForSequenceClassification", "TFElectraForTokenClassification", "TFElectraModel", "TFElectraPreTrainedModel", ] ) _import_structure["models.encoder_decoder"].append("TFEncoderDecoderModel") _import_structure["models.esm"].extend( [ "TFEsmForMaskedLM", "TFEsmForSequenceClassification", "TFEsmForTokenClassification", "TFEsmModel", "TFEsmPreTrainedModel", ] ) _import_structure["models.flaubert"].extend( [ "TFFlaubertForMultipleChoice", "TFFlaubertForQuestionAnsweringSimple", "TFFlaubertForSequenceClassification", "TFFlaubertForTokenClassification", "TFFlaubertModel", "TFFlaubertPreTrainedModel", "TFFlaubertWithLMHeadModel", ] ) _import_structure["models.funnel"].extend( [ "TFFunnelBaseModel", "TFFunnelForMaskedLM", "TFFunnelForMultipleChoice", "TFFunnelForPreTraining", "TFFunnelForQuestionAnswering", "TFFunnelForSequenceClassification", "TFFunnelForTokenClassification", "TFFunnelModel", "TFFunnelPreTrainedModel", ] ) _import_structure["models.gpt2"].extend( [ "TFGPT2DoubleHeadsModel", "TFGPT2ForSequenceClassification", "TFGPT2LMHeadModel", "TFGPT2MainLayer", "TFGPT2Model", "TFGPT2PreTrainedModel", ] ) _import_structure["models.gptj"].extend( [ "TFGPTJForCausalLM", "TFGPTJForQuestionAnswering", "TFGPTJForSequenceClassification", "TFGPTJModel", "TFGPTJPreTrainedModel", ] ) _import_structure["models.groupvit"].extend( [ "TFGroupViTModel", "TFGroupViTPreTrainedModel", "TFGroupViTTextModel", "TFGroupViTVisionModel", ] ) _import_structure["models.hubert"].extend( [ "TFHubertForCTC", "TFHubertModel", "TFHubertPreTrainedModel", ] ) _import_structure["models.idefics"].extend( [ "TFIdeficsForVisionText2Text", "TFIdeficsModel", "TFIdeficsPreTrainedModel", ] ) _import_structure["models.layoutlm"].extend( [ "TFLayoutLMForMaskedLM", "TFLayoutLMForQuestionAnswering", "TFLayoutLMForSequenceClassification", "TFLayoutLMForTokenClassification", "TFLayoutLMMainLayer", "TFLayoutLMModel", "TFLayoutLMPreTrainedModel", ] ) _import_structure["models.layoutlmv3"].extend( [ "TFLayoutLMv3ForQuestionAnswering", "TFLayoutLMv3ForSequenceClassification", "TFLayoutLMv3ForTokenClassification", "TFLayoutLMv3Model", "TFLayoutLMv3PreTrainedModel", ] ) _import_structure["models.led"].extend(["TFLEDForConditionalGeneration", "TFLEDModel", "TFLEDPreTrainedModel"]) _import_structure["models.longformer"].extend( [ "TFLongformerForMaskedLM", "TFLongformerForMultipleChoice", "TFLongformerForQuestionAnswering", "TFLongformerForSequenceClassification", "TFLongformerForTokenClassification", "TFLongformerModel", "TFLongformerPreTrainedModel", "TFLongformerSelfAttention", ] ) _import_structure["models.lxmert"].extend( [ "TFLxmertForPreTraining", "TFLxmertMainLayer", "TFLxmertModel", "TFLxmertPreTrainedModel", "TFLxmertVisualFeatureEncoder", ] ) _import_structure["models.marian"].extend(["TFMarianModel", "TFMarianMTModel", "TFMarianPreTrainedModel"]) _import_structure["models.mbart"].extend( ["TFMBartForConditionalGeneration", "TFMBartModel", "TFMBartPreTrainedModel"] ) _import_structure["models.mistral"].extend( ["TFMistralForCausalLM", "TFMistralForSequenceClassification", "TFMistralModel", "TFMistralPreTrainedModel"] ) _import_structure["models.mobilebert"].extend( [ "TFMobileBertForMaskedLM", "TFMobileBertForMultipleChoice", "TFMobileBertForNextSentencePrediction", 
"TFMobileBertForPreTraining", "TFMobileBertForQuestionAnswering", "TFMobileBertForSequenceClassification", "TFMobileBertForTokenClassification", "TFMobileBertMainLayer", "TFMobileBertModel", "TFMobileBertPreTrainedModel", ] ) _import_structure["models.mobilevit"].extend( [ "TFMobileViTForImageClassification", "TFMobileViTForSemanticSegmentation", "TFMobileViTModel", "TFMobileViTPreTrainedModel", ] ) _import_structure["models.mpnet"].extend( [ "TFMPNetForMaskedLM", "TFMPNetForMultipleChoice", "TFMPNetForQuestionAnswering", "TFMPNetForSequenceClassification", "TFMPNetForTokenClassification", "TFMPNetMainLayer", "TFMPNetModel", "TFMPNetPreTrainedModel", ] ) _import_structure["models.mt5"].extend(["TFMT5EncoderModel", "TFMT5ForConditionalGeneration", "TFMT5Model"]) _import_structure["models.openai"].extend( [ "TFOpenAIGPTDoubleHeadsModel", "TFOpenAIGPTForSequenceClassification", "TFOpenAIGPTLMHeadModel", "TFOpenAIGPTMainLayer", "TFOpenAIGPTModel", "TFOpenAIGPTPreTrainedModel", ] ) _import_structure["models.opt"].extend( [ "TFOPTForCausalLM", "TFOPTModel", "TFOPTPreTrainedModel", ] ) _import_structure["models.pegasus"].extend( [ "TFPegasusForConditionalGeneration", "TFPegasusModel", "TFPegasusPreTrainedModel", ] ) _import_structure["models.rag"].extend( [ "TFRagModel", "TFRagPreTrainedModel", "TFRagSequenceForGeneration", "TFRagTokenForGeneration", ] ) _import_structure["models.regnet"].extend( [ "TFRegNetForImageClassification", "TFRegNetModel", "TFRegNetPreTrainedModel", ] ) _import_structure["models.rembert"].extend( [ "TFRemBertForCausalLM", "TFRemBertForMaskedLM", "TFRemBertForMultipleChoice", "TFRemBertForQuestionAnswering", "TFRemBertForSequenceClassification", "TFRemBertForTokenClassification", "TFRemBertLayer", "TFRemBertModel", "TFRemBertPreTrainedModel", ] ) _import_structure["models.resnet"].extend( [ "TFResNetForImageClassification", "TFResNetModel", "TFResNetPreTrainedModel", ] ) _import_structure["models.roberta"].extend( [ "TFRobertaForCausalLM", "TFRobertaForMaskedLM", "TFRobertaForMultipleChoice", "TFRobertaForQuestionAnswering", "TFRobertaForSequenceClassification", "TFRobertaForTokenClassification", "TFRobertaMainLayer", "TFRobertaModel", "TFRobertaPreTrainedModel", ] ) _import_structure["models.roberta_prelayernorm"].extend( [ "TFRobertaPreLayerNormForCausalLM", "TFRobertaPreLayerNormForMaskedLM", "TFRobertaPreLayerNormForMultipleChoice", "TFRobertaPreLayerNormForQuestionAnswering", "TFRobertaPreLayerNormForSequenceClassification", "TFRobertaPreLayerNormForTokenClassification", "TFRobertaPreLayerNormMainLayer", "TFRobertaPreLayerNormModel", "TFRobertaPreLayerNormPreTrainedModel", ] ) _import_structure["models.roformer"].extend( [ "TFRoFormerForCausalLM", "TFRoFormerForMaskedLM", "TFRoFormerForMultipleChoice", "TFRoFormerForQuestionAnswering", "TFRoFormerForSequenceClassification", "TFRoFormerForTokenClassification", "TFRoFormerLayer", "TFRoFormerModel", "TFRoFormerPreTrainedModel", ] ) _import_structure["models.sam"].extend( [ "TFSamModel", "TFSamPreTrainedModel", ] ) _import_structure["models.segformer"].extend( [ "TFSegformerDecodeHead", "TFSegformerForImageClassification", "TFSegformerForSemanticSegmentation", "TFSegformerModel", "TFSegformerPreTrainedModel", ] ) _import_structure["models.speech_to_text"].extend( [ "TFSpeech2TextForConditionalGeneration", "TFSpeech2TextModel", "TFSpeech2TextPreTrainedModel", ] ) _import_structure["models.swiftformer"].extend( [ "TFSwiftFormerForImageClassification", "TFSwiftFormerModel", "TFSwiftFormerPreTrainedModel", ] ) 
_import_structure["models.swin"].extend( [ "TFSwinForImageClassification", "TFSwinForMaskedImageModeling", "TFSwinModel", "TFSwinPreTrainedModel", ] ) _import_structure["models.t5"].extend( [ "TFT5EncoderModel", "TFT5ForConditionalGeneration", "TFT5Model", "TFT5PreTrainedModel", ] ) _import_structure["models.tapas"].extend( [ "TFTapasForMaskedLM", "TFTapasForQuestionAnswering", "TFTapasForSequenceClassification", "TFTapasModel", "TFTapasPreTrainedModel", ] ) _import_structure["models.vision_encoder_decoder"].extend(["TFVisionEncoderDecoderModel"]) _import_structure["models.vision_text_dual_encoder"].extend(["TFVisionTextDualEncoderModel"]) _import_structure["models.vit"].extend( [ "TFViTForImageClassification", "TFViTModel", "TFViTPreTrainedModel", ] ) _import_structure["models.vit_mae"].extend( [ "TFViTMAEForPreTraining", "TFViTMAEModel", "TFViTMAEPreTrainedModel", ] ) _import_structure["models.wav2vec2"].extend( [ "TFWav2Vec2ForCTC", "TFWav2Vec2ForSequenceClassification", "TFWav2Vec2Model", "TFWav2Vec2PreTrainedModel", ] ) _import_structure["models.whisper"].extend( [ "TFWhisperForConditionalGeneration", "TFWhisperModel", "TFWhisperPreTrainedModel", ] ) _import_structure["models.xglm"].extend( [ "TFXGLMForCausalLM", "TFXGLMModel", "TFXGLMPreTrainedModel", ] ) _import_structure["models.xlm"].extend( [ "TFXLMForMultipleChoice", "TFXLMForQuestionAnsweringSimple", "TFXLMForSequenceClassification", "TFXLMForTokenClassification", "TFXLMMainLayer", "TFXLMModel", "TFXLMPreTrainedModel", "TFXLMWithLMHeadModel", ] ) _import_structure["models.xlm_roberta"].extend( [ "TFXLMRobertaForCausalLM", "TFXLMRobertaForMaskedLM", "TFXLMRobertaForMultipleChoice", "TFXLMRobertaForQuestionAnswering", "TFXLMRobertaForSequenceClassification", "TFXLMRobertaForTokenClassification", "TFXLMRobertaModel", "TFXLMRobertaPreTrainedModel", ] ) _import_structure["models.xlnet"].extend( [ "TFXLNetForMultipleChoice", "TFXLNetForQuestionAnsweringSimple", "TFXLNetForSequenceClassification", "TFXLNetForTokenClassification", "TFXLNetLMHeadModel", "TFXLNetMainLayer", "TFXLNetModel", "TFXLNetPreTrainedModel", ] ) _import_structure["optimization_tf"] = [ "AdamWeightDecay", "GradientAccumulator", "WarmUp", "create_optimizer", ] _import_structure["tf_utils"] = [] try: if not ( is_librosa_available() and is_essentia_available() and is_scipy_available() and is_torch_available() and is_pretty_midi_available() ): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import ( dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects, ) _import_structure["utils.dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects"] = [ name for name in dir(dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects) if not name.startswith("_") ] else: _import_structure["models.pop2piano"].append("Pop2PianoFeatureExtractor") _import_structure["models.pop2piano"].append("Pop2PianoTokenizer") _import_structure["models.pop2piano"].append("Pop2PianoProcessor") try: if not is_torchaudio_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import ( dummy_torchaudio_objects, ) _import_structure["utils.dummy_torchaudio_objects"] = [ name for name in dir(dummy_torchaudio_objects) if not name.startswith("_") ] else: _import_structure["models.musicgen_melody"].append("MusicgenMelodyFeatureExtractor") _import_structure["models.musicgen_melody"].append("MusicgenMelodyProcessor") # FLAX-backed objects try: if not is_flax_available(): raise 
OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils import dummy_flax_objects _import_structure["utils.dummy_flax_objects"] = [ name for name in dir(dummy_flax_objects) if not name.startswith("_") ] else: _import_structure["generation"].extend( [ "FlaxForcedBOSTokenLogitsProcessor", "FlaxForcedEOSTokenLogitsProcessor", "FlaxForceTokensLogitsProcessor", "FlaxGenerationMixin", "FlaxLogitsProcessor", "FlaxLogitsProcessorList", "FlaxLogitsWarper", "FlaxMinLengthLogitsProcessor", "FlaxTemperatureLogitsWarper", "FlaxSuppressTokensAtBeginLogitsProcessor", "FlaxSuppressTokensLogitsProcessor", "FlaxTopKLogitsWarper", "FlaxTopPLogitsWarper", "FlaxWhisperTimeStampLogitsProcessor", ] ) _import_structure["modeling_flax_outputs"] = [] _import_structure["modeling_flax_utils"] = ["FlaxPreTrainedModel"] _import_structure["models.albert"].extend( [ "FlaxAlbertForMaskedLM", "FlaxAlbertForMultipleChoice", "FlaxAlbertForPreTraining", "FlaxAlbertForQuestionAnswering", "FlaxAlbertForSequenceClassification", "FlaxAlbertForTokenClassification", "FlaxAlbertModel", "FlaxAlbertPreTrainedModel", ] ) _import_structure["models.auto"].extend( [ "FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING", "FLAX_MODEL_FOR_CAUSAL_LM_MAPPING", "FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING", "FLAX_MODEL_FOR_MASKED_LM_MAPPING", "FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING", "FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING", "FLAX_MODEL_FOR_PRETRAINING_MAPPING", "FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING", "FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING", "FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING", "FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING", "FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING", "FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING", "FLAX_MODEL_MAPPING", "FlaxAutoModel", "FlaxAutoModelForCausalLM", "FlaxAutoModelForImageClassification", "FlaxAutoModelForMaskedLM", "FlaxAutoModelForMultipleChoice", "FlaxAutoModelForNextSentencePrediction", "FlaxAutoModelForPreTraining", "FlaxAutoModelForQuestionAnswering", "FlaxAutoModelForSeq2SeqLM", "FlaxAutoModelForSequenceClassification", "FlaxAutoModelForSpeechSeq2Seq", "FlaxAutoModelForTokenClassification", "FlaxAutoModelForVision2Seq", ] ) # Flax models structure _import_structure["models.bart"].extend( [ "FlaxBartDecoderPreTrainedModel", "FlaxBartForCausalLM", "FlaxBartForConditionalGeneration", "FlaxBartForQuestionAnswering", "FlaxBartForSequenceClassification", "FlaxBartModel", "FlaxBartPreTrainedModel", ] ) _import_structure["models.beit"].extend( [ "FlaxBeitForImageClassification", "FlaxBeitForMaskedImageModeling", "FlaxBeitModel", "FlaxBeitPreTrainedModel", ] ) _import_structure["models.bert"].extend( [ "FlaxBertForCausalLM", "FlaxBertForMaskedLM", "FlaxBertForMultipleChoice", "FlaxBertForNextSentencePrediction", "FlaxBertForPreTraining", "FlaxBertForQuestionAnswering", "FlaxBertForSequenceClassification", "FlaxBertForTokenClassification", "FlaxBertModel", "FlaxBertPreTrainedModel", ] ) _import_structure["models.big_bird"].extend( [ "FlaxBigBirdForCausalLM", "FlaxBigBirdForMaskedLM", "FlaxBigBirdForMultipleChoice", "FlaxBigBirdForPreTraining", "FlaxBigBirdForQuestionAnswering", "FlaxBigBirdForSequenceClassification", "FlaxBigBirdForTokenClassification", "FlaxBigBirdModel", "FlaxBigBirdPreTrainedModel", ] ) _import_structure["models.blenderbot"].extend( [ "FlaxBlenderbotForConditionalGeneration", "FlaxBlenderbotModel", "FlaxBlenderbotPreTrainedModel", ] ) _import_structure["models.blenderbot_small"].extend( [ "FlaxBlenderbotSmallForConditionalGeneration", 
"FlaxBlenderbotSmallModel", "FlaxBlenderbotSmallPreTrainedModel", ] ) _import_structure["models.bloom"].extend( [ "FlaxBloomForCausalLM", "FlaxBloomModel", "FlaxBloomPreTrainedModel", ] ) _import_structure["models.clip"].extend( [ "FlaxCLIPModel", "FlaxCLIPPreTrainedModel", "FlaxCLIPTextModel", "FlaxCLIPTextPreTrainedModel", "FlaxCLIPTextModelWithProjection", "FlaxCLIPVisionModel", "FlaxCLIPVisionPreTrainedModel", ] ) _import_structure["models.dinov2"].extend( [ "FlaxDinov2Model", "FlaxDinov2ForImageClassification", "FlaxDinov2PreTrainedModel", ] ) _import_structure["models.distilbert"].extend( [ "FlaxDistilBertForMaskedLM", "FlaxDistilBertForMultipleChoice", "FlaxDistilBertForQuestionAnswering", "FlaxDistilBertForSequenceClassification", "FlaxDistilBertForTokenClassification", "FlaxDistilBertModel", "FlaxDistilBertPreTrainedModel", ] ) _import_structure["models.electra"].extend( [ "FlaxElectraForCausalLM", "FlaxElectraForMaskedLM", "FlaxElectraForMultipleChoice", "FlaxElectraForPreTraining", "FlaxElectraForQuestionAnswering", "FlaxElectraForSequenceClassification", "FlaxElectraForTokenClassification", "FlaxElectraModel", "FlaxElectraPreTrainedModel", ] ) _import_structure["models.encoder_decoder"].append("FlaxEncoderDecoderModel") _import_structure["models.gpt2"].extend(["FlaxGPT2LMHeadModel", "FlaxGPT2Model", "FlaxGPT2PreTrainedModel"]) _import_structure["models.gpt_neo"].extend( ["FlaxGPTNeoForCausalLM", "FlaxGPTNeoModel", "FlaxGPTNeoPreTrainedModel"] ) _import_structure["models.gptj"].extend(["FlaxGPTJForCausalLM", "FlaxGPTJModel", "FlaxGPTJPreTrainedModel"]) _import_structure["models.llama"].extend(["FlaxLlamaForCausalLM", "FlaxLlamaModel", "FlaxLlamaPreTrainedModel"]) _import_structure["models.gemma"].extend(["FlaxGemmaForCausalLM", "FlaxGemmaModel", "FlaxGemmaPreTrainedModel"]) _import_structure["models.longt5"].extend( [ "FlaxLongT5ForConditionalGeneration", "FlaxLongT5Model", "FlaxLongT5PreTrainedModel", ] ) _import_structure["models.marian"].extend( [ "FlaxMarianModel", "FlaxMarianMTModel", "FlaxMarianPreTrainedModel", ] ) _import_structure["models.mbart"].extend( [ "FlaxMBartForConditionalGeneration", "FlaxMBartForQuestionAnswering", "FlaxMBartForSequenceClassification", "FlaxMBartModel", "FlaxMBartPreTrainedModel", ] ) _import_structure["models.mistral"].extend( [ "FlaxMistralForCausalLM", "FlaxMistralModel", "FlaxMistralPreTrainedModel", ] ) _import_structure["models.mt5"].extend(["FlaxMT5EncoderModel", "FlaxMT5ForConditionalGeneration", "FlaxMT5Model"]) _import_structure["models.opt"].extend( [ "FlaxOPTForCausalLM", "FlaxOPTModel", "FlaxOPTPreTrainedModel", ] ) _import_structure["models.pegasus"].extend( [ "FlaxPegasusForConditionalGeneration", "FlaxPegasusModel", "FlaxPegasusPreTrainedModel", ] ) _import_structure["models.regnet"].extend( [ "FlaxRegNetForImageClassification", "FlaxRegNetModel", "FlaxRegNetPreTrainedModel", ] ) _import_structure["models.resnet"].extend( [ "FlaxResNetForImageClassification", "FlaxResNetModel", "FlaxResNetPreTrainedModel", ] ) _import_structure["models.roberta"].extend( [ "FlaxRobertaForCausalLM", "FlaxRobertaForMaskedLM", "FlaxRobertaForMultipleChoice", "FlaxRobertaForQuestionAnswering", "FlaxRobertaForSequenceClassification", "FlaxRobertaForTokenClassification", "FlaxRobertaModel", "FlaxRobertaPreTrainedModel", ] ) _import_structure["models.roberta_prelayernorm"].extend( [ "FlaxRobertaPreLayerNormForCausalLM", "FlaxRobertaPreLayerNormForMaskedLM", "FlaxRobertaPreLayerNormForMultipleChoice", "FlaxRobertaPreLayerNormForQuestionAnswering", 
"FlaxRobertaPreLayerNormForSequenceClassification", "FlaxRobertaPreLayerNormForTokenClassification", "FlaxRobertaPreLayerNormModel", "FlaxRobertaPreLayerNormPreTrainedModel", ] ) _import_structure["models.roformer"].extend( [ "FlaxRoFormerForMaskedLM", "FlaxRoFormerForMultipleChoice", "FlaxRoFormerForQuestionAnswering", "FlaxRoFormerForSequenceClassification", "FlaxRoFormerForTokenClassification", "FlaxRoFormerModel", "FlaxRoFormerPreTrainedModel", ] ) _import_structure["models.speech_encoder_decoder"].append("FlaxSpeechEncoderDecoderModel") _import_structure["models.t5"].extend( [ "FlaxT5EncoderModel", "FlaxT5ForConditionalGeneration", "FlaxT5Model", "FlaxT5PreTrainedModel", ] ) _import_structure["models.vision_encoder_decoder"].append("FlaxVisionEncoderDecoderModel") _import_structure["models.vision_text_dual_encoder"].extend(["FlaxVisionTextDualEncoderModel"]) _import_structure["models.vit"].extend(["FlaxViTForImageClassification", "FlaxViTModel", "FlaxViTPreTrainedModel"]) _import_structure["models.wav2vec2"].extend( [ "FlaxWav2Vec2ForCTC", "FlaxWav2Vec2ForPreTraining", "FlaxWav2Vec2Model", "FlaxWav2Vec2PreTrainedModel", ] ) _import_structure["models.whisper"].extend( [ "FlaxWhisperForConditionalGeneration", "FlaxWhisperModel", "FlaxWhisperPreTrainedModel", "FlaxWhisperForAudioClassification", ] ) _import_structure["models.xglm"].extend( [ "FlaxXGLMForCausalLM", "FlaxXGLMModel", "FlaxXGLMPreTrainedModel", ] ) _import_structure["models.xlm_roberta"].extend( [ "FlaxXLMRobertaForMaskedLM", "FlaxXLMRobertaForMultipleChoice", "FlaxXLMRobertaForQuestionAnswering", "FlaxXLMRobertaForSequenceClassification", "FlaxXLMRobertaForTokenClassification", "FlaxXLMRobertaModel", "FlaxXLMRobertaForCausalLM", "FlaxXLMRobertaPreTrainedModel", ] ) # Direct imports for type-checking if TYPE_CHECKING: # Configuration # Agents from .agents import ( Agent, CodeAgent, HfApiEngine, PipelineTool, ReactAgent, ReactCodeAgent, ReactJsonAgent, Tool, Toolbox, ToolCollection, TransformersEngine, launch_gradio_demo, load_tool, stream_to_gradio, ) from .configuration_utils import PretrainedConfig # Data from .data import ( DataProcessor, InputExample, InputFeatures, SingleSentenceClassificationProcessor, SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, glue_compute_metrics, glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels, squad_convert_examples_to_features, xnli_compute_metrics, xnli_output_modes, xnli_processors, xnli_tasks_num_labels, ) from .data.data_collator import ( DataCollator, DataCollatorForLanguageModeling, DataCollatorForPermutationLanguageModeling, DataCollatorForSeq2Seq, DataCollatorForSOP, DataCollatorForTokenClassification, DataCollatorForWholeWordMask, DataCollatorWithFlattening, DataCollatorWithPadding, DefaultDataCollator, default_data_collator, ) from .feature_extraction_sequence_utils import SequenceFeatureExtractor # Feature Extractor from .feature_extraction_utils import BatchFeature, FeatureExtractionMixin # Generation from .generation import GenerationConfig, TextIteratorStreamer, TextStreamer, WatermarkingConfig from .hf_argparser import HfArgumentParser # Integrations from .integrations import ( is_clearml_available, is_comet_available, is_dvclive_available, is_neptune_available, is_optuna_available, is_ray_available, is_ray_tune_available, is_sigopt_available, is_tensorboard_available, is_wandb_available, ) # Model Cards from .modelcard import ModelCard # TF 2.0 <=> PyTorch conversion utilities from .modeling_tf_pytorch_utils import 
( convert_tf_weight_name_to_pt_weight_name, load_pytorch_checkpoint_in_tf2_model, load_pytorch_model_in_tf2_model, load_pytorch_weights_in_tf2_model, load_tf2_checkpoint_in_pytorch_model, load_tf2_model_in_pytorch_model, load_tf2_weights_in_pytorch_model, ) from .models.albert import AlbertConfig from .models.align import ( AlignConfig, AlignProcessor, AlignTextConfig, AlignVisionConfig, ) from .models.altclip import ( AltCLIPConfig, AltCLIPProcessor, AltCLIPTextConfig, AltCLIPVisionConfig, ) from .models.audio_spectrogram_transformer import ( ASTConfig, ASTFeatureExtractor, ) from .models.auto import ( CONFIG_MAPPING, FEATURE_EXTRACTOR_MAPPING, IMAGE_PROCESSOR_MAPPING, MODEL_NAMES_MAPPING, PROCESSOR_MAPPING, TOKENIZER_MAPPING, AutoConfig, AutoFeatureExtractor, AutoImageProcessor, AutoProcessor, AutoTokenizer, ) from .models.autoformer import ( AutoformerConfig, ) from .models.bark import ( BarkCoarseConfig, BarkConfig, BarkFineConfig, BarkProcessor, BarkSemanticConfig, ) from .models.bart import BartConfig, BartTokenizer from .models.beit import BeitConfig from .models.bert import ( BasicTokenizer, BertConfig, BertTokenizer, WordpieceTokenizer, ) from .models.bert_generation import BertGenerationConfig from .models.bert_japanese import ( BertJapaneseTokenizer, CharacterTokenizer, MecabTokenizer, ) from .models.bertweet import BertweetTokenizer from .models.big_bird import BigBirdConfig from .models.bigbird_pegasus import ( BigBirdPegasusConfig, ) from .models.biogpt import ( BioGptConfig, BioGptTokenizer, ) from .models.bit import BitConfig from .models.blenderbot import ( BlenderbotConfig, BlenderbotTokenizer, ) from .models.blenderbot_small import ( BlenderbotSmallConfig, BlenderbotSmallTokenizer, ) from .models.blip import ( BlipConfig, BlipProcessor, BlipTextConfig, BlipVisionConfig, ) from .models.blip_2 import ( Blip2Config, Blip2Processor, Blip2QFormerConfig, Blip2VisionConfig, ) from .models.bloom import BloomConfig from .models.bridgetower import ( BridgeTowerConfig, BridgeTowerProcessor, BridgeTowerTextConfig, BridgeTowerVisionConfig, ) from .models.bros import ( BrosConfig, BrosProcessor, ) from .models.byt5 import ByT5Tokenizer from .models.camembert import ( CamembertConfig, ) from .models.canine import ( CanineConfig, CanineTokenizer, ) from .models.chameleon import ( ChameleonConfig, ChameleonProcessor, ChameleonVQVAEConfig, ) from .models.chinese_clip import ( ChineseCLIPConfig, ChineseCLIPProcessor, ChineseCLIPTextConfig, ChineseCLIPVisionConfig, ) from .models.clap import ( ClapAudioConfig, ClapConfig, ClapProcessor, ClapTextConfig, ) from .models.clip import ( CLIPConfig, CLIPProcessor, CLIPTextConfig, CLIPTokenizer, CLIPVisionConfig, ) from .models.clipseg import ( CLIPSegConfig, CLIPSegProcessor, CLIPSegTextConfig, CLIPSegVisionConfig, ) from .models.clvp import ( ClvpConfig, ClvpDecoderConfig, ClvpEncoderConfig, ClvpFeatureExtractor, ClvpProcessor, ClvpTokenizer, ) from .models.codegen import ( CodeGenConfig, CodeGenTokenizer, ) from .models.cohere import CohereConfig from .models.conditional_detr import ( ConditionalDetrConfig, ) from .models.convbert import ( ConvBertConfig, ConvBertTokenizer, ) from .models.convnext import ConvNextConfig from .models.convnextv2 import ( ConvNextV2Config, ) from .models.cpmant import ( CpmAntConfig, CpmAntTokenizer, ) from .models.ctrl import ( CTRLConfig, CTRLTokenizer, ) from .models.cvt import CvtConfig from .models.dac import ( DacConfig, DacFeatureExtractor, ) from .models.data2vec import ( Data2VecAudioConfig, 
Data2VecTextConfig, Data2VecVisionConfig, ) from .models.dbrx import DbrxConfig from .models.deberta import ( DebertaConfig, DebertaTokenizer, ) from .models.deberta_v2 import ( DebertaV2Config, ) from .models.decision_transformer import ( DecisionTransformerConfig, ) from .models.deformable_detr import ( DeformableDetrConfig, ) from .models.deit import DeiTConfig from .models.deprecated.deta import DetaConfig from .models.deprecated.efficientformer import ( EfficientFormerConfig, ) from .models.deprecated.ernie_m import ErnieMConfig from .models.deprecated.gptsan_japanese import ( GPTSanJapaneseConfig, GPTSanJapaneseTokenizer, ) from .models.deprecated.graphormer import GraphormerConfig from .models.deprecated.jukebox import ( JukeboxConfig, JukeboxPriorConfig, JukeboxTokenizer, JukeboxVQVAEConfig, ) from .models.deprecated.mctct import ( MCTCTConfig, MCTCTFeatureExtractor, MCTCTProcessor, ) from .models.deprecated.mega import MegaConfig from .models.deprecated.mmbt import MMBTConfig from .models.deprecated.nat import NatConfig from .models.deprecated.nezha import NezhaConfig from .models.deprecated.open_llama import ( OpenLlamaConfig, ) from .models.deprecated.qdqbert import QDQBertConfig from .models.deprecated.realm import ( RealmConfig, RealmTokenizer, ) from .models.deprecated.retribert import ( RetriBertConfig, RetriBertTokenizer, ) from .models.deprecated.speech_to_text_2 import ( Speech2Text2Config, Speech2Text2Processor, Speech2Text2Tokenizer, ) from .models.deprecated.tapex import TapexTokenizer from .models.deprecated.trajectory_transformer import ( TrajectoryTransformerConfig, ) from .models.deprecated.transfo_xl import ( TransfoXLConfig, TransfoXLCorpus, TransfoXLTokenizer, ) from .models.deprecated.tvlt import ( TvltConfig, TvltFeatureExtractor, TvltProcessor, ) from .models.deprecated.van import VanConfig from .models.deprecated.vit_hybrid import ( ViTHybridConfig, ) from .models.deprecated.xlm_prophetnet import ( XLMProphetNetConfig, ) from .models.depth_anything import DepthAnythingConfig from .models.detr import DetrConfig from .models.dinat import DinatConfig from .models.dinov2 import Dinov2Config from .models.distilbert import ( DistilBertConfig, DistilBertTokenizer, ) from .models.donut import ( DonutProcessor, DonutSwinConfig, ) from .models.dpr import ( DPRConfig, DPRContextEncoderTokenizer, DPRQuestionEncoderTokenizer, DPRReaderOutput, DPRReaderTokenizer, ) from .models.dpt import DPTConfig from .models.efficientnet import ( EfficientNetConfig, ) from .models.electra import ( ElectraConfig, ElectraTokenizer, ) from .models.encodec import ( EncodecConfig, EncodecFeatureExtractor, ) from .models.encoder_decoder import EncoderDecoderConfig from .models.ernie import ErnieConfig from .models.esm import EsmConfig, EsmTokenizer from .models.falcon import FalconConfig from .models.falcon_mamba import FalconMambaConfig from .models.fastspeech2_conformer import ( FastSpeech2ConformerConfig, FastSpeech2ConformerHifiGanConfig, FastSpeech2ConformerTokenizer, FastSpeech2ConformerWithHifiGanConfig, ) from .models.flaubert import FlaubertConfig, FlaubertTokenizer from .models.flava import ( FlavaConfig, FlavaImageCodebookConfig, FlavaImageConfig, FlavaMultimodalConfig, FlavaTextConfig, ) from .models.fnet import FNetConfig from .models.focalnet import FocalNetConfig from .models.fsmt import ( FSMTConfig, FSMTTokenizer, ) from .models.funnel import ( FunnelConfig, FunnelTokenizer, ) from .models.fuyu import FuyuConfig from .models.gemma import GemmaConfig from .models.gemma2 
import Gemma2Config from .models.git import ( GitConfig, GitProcessor, GitVisionConfig, ) from .models.glpn import GLPNConfig from .models.gpt2 import ( GPT2Config, GPT2Tokenizer, ) from .models.gpt_bigcode import ( GPTBigCodeConfig, ) from .models.gpt_neo import GPTNeoConfig from .models.gpt_neox import GPTNeoXConfig from .models.gpt_neox_japanese import ( GPTNeoXJapaneseConfig, ) from .models.gptj import GPTJConfig from .models.granite import GraniteConfig from .models.grounding_dino import ( GroundingDinoConfig, GroundingDinoProcessor, ) from .models.groupvit import ( GroupViTConfig, GroupViTTextConfig, GroupViTVisionConfig, ) from .models.herbert import HerbertTokenizer from .models.hiera import HieraConfig from .models.hubert import HubertConfig from .models.ibert import IBertConfig from .models.idefics import ( IdeficsConfig, ) from .models.idefics2 import Idefics2Config from .models.imagegpt import ImageGPTConfig from .models.informer import InformerConfig from .models.instructblip import ( InstructBlipConfig, InstructBlipProcessor, InstructBlipQFormerConfig, InstructBlipVisionConfig, ) from .models.instructblipvideo import ( InstructBlipVideoConfig, InstructBlipVideoProcessor, InstructBlipVideoQFormerConfig, InstructBlipVideoVisionConfig, ) from .models.jamba import JambaConfig from .models.jetmoe import JetMoeConfig from .models.kosmos2 import ( Kosmos2Config, Kosmos2Processor, ) from .models.layoutlm import ( LayoutLMConfig, LayoutLMTokenizer, ) from .models.layoutlmv2 import ( LayoutLMv2Config, LayoutLMv2FeatureExtractor, LayoutLMv2ImageProcessor, LayoutLMv2Processor, LayoutLMv2Tokenizer, ) from .models.layoutlmv3 import ( LayoutLMv3Config, LayoutLMv3FeatureExtractor, LayoutLMv3ImageProcessor, LayoutLMv3Processor, LayoutLMv3Tokenizer, ) from .models.layoutxlm import LayoutXLMProcessor from .models.led import LEDConfig, LEDTokenizer from .models.levit import LevitConfig from .models.lilt import LiltConfig from .models.llama import LlamaConfig from .models.llava import ( LlavaConfig, LlavaProcessor, ) from .models.llava_next import ( LlavaNextConfig, LlavaNextProcessor, ) from .models.llava_next_video import ( LlavaNextVideoConfig, LlavaNextVideoProcessor, ) from .models.longformer import ( LongformerConfig, LongformerTokenizer, ) from .models.longt5 import LongT5Config from .models.luke import ( LukeConfig, LukeTokenizer, ) from .models.lxmert import ( LxmertConfig, LxmertTokenizer, ) from .models.m2m_100 import M2M100Config from .models.mamba import MambaConfig from .models.mamba2 import Mamba2Config from .models.marian import MarianConfig from .models.markuplm import ( MarkupLMConfig, MarkupLMFeatureExtractor, MarkupLMProcessor, MarkupLMTokenizer, ) from .models.mask2former import ( Mask2FormerConfig, ) from .models.maskformer import ( MaskFormerConfig, MaskFormerSwinConfig, ) from .models.mbart import MBartConfig from .models.megatron_bert import ( MegatronBertConfig, ) from .models.mgp_str import ( MgpstrConfig, MgpstrProcessor, MgpstrTokenizer, ) from .models.mistral import MistralConfig from .models.mixtral import MixtralConfig from .models.mobilebert import ( MobileBertConfig, MobileBertTokenizer, ) from .models.mobilenet_v1 import ( MobileNetV1Config, ) from .models.mobilenet_v2 import ( MobileNetV2Config, ) from .models.mobilevit import ( MobileViTConfig, ) from .models.mobilevitv2 import ( MobileViTV2Config, ) from .models.mpnet import ( MPNetConfig, MPNetTokenizer, ) from .models.mpt import MptConfig from .models.mra import MraConfig from .models.mt5 import MT5Config 
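# Reminder: this entire branch runs only when `typing.TYPE_CHECKING` is true, i.e. for static type
# checkers and IDEs. At runtime the same symbols are resolved lazily through `_import_structure`
# instead, so listing them twice keeps editor support without paying the import cost up front.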
from .models.musicgen import ( MusicgenConfig, MusicgenDecoderConfig, ) from .models.musicgen_melody import ( MusicgenMelodyConfig, MusicgenMelodyDecoderConfig, ) from .models.mvp import MvpConfig, MvpTokenizer from .models.nemotron import NemotronConfig from .models.nllb_moe import NllbMoeConfig from .models.nougat import NougatProcessor from .models.nystromformer import ( NystromformerConfig, ) from .models.olmo import OlmoConfig from .models.oneformer import ( OneFormerConfig, OneFormerProcessor, ) from .models.openai import ( OpenAIGPTConfig, OpenAIGPTTokenizer, ) from .models.opt import OPTConfig from .models.owlv2 import ( Owlv2Config, Owlv2Processor, Owlv2TextConfig, Owlv2VisionConfig, ) from .models.owlvit import ( OwlViTConfig, OwlViTProcessor, OwlViTTextConfig, OwlViTVisionConfig, ) from .models.paligemma import ( PaliGemmaConfig, ) from .models.patchtsmixer import ( PatchTSMixerConfig, ) from .models.patchtst import PatchTSTConfig from .models.pegasus import ( PegasusConfig, PegasusTokenizer, ) from .models.pegasus_x import ( PegasusXConfig, ) from .models.perceiver import ( PerceiverConfig, PerceiverTokenizer, ) from .models.persimmon import ( PersimmonConfig, ) from .models.phi import PhiConfig from .models.phi3 import Phi3Config from .models.phobert import PhobertTokenizer from .models.pix2struct import ( Pix2StructConfig, Pix2StructProcessor, Pix2StructTextConfig, Pix2StructVisionConfig, ) from .models.plbart import PLBartConfig from .models.poolformer import ( PoolFormerConfig, ) from .models.pop2piano import ( Pop2PianoConfig, ) from .models.prophetnet import ( ProphetNetConfig, ProphetNetTokenizer, ) from .models.pvt import PvtConfig from .models.pvt_v2 import PvtV2Config from .models.qwen2 import Qwen2Config, Qwen2Tokenizer from .models.qwen2_audio import ( Qwen2AudioConfig, Qwen2AudioEncoderConfig, Qwen2AudioProcessor, ) from .models.qwen2_moe import Qwen2MoeConfig from .models.qwen2_vl import ( Qwen2VLConfig, Qwen2VLProcessor, ) from .models.rag import RagConfig, RagRetriever, RagTokenizer from .models.recurrent_gemma import RecurrentGemmaConfig from .models.reformer import ReformerConfig from .models.regnet import RegNetConfig from .models.rembert import RemBertConfig from .models.resnet import ResNetConfig from .models.roberta import ( RobertaConfig, RobertaTokenizer, ) from .models.roberta_prelayernorm import ( RobertaPreLayerNormConfig, ) from .models.roc_bert import ( RoCBertConfig, RoCBertTokenizer, ) from .models.roformer import ( RoFormerConfig, RoFormerTokenizer, ) from .models.rt_detr import ( RTDetrConfig, RTDetrResNetConfig, ) from .models.rwkv import RwkvConfig from .models.sam import ( SamConfig, SamMaskDecoderConfig, SamProcessor, SamPromptEncoderConfig, SamVisionConfig, ) from .models.seamless_m4t import ( SeamlessM4TConfig, SeamlessM4TFeatureExtractor, SeamlessM4TProcessor, ) from .models.seamless_m4t_v2 import ( SeamlessM4Tv2Config, ) from .models.segformer import SegformerConfig from .models.seggpt import SegGptConfig from .models.sew import SEWConfig from .models.sew_d import SEWDConfig from .models.siglip import ( SiglipConfig, SiglipProcessor, SiglipTextConfig, SiglipVisionConfig, ) from .models.speech_encoder_decoder import SpeechEncoderDecoderConfig from .models.speech_to_text import ( Speech2TextConfig, Speech2TextFeatureExtractor, Speech2TextProcessor, ) from .models.speecht5 import ( SpeechT5Config, SpeechT5FeatureExtractor, SpeechT5HifiGanConfig, SpeechT5Processor, ) from .models.splinter import ( SplinterConfig, SplinterTokenizer, ) from 
.models.squeezebert import ( SqueezeBertConfig, SqueezeBertTokenizer, ) from .models.stablelm import StableLmConfig from .models.starcoder2 import Starcoder2Config from .models.superpoint import SuperPointConfig from .models.swiftformer import ( SwiftFormerConfig, ) from .models.swin import SwinConfig from .models.swin2sr import Swin2SRConfig from .models.swinv2 import Swinv2Config from .models.switch_transformers import ( SwitchTransformersConfig, ) from .models.t5 import T5Config from .models.table_transformer import ( TableTransformerConfig, ) from .models.tapas import ( TapasConfig, TapasTokenizer, ) from .models.time_series_transformer import ( TimeSeriesTransformerConfig, ) from .models.timesformer import ( TimesformerConfig, ) from .models.timm_backbone import TimmBackboneConfig from .models.trocr import ( TrOCRConfig, TrOCRProcessor, ) from .models.tvp import ( TvpConfig, TvpProcessor, ) from .models.udop import UdopConfig, UdopProcessor from .models.umt5 import UMT5Config from .models.unispeech import ( UniSpeechConfig, ) from .models.unispeech_sat import ( UniSpeechSatConfig, ) from .models.univnet import ( UnivNetConfig, UnivNetFeatureExtractor, ) from .models.upernet import UperNetConfig from .models.video_llava import VideoLlavaConfig from .models.videomae import VideoMAEConfig from .models.vilt import ( ViltConfig, ViltFeatureExtractor, ViltImageProcessor, ViltProcessor, ) from .models.vipllava import ( VipLlavaConfig, ) from .models.vision_encoder_decoder import VisionEncoderDecoderConfig from .models.vision_text_dual_encoder import ( VisionTextDualEncoderConfig, VisionTextDualEncoderProcessor, ) from .models.visual_bert import ( VisualBertConfig, ) from .models.vit import ViTConfig from .models.vit_mae import ViTMAEConfig from .models.vit_msn import ViTMSNConfig from .models.vitdet import VitDetConfig from .models.vitmatte import VitMatteConfig from .models.vits import ( VitsConfig, VitsTokenizer, ) from .models.vivit import VivitConfig from .models.wav2vec2 import ( Wav2Vec2Config, Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor, Wav2Vec2Tokenizer, ) from .models.wav2vec2_bert import ( Wav2Vec2BertConfig, Wav2Vec2BertProcessor, ) from .models.wav2vec2_conformer import ( Wav2Vec2ConformerConfig, ) from .models.wav2vec2_phoneme import Wav2Vec2PhonemeCTCTokenizer from .models.wav2vec2_with_lm import Wav2Vec2ProcessorWithLM from .models.wavlm import WavLMConfig from .models.whisper import ( WhisperConfig, WhisperFeatureExtractor, WhisperProcessor, WhisperTokenizer, ) from .models.x_clip import ( XCLIPConfig, XCLIPProcessor, XCLIPTextConfig, XCLIPVisionConfig, ) from .models.xglm import XGLMConfig from .models.xlm import XLMConfig, XLMTokenizer from .models.xlm_roberta import ( XLMRobertaConfig, ) from .models.xlm_roberta_xl import ( XLMRobertaXLConfig, ) from .models.xlnet import XLNetConfig from .models.xmod import XmodConfig from .models.yolos import YolosConfig from .models.yoso import YosoConfig from .models.zoedepth import ZoeDepthConfig # Pipelines from .pipelines import ( AudioClassificationPipeline, AutomaticSpeechRecognitionPipeline, CsvPipelineDataFormat, DepthEstimationPipeline, DocumentQuestionAnsweringPipeline, FeatureExtractionPipeline, FillMaskPipeline, ImageClassificationPipeline, ImageFeatureExtractionPipeline, ImageSegmentationPipeline, ImageToImagePipeline, ImageToTextPipeline, JsonPipelineDataFormat, MaskGenerationPipeline, NerPipeline, ObjectDetectionPipeline, PipedPipelineDataFormat, Pipeline, PipelineDataFormat, 
QuestionAnsweringPipeline, SummarizationPipeline, TableQuestionAnsweringPipeline, Text2TextGenerationPipeline, TextClassificationPipeline, TextGenerationPipeline, TextToAudioPipeline, TokenClassificationPipeline, TranslationPipeline, VideoClassificationPipeline, VisualQuestionAnsweringPipeline, ZeroShotAudioClassificationPipeline, ZeroShotClassificationPipeline, ZeroShotImageClassificationPipeline, ZeroShotObjectDetectionPipeline, pipeline, ) from .processing_utils import ProcessorMixin # Tokenization from .tokenization_utils import PreTrainedTokenizer from .tokenization_utils_base import ( AddedToken, BatchEncoding, CharSpan, PreTrainedTokenizerBase, SpecialTokensMixin, TokenSpan, ) # Trainer from .trainer_callback import ( DefaultFlowCallback, EarlyStoppingCallback, PrinterCallback, ProgressCallback, TrainerCallback, TrainerControl, TrainerState, ) from .trainer_utils import ( EvalPrediction, IntervalStrategy, SchedulerType, enable_full_determinism, set_seed, ) from .training_args import TrainingArguments from .training_args_seq2seq import Seq2SeqTrainingArguments from .training_args_tf import TFTrainingArguments # Files and general utilities from .utils import ( CONFIG_NAME, MODEL_CARD_NAME, PYTORCH_PRETRAINED_BERT_CACHE, PYTORCH_TRANSFORMERS_CACHE, SPIECE_UNDERLINE, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME, TRANSFORMERS_CACHE, WEIGHTS_NAME, TensorType, add_end_docstrings, add_start_docstrings, is_apex_available, is_av_available, is_bitsandbytes_available, is_datasets_available, is_decord_available, is_faiss_available, is_flax_available, is_keras_nlp_available, is_phonemizer_available, is_psutil_available, is_py3nvml_available, is_pyctcdecode_available, is_sacremoses_available, is_safetensors_available, is_scipy_available, is_sentencepiece_available, is_sklearn_available, is_speech_available, is_tensorflow_text_available, is_tf_available, is_timm_available, is_tokenizers_available, is_torch_available, is_torch_mlu_available, is_torch_musa_available, is_torch_neuroncore_available, is_torch_npu_available, is_torch_tpu_available, is_torch_xla_available, is_torch_xpu_available, is_torchvision_available, is_vision_available, logging, ) # bitsandbytes config from .utils.quantization_config import ( AqlmConfig, AwqConfig, BitsAndBytesConfig, EetqConfig, FbgemmFp8Config, GPTQConfig, HqqConfig, QuantoConfig, TorchAoConfig, ) try: if not is_sentencepiece_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_sentencepiece_objects import * else: from .models.albert import AlbertTokenizer from .models.barthez import BarthezTokenizer from .models.bartpho import BartphoTokenizer from .models.bert_generation import BertGenerationTokenizer from .models.big_bird import BigBirdTokenizer from .models.camembert import CamembertTokenizer from .models.code_llama import CodeLlamaTokenizer from .models.cpm import CpmTokenizer from .models.deberta_v2 import DebertaV2Tokenizer from .models.deprecated.ernie_m import ErnieMTokenizer from .models.deprecated.xlm_prophetnet import XLMProphetNetTokenizer from .models.fnet import FNetTokenizer from .models.gemma import GemmaTokenizer from .models.gpt_sw3 import GPTSw3Tokenizer from .models.layoutxlm import LayoutXLMTokenizer from .models.llama import LlamaTokenizer from .models.m2m_100 import M2M100Tokenizer from .models.marian import MarianTokenizer from .models.mbart import MBart50Tokenizer, MBartTokenizer from .models.mluke import MLukeTokenizer from .models.mt5 import MT5Tokenizer from .models.nllb import NllbTokenizer 
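# --- Illustrative sketch of the optional-dependency guard used around here ---
# (hedged example, not part of the original file). The slow tokenizer imports
# above are only meaningful when `sentencepiece` is installed; the surrounding
# try/except pattern probes the soft dependency and falls back to dummy
# objects otherwise. A standalone approximation of that pattern:
from transformers.utils import OptionalDependencyNotAvailable, is_sentencepiece_available

try:
    if not is_sentencepiece_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    sentencepiece_ok = False  # callers would see placeholder objects that raise on use
else:
    sentencepiece_ok = True   # sentencepiece-backed tokenizers (e.g. AlbertTokenizer) are usable

print(f"sentencepiece available: {sentencepiece_ok}")
# -----------------------------------------------------------------------------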
from .models.pegasus import PegasusTokenizer from .models.plbart import PLBartTokenizer from .models.reformer import ReformerTokenizer from .models.rembert import RemBertTokenizer from .models.seamless_m4t import SeamlessM4TTokenizer from .models.siglip import SiglipTokenizer from .models.speech_to_text import Speech2TextTokenizer from .models.speecht5 import SpeechT5Tokenizer from .models.t5 import T5Tokenizer from .models.udop import UdopTokenizer from .models.xglm import XGLMTokenizer from .models.xlm_roberta import XLMRobertaTokenizer from .models.xlnet import XLNetTokenizer try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_tokenizers_objects import * else: # Fast tokenizers imports from .models.albert import AlbertTokenizerFast from .models.bart import BartTokenizerFast from .models.barthez import BarthezTokenizerFast from .models.bert import BertTokenizerFast from .models.big_bird import BigBirdTokenizerFast from .models.blenderbot import BlenderbotTokenizerFast from .models.blenderbot_small import BlenderbotSmallTokenizerFast from .models.bloom import BloomTokenizerFast from .models.camembert import CamembertTokenizerFast from .models.clip import CLIPTokenizerFast from .models.code_llama import CodeLlamaTokenizerFast from .models.codegen import CodeGenTokenizerFast from .models.cohere import CohereTokenizerFast from .models.convbert import ConvBertTokenizerFast from .models.cpm import CpmTokenizerFast from .models.deberta import DebertaTokenizerFast from .models.deberta_v2 import DebertaV2TokenizerFast from .models.deprecated.realm import RealmTokenizerFast from .models.deprecated.retribert import RetriBertTokenizerFast from .models.distilbert import DistilBertTokenizerFast from .models.dpr import ( DPRContextEncoderTokenizerFast, DPRQuestionEncoderTokenizerFast, DPRReaderTokenizerFast, ) from .models.electra import ElectraTokenizerFast from .models.fnet import FNetTokenizerFast from .models.funnel import FunnelTokenizerFast from .models.gemma import GemmaTokenizerFast from .models.gpt2 import GPT2TokenizerFast from .models.gpt_neox import GPTNeoXTokenizerFast from .models.gpt_neox_japanese import GPTNeoXJapaneseTokenizer from .models.herbert import HerbertTokenizerFast from .models.layoutlm import LayoutLMTokenizerFast from .models.layoutlmv2 import LayoutLMv2TokenizerFast from .models.layoutlmv3 import LayoutLMv3TokenizerFast from .models.layoutxlm import LayoutXLMTokenizerFast from .models.led import LEDTokenizerFast from .models.llama import LlamaTokenizerFast from .models.longformer import LongformerTokenizerFast from .models.lxmert import LxmertTokenizerFast from .models.markuplm import MarkupLMTokenizerFast from .models.mbart import MBartTokenizerFast from .models.mbart50 import MBart50TokenizerFast from .models.mobilebert import MobileBertTokenizerFast from .models.mpnet import MPNetTokenizerFast from .models.mt5 import MT5TokenizerFast from .models.mvp import MvpTokenizerFast from .models.nllb import NllbTokenizerFast from .models.nougat import NougatTokenizerFast from .models.openai import OpenAIGPTTokenizerFast from .models.pegasus import PegasusTokenizerFast from .models.qwen2 import Qwen2TokenizerFast from .models.reformer import ReformerTokenizerFast from .models.rembert import RemBertTokenizerFast from .models.roberta import RobertaTokenizerFast from .models.roformer import RoFormerTokenizerFast from .models.seamless_m4t import SeamlessM4TTokenizerFast from .models.splinter import 
SplinterTokenizerFast from .models.squeezebert import SqueezeBertTokenizerFast from .models.t5 import T5TokenizerFast from .models.udop import UdopTokenizerFast from .models.whisper import WhisperTokenizerFast from .models.xglm import XGLMTokenizerFast from .models.xlm_roberta import XLMRobertaTokenizerFast from .models.xlnet import XLNetTokenizerFast from .tokenization_utils_fast import PreTrainedTokenizerFast try: if not (is_sentencepiece_available() and is_tokenizers_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummies_sentencepiece_and_tokenizers_objects import * else: from .convert_slow_tokenizer import ( SLOW_TO_FAST_CONVERTERS, convert_slow_tokenizer, ) try: if not is_tensorflow_text_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_tensorflow_text_objects import * else: from .models.bert import TFBertTokenizer try: if not is_keras_nlp_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_keras_nlp_objects import * else: from .models.gpt2 import TFGPT2Tokenizer try: if not is_vision_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_vision_objects import * else: from .image_processing_base import ImageProcessingMixin from .image_processing_utils import BaseImageProcessor from .image_utils import ImageFeatureExtractionMixin from .models.beit import BeitFeatureExtractor, BeitImageProcessor from .models.bit import BitImageProcessor from .models.blip import BlipImageProcessor from .models.bridgetower import BridgeTowerImageProcessor from .models.chameleon import ChameleonImageProcessor from .models.chinese_clip import ( ChineseCLIPFeatureExtractor, ChineseCLIPImageProcessor, ) from .models.clip import CLIPFeatureExtractor, CLIPImageProcessor from .models.conditional_detr import ( ConditionalDetrFeatureExtractor, ConditionalDetrImageProcessor, ) from .models.convnext import ConvNextFeatureExtractor, ConvNextImageProcessor from .models.deformable_detr import ( DeformableDetrFeatureExtractor, DeformableDetrImageProcessor, ) from .models.deit import DeiTFeatureExtractor, DeiTImageProcessor from .models.deprecated.deta import DetaImageProcessor from .models.deprecated.efficientformer import EfficientFormerImageProcessor from .models.deprecated.tvlt import TvltImageProcessor from .models.deprecated.vit_hybrid import ViTHybridImageProcessor from .models.detr import DetrFeatureExtractor, DetrImageProcessor from .models.donut import DonutFeatureExtractor, DonutImageProcessor from .models.dpt import DPTFeatureExtractor, DPTImageProcessor from .models.efficientnet import EfficientNetImageProcessor from .models.flava import ( FlavaFeatureExtractor, FlavaImageProcessor, FlavaProcessor, ) from .models.fuyu import FuyuImageProcessor, FuyuProcessor from .models.glpn import GLPNFeatureExtractor, GLPNImageProcessor from .models.grounding_dino import GroundingDinoImageProcessor from .models.idefics import IdeficsImageProcessor from .models.idefics2 import Idefics2ImageProcessor from .models.imagegpt import ImageGPTFeatureExtractor, ImageGPTImageProcessor from .models.instructblipvideo import InstructBlipVideoImageProcessor from .models.layoutlmv2 import ( LayoutLMv2FeatureExtractor, LayoutLMv2ImageProcessor, ) from .models.layoutlmv3 import ( LayoutLMv3FeatureExtractor, LayoutLMv3ImageProcessor, ) from .models.levit import LevitFeatureExtractor, LevitImageProcessor from 
.models.llava_next import LlavaNextImageProcessor from .models.llava_next_video import LlavaNextVideoImageProcessor from .models.mask2former import Mask2FormerImageProcessor from .models.maskformer import ( MaskFormerFeatureExtractor, MaskFormerImageProcessor, ) from .models.mobilenet_v1 import ( MobileNetV1FeatureExtractor, MobileNetV1ImageProcessor, ) from .models.mobilenet_v2 import ( MobileNetV2FeatureExtractor, MobileNetV2ImageProcessor, ) from .models.mobilevit import MobileViTFeatureExtractor, MobileViTImageProcessor from .models.nougat import NougatImageProcessor from .models.oneformer import OneFormerImageProcessor from .models.owlv2 import Owlv2ImageProcessor from .models.owlvit import OwlViTFeatureExtractor, OwlViTImageProcessor from .models.perceiver import PerceiverFeatureExtractor, PerceiverImageProcessor from .models.pix2struct import Pix2StructImageProcessor from .models.poolformer import ( PoolFormerFeatureExtractor, PoolFormerImageProcessor, ) from .models.pvt import PvtImageProcessor from .models.qwen2_vl import Qwen2VLImageProcessor from .models.rt_detr import RTDetrImageProcessor from .models.sam import SamImageProcessor from .models.segformer import SegformerFeatureExtractor, SegformerImageProcessor from .models.seggpt import SegGptImageProcessor from .models.siglip import SiglipImageProcessor from .models.superpoint import SuperPointImageProcessor from .models.swin2sr import Swin2SRImageProcessor from .models.tvp import TvpImageProcessor from .models.video_llava import VideoLlavaImageProcessor from .models.videomae import VideoMAEFeatureExtractor, VideoMAEImageProcessor from .models.vilt import ViltFeatureExtractor, ViltImageProcessor, ViltProcessor from .models.vit import ViTFeatureExtractor, ViTImageProcessor from .models.vitmatte import VitMatteImageProcessor from .models.vivit import VivitImageProcessor from .models.yolos import YolosFeatureExtractor, YolosImageProcessor from .models.zoedepth import ZoeDepthImageProcessor try: if not is_torchvision_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torchvision_objects import * else: from .image_processing_utils_fast import BaseImageProcessorFast from .models.vit import ViTImageProcessorFast # Modeling try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_pt_objects import * else: # Benchmarks from .benchmark.benchmark import PyTorchBenchmark from .benchmark.benchmark_args import PyTorchBenchmarkArguments from .cache_utils import ( Cache, CacheConfig, DynamicCache, EncoderDecoderCache, HQQQuantizedCache, HybridCache, MambaCache, OffloadedCache, OffloadedStaticCache, QuantizedCache, QuantizedCacheConfig, QuantoQuantizedCache, SinkCache, SlidingWindowCache, StaticCache, ) from .data.datasets import ( GlueDataset, GlueDataTrainingArguments, LineByLineTextDataset, LineByLineWithRefDataset, LineByLineWithSOPTextDataset, SquadDataset, SquadDataTrainingArguments, TextDataset, TextDatasetForNextSentencePrediction, ) from .generation import ( AlternatingCodebooksLogitsProcessor, BeamScorer, BeamSearchScorer, ClassifierFreeGuidanceLogitsProcessor, ConstrainedBeamSearchScorer, Constraint, ConstraintListState, DisjunctiveConstraint, EncoderNoRepeatNGramLogitsProcessor, EncoderRepetitionPenaltyLogitsProcessor, EosTokenCriteria, EpsilonLogitsWarper, EtaLogitsWarper, ExponentialDecayLengthPenalty, ForcedBOSTokenLogitsProcessor, ForcedEOSTokenLogitsProcessor, GenerationMixin, 
HammingDiversityLogitsProcessor, InfNanRemoveLogitsProcessor, LogitNormalization, LogitsProcessor, LogitsProcessorList, LogitsWarper, MaxLengthCriteria, MaxTimeCriteria, MinLengthLogitsProcessor, MinNewTokensLengthLogitsProcessor, MinPLogitsWarper, NoBadWordsLogitsProcessor, NoRepeatNGramLogitsProcessor, PhrasalConstraint, PrefixConstrainedLogitsProcessor, RepetitionPenaltyLogitsProcessor, SequenceBiasLogitsProcessor, StoppingCriteria, StoppingCriteriaList, StopStringCriteria, SuppressTokensAtBeginLogitsProcessor, SuppressTokensLogitsProcessor, TemperatureLogitsWarper, TopKLogitsWarper, TopPLogitsWarper, TypicalLogitsWarper, UnbatchedClassifierFreeGuidanceLogitsProcessor, WatermarkDetector, WatermarkLogitsProcessor, WhisperTimeStampLogitsProcessor, ) from .modeling_rope_utils import ROPE_INIT_FUNCTIONS from .modeling_utils import PreTrainedModel from .models.albert import ( AlbertForMaskedLM, AlbertForMultipleChoice, AlbertForPreTraining, AlbertForQuestionAnswering, AlbertForSequenceClassification, AlbertForTokenClassification, AlbertModel, AlbertPreTrainedModel, load_tf_weights_in_albert, ) from .models.align import ( AlignModel, AlignPreTrainedModel, AlignTextModel, AlignVisionModel, ) from .models.altclip import ( AltCLIPModel, AltCLIPPreTrainedModel, AltCLIPTextModel, AltCLIPVisionModel, ) from .models.audio_spectrogram_transformer import ( ASTForAudioClassification, ASTModel, ASTPreTrainedModel, ) from .models.auto import ( MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING, MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING, MODEL_FOR_AUDIO_XVECTOR_MAPPING, MODEL_FOR_BACKBONE_MAPPING, MODEL_FOR_CAUSAL_IMAGE_MODELING_MAPPING, MODEL_FOR_CAUSAL_LM_MAPPING, MODEL_FOR_CTC_MAPPING, MODEL_FOR_DEPTH_ESTIMATION_MAPPING, MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING, MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING, MODEL_FOR_IMAGE_MAPPING, MODEL_FOR_IMAGE_SEGMENTATION_MAPPING, MODEL_FOR_IMAGE_TO_IMAGE_MAPPING, MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING, MODEL_FOR_KEYPOINT_DETECTION_MAPPING, MODEL_FOR_MASK_GENERATION_MAPPING, MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING, MODEL_FOR_MASKED_LM_MAPPING, MODEL_FOR_MULTIPLE_CHOICE_MAPPING, MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING, MODEL_FOR_OBJECT_DETECTION_MAPPING, MODEL_FOR_PRETRAINING_MAPPING, MODEL_FOR_QUESTION_ANSWERING_MAPPING, MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING, MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING, MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING, MODEL_FOR_TEXT_ENCODING_MAPPING, MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING, MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING, MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING, MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING, MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING, MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING, MODEL_FOR_VISION_2_SEQ_MAPPING, MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING, MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING, MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING, MODEL_MAPPING, MODEL_WITH_LM_HEAD_MAPPING, AutoBackbone, AutoModel, AutoModelForAudioClassification, AutoModelForAudioFrameClassification, AutoModelForAudioXVector, AutoModelForCausalLM, AutoModelForCTC, AutoModelForDepthEstimation, AutoModelForDocumentQuestionAnswering, AutoModelForImageClassification, AutoModelForImageSegmentation, AutoModelForImageToImage, AutoModelForInstanceSegmentation, AutoModelForKeypointDetection, AutoModelForMaskedImageModeling, AutoModelForMaskedLM, AutoModelForMaskGeneration, AutoModelForMultipleChoice, AutoModelForNextSentencePrediction, 
AutoModelForObjectDetection, AutoModelForPreTraining, AutoModelForQuestionAnswering, AutoModelForSemanticSegmentation, AutoModelForSeq2SeqLM, AutoModelForSequenceClassification, AutoModelForSpeechSeq2Seq, AutoModelForTableQuestionAnswering, AutoModelForTextEncoding, AutoModelForTextToSpectrogram, AutoModelForTextToWaveform, AutoModelForTokenClassification, AutoModelForUniversalSegmentation, AutoModelForVideoClassification, AutoModelForVision2Seq, AutoModelForVisualQuestionAnswering, AutoModelForZeroShotImageClassification, AutoModelForZeroShotObjectDetection, AutoModelWithLMHead, ) from .models.autoformer import ( AutoformerForPrediction, AutoformerModel, AutoformerPreTrainedModel, ) from .models.bark import ( BarkCausalModel, BarkCoarseModel, BarkFineModel, BarkModel, BarkPreTrainedModel, BarkSemanticModel, ) from .models.bart import ( BartForCausalLM, BartForConditionalGeneration, BartForQuestionAnswering, BartForSequenceClassification, BartModel, BartPreTrainedModel, BartPretrainedModel, PretrainedBartModel, ) from .models.beit import ( BeitBackbone, BeitForImageClassification, BeitForMaskedImageModeling, BeitForSemanticSegmentation, BeitModel, BeitPreTrainedModel, ) from .models.bert import ( BertForMaskedLM, BertForMultipleChoice, BertForNextSentencePrediction, BertForPreTraining, BertForQuestionAnswering, BertForSequenceClassification, BertForTokenClassification, BertLayer, BertLMHeadModel, BertModel, BertPreTrainedModel, load_tf_weights_in_bert, ) from .models.bert_generation import ( BertGenerationDecoder, BertGenerationEncoder, BertGenerationPreTrainedModel, load_tf_weights_in_bert_generation, ) from .models.big_bird import ( BigBirdForCausalLM, BigBirdForMaskedLM, BigBirdForMultipleChoice, BigBirdForPreTraining, BigBirdForQuestionAnswering, BigBirdForSequenceClassification, BigBirdForTokenClassification, BigBirdLayer, BigBirdModel, BigBirdPreTrainedModel, load_tf_weights_in_big_bird, ) from .models.bigbird_pegasus import ( BigBirdPegasusForCausalLM, BigBirdPegasusForConditionalGeneration, BigBirdPegasusForQuestionAnswering, BigBirdPegasusForSequenceClassification, BigBirdPegasusModel, BigBirdPegasusPreTrainedModel, ) from .models.biogpt import ( BioGptForCausalLM, BioGptForSequenceClassification, BioGptForTokenClassification, BioGptModel, BioGptPreTrainedModel, ) from .models.bit import ( BitBackbone, BitForImageClassification, BitModel, BitPreTrainedModel, ) from .models.blenderbot import ( BlenderbotForCausalLM, BlenderbotForConditionalGeneration, BlenderbotModel, BlenderbotPreTrainedModel, ) from .models.blenderbot_small import ( BlenderbotSmallForCausalLM, BlenderbotSmallForConditionalGeneration, BlenderbotSmallModel, BlenderbotSmallPreTrainedModel, ) from .models.blip import ( BlipForConditionalGeneration, BlipForImageTextRetrieval, BlipForQuestionAnswering, BlipModel, BlipPreTrainedModel, BlipTextModel, BlipVisionModel, ) from .models.blip_2 import ( Blip2ForConditionalGeneration, Blip2ForImageTextRetrieval, Blip2Model, Blip2PreTrainedModel, Blip2QFormerModel, Blip2TextModelWithProjection, Blip2VisionModel, Blip2VisionModelWithProjection, ) from .models.bloom import ( BloomForCausalLM, BloomForQuestionAnswering, BloomForSequenceClassification, BloomForTokenClassification, BloomModel, BloomPreTrainedModel, ) from .models.bridgetower import ( BridgeTowerForContrastiveLearning, BridgeTowerForImageAndTextRetrieval, BridgeTowerForMaskedLM, BridgeTowerModel, BridgeTowerPreTrainedModel, ) from .models.bros import ( BrosForTokenClassification, BrosModel, BrosPreTrainedModel, 
BrosProcessor, BrosSpadeEEForTokenClassification, BrosSpadeELForTokenClassification, ) from .models.camembert import ( CamembertForCausalLM, CamembertForMaskedLM, CamembertForMultipleChoice, CamembertForQuestionAnswering, CamembertForSequenceClassification, CamembertForTokenClassification, CamembertModel, CamembertPreTrainedModel, ) from .models.canine import ( CanineForMultipleChoice, CanineForQuestionAnswering, CanineForSequenceClassification, CanineForTokenClassification, CanineLayer, CanineModel, CaninePreTrainedModel, load_tf_weights_in_canine, ) from .models.chameleon import ( ChameleonForConditionalGeneration, ChameleonModel, ChameleonPreTrainedModel, ChameleonProcessor, ChameleonVQVAE, ) from .models.chinese_clip import ( ChineseCLIPModel, ChineseCLIPPreTrainedModel, ChineseCLIPTextModel, ChineseCLIPVisionModel, ) from .models.clap import ( ClapAudioModel, ClapAudioModelWithProjection, ClapFeatureExtractor, ClapModel, ClapPreTrainedModel, ClapTextModel, ClapTextModelWithProjection, ) from .models.clip import ( CLIPForImageClassification, CLIPModel, CLIPPreTrainedModel, CLIPTextModel, CLIPTextModelWithProjection, CLIPVisionModel, CLIPVisionModelWithProjection, ) from .models.clipseg import ( CLIPSegForImageSegmentation, CLIPSegModel, CLIPSegPreTrainedModel, CLIPSegTextModel, CLIPSegVisionModel, ) from .models.clvp import ( ClvpDecoder, ClvpEncoder, ClvpForCausalLM, ClvpModel, ClvpModelForConditionalGeneration, ClvpPreTrainedModel, ) from .models.codegen import ( CodeGenForCausalLM, CodeGenModel, CodeGenPreTrainedModel, ) from .models.cohere import ( CohereForCausalLM, CohereModel, CoherePreTrainedModel, ) from .models.conditional_detr import ( ConditionalDetrForObjectDetection, ConditionalDetrForSegmentation, ConditionalDetrModel, ConditionalDetrPreTrainedModel, ) from .models.convbert import ( ConvBertForMaskedLM, ConvBertForMultipleChoice, ConvBertForQuestionAnswering, ConvBertForSequenceClassification, ConvBertForTokenClassification, ConvBertLayer, ConvBertModel, ConvBertPreTrainedModel, load_tf_weights_in_convbert, ) from .models.convnext import ( ConvNextBackbone, ConvNextForImageClassification, ConvNextModel, ConvNextPreTrainedModel, ) from .models.convnextv2 import ( ConvNextV2Backbone, ConvNextV2ForImageClassification, ConvNextV2Model, ConvNextV2PreTrainedModel, ) from .models.cpmant import ( CpmAntForCausalLM, CpmAntModel, CpmAntPreTrainedModel, ) from .models.ctrl import ( CTRLForSequenceClassification, CTRLLMHeadModel, CTRLModel, CTRLPreTrainedModel, ) from .models.cvt import ( CvtForImageClassification, CvtModel, CvtPreTrainedModel, ) from .models.dac import ( DacModel, DacPreTrainedModel, ) from .models.data2vec import ( Data2VecAudioForAudioFrameClassification, Data2VecAudioForCTC, Data2VecAudioForSequenceClassification, Data2VecAudioForXVector, Data2VecAudioModel, Data2VecAudioPreTrainedModel, Data2VecTextForCausalLM, Data2VecTextForMaskedLM, Data2VecTextForMultipleChoice, Data2VecTextForQuestionAnswering, Data2VecTextForSequenceClassification, Data2VecTextForTokenClassification, Data2VecTextModel, Data2VecTextPreTrainedModel, Data2VecVisionForImageClassification, Data2VecVisionForSemanticSegmentation, Data2VecVisionModel, Data2VecVisionPreTrainedModel, ) # PyTorch model imports from .models.dbrx import ( DbrxForCausalLM, DbrxModel, DbrxPreTrainedModel, ) from .models.deberta import ( DebertaForMaskedLM, DebertaForQuestionAnswering, DebertaForSequenceClassification, DebertaForTokenClassification, DebertaModel, DebertaPreTrainedModel, ) from .models.deberta_v2 import ( 
DebertaV2ForMaskedLM, DebertaV2ForMultipleChoice, DebertaV2ForQuestionAnswering, DebertaV2ForSequenceClassification, DebertaV2ForTokenClassification, DebertaV2Model, DebertaV2PreTrainedModel, ) from .models.decision_transformer import ( DecisionTransformerGPT2Model, DecisionTransformerGPT2PreTrainedModel, DecisionTransformerModel, DecisionTransformerPreTrainedModel, ) from .models.deformable_detr import ( DeformableDetrForObjectDetection, DeformableDetrModel, DeformableDetrPreTrainedModel, ) from .models.deit import ( DeiTForImageClassification, DeiTForImageClassificationWithTeacher, DeiTForMaskedImageModeling, DeiTModel, DeiTPreTrainedModel, ) from .models.deprecated.deta import ( DetaForObjectDetection, DetaModel, DetaPreTrainedModel, ) from .models.deprecated.efficientformer import ( EfficientFormerForImageClassification, EfficientFormerForImageClassificationWithTeacher, EfficientFormerModel, EfficientFormerPreTrainedModel, ) from .models.deprecated.ernie_m import ( ErnieMForInformationExtraction, ErnieMForMultipleChoice, ErnieMForQuestionAnswering, ErnieMForSequenceClassification, ErnieMForTokenClassification, ErnieMModel, ErnieMPreTrainedModel, ) from .models.deprecated.gptsan_japanese import ( GPTSanJapaneseForConditionalGeneration, GPTSanJapaneseModel, GPTSanJapanesePreTrainedModel, ) from .models.deprecated.graphormer import ( GraphormerForGraphClassification, GraphormerModel, GraphormerPreTrainedModel, ) from .models.deprecated.jukebox import ( JukeboxModel, JukeboxPreTrainedModel, JukeboxPrior, JukeboxVQVAE, ) from .models.deprecated.mctct import ( MCTCTForCTC, MCTCTModel, MCTCTPreTrainedModel, ) from .models.deprecated.mega import ( MegaForCausalLM, MegaForMaskedLM, MegaForMultipleChoice, MegaForQuestionAnswering, MegaForSequenceClassification, MegaForTokenClassification, MegaModel, MegaPreTrainedModel, ) from .models.deprecated.mmbt import ( MMBTForClassification, MMBTModel, ModalEmbeddings, ) from .models.deprecated.nat import ( NatBackbone, NatForImageClassification, NatModel, NatPreTrainedModel, ) from .models.deprecated.nezha import ( NezhaForMaskedLM, NezhaForMultipleChoice, NezhaForNextSentencePrediction, NezhaForPreTraining, NezhaForQuestionAnswering, NezhaForSequenceClassification, NezhaForTokenClassification, NezhaModel, NezhaPreTrainedModel, ) from .models.deprecated.open_llama import ( OpenLlamaForCausalLM, OpenLlamaForSequenceClassification, OpenLlamaModel, OpenLlamaPreTrainedModel, ) from .models.deprecated.qdqbert import ( QDQBertForMaskedLM, QDQBertForMultipleChoice, QDQBertForNextSentencePrediction, QDQBertForQuestionAnswering, QDQBertForSequenceClassification, QDQBertForTokenClassification, QDQBertLayer, QDQBertLMHeadModel, QDQBertModel, QDQBertPreTrainedModel, load_tf_weights_in_qdqbert, ) from .models.deprecated.realm import ( RealmEmbedder, RealmForOpenQA, RealmKnowledgeAugEncoder, RealmPreTrainedModel, RealmReader, RealmRetriever, RealmScorer, load_tf_weights_in_realm, ) from .models.deprecated.retribert import ( RetriBertModel, RetriBertPreTrainedModel, ) from .models.deprecated.speech_to_text_2 import ( Speech2Text2ForCausalLM, Speech2Text2PreTrainedModel, ) from .models.deprecated.trajectory_transformer import ( TrajectoryTransformerModel, TrajectoryTransformerPreTrainedModel, ) from .models.deprecated.transfo_xl import ( AdaptiveEmbedding, TransfoXLForSequenceClassification, TransfoXLLMHeadModel, TransfoXLModel, TransfoXLPreTrainedModel, load_tf_weights_in_transfo_xl, ) from .models.deprecated.tvlt import ( TvltForAudioVisualClassification, 
TvltForPreTraining, TvltModel, TvltPreTrainedModel, ) from .models.deprecated.van import ( VanForImageClassification, VanModel, VanPreTrainedModel, ) from .models.deprecated.vit_hybrid import ( ViTHybridForImageClassification, ViTHybridModel, ViTHybridPreTrainedModel, ) from .models.deprecated.xlm_prophetnet import ( XLMProphetNetDecoder, XLMProphetNetEncoder, XLMProphetNetForCausalLM, XLMProphetNetForConditionalGeneration, XLMProphetNetModel, XLMProphetNetPreTrainedModel, ) from .models.depth_anything import ( DepthAnythingForDepthEstimation, DepthAnythingPreTrainedModel, ) from .models.detr import ( DetrForObjectDetection, DetrForSegmentation, DetrModel, DetrPreTrainedModel, ) from .models.dinat import ( DinatBackbone, DinatForImageClassification, DinatModel, DinatPreTrainedModel, ) from .models.dinov2 import ( Dinov2Backbone, Dinov2ForImageClassification, Dinov2Model, Dinov2PreTrainedModel, ) from .models.distilbert import ( DistilBertForMaskedLM, DistilBertForMultipleChoice, DistilBertForQuestionAnswering, DistilBertForSequenceClassification, DistilBertForTokenClassification, DistilBertModel, DistilBertPreTrainedModel, ) from .models.donut import ( DonutSwinModel, DonutSwinPreTrainedModel, ) from .models.dpr import ( DPRContextEncoder, DPRPretrainedContextEncoder, DPRPreTrainedModel, DPRPretrainedQuestionEncoder, DPRPretrainedReader, DPRQuestionEncoder, DPRReader, ) from .models.dpt import ( DPTForDepthEstimation, DPTForSemanticSegmentation, DPTModel, DPTPreTrainedModel, ) from .models.efficientnet import ( EfficientNetForImageClassification, EfficientNetModel, EfficientNetPreTrainedModel, ) from .models.electra import ( ElectraForCausalLM, ElectraForMaskedLM, ElectraForMultipleChoice, ElectraForPreTraining, ElectraForQuestionAnswering, ElectraForSequenceClassification, ElectraForTokenClassification, ElectraModel, ElectraPreTrainedModel, load_tf_weights_in_electra, ) from .models.encodec import ( EncodecModel, EncodecPreTrainedModel, ) from .models.encoder_decoder import EncoderDecoderModel from .models.ernie import ( ErnieForCausalLM, ErnieForMaskedLM, ErnieForMultipleChoice, ErnieForNextSentencePrediction, ErnieForPreTraining, ErnieForQuestionAnswering, ErnieForSequenceClassification, ErnieForTokenClassification, ErnieModel, ErniePreTrainedModel, ) from .models.esm import ( EsmFoldPreTrainedModel, EsmForMaskedLM, EsmForProteinFolding, EsmForSequenceClassification, EsmForTokenClassification, EsmModel, EsmPreTrainedModel, ) from .models.falcon import ( FalconForCausalLM, FalconForQuestionAnswering, FalconForSequenceClassification, FalconForTokenClassification, FalconModel, FalconPreTrainedModel, ) from .models.falcon_mamba import ( FalconMambaForCausalLM, FalconMambaModel, FalconMambaPreTrainedModel, ) from .models.fastspeech2_conformer import ( FastSpeech2ConformerHifiGan, FastSpeech2ConformerModel, FastSpeech2ConformerPreTrainedModel, FastSpeech2ConformerWithHifiGan, ) from .models.flaubert import ( FlaubertForMultipleChoice, FlaubertForQuestionAnswering, FlaubertForQuestionAnsweringSimple, FlaubertForSequenceClassification, FlaubertForTokenClassification, FlaubertModel, FlaubertPreTrainedModel, FlaubertWithLMHeadModel, ) from .models.flava import ( FlavaForPreTraining, FlavaImageCodebook, FlavaImageModel, FlavaModel, FlavaMultimodalModel, FlavaPreTrainedModel, FlavaTextModel, ) from .models.fnet import ( FNetForMaskedLM, FNetForMultipleChoice, FNetForNextSentencePrediction, FNetForPreTraining, FNetForQuestionAnswering, FNetForSequenceClassification, FNetForTokenClassification, 
FNetLayer, FNetModel, FNetPreTrainedModel, ) from .models.focalnet import ( FocalNetBackbone, FocalNetForImageClassification, FocalNetForMaskedImageModeling, FocalNetModel, FocalNetPreTrainedModel, ) from .models.fsmt import ( FSMTForConditionalGeneration, FSMTModel, PretrainedFSMTModel, ) from .models.funnel import ( FunnelBaseModel, FunnelForMaskedLM, FunnelForMultipleChoice, FunnelForPreTraining, FunnelForQuestionAnswering, FunnelForSequenceClassification, FunnelForTokenClassification, FunnelModel, FunnelPreTrainedModel, load_tf_weights_in_funnel, ) from .models.fuyu import ( FuyuForCausalLM, FuyuPreTrainedModel, ) from .models.gemma import ( GemmaForCausalLM, GemmaForSequenceClassification, GemmaForTokenClassification, GemmaModel, GemmaPreTrainedModel, ) from .models.gemma2 import ( Gemma2ForCausalLM, Gemma2ForSequenceClassification, Gemma2ForTokenClassification, Gemma2Model, Gemma2PreTrainedModel, ) from .models.git import ( GitForCausalLM, GitModel, GitPreTrainedModel, GitVisionModel, ) from .models.glpn import ( GLPNForDepthEstimation, GLPNModel, GLPNPreTrainedModel, ) from .models.gpt2 import ( GPT2DoubleHeadsModel, GPT2ForQuestionAnswering, GPT2ForSequenceClassification, GPT2ForTokenClassification, GPT2LMHeadModel, GPT2Model, GPT2PreTrainedModel, load_tf_weights_in_gpt2, ) from .models.gpt_bigcode import ( GPTBigCodeForCausalLM, GPTBigCodeForSequenceClassification, GPTBigCodeForTokenClassification, GPTBigCodeModel, GPTBigCodePreTrainedModel, ) from .models.gpt_neo import ( GPTNeoForCausalLM, GPTNeoForQuestionAnswering, GPTNeoForSequenceClassification, GPTNeoForTokenClassification, GPTNeoModel, GPTNeoPreTrainedModel, load_tf_weights_in_gpt_neo, ) from .models.gpt_neox import ( GPTNeoXForCausalLM, GPTNeoXForQuestionAnswering, GPTNeoXForSequenceClassification, GPTNeoXForTokenClassification, GPTNeoXLayer, GPTNeoXModel, GPTNeoXPreTrainedModel, ) from .models.gpt_neox_japanese import ( GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseLayer, GPTNeoXJapaneseModel, GPTNeoXJapanesePreTrainedModel, ) from .models.gptj import ( GPTJForCausalLM, GPTJForQuestionAnswering, GPTJForSequenceClassification, GPTJModel, GPTJPreTrainedModel, ) from .models.granite import ( GraniteForCausalLM, GraniteModel, GranitePreTrainedModel, ) from .models.grounding_dino import ( GroundingDinoForObjectDetection, GroundingDinoModel, GroundingDinoPreTrainedModel, ) from .models.groupvit import ( GroupViTModel, GroupViTPreTrainedModel, GroupViTTextModel, GroupViTVisionModel, ) from .models.hiera import ( HieraBackbone, HieraForImageClassification, HieraForPreTraining, HieraModel, HieraPreTrainedModel, ) from .models.hubert import ( HubertForCTC, HubertForSequenceClassification, HubertModel, HubertPreTrainedModel, ) from .models.ibert import ( IBertForMaskedLM, IBertForMultipleChoice, IBertForQuestionAnswering, IBertForSequenceClassification, IBertForTokenClassification, IBertModel, IBertPreTrainedModel, ) from .models.idefics import ( IdeficsForVisionText2Text, IdeficsModel, IdeficsPreTrainedModel, IdeficsProcessor, ) from .models.idefics2 import ( Idefics2ForConditionalGeneration, Idefics2Model, Idefics2PreTrainedModel, Idefics2Processor, ) from .models.imagegpt import ( ImageGPTForCausalImageModeling, ImageGPTForImageClassification, ImageGPTModel, ImageGPTPreTrainedModel, load_tf_weights_in_imagegpt, ) from .models.informer import ( InformerForPrediction, InformerModel, InformerPreTrainedModel, ) from .models.instructblip import ( InstructBlipForConditionalGeneration, InstructBlipPreTrainedModel, 
InstructBlipQFormerModel, InstructBlipVisionModel, ) from .models.instructblipvideo import ( InstructBlipVideoForConditionalGeneration, InstructBlipVideoPreTrainedModel, InstructBlipVideoQFormerModel, InstructBlipVideoVisionModel, ) from .models.jamba import ( JambaForCausalLM, JambaForSequenceClassification, JambaModel, JambaPreTrainedModel, ) from .models.jetmoe import ( JetMoeForCausalLM, JetMoeForSequenceClassification, JetMoeModel, JetMoePreTrainedModel, ) from .models.kosmos2 import ( Kosmos2ForConditionalGeneration, Kosmos2Model, Kosmos2PreTrainedModel, ) from .models.layoutlm import ( LayoutLMForMaskedLM, LayoutLMForQuestionAnswering, LayoutLMForSequenceClassification, LayoutLMForTokenClassification, LayoutLMModel, LayoutLMPreTrainedModel, ) from .models.layoutlmv2 import ( LayoutLMv2ForQuestionAnswering, LayoutLMv2ForSequenceClassification, LayoutLMv2ForTokenClassification, LayoutLMv2Model, LayoutLMv2PreTrainedModel, ) from .models.layoutlmv3 import ( LayoutLMv3ForQuestionAnswering, LayoutLMv3ForSequenceClassification, LayoutLMv3ForTokenClassification, LayoutLMv3Model, LayoutLMv3PreTrainedModel, ) from .models.led import ( LEDForConditionalGeneration, LEDForQuestionAnswering, LEDForSequenceClassification, LEDModel, LEDPreTrainedModel, ) from .models.levit import ( LevitForImageClassification, LevitForImageClassificationWithTeacher, LevitModel, LevitPreTrainedModel, ) from .models.lilt import ( LiltForQuestionAnswering, LiltForSequenceClassification, LiltForTokenClassification, LiltModel, LiltPreTrainedModel, ) from .models.llama import ( LlamaForCausalLM, LlamaForQuestionAnswering, LlamaForSequenceClassification, LlamaForTokenClassification, LlamaModel, LlamaPreTrainedModel, ) from .models.llava import ( LlavaForConditionalGeneration, LlavaPreTrainedModel, ) from .models.llava_next import ( LlavaNextForConditionalGeneration, LlavaNextPreTrainedModel, ) from .models.llava_next_video import ( LlavaNextVideoForConditionalGeneration, LlavaNextVideoPreTrainedModel, ) from .models.longformer import ( LongformerForMaskedLM, LongformerForMultipleChoice, LongformerForQuestionAnswering, LongformerForSequenceClassification, LongformerForTokenClassification, LongformerModel, LongformerPreTrainedModel, LongformerSelfAttention, ) from .models.longt5 import ( LongT5EncoderModel, LongT5ForConditionalGeneration, LongT5Model, LongT5PreTrainedModel, ) from .models.luke import ( LukeForEntityClassification, LukeForEntityPairClassification, LukeForEntitySpanClassification, LukeForMaskedLM, LukeForMultipleChoice, LukeForQuestionAnswering, LukeForSequenceClassification, LukeForTokenClassification, LukeModel, LukePreTrainedModel, ) from .models.lxmert import ( LxmertEncoder, LxmertForPreTraining, LxmertForQuestionAnswering, LxmertModel, LxmertPreTrainedModel, LxmertVisualFeatureEncoder, LxmertXLayer, ) from .models.m2m_100 import ( M2M100ForConditionalGeneration, M2M100Model, M2M100PreTrainedModel, ) from .models.mamba import ( MambaForCausalLM, MambaModel, MambaPreTrainedModel, ) from .models.mamba2 import ( Mamba2ForCausalLM, Mamba2Model, Mamba2PreTrainedModel, ) from .models.marian import MarianForCausalLM, MarianModel, MarianMTModel from .models.markuplm import ( MarkupLMForQuestionAnswering, MarkupLMForSequenceClassification, MarkupLMForTokenClassification, MarkupLMModel, MarkupLMPreTrainedModel, ) from .models.mask2former import ( Mask2FormerForUniversalSegmentation, Mask2FormerModel, Mask2FormerPreTrainedModel, ) from .models.maskformer import ( MaskFormerForInstanceSegmentation, MaskFormerModel, 
MaskFormerPreTrainedModel, MaskFormerSwinBackbone, ) from .models.mbart import ( MBartForCausalLM, MBartForConditionalGeneration, MBartForQuestionAnswering, MBartForSequenceClassification, MBartModel, MBartPreTrainedModel, ) from .models.megatron_bert import ( MegatronBertForCausalLM, MegatronBertForMaskedLM, MegatronBertForMultipleChoice, MegatronBertForNextSentencePrediction, MegatronBertForPreTraining, MegatronBertForQuestionAnswering, MegatronBertForSequenceClassification, MegatronBertForTokenClassification, MegatronBertModel, MegatronBertPreTrainedModel, ) from .models.mgp_str import ( MgpstrForSceneTextRecognition, MgpstrModel, MgpstrPreTrainedModel, ) from .models.mistral import ( MistralForCausalLM, MistralForSequenceClassification, MistralForTokenClassification, MistralModel, MistralPreTrainedModel, ) from .models.mixtral import ( MixtralForCausalLM, MixtralForSequenceClassification, MixtralForTokenClassification, MixtralModel, MixtralPreTrainedModel, ) from .models.mobilebert import ( MobileBertForMaskedLM, MobileBertForMultipleChoice, MobileBertForNextSentencePrediction, MobileBertForPreTraining, MobileBertForQuestionAnswering, MobileBertForSequenceClassification, MobileBertForTokenClassification, MobileBertLayer, MobileBertModel, MobileBertPreTrainedModel, load_tf_weights_in_mobilebert, ) from .models.mobilenet_v1 import ( MobileNetV1ForImageClassification, MobileNetV1Model, MobileNetV1PreTrainedModel, load_tf_weights_in_mobilenet_v1, ) from .models.mobilenet_v2 import ( MobileNetV2ForImageClassification, MobileNetV2ForSemanticSegmentation, MobileNetV2Model, MobileNetV2PreTrainedModel, load_tf_weights_in_mobilenet_v2, ) from .models.mobilevit import ( MobileViTForImageClassification, MobileViTForSemanticSegmentation, MobileViTModel, MobileViTPreTrainedModel, ) from .models.mobilevitv2 import ( MobileViTV2ForImageClassification, MobileViTV2ForSemanticSegmentation, MobileViTV2Model, MobileViTV2PreTrainedModel, ) from .models.mpnet import ( MPNetForMaskedLM, MPNetForMultipleChoice, MPNetForQuestionAnswering, MPNetForSequenceClassification, MPNetForTokenClassification, MPNetLayer, MPNetModel, MPNetPreTrainedModel, ) from .models.mpt import ( MptForCausalLM, MptForQuestionAnswering, MptForSequenceClassification, MptForTokenClassification, MptModel, MptPreTrainedModel, ) from .models.mra import ( MraForMaskedLM, MraForMultipleChoice, MraForQuestionAnswering, MraForSequenceClassification, MraForTokenClassification, MraModel, MraPreTrainedModel, ) from .models.mt5 import ( MT5EncoderModel, MT5ForConditionalGeneration, MT5ForQuestionAnswering, MT5ForSequenceClassification, MT5ForTokenClassification, MT5Model, MT5PreTrainedModel, ) from .models.musicgen import ( MusicgenForCausalLM, MusicgenForConditionalGeneration, MusicgenModel, MusicgenPreTrainedModel, MusicgenProcessor, ) from .models.musicgen_melody import ( MusicgenMelodyForCausalLM, MusicgenMelodyForConditionalGeneration, MusicgenMelodyModel, MusicgenMelodyPreTrainedModel, ) from .models.mvp import ( MvpForCausalLM, MvpForConditionalGeneration, MvpForQuestionAnswering, MvpForSequenceClassification, MvpModel, MvpPreTrainedModel, ) from .models.nemotron import ( NemotronForCausalLM, NemotronForQuestionAnswering, NemotronForSequenceClassification, NemotronForTokenClassification, NemotronModel, NemotronPreTrainedModel, ) from .models.nllb_moe import ( NllbMoeForConditionalGeneration, NllbMoeModel, NllbMoePreTrainedModel, NllbMoeSparseMLP, NllbMoeTop2Router, ) from .models.nystromformer import ( NystromformerForMaskedLM, 
NystromformerForMultipleChoice, NystromformerForQuestionAnswering, NystromformerForSequenceClassification, NystromformerForTokenClassification, NystromformerLayer, NystromformerModel, NystromformerPreTrainedModel, ) from .models.olmo import ( OlmoForCausalLM, OlmoModel, OlmoPreTrainedModel, ) from .models.oneformer import ( OneFormerForUniversalSegmentation, OneFormerModel, OneFormerPreTrainedModel, ) from .models.openai import ( OpenAIGPTDoubleHeadsModel, OpenAIGPTForSequenceClassification, OpenAIGPTLMHeadModel, OpenAIGPTModel, OpenAIGPTPreTrainedModel, load_tf_weights_in_openai_gpt, ) from .models.opt import ( OPTForCausalLM, OPTForQuestionAnswering, OPTForSequenceClassification, OPTModel, OPTPreTrainedModel, ) from .models.owlv2 import ( Owlv2ForObjectDetection, Owlv2Model, Owlv2PreTrainedModel, Owlv2TextModel, Owlv2VisionModel, ) from .models.owlvit import ( OwlViTForObjectDetection, OwlViTModel, OwlViTPreTrainedModel, OwlViTTextModel, OwlViTVisionModel, ) from .models.paligemma import ( PaliGemmaForConditionalGeneration, PaliGemmaPreTrainedModel, PaliGemmaProcessor, ) from .models.patchtsmixer import ( PatchTSMixerForPrediction, PatchTSMixerForPretraining, PatchTSMixerForRegression, PatchTSMixerForTimeSeriesClassification, PatchTSMixerModel, PatchTSMixerPreTrainedModel, ) from .models.patchtst import ( PatchTSTForClassification, PatchTSTForPrediction, PatchTSTForPretraining, PatchTSTForRegression, PatchTSTModel, PatchTSTPreTrainedModel, ) from .models.pegasus import ( PegasusForCausalLM, PegasusForConditionalGeneration, PegasusModel, PegasusPreTrainedModel, ) from .models.pegasus_x import ( PegasusXForConditionalGeneration, PegasusXModel, PegasusXPreTrainedModel, ) from .models.perceiver import ( PerceiverForImageClassificationConvProcessing, PerceiverForImageClassificationFourier, PerceiverForImageClassificationLearned, PerceiverForMaskedLM, PerceiverForMultimodalAutoencoding, PerceiverForOpticalFlow, PerceiverForSequenceClassification, PerceiverLayer, PerceiverModel, PerceiverPreTrainedModel, ) from .models.persimmon import ( PersimmonForCausalLM, PersimmonForSequenceClassification, PersimmonForTokenClassification, PersimmonModel, PersimmonPreTrainedModel, ) from .models.phi import ( PhiForCausalLM, PhiForSequenceClassification, PhiForTokenClassification, PhiModel, PhiPreTrainedModel, ) from .models.phi3 import ( Phi3ForCausalLM, Phi3ForSequenceClassification, Phi3ForTokenClassification, Phi3Model, Phi3PreTrainedModel, ) from .models.pix2struct import ( Pix2StructForConditionalGeneration, Pix2StructPreTrainedModel, Pix2StructTextModel, Pix2StructVisionModel, ) from .models.plbart import ( PLBartForCausalLM, PLBartForConditionalGeneration, PLBartForSequenceClassification, PLBartModel, PLBartPreTrainedModel, ) from .models.poolformer import ( PoolFormerForImageClassification, PoolFormerModel, PoolFormerPreTrainedModel, ) from .models.pop2piano import ( Pop2PianoForConditionalGeneration, Pop2PianoPreTrainedModel, ) from .models.prophetnet import ( ProphetNetDecoder, ProphetNetEncoder, ProphetNetForCausalLM, ProphetNetForConditionalGeneration, ProphetNetModel, ProphetNetPreTrainedModel, ) from .models.pvt import ( PvtForImageClassification, PvtModel, PvtPreTrainedModel, ) from .models.pvt_v2 import ( PvtV2Backbone, PvtV2ForImageClassification, PvtV2Model, PvtV2PreTrainedModel, ) from .models.qwen2 import ( Qwen2ForCausalLM, Qwen2ForSequenceClassification, Qwen2ForTokenClassification, Qwen2Model, Qwen2PreTrainedModel, ) from .models.qwen2_audio import ( Qwen2AudioEncoder, 
Qwen2AudioForConditionalGeneration, Qwen2AudioPreTrainedModel, ) from .models.qwen2_moe import ( Qwen2MoeForCausalLM, Qwen2MoeForSequenceClassification, Qwen2MoeForTokenClassification, Qwen2MoeModel, Qwen2MoePreTrainedModel, ) from .models.qwen2_vl import ( Qwen2VLForConditionalGeneration, Qwen2VLModel, Qwen2VLPreTrainedModel, ) from .models.rag import ( RagModel, RagPreTrainedModel, RagSequenceForGeneration, RagTokenForGeneration, ) from .models.recurrent_gemma import ( RecurrentGemmaForCausalLM, RecurrentGemmaModel, RecurrentGemmaPreTrainedModel, ) from .models.reformer import ( ReformerAttention, ReformerForMaskedLM, ReformerForQuestionAnswering, ReformerForSequenceClassification, ReformerLayer, ReformerModel, ReformerModelWithLMHead, ReformerPreTrainedModel, ) from .models.regnet import ( RegNetForImageClassification, RegNetModel, RegNetPreTrainedModel, ) from .models.rembert import ( RemBertForCausalLM, RemBertForMaskedLM, RemBertForMultipleChoice, RemBertForQuestionAnswering, RemBertForSequenceClassification, RemBertForTokenClassification, RemBertLayer, RemBertModel, RemBertPreTrainedModel, load_tf_weights_in_rembert, ) from .models.resnet import ( ResNetBackbone, ResNetForImageClassification, ResNetModel, ResNetPreTrainedModel, ) from .models.roberta import ( RobertaForCausalLM, RobertaForMaskedLM, RobertaForMultipleChoice, RobertaForQuestionAnswering, RobertaForSequenceClassification, RobertaForTokenClassification, RobertaModel, RobertaPreTrainedModel, ) from .models.roberta_prelayernorm import ( RobertaPreLayerNormForCausalLM, RobertaPreLayerNormForMaskedLM, RobertaPreLayerNormForMultipleChoice, RobertaPreLayerNormForQuestionAnswering, RobertaPreLayerNormForSequenceClassification, RobertaPreLayerNormForTokenClassification, RobertaPreLayerNormModel, RobertaPreLayerNormPreTrainedModel, ) from .models.roc_bert import ( RoCBertForCausalLM, RoCBertForMaskedLM, RoCBertForMultipleChoice, RoCBertForPreTraining, RoCBertForQuestionAnswering, RoCBertForSequenceClassification, RoCBertForTokenClassification, RoCBertLayer, RoCBertModel, RoCBertPreTrainedModel, load_tf_weights_in_roc_bert, ) from .models.roformer import ( RoFormerForCausalLM, RoFormerForMaskedLM, RoFormerForMultipleChoice, RoFormerForQuestionAnswering, RoFormerForSequenceClassification, RoFormerForTokenClassification, RoFormerLayer, RoFormerModel, RoFormerPreTrainedModel, load_tf_weights_in_roformer, ) from .models.rt_detr import ( RTDetrForObjectDetection, RTDetrModel, RTDetrPreTrainedModel, RTDetrResNetBackbone, RTDetrResNetPreTrainedModel, ) from .models.rwkv import ( RwkvForCausalLM, RwkvModel, RwkvPreTrainedModel, ) from .models.sam import ( SamModel, SamPreTrainedModel, ) from .models.seamless_m4t import ( SeamlessM4TCodeHifiGan, SeamlessM4TForSpeechToSpeech, SeamlessM4TForSpeechToText, SeamlessM4TForTextToSpeech, SeamlessM4TForTextToText, SeamlessM4THifiGan, SeamlessM4TModel, SeamlessM4TPreTrainedModel, SeamlessM4TTextToUnitForConditionalGeneration, SeamlessM4TTextToUnitModel, ) from .models.seamless_m4t_v2 import ( SeamlessM4Tv2ForSpeechToSpeech, SeamlessM4Tv2ForSpeechToText, SeamlessM4Tv2ForTextToSpeech, SeamlessM4Tv2ForTextToText, SeamlessM4Tv2Model, SeamlessM4Tv2PreTrainedModel, ) from .models.segformer import ( SegformerDecodeHead, SegformerForImageClassification, SegformerForSemanticSegmentation, SegformerLayer, SegformerModel, SegformerPreTrainedModel, ) from .models.seggpt import ( SegGptForImageSegmentation, SegGptModel, SegGptPreTrainedModel, ) from .models.sew import ( SEWForCTC, SEWForSequenceClassification, 
SEWModel, SEWPreTrainedModel, ) from .models.sew_d import ( SEWDForCTC, SEWDForSequenceClassification, SEWDModel, SEWDPreTrainedModel, ) from .models.siglip import ( SiglipForImageClassification, SiglipModel, SiglipPreTrainedModel, SiglipTextModel, SiglipVisionModel, ) from .models.speech_encoder_decoder import SpeechEncoderDecoderModel from .models.speech_to_text import ( Speech2TextForConditionalGeneration, Speech2TextModel, Speech2TextPreTrainedModel, ) from .models.speecht5 import ( SpeechT5ForSpeechToSpeech, SpeechT5ForSpeechToText, SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Model, SpeechT5PreTrainedModel, ) from .models.splinter import ( SplinterForPreTraining, SplinterForQuestionAnswering, SplinterLayer, SplinterModel, SplinterPreTrainedModel, ) from .models.squeezebert import ( SqueezeBertForMaskedLM, SqueezeBertForMultipleChoice, SqueezeBertForQuestionAnswering, SqueezeBertForSequenceClassification, SqueezeBertForTokenClassification, SqueezeBertModel, SqueezeBertModule, SqueezeBertPreTrainedModel, ) from .models.stablelm import ( StableLmForCausalLM, StableLmForSequenceClassification, StableLmForTokenClassification, StableLmModel, StableLmPreTrainedModel, ) from .models.starcoder2 import ( Starcoder2ForCausalLM, Starcoder2ForSequenceClassification, Starcoder2ForTokenClassification, Starcoder2Model, Starcoder2PreTrainedModel, ) from .models.superpoint import ( SuperPointForKeypointDetection, SuperPointPreTrainedModel, ) from .models.swiftformer import ( SwiftFormerForImageClassification, SwiftFormerModel, SwiftFormerPreTrainedModel, ) from .models.swin import ( SwinBackbone, SwinForImageClassification, SwinForMaskedImageModeling, SwinModel, SwinPreTrainedModel, ) from .models.swin2sr import ( Swin2SRForImageSuperResolution, Swin2SRModel, Swin2SRPreTrainedModel, ) from .models.swinv2 import ( Swinv2Backbone, Swinv2ForImageClassification, Swinv2ForMaskedImageModeling, Swinv2Model, Swinv2PreTrainedModel, ) from .models.switch_transformers import ( SwitchTransformersEncoderModel, SwitchTransformersForConditionalGeneration, SwitchTransformersModel, SwitchTransformersPreTrainedModel, SwitchTransformersSparseMLP, SwitchTransformersTop1Router, ) from .models.t5 import ( T5EncoderModel, T5ForConditionalGeneration, T5ForQuestionAnswering, T5ForSequenceClassification, T5ForTokenClassification, T5Model, T5PreTrainedModel, load_tf_weights_in_t5, ) from .models.table_transformer import ( TableTransformerForObjectDetection, TableTransformerModel, TableTransformerPreTrainedModel, ) from .models.tapas import ( TapasForMaskedLM, TapasForQuestionAnswering, TapasForSequenceClassification, TapasModel, TapasPreTrainedModel, load_tf_weights_in_tapas, ) from .models.time_series_transformer import ( TimeSeriesTransformerForPrediction, TimeSeriesTransformerModel, TimeSeriesTransformerPreTrainedModel, ) from .models.timesformer import ( TimesformerForVideoClassification, TimesformerModel, TimesformerPreTrainedModel, ) from .models.timm_backbone import TimmBackbone from .models.trocr import ( TrOCRForCausalLM, TrOCRPreTrainedModel, ) from .models.tvp import ( TvpForVideoGrounding, TvpModel, TvpPreTrainedModel, ) from .models.udop import ( UdopEncoderModel, UdopForConditionalGeneration, UdopModel, UdopPreTrainedModel, ) from .models.umt5 import ( UMT5EncoderModel, UMT5ForConditionalGeneration, UMT5ForQuestionAnswering, UMT5ForSequenceClassification, UMT5ForTokenClassification, UMT5Model, UMT5PreTrainedModel, ) from .models.unispeech import ( UniSpeechForCTC, UniSpeechForPreTraining, 
UniSpeechForSequenceClassification, UniSpeechModel, UniSpeechPreTrainedModel, ) from .models.unispeech_sat import ( UniSpeechSatForAudioFrameClassification, UniSpeechSatForCTC, UniSpeechSatForPreTraining, UniSpeechSatForSequenceClassification, UniSpeechSatForXVector, UniSpeechSatModel, UniSpeechSatPreTrainedModel, ) from .models.univnet import UnivNetModel from .models.upernet import ( UperNetForSemanticSegmentation, UperNetPreTrainedModel, ) from .models.video_llava import ( VideoLlavaForConditionalGeneration, VideoLlavaPreTrainedModel, VideoLlavaProcessor, ) from .models.videomae import ( VideoMAEForPreTraining, VideoMAEForVideoClassification, VideoMAEModel, VideoMAEPreTrainedModel, ) from .models.vilt import ( ViltForImageAndTextRetrieval, ViltForImagesAndTextClassification, ViltForMaskedLM, ViltForQuestionAnswering, ViltForTokenClassification, ViltLayer, ViltModel, ViltPreTrainedModel, ) from .models.vipllava import ( VipLlavaForConditionalGeneration, VipLlavaPreTrainedModel, ) from .models.vision_encoder_decoder import VisionEncoderDecoderModel from .models.vision_text_dual_encoder import VisionTextDualEncoderModel from .models.visual_bert import ( VisualBertForMultipleChoice, VisualBertForPreTraining, VisualBertForQuestionAnswering, VisualBertForRegionToPhraseAlignment, VisualBertForVisualReasoning, VisualBertLayer, VisualBertModel, VisualBertPreTrainedModel, ) from .models.vit import ( ViTForImageClassification, ViTForMaskedImageModeling, ViTModel, ViTPreTrainedModel, ) from .models.vit_mae import ( ViTMAEForPreTraining, ViTMAELayer, ViTMAEModel, ViTMAEPreTrainedModel, ) from .models.vit_msn import ( ViTMSNForImageClassification, ViTMSNModel, ViTMSNPreTrainedModel, ) from .models.vitdet import ( VitDetBackbone, VitDetModel, VitDetPreTrainedModel, ) from .models.vitmatte import ( VitMatteForImageMatting, VitMattePreTrainedModel, ) from .models.vits import ( VitsModel, VitsPreTrainedModel, ) from .models.vivit import ( VivitForVideoClassification, VivitModel, VivitPreTrainedModel, ) from .models.wav2vec2 import ( Wav2Vec2ForAudioFrameClassification, Wav2Vec2ForCTC, Wav2Vec2ForMaskedLM, Wav2Vec2ForPreTraining, Wav2Vec2ForSequenceClassification, Wav2Vec2ForXVector, Wav2Vec2Model, Wav2Vec2PreTrainedModel, ) from .models.wav2vec2_bert import ( Wav2Vec2BertForAudioFrameClassification, Wav2Vec2BertForCTC, Wav2Vec2BertForSequenceClassification, Wav2Vec2BertForXVector, Wav2Vec2BertModel, Wav2Vec2BertPreTrainedModel, ) from .models.wav2vec2_conformer import ( Wav2Vec2ConformerForAudioFrameClassification, Wav2Vec2ConformerForCTC, Wav2Vec2ConformerForPreTraining, Wav2Vec2ConformerForSequenceClassification, Wav2Vec2ConformerForXVector, Wav2Vec2ConformerModel, Wav2Vec2ConformerPreTrainedModel, ) from .models.wavlm import ( WavLMForAudioFrameClassification, WavLMForCTC, WavLMForSequenceClassification, WavLMForXVector, WavLMModel, WavLMPreTrainedModel, ) from .models.whisper import ( WhisperForAudioClassification, WhisperForCausalLM, WhisperForConditionalGeneration, WhisperModel, WhisperPreTrainedModel, ) from .models.x_clip import ( XCLIPModel, XCLIPPreTrainedModel, XCLIPTextModel, XCLIPVisionModel, ) from .models.xglm import ( XGLMForCausalLM, XGLMModel, XGLMPreTrainedModel, ) from .models.xlm import ( XLMForMultipleChoice, XLMForQuestionAnswering, XLMForQuestionAnsweringSimple, XLMForSequenceClassification, XLMForTokenClassification, XLMModel, XLMPreTrainedModel, XLMWithLMHeadModel, ) from .models.xlm_roberta import ( XLMRobertaForCausalLM, XLMRobertaForMaskedLM, XLMRobertaForMultipleChoice, 
XLMRobertaForQuestionAnswering, XLMRobertaForSequenceClassification, XLMRobertaForTokenClassification, XLMRobertaModel, XLMRobertaPreTrainedModel, ) from .models.xlm_roberta_xl import ( XLMRobertaXLForCausalLM, XLMRobertaXLForMaskedLM, XLMRobertaXLForMultipleChoice, XLMRobertaXLForQuestionAnswering, XLMRobertaXLForSequenceClassification, XLMRobertaXLForTokenClassification, XLMRobertaXLModel, XLMRobertaXLPreTrainedModel, ) from .models.xlnet import ( XLNetForMultipleChoice, XLNetForQuestionAnswering, XLNetForQuestionAnsweringSimple, XLNetForSequenceClassification, XLNetForTokenClassification, XLNetLMHeadModel, XLNetModel, XLNetPreTrainedModel, load_tf_weights_in_xlnet, ) from .models.xmod import ( XmodForCausalLM, XmodForMaskedLM, XmodForMultipleChoice, XmodForQuestionAnswering, XmodForSequenceClassification, XmodForTokenClassification, XmodModel, XmodPreTrainedModel, ) from .models.yolos import ( YolosForObjectDetection, YolosModel, YolosPreTrainedModel, ) from .models.yoso import ( YosoForMaskedLM, YosoForMultipleChoice, YosoForQuestionAnswering, YosoForSequenceClassification, YosoForTokenClassification, YosoLayer, YosoModel, YosoPreTrainedModel, ) from .models.zoedepth import ( ZoeDepthForDepthEstimation, ZoeDepthPreTrainedModel, ) # Optimization from .optimization import ( Adafactor, AdamW, get_constant_schedule, get_constant_schedule_with_warmup, get_cosine_schedule_with_warmup, get_cosine_with_hard_restarts_schedule_with_warmup, get_inverse_sqrt_schedule, get_linear_schedule_with_warmup, get_polynomial_decay_schedule_with_warmup, get_scheduler, get_wsd_schedule, ) from .pytorch_utils import Conv1D, apply_chunking_to_forward, prune_layer # Trainer from .trainer import Trainer from .trainer_pt_utils import torch_distributed_zero_first from .trainer_seq2seq import Seq2SeqTrainer # TensorFlow try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: # Import the same objects as dummies to get them in the namespace. # They will raise an import error if the user tries to instantiate / use them. 
from .utils.dummy_tf_objects import * else: from .benchmark.benchmark_args_tf import TensorFlowBenchmarkArguments # Benchmarks from .benchmark.benchmark_tf import TensorFlowBenchmark from .generation import ( TFForcedBOSTokenLogitsProcessor, TFForcedEOSTokenLogitsProcessor, TFForceTokensLogitsProcessor, TFGenerationMixin, TFLogitsProcessor, TFLogitsProcessorList, TFLogitsWarper, TFMinLengthLogitsProcessor, TFNoBadWordsLogitsProcessor, TFNoRepeatNGramLogitsProcessor, TFRepetitionPenaltyLogitsProcessor, TFSuppressTokensAtBeginLogitsProcessor, TFSuppressTokensLogitsProcessor, TFTemperatureLogitsWarper, TFTopKLogitsWarper, TFTopPLogitsWarper, ) from .keras_callbacks import KerasMetricCallback, PushToHubCallback from .modeling_tf_utils import ( TFPreTrainedModel, TFSequenceSummary, TFSharedEmbeddings, shape_list, ) # TensorFlow model imports from .models.albert import ( TFAlbertForMaskedLM, TFAlbertForMultipleChoice, TFAlbertForPreTraining, TFAlbertForQuestionAnswering, TFAlbertForSequenceClassification, TFAlbertForTokenClassification, TFAlbertMainLayer, TFAlbertModel, TFAlbertPreTrainedModel, ) from .models.auto import ( TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING, TF_MODEL_FOR_CAUSAL_LM_MAPPING, TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING, TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING, TF_MODEL_FOR_MASK_GENERATION_MAPPING, TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING, TF_MODEL_FOR_MASKED_LM_MAPPING, TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING, TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING, TF_MODEL_FOR_PRETRAINING_MAPPING, TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING, TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING, TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING, TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING, TF_MODEL_FOR_TEXT_ENCODING_MAPPING, TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, TF_MODEL_FOR_VISION_2_SEQ_MAPPING, TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING, TF_MODEL_MAPPING, TF_MODEL_WITH_LM_HEAD_MAPPING, TFAutoModel, TFAutoModelForAudioClassification, TFAutoModelForCausalLM, TFAutoModelForDocumentQuestionAnswering, TFAutoModelForImageClassification, TFAutoModelForMaskedImageModeling, TFAutoModelForMaskedLM, TFAutoModelForMaskGeneration, TFAutoModelForMultipleChoice, TFAutoModelForNextSentencePrediction, TFAutoModelForPreTraining, TFAutoModelForQuestionAnswering, TFAutoModelForSemanticSegmentation, TFAutoModelForSeq2SeqLM, TFAutoModelForSequenceClassification, TFAutoModelForSpeechSeq2Seq, TFAutoModelForTableQuestionAnswering, TFAutoModelForTextEncoding, TFAutoModelForTokenClassification, TFAutoModelForVision2Seq, TFAutoModelForZeroShotImageClassification, TFAutoModelWithLMHead, ) from .models.bart import ( TFBartForConditionalGeneration, TFBartForSequenceClassification, TFBartModel, TFBartPretrainedModel, ) from .models.bert import ( TFBertEmbeddings, TFBertForMaskedLM, TFBertForMultipleChoice, TFBertForNextSentencePrediction, TFBertForPreTraining, TFBertForQuestionAnswering, TFBertForSequenceClassification, TFBertForTokenClassification, TFBertLMHeadModel, TFBertMainLayer, TFBertModel, TFBertPreTrainedModel, ) from .models.blenderbot import ( TFBlenderbotForConditionalGeneration, TFBlenderbotModel, TFBlenderbotPreTrainedModel, ) from .models.blenderbot_small import ( TFBlenderbotSmallForConditionalGeneration, TFBlenderbotSmallModel, TFBlenderbotSmallPreTrainedModel, ) from .models.blip import ( TFBlipForConditionalGeneration, TFBlipForImageTextRetrieval, TFBlipForQuestionAnswering, TFBlipModel, TFBlipPreTrainedModel, 
TFBlipTextModel, TFBlipVisionModel, ) from .models.camembert import ( TFCamembertForCausalLM, TFCamembertForMaskedLM, TFCamembertForMultipleChoice, TFCamembertForQuestionAnswering, TFCamembertForSequenceClassification, TFCamembertForTokenClassification, TFCamembertModel, TFCamembertPreTrainedModel, ) from .models.clip import ( TFCLIPModel, TFCLIPPreTrainedModel, TFCLIPTextModel, TFCLIPVisionModel, ) from .models.convbert import ( TFConvBertForMaskedLM, TFConvBertForMultipleChoice, TFConvBertForQuestionAnswering, TFConvBertForSequenceClassification, TFConvBertForTokenClassification, TFConvBertLayer, TFConvBertModel, TFConvBertPreTrainedModel, ) from .models.convnext import ( TFConvNextForImageClassification, TFConvNextModel, TFConvNextPreTrainedModel, ) from .models.convnextv2 import ( TFConvNextV2ForImageClassification, TFConvNextV2Model, TFConvNextV2PreTrainedModel, ) from .models.ctrl import ( TFCTRLForSequenceClassification, TFCTRLLMHeadModel, TFCTRLModel, TFCTRLPreTrainedModel, ) from .models.cvt import ( TFCvtForImageClassification, TFCvtModel, TFCvtPreTrainedModel, ) from .models.data2vec import ( TFData2VecVisionForImageClassification, TFData2VecVisionForSemanticSegmentation, TFData2VecVisionModel, TFData2VecVisionPreTrainedModel, ) from .models.deberta import ( TFDebertaForMaskedLM, TFDebertaForQuestionAnswering, TFDebertaForSequenceClassification, TFDebertaForTokenClassification, TFDebertaModel, TFDebertaPreTrainedModel, ) from .models.deberta_v2 import ( TFDebertaV2ForMaskedLM, TFDebertaV2ForMultipleChoice, TFDebertaV2ForQuestionAnswering, TFDebertaV2ForSequenceClassification, TFDebertaV2ForTokenClassification, TFDebertaV2Model, TFDebertaV2PreTrainedModel, ) from .models.deit import ( TFDeiTForImageClassification, TFDeiTForImageClassificationWithTeacher, TFDeiTForMaskedImageModeling, TFDeiTModel, TFDeiTPreTrainedModel, ) from .models.deprecated.efficientformer import ( TFEfficientFormerForImageClassification, TFEfficientFormerForImageClassificationWithTeacher, TFEfficientFormerModel, TFEfficientFormerPreTrainedModel, ) from .models.deprecated.transfo_xl import ( TFAdaptiveEmbedding, TFTransfoXLForSequenceClassification, TFTransfoXLLMHeadModel, TFTransfoXLMainLayer, TFTransfoXLModel, TFTransfoXLPreTrainedModel, ) from .models.distilbert import ( TFDistilBertForMaskedLM, TFDistilBertForMultipleChoice, TFDistilBertForQuestionAnswering, TFDistilBertForSequenceClassification, TFDistilBertForTokenClassification, TFDistilBertMainLayer, TFDistilBertModel, TFDistilBertPreTrainedModel, ) from .models.dpr import ( TFDPRContextEncoder, TFDPRPretrainedContextEncoder, TFDPRPretrainedQuestionEncoder, TFDPRPretrainedReader, TFDPRQuestionEncoder, TFDPRReader, ) from .models.electra import ( TFElectraForMaskedLM, TFElectraForMultipleChoice, TFElectraForPreTraining, TFElectraForQuestionAnswering, TFElectraForSequenceClassification, TFElectraForTokenClassification, TFElectraModel, TFElectraPreTrainedModel, ) from .models.encoder_decoder import TFEncoderDecoderModel from .models.esm import ( TFEsmForMaskedLM, TFEsmForSequenceClassification, TFEsmForTokenClassification, TFEsmModel, TFEsmPreTrainedModel, ) from .models.flaubert import ( TFFlaubertForMultipleChoice, TFFlaubertForQuestionAnsweringSimple, TFFlaubertForSequenceClassification, TFFlaubertForTokenClassification, TFFlaubertModel, TFFlaubertPreTrainedModel, TFFlaubertWithLMHeadModel, ) from .models.funnel import ( TFFunnelBaseModel, TFFunnelForMaskedLM, TFFunnelForMultipleChoice, TFFunnelForPreTraining, TFFunnelForQuestionAnswering, 
TFFunnelForSequenceClassification, TFFunnelForTokenClassification, TFFunnelModel, TFFunnelPreTrainedModel, ) from .models.gpt2 import ( TFGPT2DoubleHeadsModel, TFGPT2ForSequenceClassification, TFGPT2LMHeadModel, TFGPT2MainLayer, TFGPT2Model, TFGPT2PreTrainedModel, ) from .models.gptj import ( TFGPTJForCausalLM, TFGPTJForQuestionAnswering, TFGPTJForSequenceClassification, TFGPTJModel, TFGPTJPreTrainedModel, ) from .models.groupvit import ( TFGroupViTModel, TFGroupViTPreTrainedModel, TFGroupViTTextModel, TFGroupViTVisionModel, ) from .models.hubert import ( TFHubertForCTC, TFHubertModel, TFHubertPreTrainedModel, ) from .models.idefics import ( TFIdeficsForVisionText2Text, TFIdeficsModel, TFIdeficsPreTrainedModel, ) from .models.layoutlm import ( TFLayoutLMForMaskedLM, TFLayoutLMForQuestionAnswering, TFLayoutLMForSequenceClassification, TFLayoutLMForTokenClassification, TFLayoutLMMainLayer, TFLayoutLMModel, TFLayoutLMPreTrainedModel, ) from .models.layoutlmv3 import ( TFLayoutLMv3ForQuestionAnswering, TFLayoutLMv3ForSequenceClassification, TFLayoutLMv3ForTokenClassification, TFLayoutLMv3Model, TFLayoutLMv3PreTrainedModel, ) from .models.led import ( TFLEDForConditionalGeneration, TFLEDModel, TFLEDPreTrainedModel, ) from .models.longformer import ( TFLongformerForMaskedLM, TFLongformerForMultipleChoice, TFLongformerForQuestionAnswering, TFLongformerForSequenceClassification, TFLongformerForTokenClassification, TFLongformerModel, TFLongformerPreTrainedModel, TFLongformerSelfAttention, ) from .models.lxmert import ( TFLxmertForPreTraining, TFLxmertMainLayer, TFLxmertModel, TFLxmertPreTrainedModel, TFLxmertVisualFeatureEncoder, ) from .models.marian import ( TFMarianModel, TFMarianMTModel, TFMarianPreTrainedModel, ) from .models.mbart import ( TFMBartForConditionalGeneration, TFMBartModel, TFMBartPreTrainedModel, ) from .models.mistral import ( TFMistralForCausalLM, TFMistralForSequenceClassification, TFMistralModel, TFMistralPreTrainedModel, ) from .models.mobilebert import ( TFMobileBertForMaskedLM, TFMobileBertForMultipleChoice, TFMobileBertForNextSentencePrediction, TFMobileBertForPreTraining, TFMobileBertForQuestionAnswering, TFMobileBertForSequenceClassification, TFMobileBertForTokenClassification, TFMobileBertMainLayer, TFMobileBertModel, TFMobileBertPreTrainedModel, ) from .models.mobilevit import ( TFMobileViTForImageClassification, TFMobileViTForSemanticSegmentation, TFMobileViTModel, TFMobileViTPreTrainedModel, ) from .models.mpnet import ( TFMPNetForMaskedLM, TFMPNetForMultipleChoice, TFMPNetForQuestionAnswering, TFMPNetForSequenceClassification, TFMPNetForTokenClassification, TFMPNetMainLayer, TFMPNetModel, TFMPNetPreTrainedModel, ) from .models.mt5 import ( TFMT5EncoderModel, TFMT5ForConditionalGeneration, TFMT5Model, ) from .models.openai import ( TFOpenAIGPTDoubleHeadsModel, TFOpenAIGPTForSequenceClassification, TFOpenAIGPTLMHeadModel, TFOpenAIGPTMainLayer, TFOpenAIGPTModel, TFOpenAIGPTPreTrainedModel, ) from .models.opt import TFOPTForCausalLM, TFOPTModel, TFOPTPreTrainedModel from .models.pegasus import ( TFPegasusForConditionalGeneration, TFPegasusModel, TFPegasusPreTrainedModel, ) from .models.rag import ( TFRagModel, TFRagPreTrainedModel, TFRagSequenceForGeneration, TFRagTokenForGeneration, ) from .models.regnet import ( TFRegNetForImageClassification, TFRegNetModel, TFRegNetPreTrainedModel, ) from .models.rembert import ( TFRemBertForCausalLM, TFRemBertForMaskedLM, TFRemBertForMultipleChoice, TFRemBertForQuestionAnswering, TFRemBertForSequenceClassification, 
TFRemBertForTokenClassification, TFRemBertLayer, TFRemBertModel, TFRemBertPreTrainedModel, ) from .models.resnet import ( TFResNetForImageClassification, TFResNetModel, TFResNetPreTrainedModel, ) from .models.roberta import ( TFRobertaForCausalLM, TFRobertaForMaskedLM, TFRobertaForMultipleChoice, TFRobertaForQuestionAnswering, TFRobertaForSequenceClassification, TFRobertaForTokenClassification, TFRobertaMainLayer, TFRobertaModel, TFRobertaPreTrainedModel, ) from .models.roberta_prelayernorm import ( TFRobertaPreLayerNormForCausalLM, TFRobertaPreLayerNormForMaskedLM, TFRobertaPreLayerNormForMultipleChoice, TFRobertaPreLayerNormForQuestionAnswering, TFRobertaPreLayerNormForSequenceClassification, TFRobertaPreLayerNormForTokenClassification, TFRobertaPreLayerNormMainLayer, TFRobertaPreLayerNormModel, TFRobertaPreLayerNormPreTrainedModel, ) from .models.roformer import ( TFRoFormerForCausalLM, TFRoFormerForMaskedLM, TFRoFormerForMultipleChoice, TFRoFormerForQuestionAnswering, TFRoFormerForSequenceClassification, TFRoFormerForTokenClassification, TFRoFormerLayer, TFRoFormerModel, TFRoFormerPreTrainedModel, ) from .models.sam import ( TFSamModel, TFSamPreTrainedModel, ) from .models.segformer import ( TFSegformerDecodeHead, TFSegformerForImageClassification, TFSegformerForSemanticSegmentation, TFSegformerModel, TFSegformerPreTrainedModel, ) from .models.speech_to_text import ( TFSpeech2TextForConditionalGeneration, TFSpeech2TextModel, TFSpeech2TextPreTrainedModel, ) from .models.swiftformer import ( TFSwiftFormerForImageClassification, TFSwiftFormerModel, TFSwiftFormerPreTrainedModel, ) from .models.swin import ( TFSwinForImageClassification, TFSwinForMaskedImageModeling, TFSwinModel, TFSwinPreTrainedModel, ) from .models.t5 import ( TFT5EncoderModel, TFT5ForConditionalGeneration, TFT5Model, TFT5PreTrainedModel, ) from .models.tapas import ( TFTapasForMaskedLM, TFTapasForQuestionAnswering, TFTapasForSequenceClassification, TFTapasModel, TFTapasPreTrainedModel, ) from .models.vision_encoder_decoder import TFVisionEncoderDecoderModel from .models.vision_text_dual_encoder import TFVisionTextDualEncoderModel from .models.vit import ( TFViTForImageClassification, TFViTModel, TFViTPreTrainedModel, ) from .models.vit_mae import ( TFViTMAEForPreTraining, TFViTMAEModel, TFViTMAEPreTrainedModel, ) from .models.wav2vec2 import ( TFWav2Vec2ForCTC, TFWav2Vec2ForSequenceClassification, TFWav2Vec2Model, TFWav2Vec2PreTrainedModel, ) from .models.whisper import ( TFWhisperForConditionalGeneration, TFWhisperModel, TFWhisperPreTrainedModel, ) from .models.xglm import ( TFXGLMForCausalLM, TFXGLMModel, TFXGLMPreTrainedModel, ) from .models.xlm import ( TFXLMForMultipleChoice, TFXLMForQuestionAnsweringSimple, TFXLMForSequenceClassification, TFXLMForTokenClassification, TFXLMMainLayer, TFXLMModel, TFXLMPreTrainedModel, TFXLMWithLMHeadModel, ) from .models.xlm_roberta import ( TFXLMRobertaForCausalLM, TFXLMRobertaForMaskedLM, TFXLMRobertaForMultipleChoice, TFXLMRobertaForQuestionAnswering, TFXLMRobertaForSequenceClassification, TFXLMRobertaForTokenClassification, TFXLMRobertaModel, TFXLMRobertaPreTrainedModel, ) from .models.xlnet import ( TFXLNetForMultipleChoice, TFXLNetForQuestionAnsweringSimple, TFXLNetForSequenceClassification, TFXLNetForTokenClassification, TFXLNetLMHeadModel, TFXLNetMainLayer, TFXLNetModel, TFXLNetPreTrainedModel, ) # Optimization from .optimization_tf import ( AdamWeightDecay, GradientAccumulator, WarmUp, create_optimizer, ) try: if not ( is_librosa_available() and is_essentia_available() and 
is_scipy_available() and is_torch_available() and is_pretty_midi_available() ): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects import * else: from .models.pop2piano import ( Pop2PianoFeatureExtractor, Pop2PianoProcessor, Pop2PianoTokenizer, ) try: if not is_torchaudio_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from .utils.dummy_torchaudio_objects import * else: from .models.musicgen_melody import MusicgenMelodyFeatureExtractor, MusicgenMelodyProcessor try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: # Import the same objects as dummies to get them in the namespace. # They will raise an import error if the user tries to instantiate / use them. from .utils.dummy_flax_objects import * else: from .generation import ( FlaxForcedBOSTokenLogitsProcessor, FlaxForcedEOSTokenLogitsProcessor, FlaxForceTokensLogitsProcessor, FlaxGenerationMixin, FlaxLogitsProcessor, FlaxLogitsProcessorList, FlaxLogitsWarper, FlaxMinLengthLogitsProcessor, FlaxSuppressTokensAtBeginLogitsProcessor, FlaxSuppressTokensLogitsProcessor, FlaxTemperatureLogitsWarper, FlaxTopKLogitsWarper, FlaxTopPLogitsWarper, FlaxWhisperTimeStampLogitsProcessor, ) from .modeling_flax_utils import FlaxPreTrainedModel # Flax model imports from .models.albert import ( FlaxAlbertForMaskedLM, FlaxAlbertForMultipleChoice, FlaxAlbertForPreTraining, FlaxAlbertForQuestionAnswering, FlaxAlbertForSequenceClassification, FlaxAlbertForTokenClassification, FlaxAlbertModel, FlaxAlbertPreTrainedModel, ) from .models.auto import ( FLAX_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING, FLAX_MODEL_FOR_CAUSAL_LM_MAPPING, FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING, FLAX_MODEL_FOR_MASKED_LM_MAPPING, FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING, FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING, FLAX_MODEL_FOR_PRETRAINING_MAPPING, FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING, FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, FLAX_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING, FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING, FLAX_MODEL_MAPPING, FlaxAutoModel, FlaxAutoModelForCausalLM, FlaxAutoModelForImageClassification, FlaxAutoModelForMaskedLM, FlaxAutoModelForMultipleChoice, FlaxAutoModelForNextSentencePrediction, FlaxAutoModelForPreTraining, FlaxAutoModelForQuestionAnswering, FlaxAutoModelForSeq2SeqLM, FlaxAutoModelForSequenceClassification, FlaxAutoModelForSpeechSeq2Seq, FlaxAutoModelForTokenClassification, FlaxAutoModelForVision2Seq, ) from .models.bart import ( FlaxBartDecoderPreTrainedModel, FlaxBartForCausalLM, FlaxBartForConditionalGeneration, FlaxBartForQuestionAnswering, FlaxBartForSequenceClassification, FlaxBartModel, FlaxBartPreTrainedModel, ) from .models.beit import ( FlaxBeitForImageClassification, FlaxBeitForMaskedImageModeling, FlaxBeitModel, FlaxBeitPreTrainedModel, ) from .models.bert import ( FlaxBertForCausalLM, FlaxBertForMaskedLM, FlaxBertForMultipleChoice, FlaxBertForNextSentencePrediction, FlaxBertForPreTraining, FlaxBertForQuestionAnswering, FlaxBertForSequenceClassification, FlaxBertForTokenClassification, FlaxBertModel, FlaxBertPreTrainedModel, ) from .models.big_bird import ( FlaxBigBirdForCausalLM, FlaxBigBirdForMaskedLM, FlaxBigBirdForMultipleChoice, FlaxBigBirdForPreTraining, FlaxBigBirdForQuestionAnswering, FlaxBigBirdForSequenceClassification, 
FlaxBigBirdForTokenClassification, FlaxBigBirdModel, FlaxBigBirdPreTrainedModel, ) from .models.blenderbot import ( FlaxBlenderbotForConditionalGeneration, FlaxBlenderbotModel, FlaxBlenderbotPreTrainedModel, ) from .models.blenderbot_small import ( FlaxBlenderbotSmallForConditionalGeneration, FlaxBlenderbotSmallModel, FlaxBlenderbotSmallPreTrainedModel, ) from .models.bloom import ( FlaxBloomForCausalLM, FlaxBloomModel, FlaxBloomPreTrainedModel, ) from .models.clip import ( FlaxCLIPModel, FlaxCLIPPreTrainedModel, FlaxCLIPTextModel, FlaxCLIPTextModelWithProjection, FlaxCLIPTextPreTrainedModel, FlaxCLIPVisionModel, FlaxCLIPVisionPreTrainedModel, ) from .models.dinov2 import ( FlaxDinov2ForImageClassification, FlaxDinov2Model, FlaxDinov2PreTrainedModel, ) from .models.distilbert import ( FlaxDistilBertForMaskedLM, FlaxDistilBertForMultipleChoice, FlaxDistilBertForQuestionAnswering, FlaxDistilBertForSequenceClassification, FlaxDistilBertForTokenClassification, FlaxDistilBertModel, FlaxDistilBertPreTrainedModel, ) from .models.electra import ( FlaxElectraForCausalLM, FlaxElectraForMaskedLM, FlaxElectraForMultipleChoice, FlaxElectraForPreTraining, FlaxElectraForQuestionAnswering, FlaxElectraForSequenceClassification, FlaxElectraForTokenClassification, FlaxElectraModel, FlaxElectraPreTrainedModel, ) from .models.encoder_decoder import FlaxEncoderDecoderModel from .models.gemma import ( FlaxGemmaForCausalLM, FlaxGemmaModel, FlaxGemmaPreTrainedModel, ) from .models.gpt2 import ( FlaxGPT2LMHeadModel, FlaxGPT2Model, FlaxGPT2PreTrainedModel, ) from .models.gpt_neo import ( FlaxGPTNeoForCausalLM, FlaxGPTNeoModel, FlaxGPTNeoPreTrainedModel, ) from .models.gptj import ( FlaxGPTJForCausalLM, FlaxGPTJModel, FlaxGPTJPreTrainedModel, ) from .models.llama import ( FlaxLlamaForCausalLM, FlaxLlamaModel, FlaxLlamaPreTrainedModel, ) from .models.longt5 import ( FlaxLongT5ForConditionalGeneration, FlaxLongT5Model, FlaxLongT5PreTrainedModel, ) from .models.marian import ( FlaxMarianModel, FlaxMarianMTModel, FlaxMarianPreTrainedModel, ) from .models.mbart import ( FlaxMBartForConditionalGeneration, FlaxMBartForQuestionAnswering, FlaxMBartForSequenceClassification, FlaxMBartModel, FlaxMBartPreTrainedModel, ) from .models.mistral import ( FlaxMistralForCausalLM, FlaxMistralModel, FlaxMistralPreTrainedModel, ) from .models.mt5 import ( FlaxMT5EncoderModel, FlaxMT5ForConditionalGeneration, FlaxMT5Model, ) from .models.opt import FlaxOPTForCausalLM, FlaxOPTModel, FlaxOPTPreTrainedModel from .models.pegasus import ( FlaxPegasusForConditionalGeneration, FlaxPegasusModel, FlaxPegasusPreTrainedModel, ) from .models.regnet import ( FlaxRegNetForImageClassification, FlaxRegNetModel, FlaxRegNetPreTrainedModel, ) from .models.resnet import ( FlaxResNetForImageClassification, FlaxResNetModel, FlaxResNetPreTrainedModel, ) from .models.roberta import ( FlaxRobertaForCausalLM, FlaxRobertaForMaskedLM, FlaxRobertaForMultipleChoice, FlaxRobertaForQuestionAnswering, FlaxRobertaForSequenceClassification, FlaxRobertaForTokenClassification, FlaxRobertaModel, FlaxRobertaPreTrainedModel, ) from .models.roberta_prelayernorm import ( FlaxRobertaPreLayerNormForCausalLM, FlaxRobertaPreLayerNormForMaskedLM, FlaxRobertaPreLayerNormForMultipleChoice, FlaxRobertaPreLayerNormForQuestionAnswering, FlaxRobertaPreLayerNormForSequenceClassification, FlaxRobertaPreLayerNormForTokenClassification, FlaxRobertaPreLayerNormModel, FlaxRobertaPreLayerNormPreTrainedModel, ) from .models.roformer import ( FlaxRoFormerForMaskedLM, FlaxRoFormerForMultipleChoice, 
FlaxRoFormerForQuestionAnswering, FlaxRoFormerForSequenceClassification, FlaxRoFormerForTokenClassification, FlaxRoFormerModel, FlaxRoFormerPreTrainedModel, ) from .models.speech_encoder_decoder import FlaxSpeechEncoderDecoderModel from .models.t5 import ( FlaxT5EncoderModel, FlaxT5ForConditionalGeneration, FlaxT5Model, FlaxT5PreTrainedModel, ) from .models.vision_encoder_decoder import FlaxVisionEncoderDecoderModel from .models.vision_text_dual_encoder import FlaxVisionTextDualEncoderModel from .models.vit import ( FlaxViTForImageClassification, FlaxViTModel, FlaxViTPreTrainedModel, ) from .models.wav2vec2 import ( FlaxWav2Vec2ForCTC, FlaxWav2Vec2ForPreTraining, FlaxWav2Vec2Model, FlaxWav2Vec2PreTrainedModel, ) from .models.whisper import ( FlaxWhisperForAudioClassification, FlaxWhisperForConditionalGeneration, FlaxWhisperModel, FlaxWhisperPreTrainedModel, ) from .models.xglm import ( FlaxXGLMForCausalLM, FlaxXGLMModel, FlaxXGLMPreTrainedModel, ) from .models.xlm_roberta import ( FlaxXLMRobertaForCausalLM, FlaxXLMRobertaForMaskedLM, FlaxXLMRobertaForMultipleChoice, FlaxXLMRobertaForQuestionAnswering, FlaxXLMRobertaForSequenceClassification, FlaxXLMRobertaForTokenClassification, FlaxXLMRobertaModel, FlaxXLMRobertaPreTrainedModel, ) else: import sys sys.modules[__name__] = _LazyModule( __name__, globals()["__file__"], _import_structure, module_spec=__spec__, extra_objects={"__version__": __version__}, ) if not is_tf_available() and not is_torch_available() and not is_flax_available(): logger.warning_advice( "None of PyTorch, TensorFlow >= 2.0, or Flax have been found. " "Models won't be available and only tokenizers, configuration " "and file/data utilities can be used." )
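# --------------------------------------------------------------------------
# Editorial note: the `_LazyModule` assignment above is what defers all of the
# heavy framework imports declared in `_import_structure` until an attribute is
# first accessed.  The snippet below is NOT the `_LazyModule` implementation;
# it is a minimal, hypothetical sketch of the same lazy-import idea, written as
# if it were the body of a small package's `__init__.py`, using the PEP 562
# module-level `__getattr__` hook.  The mapping name
# `_EXAMPLE_IMPORT_STRUCTURE` and its contents are made up for illustration.
import importlib
from typing import Dict, List

_EXAMPLE_IMPORT_STRUCTURE: Dict[str, List[str]] = {
    "models.bert": ["BertConfig", "BertModel"],
    "models.gpt2": ["GPT2Config", "GPT2Model"],
}


def __getattr__(name: str):
    # Import only the submodule that defines the requested symbol, the first
    # time it is asked for, then cache the result in the module globals so
    # later lookups bypass this hook entirely.
    for submodule, symbols in _EXAMPLE_IMPORT_STRUCTURE.items():
        if name in symbols:
            module = importlib.import_module(f".{submodule}", __name__)
            value = getattr(module, name)
            globals()[name] = value
            return value
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")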
transformers/src/transformers/__init__.py/0
{ "file_path": "transformers/src/transformers/__init__.py", "repo_id": "transformers", "token_count": 150988 }
321
#!/usr/bin/env python # coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import base64 import importlib import io import json import os import tempfile from functools import lru_cache from typing import Any, Dict, List, Optional, Union from huggingface_hub import create_repo, get_collection, hf_hub_download, metadata_update, upload_folder from huggingface_hub.utils import RepositoryNotFoundError, build_hf_headers, get_session from packaging import version from ..dynamic_module_utils import ( custom_object_save, get_class_from_dynamic_module, get_imports, ) from ..models.auto import AutoProcessor from ..utils import ( CONFIG_NAME, cached_file, is_accelerate_available, is_torch_available, is_vision_available, logging, ) from .agent_types import handle_agent_inputs, handle_agent_outputs logger = logging.get_logger(__name__) if is_torch_available(): import torch if is_accelerate_available(): from accelerate import PartialState from accelerate.utils import send_to_device TOOL_CONFIG_FILE = "tool_config.json" def get_repo_type(repo_id, repo_type=None, **hub_kwargs): if repo_type is not None: return repo_type try: hf_hub_download(repo_id, TOOL_CONFIG_FILE, repo_type="space", **hub_kwargs) return "space" except RepositoryNotFoundError: try: hf_hub_download(repo_id, TOOL_CONFIG_FILE, repo_type="model", **hub_kwargs) return "model" except RepositoryNotFoundError: raise EnvironmentError(f"`{repo_id}` does not seem to be a valid repo identifier on the Hub.") except Exception: return "model" except Exception: return "space" # docstyle-ignore APP_FILE_TEMPLATE = """from transformers import launch_gradio_demo from {module_name} import {class_name} launch_gradio_demo({class_name}) """ class Tool: """ A base class for the functions used by the agent. Subclass this and implement the `__call__` method as well as the following class attributes: - **description** (`str`) -- A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance 'This is a tool that downloads a file from a `url`. It takes the `url` as input, and returns the text contained in the file'. - **name** (`str`) -- A performative name that will be used for your tool in the prompt to the agent. For instance `"text-classifier"` or `"image_generator"`. - **inputs** (`Dict[str, Dict[str, Union[str, type]]]`) -- The dict of modalities expected for the inputs. It has one `type`key and a `description`key. This is used by `launch_gradio_demo` or to make a nice space from your tool, and also can be used in the generated description for your tool. - **output_type** (`type`) -- The type of the tool output. This is used by `launch_gradio_demo` or to make a nice space from your tool, and also can be used in the generated description for your tool. You can also override the method [`~Tool.setup`] if your tool as an expensive operation to perform before being usable (such as loading a model). 
[`~Tool.setup`] will be called the first time you use your tool, but not at instantiation. """ name: str description: str inputs: Dict[str, Dict[str, Union[str, type]]] output_type: type def __init__(self, *args, **kwargs): self.is_initialized = False def validate_attributes(self): required_attributes = { "description": str, "name": str, "inputs": Dict, "output_type": type, } for attr, expected_type in required_attributes.items(): attr_value = getattr(self, attr, None) if not isinstance(attr_value, expected_type): raise TypeError(f"Instance attribute {attr} must exist and be of type {expected_type.__name__}") def forward(self, *args, **kwargs): return NotImplemented("Write this method in your subclass of `Tool`.") def __call__(self, *args, **kwargs): args, kwargs = handle_agent_inputs(*args, **kwargs) outputs = self.forward(*args, **kwargs) return handle_agent_outputs(outputs, self.output_type) def setup(self): """ Overwrite this method here for any operation that is expensive and needs to be executed before you start using your tool. Such as loading a big model. """ self.is_initialized = True def save(self, output_dir): """ Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your tool in `output_dir` as well as autogenerate: - a config file named `tool_config.json` - an `app.py` file so that your tool can be converted to a space - a `requirements.txt` containing the names of the module used by your tool (as detected when inspecting its code) You should only use this method to save tools that are defined in a separate module (not `__main__`). Args: output_dir (`str`): The folder in which you want to save your tool. """ os.makedirs(output_dir, exist_ok=True) # Save module file if self.__module__ == "__main__": raise ValueError( f"We can't save the code defining {self} in {output_dir} as it's been defined in __main__. You " "have to put this code in a separate module so we can include it in the saved folder." ) module_files = custom_object_save(self, output_dir) module_name = self.__class__.__module__ last_module = module_name.split(".")[-1] full_name = f"{last_module}.{self.__class__.__name__}" # Save config file config_file = os.path.join(output_dir, "tool_config.json") if os.path.isfile(config_file): with open(config_file, "r", encoding="utf-8") as f: tool_config = json.load(f) else: tool_config = {} tool_config = { "tool_class": full_name, "description": self.description, "name": self.name, "inputs": self.inputs, "output_type": str(self.output_type), } with open(config_file, "w", encoding="utf-8") as f: f.write(json.dumps(tool_config, indent=2, sort_keys=True) + "\n") # Save app file app_file = os.path.join(output_dir, "app.py") with open(app_file, "w", encoding="utf-8") as f: f.write(APP_FILE_TEMPLATE.format(module_name=last_module, class_name=self.__class__.__name__)) # Save requirements file requirements_file = os.path.join(output_dir, "requirements.txt") imports = [] for module in module_files: imports.extend(get_imports(module)) imports = list(set(imports)) with open(requirements_file, "w", encoding="utf-8") as f: f.write("\n".join(imports) + "\n") @classmethod def from_hub( cls, repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, **kwargs, ): """ Loads a tool defined on the Hub. <Tip warning={true}> Loading a tool from the Hub means that you'll download the tool and execute it locally. 
ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt. </Tip> Args: repo_id (`str`): The name of the repo on the Hub where your tool is defined. model_repo_id (`str`, *optional*): If your tool uses a model and you want to use a different model than the default, you can pass a second repo ID or an endpoint url to this argument. token (`str`, *optional*): The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). kwargs (additional keyword arguments, *optional*): Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others will be passed along to its init. """ hub_kwargs_names = [ "cache_dir", "force_download", "resume_download", "proxies", "revision", "repo_type", "subfolder", "local_files_only", ] hub_kwargs = {k: v for k, v in kwargs.items() if k in hub_kwargs_names} # Try to get the tool config first. hub_kwargs["repo_type"] = get_repo_type(repo_id, **hub_kwargs) resolved_config_file = cached_file( repo_id, TOOL_CONFIG_FILE, token=token, **hub_kwargs, _raise_exceptions_for_gated_repo=False, _raise_exceptions_for_missing_entries=False, _raise_exceptions_for_connection_errors=False, ) is_tool_config = resolved_config_file is not None if resolved_config_file is None: resolved_config_file = cached_file( repo_id, CONFIG_NAME, token=token, **hub_kwargs, _raise_exceptions_for_gated_repo=False, _raise_exceptions_for_missing_entries=False, _raise_exceptions_for_connection_errors=False, ) if resolved_config_file is None: raise EnvironmentError( f"{repo_id} does not appear to provide a valid configuration in `tool_config.json` or `config.json`." ) with open(resolved_config_file, encoding="utf-8") as reader: config = json.load(reader) if not is_tool_config: if "custom_tool" not in config: raise EnvironmentError( f"{repo_id} does not provide a mapping to custom tools in its configuration `config.json`." ) custom_tool = config["custom_tool"] else: custom_tool = config tool_class = custom_tool["tool_class"] tool_class = get_class_from_dynamic_module(tool_class, repo_id, token=token, **hub_kwargs) if len(tool_class.name) == 0: tool_class.name = custom_tool["name"] if tool_class.name != custom_tool["name"]: logger.warning( f"{tool_class.__name__} implements a different name in its configuration and class. Using the tool " "configuration name." ) tool_class.name = custom_tool["name"] if len(tool_class.description) == 0: tool_class.description = custom_tool["description"] if tool_class.description != custom_tool["description"]: logger.warning( f"{tool_class.__name__} implements a different description in its configuration and class. Using the " "tool configuration description." ) tool_class.description = custom_tool["description"] if tool_class.inputs != custom_tool["inputs"]: tool_class.inputs = custom_tool["inputs"] if tool_class.output_type != custom_tool["output_type"]: tool_class.output_type = custom_tool["output_type"] return tool_class(**kwargs) def push_to_hub( self, repo_id: str, commit_message: str = "Upload tool", private: Optional[bool] = None, token: Optional[Union[bool, str]] = None, create_pr: bool = False, ) -> str: """ Upload the tool to the Hub. For this method to work properly, your tool must have been defined in a separate module (not `__main__`). 
For instance: ``` from my_tool_module import MyTool my_tool = MyTool() my_tool.push_to_hub("my-username/my-space") ``` Parameters: repo_id (`str`): The name of the repository you want to push your tool to. It should contain your organization name when pushing to a given organization. commit_message (`str`, *optional*, defaults to `"Upload tool"`): Message to commit while pushing. private (`bool`, *optional*): Whether or not the repository created should be private. token (`bool` or `str`, *optional*): The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). create_pr (`bool`, *optional*, defaults to `False`): Whether or not to create a PR with the uploaded files or directly commit. """ repo_url = create_repo( repo_id=repo_id, token=token, private=private, exist_ok=True, repo_type="space", space_sdk="gradio", ) repo_id = repo_url.repo_id metadata_update(repo_id, {"tags": ["tool"]}, repo_type="space") with tempfile.TemporaryDirectory() as work_dir: # Save all files. self.save(work_dir) logger.info(f"Uploading the following files to {repo_id}: {','.join(os.listdir(work_dir))}") return upload_folder( repo_id=repo_id, commit_message=commit_message, folder_path=work_dir, token=token, create_pr=create_pr, repo_type="space", ) @staticmethod def from_gradio(gradio_tool): """ Creates a [`Tool`] from a gradio tool. """ import inspect class GradioToolWrapper(Tool): def __init__(self, _gradio_tool): super().__init__() self.name = _gradio_tool.name self.description = _gradio_tool.description self.output_type = "text" self._gradio_tool = _gradio_tool func_args = list(inspect.signature(_gradio_tool.run).parameters.keys()) self.inputs = {key: "" for key in func_args} def forward(self, *args, **kwargs): return self._gradio_tool.run(*args, **kwargs) return GradioToolWrapper(gradio_tool) @staticmethod def from_langchain(langchain_tool): """ Creates a [`Tool`] from a langchain tool. """ class LangChainToolWrapper(Tool): def __init__(self, _langchain_tool): super().__init__() self.name = _langchain_tool.name.lower() self.description = _langchain_tool.description self.inputs = parse_langchain_args(_langchain_tool.args) self.output_type = "text" self.langchain_tool = _langchain_tool def forward(self, *args, **kwargs): tool_input = kwargs.copy() for index, argument in enumerate(args): if index < len(self.inputs): input_key = next(iter(self.inputs)) tool_input[input_key] = argument return self.langchain_tool.run(tool_input) return LangChainToolWrapper(langchain_tool) DEFAULT_TOOL_DESCRIPTION_TEMPLATE = """ - {{ tool.name }}: {{ tool.description }} Takes inputs: {{tool.inputs}} """ def get_tool_description_with_args(tool: Tool, description_template: str = DEFAULT_TOOL_DESCRIPTION_TEMPLATE) -> str: compiled_template = compile_jinja_template(description_template) rendered = compiled_template.render( tool=tool, ) return rendered @lru_cache def compile_jinja_template(template): try: import jinja2 from jinja2.exceptions import TemplateError from jinja2.sandbox import ImmutableSandboxedEnvironment except ImportError: raise ImportError("template requires jinja2 to be installed.") if version.parse(jinja2.__version__) < version.parse("3.1.0"): raise ImportError("template requires jinja2>=3.1.0 to be installed. 
Your version is " f"{jinja2.__version__}.") def raise_exception(message): raise TemplateError(message) jinja_env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True) jinja_env.globals["raise_exception"] = raise_exception return jinja_env.from_string(template) class PipelineTool(Tool): """ A [`Tool`] tailored towards Transformer models. On top of the class attributes of the base class [`Tool`], you will need to specify: - **model_class** (`type`) -- The class to use to load the model in this tool. - **default_checkpoint** (`str`) -- The default checkpoint that should be used when the user doesn't specify one. - **pre_processor_class** (`type`, *optional*, defaults to [`AutoProcessor`]) -- The class to use to load the pre-processor - **post_processor_class** (`type`, *optional*, defaults to [`AutoProcessor`]) -- The class to use to load the post-processor (when different from the pre-processor). Args: model (`str` or [`PreTrainedModel`], *optional*): The name of the checkpoint to use for the model, or the instantiated model. If unset, will default to the value of the class attribute `default_checkpoint`. pre_processor (`str` or `Any`, *optional*): The name of the checkpoint to use for the pre-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the value of `model` if unset. post_processor (`str` or `Any`, *optional*): The name of the checkpoint to use for the post-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the `pre_processor` if unset. device (`int`, `str` or `torch.device`, *optional*): The device on which to execute the model. Will default to any accelerator available (GPU, MPS etc...), the CPU otherwise. device_map (`str` or `dict`, *optional*): If passed along, will be used to instantiate the model. model_kwargs (`dict`, *optional*): Any keyword argument to send to the model instantiation. token (`str`, *optional*): The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). hub_kwargs (additional keyword arguments, *optional*): Any additional keyword argument to send to the methods that will load the data from the Hub. 
""" pre_processor_class = AutoProcessor model_class = None post_processor_class = AutoProcessor default_checkpoint = None description = "This is a pipeline tool" name = "pipeline" inputs = {"prompt": str} output_type = str def __init__( self, model=None, pre_processor=None, post_processor=None, device=None, device_map=None, model_kwargs=None, token=None, **hub_kwargs, ): if not is_torch_available(): raise ImportError("Please install torch in order to use this tool.") if not is_accelerate_available(): raise ImportError("Please install accelerate in order to use this tool.") if model is None: if self.default_checkpoint is None: raise ValueError("This tool does not implement a default checkpoint, you need to pass one.") model = self.default_checkpoint if pre_processor is None: pre_processor = model self.model = model self.pre_processor = pre_processor self.post_processor = post_processor self.device = device self.device_map = device_map self.model_kwargs = {} if model_kwargs is None else model_kwargs if device_map is not None: self.model_kwargs["device_map"] = device_map self.hub_kwargs = hub_kwargs self.hub_kwargs["token"] = token super().__init__() def setup(self): """ Instantiates the `pre_processor`, `model` and `post_processor` if necessary. """ if isinstance(self.pre_processor, str): self.pre_processor = self.pre_processor_class.from_pretrained(self.pre_processor, **self.hub_kwargs) if isinstance(self.model, str): self.model = self.model_class.from_pretrained(self.model, **self.model_kwargs, **self.hub_kwargs) if self.post_processor is None: self.post_processor = self.pre_processor elif isinstance(self.post_processor, str): self.post_processor = self.post_processor_class.from_pretrained(self.post_processor, **self.hub_kwargs) if self.device is None: if self.device_map is not None: self.device = list(self.model.hf_device_map.values())[0] else: self.device = PartialState().default_device if self.device_map is None: self.model.to(self.device) super().setup() def encode(self, raw_inputs): """ Uses the `pre_processor` to prepare the inputs for the `model`. """ return self.pre_processor(raw_inputs) def forward(self, inputs): """ Sends the inputs through the `model`. """ with torch.no_grad(): return self.model(**inputs) def decode(self, outputs): """ Uses the `post_processor` to decode the model output. """ return self.post_processor(outputs) def __call__(self, *args, **kwargs): args, kwargs = handle_agent_inputs(*args, **kwargs) if not self.is_initialized: self.setup() encoded_inputs = self.encode(*args, **kwargs) tensor_inputs = {k: v for k, v in encoded_inputs.items() if isinstance(v, torch.Tensor)} non_tensor_inputs = {k: v for k, v in encoded_inputs.items() if not isinstance(v, torch.Tensor)} encoded_inputs = send_to_device(tensor_inputs, self.device) outputs = self.forward({**encoded_inputs, **non_tensor_inputs}) outputs = send_to_device(outputs, "cpu") decoded_outputs = self.decode(outputs) return handle_agent_outputs(decoded_outputs, self.output_type) def launch_gradio_demo(tool_class: Tool): """ Launches a gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes `inputs` and `output_type`. Args: tool_class (`type`): The class of the tool for which to launch the demo. 
""" try: import gradio as gr except ImportError: raise ImportError("Gradio should be installed in order to launch a gradio demo.") tool = tool_class() def fn(*args, **kwargs): return tool(*args, **kwargs) gradio_inputs = [] for input_name, input_details in tool_class.inputs.items(): input_type = input_details["type"] if input_type == "text": gradio_inputs.append(gr.Textbox(label=input_name)) elif input_type == "image": gradio_inputs.append(gr.Image(label=input_name)) elif input_type == "audio": gradio_inputs.append(gr.Audio(label=input_name)) else: error_message = f"Input type '{input_type}' not supported." raise ValueError(error_message) gradio_output = tool_class.output_type assert gradio_output in ["text", "image", "audio"], f"Output type '{gradio_output}' not supported." gr.Interface( fn=fn, inputs=gradio_inputs, outputs=gradio_output, title=tool_class.__name__, article=tool.description, ).launch() TASK_MAPPING = { "document-question-answering": "DocumentQuestionAnsweringTool", "image-question-answering": "ImageQuestionAnsweringTool", "speech-to-text": "SpeechToTextTool", "text-to-speech": "TextToSpeechTool", "translation": "TranslationTool", "python_interpreter": "PythonInterpreterTool", } def load_tool(task_or_repo_id, model_repo_id=None, token=None, **kwargs): """ Main function to quickly load a tool, be it on the Hub or in the Transformers library. <Tip warning={true}> Loading a tool means that you'll download the tool and execute it locally. ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt. </Tip> Args: task_or_repo_id (`str`): The task for which to load the tool or a repo ID of a tool on the Hub. Tasks implemented in Transformers are: - `"document-question-answering"` - `"image-question-answering"` - `"speech-to-text"` - `"text-to-speech"` - `"translation"` model_repo_id (`str`, *optional*): Use this argument to use a different model than the default one for the tool you selected. token (`str`, *optional*): The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). kwargs (additional keyword arguments, *optional*): Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others will be passed along to its init. """ if task_or_repo_id in TASK_MAPPING: tool_class_name = TASK_MAPPING[task_or_repo_id] main_module = importlib.import_module("transformers") tools_module = main_module.agents tool_class = getattr(tools_module, tool_class_name) return tool_class(model_repo_id, token=token, **kwargs) else: logger.warning_once( f"You're loading a tool from the Hub from {model_repo_id}. Please make sure this is a source that you " f"trust as the code within that tool will be executed on your machine. Always verify the code of " f"the tools that you load. We recommend specifying a `revision` to ensure you're loading the " f"code that you have checked." ) return Tool.from_hub(task_or_repo_id, model_repo_id=model_repo_id, token=token, **kwargs) def add_description(description): """ A decorator that adds a description to a function. 
""" def inner(func): func.description = description func.name = func.__name__ return func return inner ## Will move to the Hub class EndpointClient: def __init__(self, endpoint_url: str, token: Optional[str] = None): self.headers = { **build_hf_headers(token=token), "Content-Type": "application/json", } self.endpoint_url = endpoint_url @staticmethod def encode_image(image): _bytes = io.BytesIO() image.save(_bytes, format="PNG") b64 = base64.b64encode(_bytes.getvalue()) return b64.decode("utf-8") @staticmethod def decode_image(raw_image): if not is_vision_available(): raise ImportError( "This tool returned an image but Pillow is not installed. Please install it (`pip install Pillow`)." ) from PIL import Image b64 = base64.b64decode(raw_image) _bytes = io.BytesIO(b64) return Image.open(_bytes) def __call__( self, inputs: Optional[Union[str, Dict, List[str], List[List[str]]]] = None, params: Optional[Dict] = None, data: Optional[bytes] = None, output_image: bool = False, ) -> Any: # Build payload payload = {} if inputs: payload["inputs"] = inputs if params: payload["parameters"] = params # Make API call response = get_session().post(self.endpoint_url, headers=self.headers, json=payload, data=data) # By default, parse the response for the user. if output_image: return self.decode_image(response.content) else: return response.json() def parse_langchain_args(args: Dict[str, str]) -> Dict[str, str]: """Parse the args attribute of a LangChain tool to create a matching inputs dictionary.""" inputs = args.copy() for arg_details in inputs.values(): if "title" in arg_details: arg_details.pop("title") return inputs class ToolCollection: """ Tool collections enable loading all Spaces from a collection in order to be added to the agent's toolbox. > [!NOTE] > Only Spaces will be fetched, so you can feel free to add models and datasets to your collection if you'd > like for this collection to showcase them. Args: collection_slug (str): The collection slug referencing the collection. token (str, *optional*): The authentication token if the collection is private. Example: ```py >>> from transformers import ToolCollection, ReactCodeAgent >>> image_tool_collection = ToolCollection(collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f") >>> agent = ReactCodeAgent(tools=[*image_tool_collection.tools], add_base_tools=True) >>> agent.run("Please draw me a picture of rivers and lakes.") ``` """ def __init__(self, collection_slug: str, token: Optional[str] = None): self._collection = get_collection(collection_slug, token=token) self._hub_repo_ids = {item.item_id for item in self._collection.items if item.item_type == "space"} self.tools = {Tool.from_hub(repo_id) for repo_id in self._hub_repo_ids}
transformers/src/transformers/agents/tools.py/0
{ "file_path": "transformers/src/transformers/agents/tools.py", "repo_id": "transformers", "token_count": 12939 }
322
""" Implementation of a custom transfer agent for the transfer type "multipart" for git-lfs. Inspired by: github.com/cbartz/git-lfs-swift-transfer-agent/blob/master/git_lfs_swift_transfer.py Spec is: github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md To launch debugger while developing: ``` [lfs "customtransfer.multipart"] path = /path/to/transformers/.env/bin/python args = -m debugpy --listen 5678 --wait-for-client /path/to/transformers/src/transformers/commands/transformers_cli.py lfs-multipart-upload ```""" import json import os import subprocess import sys import warnings from argparse import ArgumentParser from contextlib import AbstractContextManager from typing import Dict, List, Optional import requests from ..utils import logging from . import BaseTransformersCLICommand logger = logging.get_logger(__name__) # pylint: disable=invalid-name LFS_MULTIPART_UPLOAD_COMMAND = "lfs-multipart-upload" class LfsCommands(BaseTransformersCLICommand): """ Implementation of a custom transfer agent for the transfer type "multipart" for git-lfs. This lets users upload large files >5GB 🔥. Spec for LFS custom transfer agent is: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md This introduces two commands to the CLI: 1. $ transformers-cli lfs-enable-largefiles This should be executed once for each model repo that contains a model file >5GB. It's documented in the error message you get if you just try to git push a 5GB file without having enabled it before. 2. $ transformers-cli lfs-multipart-upload This command is called by lfs directly and is not meant to be called by the user. """ @staticmethod def register_subcommand(parser: ArgumentParser): enable_parser = parser.add_parser( "lfs-enable-largefiles", help=( "Deprecated: use `huggingface-cli` instead. Configure your repository to enable upload of files > 5GB." ), ) enable_parser.add_argument("path", type=str, help="Local path to repository you want to configure.") enable_parser.set_defaults(func=lambda args: LfsEnableCommand(args)) upload_parser = parser.add_parser( LFS_MULTIPART_UPLOAD_COMMAND, help=( "Deprecated: use `huggingface-cli` instead. " "Command will get called by git-lfs, do not call it directly." ), ) upload_parser.set_defaults(func=lambda args: LfsUploadCommand(args)) class LfsEnableCommand: def __init__(self, args): self.args = args def run(self): warnings.warn( "Managing repositories through transformers-cli is deprecated. Please use `huggingface-cli` instead." 
) local_path = os.path.abspath(self.args.path) if not os.path.isdir(local_path): print("This does not look like a valid git repo.") exit(1) subprocess.run( "git config lfs.customtransfer.multipart.path transformers-cli".split(), check=True, cwd=local_path ) subprocess.run( f"git config lfs.customtransfer.multipart.args {LFS_MULTIPART_UPLOAD_COMMAND}".split(), check=True, cwd=local_path, ) print("Local repo set up for largefiles") def write_msg(msg: Dict): """Write out the message in Line delimited JSON.""" msg = json.dumps(msg) + "\n" sys.stdout.write(msg) sys.stdout.flush() def read_msg() -> Optional[Dict]: """Read Line delimited JSON from stdin.""" msg = json.loads(sys.stdin.readline().strip()) if "terminate" in (msg.get("type"), msg.get("event")): # terminate message received return None if msg.get("event") not in ("download", "upload"): logger.critical("Received unexpected message") sys.exit(1) return msg class FileSlice(AbstractContextManager): """ File-like object that only reads a slice of a file Inspired by stackoverflow.com/a/29838711/593036 """ def __init__(self, filepath: str, seek_from: int, read_limit: int): self.filepath = filepath self.seek_from = seek_from self.read_limit = read_limit self.n_seen = 0 def __enter__(self): self.f = open(self.filepath, "rb") self.f.seek(self.seek_from) return self def __len__(self): total_length = os.fstat(self.f.fileno()).st_size return min(self.read_limit, total_length - self.seek_from) def read(self, n=-1): if self.n_seen >= self.read_limit: return b"" remaining_amount = self.read_limit - self.n_seen data = self.f.read(remaining_amount if n < 0 else min(n, remaining_amount)) self.n_seen += len(data) return data def __iter__(self): yield self.read(n=4 * 1024 * 1024) def __exit__(self, *args): self.f.close() class LfsUploadCommand: def __init__(self, args): self.args = args def run(self): # Immediately after invoking a custom transfer process, git-lfs # sends initiation data to the process over stdin. # This tells the process useful information about the configuration. init_msg = json.loads(sys.stdin.readline().strip()) if not (init_msg.get("event") == "init" and init_msg.get("operation") == "upload"): write_msg({"error": {"code": 32, "message": "Wrong lfs init operation"}}) sys.exit(1) # The transfer process should use the information it needs from the # initiation structure, and also perform any one-off setup tasks it # needs to do. It should then respond on stdout with a simple empty # confirmation structure, as follows: write_msg({}) # After the initiation exchange, git-lfs will send any number of # transfer requests to the stdin of the transfer process, in a serial sequence. while True: msg = read_msg() if msg is None: # When all transfers have been processed, git-lfs will send # a terminate event to the stdin of the transfer process. # On receiving this message the transfer process should # clean up and terminate. No response is expected. 
sys.exit(0) oid = msg["oid"] filepath = msg["path"] completion_url = msg["action"]["href"] header = msg["action"]["header"] chunk_size = int(header.pop("chunk_size")) presigned_urls: List[str] = list(header.values()) parts = [] for i, presigned_url in enumerate(presigned_urls): with FileSlice(filepath, seek_from=i * chunk_size, read_limit=chunk_size) as data: r = requests.put(presigned_url, data=data) r.raise_for_status() parts.append( { "etag": r.headers.get("etag"), "partNumber": i + 1, } ) # In order to support progress reporting while data is uploading / downloading, # the transfer process should post messages to stdout write_msg( { "event": "progress", "oid": oid, "bytesSoFar": (i + 1) * chunk_size, "bytesSinceLast": chunk_size, } ) # Not precise but that's ok. r = requests.post( completion_url, json={ "oid": oid, "parts": parts, }, ) r.raise_for_status() write_msg({"event": "complete", "oid": oid})
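# --------------------------------------------------------------------------
# Editorial note: a small sketch (not part of git-lfs or of this CLI) showing
# how the `FileSlice` helper above carves a local file into fixed-size parts.
# It mirrors the core of the multipart loop in `LfsUploadCommand.run`, where
# each slice is PUT to one presigned URL.  `path` and the 4 MiB default chunk
# size are illustrative values only.
import math
import os


def iter_file_parts(path: str, chunk_size: int = 4 * 1024 * 1024):
    """Yield (part_number, data) tuples covering the whole file in order."""
    total = os.path.getsize(path)
    num_parts = math.ceil(total / chunk_size) if total else 0
    for i in range(num_parts):
        with FileSlice(path, seek_from=i * chunk_size, read_limit=chunk_size) as part:
            # `FileSlice.read()` stops at the slice boundary, so every part is
            # at most `chunk_size` bytes and the final part holds the remainder.
            yield i + 1, part.read()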
transformers/src/transformers/commands/lfs.py/0
{ "file_path": "transformers/src/transformers/commands/lfs.py", "repo_id": "transformers", "token_count": 3515 }
323
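The `lfs.py` command above implements a git-lfs custom transfer agent that uploads a file in fixed-size parts to presigned URLs. The sketch below only reproduces the slicing arithmetic of that upload loop with a local helper (no LFS server, no presigned URLs, no `requests` calls); the helper mirrors `FileSlice` rather than importing it, and the temporary file and chunk size are illustrative assumptions.

```python
import os
import tempfile


def iter_file_slices(filepath: str, chunk_size: int):
    """Yield (part_number, data) tuples covering the whole file, like the upload loop above."""
    total = os.path.getsize(filepath)
    num_parts = (total + chunk_size - 1) // chunk_size  # ceiling division
    with open(filepath, "rb") as f:
        for i in range(num_parts):
            f.seek(i * chunk_size)  # same role as FileSlice(seek_from=i * chunk_size)
            yield i + 1, f.read(chunk_size)


if __name__ == "__main__":
    # Dummy payload standing in for the large file git-lfs would hand to the agent.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(10_000_000))
        path = tmp.name

    chunk_size = 4 * 1024 * 1024  # 4 MiB, the read size FileSlice.__iter__ uses
    parts = []
    for part_number, data in iter_file_slices(path, chunk_size):
        # The real agent PUTs each slice to a presigned URL and records the returned ETag;
        # here we only record the part number and the slice size.
        parts.append({"partNumber": part_number, "size": len(data)})

    print(parts)
    os.unlink(path)
```

Each entry in `parts` corresponds to one `partNumber` the real agent reports back to the completion URL together with the part's ETag.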
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import time import warnings from dataclasses import dataclass, field from enum import Enum from typing import List, Optional, Union import torch from filelock import FileLock from torch.utils.data import Dataset from ...tokenization_utils_base import PreTrainedTokenizerBase from ...utils import logging from ..processors.glue import glue_convert_examples_to_features, glue_output_modes, glue_processors from ..processors.utils import InputFeatures logger = logging.get_logger(__name__) @dataclass class GlueDataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. Using `HfArgumentParser` we can turn this class into argparse arguments to be able to specify them on the command line. """ task_name: str = field(metadata={"help": "The name of the task to train on: " + ", ".join(glue_processors.keys())}) data_dir: str = field( metadata={"help": "The input data dir. Should contain the .tsv files (or other data files) for the task."} ) max_seq_length: int = field( default=128, metadata={ "help": ( "The maximum total input sequence length after tokenization. Sequences longer " "than this will be truncated, sequences shorter will be padded." ) }, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) def __post_init__(self): self.task_name = self.task_name.lower() class Split(Enum): train = "train" dev = "dev" test = "test" class GlueDataset(Dataset): """ This will be superseded by a framework-agnostic approach soon. """ args: GlueDataTrainingArguments output_mode: str features: List[InputFeatures] def __init__( self, args: GlueDataTrainingArguments, tokenizer: PreTrainedTokenizerBase, limit_length: Optional[int] = None, mode: Union[str, Split] = Split.train, cache_dir: Optional[str] = None, ): warnings.warn( "This dataset will be removed from the library soon, preprocessing should be handled with the 🀗 Datasets " "library. 
You can have a look at this example script for pointers: " "https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py", FutureWarning, ) self.args = args self.processor = glue_processors[args.task_name]() self.output_mode = glue_output_modes[args.task_name] if isinstance(mode, str): try: mode = Split[mode] except KeyError: raise KeyError("mode is not a valid split name") # Load data features from cache or dataset file cached_features_file = os.path.join( cache_dir if cache_dir is not None else args.data_dir, f"cached_{mode.value}_{tokenizer.__class__.__name__}_{args.max_seq_length}_{args.task_name}", ) label_list = self.processor.get_labels() if args.task_name in ["mnli", "mnli-mm"] and tokenizer.__class__.__name__ in ( "RobertaTokenizer", "RobertaTokenizerFast", "XLMRobertaTokenizer", "BartTokenizer", "BartTokenizerFast", ): # HACK(label indices are swapped in RoBERTa pretrained model) label_list[1], label_list[2] = label_list[2], label_list[1] self.label_list = label_list # Make sure only the first process in distributed training processes the dataset, # and the others will use the cache. lock_path = cached_features_file + ".lock" with FileLock(lock_path): if os.path.exists(cached_features_file) and not args.overwrite_cache: start = time.time() self.features = torch.load(cached_features_file) logger.info( f"Loading features from cached file {cached_features_file} [took %.3f s]", time.time() - start ) else: logger.info(f"Creating features from dataset file at {args.data_dir}") if mode == Split.dev: examples = self.processor.get_dev_examples(args.data_dir) elif mode == Split.test: examples = self.processor.get_test_examples(args.data_dir) else: examples = self.processor.get_train_examples(args.data_dir) if limit_length is not None: examples = examples[:limit_length] self.features = glue_convert_examples_to_features( examples, tokenizer, max_length=args.max_seq_length, label_list=label_list, output_mode=self.output_mode, ) start = time.time() torch.save(self.features, cached_features_file) # ^ This seems to take a lot of time so I want to investigate why and how we can improve. logger.info( f"Saving features into cached file {cached_features_file} [took {time.time() - start:.3f} s]" ) def __len__(self): return len(self.features) def __getitem__(self, i) -> InputFeatures: return self.features[i] def get_labels(self): return self.label_list
transformers/src/transformers/data/datasets/glue.py/0
{ "file_path": "transformers/src/transformers/data/datasets/glue.py", "repo_id": "transformers", "token_count": 2587 }
324
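A hedged usage sketch for the (deprecated) `GlueDataset` above. It assumes the MRPC `.tsv` files have already been downloaded to `./glue_data/MRPC`, which is a placeholder path; as the `FutureWarning` in the constructor notes, new code should prefer the 🀗 Datasets library.

```python
from transformers import AutoTokenizer, GlueDataset, GlueDataTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

# data_dir is an assumed local path containing the MRPC train.tsv / dev.tsv files.
args = GlueDataTrainingArguments(
    task_name="mrpc",
    data_dir="./glue_data/MRPC",
    max_seq_length=128,
    overwrite_cache=False,
)

train_dataset = GlueDataset(args, tokenizer=tokenizer, mode="train")
print(len(train_dataset))          # number of cached InputFeatures
print(train_dataset.get_labels())  # ["0", "1"] for MRPC
print(train_dataset[0])            # a single InputFeatures instance
```

On the first run the features are built with `glue_convert_examples_to_features` and cached next to the data; subsequent runs load the cached file unless `overwrite_cache=True`.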
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Feature extraction saving/loading class for common feature extractors. """ import copy import json import os import warnings from collections import UserDict from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple, Union import numpy as np from .dynamic_module_utils import custom_object_save from .utils import ( FEATURE_EXTRACTOR_NAME, PushToHubMixin, TensorType, add_model_info_to_auto_map, add_model_info_to_custom_pipelines, cached_file, copy_func, download_url, is_flax_available, is_jax_tensor, is_numpy_array, is_offline_mode, is_remote_url, is_tf_available, is_torch_available, is_torch_device, is_torch_dtype, logging, requires_backends, ) if TYPE_CHECKING: if is_torch_available(): import torch # noqa logger = logging.get_logger(__name__) PreTrainedFeatureExtractor = Union["SequenceFeatureExtractor"] # noqa: F821 class BatchFeature(UserDict): r""" Holds the output of the [`~SequenceFeatureExtractor.pad`] and feature extractor specific `__call__` methods. This class is derived from a python dictionary and can be used as a dictionary. Args: data (`dict`, *optional*): Dictionary of lists/arrays/tensors returned by the __call__/pad methods ('input_values', 'attention_mask', etc.). tensor_type (`Union[None, str, TensorType]`, *optional*): You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at initialization. """ def __init__(self, data: Optional[Dict[str, Any]] = None, tensor_type: Union[None, str, TensorType] = None): super().__init__(data) self.convert_to_tensors(tensor_type=tensor_type) def __getitem__(self, item: str) -> Union[Any]: """ If the key is a string, returns the value of the dict associated to `key` ('input_values', 'attention_mask', etc.). """ if isinstance(item, str): return self.data[item] else: raise KeyError("Indexing with integers is not available when using Python based feature extractors") def __getattr__(self, item: str): try: return self.data[item] except KeyError: raise AttributeError def __getstate__(self): return {"data": self.data} def __setstate__(self, state): if "data" in state: self.data = state["data"] # Copied from transformers.tokenization_utils_base.BatchEncoding.keys def keys(self): return self.data.keys() # Copied from transformers.tokenization_utils_base.BatchEncoding.values def values(self): return self.data.values() # Copied from transformers.tokenization_utils_base.BatchEncoding.items def items(self): return self.data.items() def _get_is_as_tensor_fns(self, tensor_type: Optional[Union[str, TensorType]] = None): if tensor_type is None: return None, None # Convert to TensorType if not isinstance(tensor_type, TensorType): tensor_type = TensorType(tensor_type) # Get a function reference for the correct framework if tensor_type == TensorType.TENSORFLOW: if not is_tf_available(): raise ImportError( "Unable to convert output to TensorFlow tensors format, TensorFlow is not installed." 
) import tensorflow as tf as_tensor = tf.constant is_tensor = tf.is_tensor elif tensor_type == TensorType.PYTORCH: if not is_torch_available(): raise ImportError("Unable to convert output to PyTorch tensors format, PyTorch is not installed.") import torch # noqa def as_tensor(value): if isinstance(value, (list, tuple)) and len(value) > 0: if isinstance(value[0], np.ndarray): value = np.array(value) elif ( isinstance(value[0], (list, tuple)) and len(value[0]) > 0 and isinstance(value[0][0], np.ndarray) ): value = np.array(value) return torch.tensor(value) is_tensor = torch.is_tensor elif tensor_type == TensorType.JAX: if not is_flax_available(): raise ImportError("Unable to convert output to JAX tensors format, JAX is not installed.") import jax.numpy as jnp # noqa: F811 as_tensor = jnp.array is_tensor = is_jax_tensor else: def as_tensor(value, dtype=None): if isinstance(value, (list, tuple)) and isinstance(value[0], (list, tuple, np.ndarray)): value_lens = [len(val) for val in value] if len(set(value_lens)) > 1 and dtype is None: # we have a ragged list so handle explicitly value = as_tensor([np.asarray(val) for val in value], dtype=object) return np.asarray(value, dtype=dtype) is_tensor = is_numpy_array return is_tensor, as_tensor def convert_to_tensors(self, tensor_type: Optional[Union[str, TensorType]] = None): """ Convert the inner content to tensors. Args: tensor_type (`str` or [`~utils.TensorType`], *optional*): The type of tensors to use. If `str`, should be one of the values of the enum [`~utils.TensorType`]. If `None`, no modification is done. """ if tensor_type is None: return self is_tensor, as_tensor = self._get_is_as_tensor_fns(tensor_type) # Do the tensor conversion in batch for key, value in self.items(): try: if not is_tensor(value): tensor = as_tensor(value) self[key] = tensor except: # noqa E722 if key == "overflowing_values": raise ValueError("Unable to create tensor returning overflowing values of different lengths. ") raise ValueError( "Unable to create tensor, you should probably activate padding " "with 'padding=True' to have batched tensors with the same length." ) return self def to(self, *args, **kwargs) -> "BatchFeature": """ Send all values to device by calling `v.to(*args, **kwargs)` (PyTorch only). This should support casting in different `dtypes` and sending the `BatchFeature` to a different `device`. Args: args (`Tuple`): Will be passed to the `to(...)` function of the tensors. kwargs (`Dict`, *optional*): Will be passed to the `to(...)` function of the tensors. Returns: [`BatchFeature`]: The same instance after modification. """ requires_backends(self, ["torch"]) import torch # noqa new_data = {} device = kwargs.get("device") # Check if the args are a device or a dtype if device is None and len(args) > 0: # device should be always the first argument arg = args[0] if is_torch_dtype(arg): # The first argument is a dtype pass elif isinstance(arg, str) or is_torch_device(arg) or isinstance(arg, int): device = arg else: # it's something else raise ValueError(f"Attempting to cast a BatchFeature to type {str(arg)}. 
This is not supported.") # We cast only floating point tensors to avoid issues with tokenizers casting `LongTensor` to `FloatTensor` for k, v in self.items(): # check if v is a floating point if torch.is_floating_point(v): # cast and send to device new_data[k] = v.to(*args, **kwargs) elif device is not None: new_data[k] = v.to(device=device) else: new_data[k] = v self.data = new_data return self class FeatureExtractionMixin(PushToHubMixin): """ This is a feature extraction mixin used to provide saving/loading functionality for sequential and image feature extractors. """ _auto_class = None def __init__(self, **kwargs): """Set elements of `kwargs` as attributes.""" # Pop "processor_class" as it should be saved as private attribute self._processor_class = kwargs.pop("processor_class", None) # Additional attributes without default values for key, value in kwargs.items(): try: setattr(self, key, value) except AttributeError as err: logger.error(f"Can't set {key} with value {value} for {self}") raise err def _set_processor_class(self, processor_class: str): """Sets processor class as an attribute.""" self._processor_class = processor_class @classmethod def from_pretrained( cls, pretrained_model_name_or_path: Union[str, os.PathLike], cache_dir: Optional[Union[str, os.PathLike]] = None, force_download: bool = False, local_files_only: bool = False, token: Optional[Union[str, bool]] = None, revision: str = "main", **kwargs, ): r""" Instantiate a type of [`~feature_extraction_utils.FeatureExtractionMixin`] from a feature extractor, *e.g.* a derived class of [`SequenceFeatureExtractor`]. Args: pretrained_model_name_or_path (`str` or `os.PathLike`): This can be either: - a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on huggingface.co. - a path to a *directory* containing a feature extractor file saved using the [`~feature_extraction_utils.FeatureExtractionMixin.save_pretrained`] method, e.g., `./my_model_directory/`. - a path or url to a saved feature extractor JSON *file*, e.g., `./my_model_directory/preprocessor_config.json`. cache_dir (`str` or `os.PathLike`, *optional*): Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force to (re-)download the feature extractor files and override the cached versions if they exist. resume_download: Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request. token (`str` or `bool`, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git. <Tip> To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>". </Tip> return_unused_kwargs (`bool`, *optional*, defaults to `False`): If `False`, then this function returns just the final feature extractor object. 
If `True`, then this functions returns a `Tuple(feature_extractor, unused_kwargs)` where *unused_kwargs* is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of `kwargs` which has not been used to update `feature_extractor` and is otherwise ignored. kwargs (`Dict[str, Any]`, *optional*): The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are *not* feature extractor attributes is controlled by the `return_unused_kwargs` keyword parameter. Returns: A feature extractor of type [`~feature_extraction_utils.FeatureExtractionMixin`]. Examples: ```python # We can't instantiate directly the base class *FeatureExtractionMixin* nor *SequenceFeatureExtractor* so let's show the examples on a # derived class: *Wav2Vec2FeatureExtractor* feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained( "facebook/wav2vec2-base-960h" ) # Download feature_extraction_config from huggingface.co and cache. feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained( "./test/saved_model/" ) # E.g. feature_extractor (or model) was saved using *save_pretrained('./test/saved_model/')* feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("./test/saved_model/preprocessor_config.json") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained( "facebook/wav2vec2-base-960h", return_attention_mask=False, foo=False ) assert feature_extractor.return_attention_mask is False feature_extractor, unused_kwargs = Wav2Vec2FeatureExtractor.from_pretrained( "facebook/wav2vec2-base-960h", return_attention_mask=False, foo=False, return_unused_kwargs=True ) assert feature_extractor.return_attention_mask is False assert unused_kwargs == {"foo": False} ```""" kwargs["cache_dir"] = cache_dir kwargs["force_download"] = force_download kwargs["local_files_only"] = local_files_only kwargs["revision"] = revision use_auth_token = kwargs.pop("use_auth_token", None) if use_auth_token is not None: warnings.warn( "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.", FutureWarning, ) if token is not None: raise ValueError( "`token` and `use_auth_token` are both specified. Please set only the argument `token`." ) token = use_auth_token if token is not None: kwargs["token"] = token feature_extractor_dict, kwargs = cls.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs) return cls.from_dict(feature_extractor_dict, **kwargs) def save_pretrained(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs): """ Save a feature_extractor object to the directory `save_directory`, so that it can be re-loaded using the [`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`] class method. Args: save_directory (`str` or `os.PathLike`): Directory where the feature extractor JSON file will be saved (will be created if it does not exist). push_to_hub (`bool`, *optional*, defaults to `False`): Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace). kwargs (`Dict[str, Any]`, *optional*): Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method. 
""" use_auth_token = kwargs.pop("use_auth_token", None) if use_auth_token is not None: warnings.warn( "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.", FutureWarning, ) if kwargs.get("token", None) is not None: raise ValueError( "`token` and `use_auth_token` are both specified. Please set only the argument `token`." ) kwargs["token"] = use_auth_token if os.path.isfile(save_directory): raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file") os.makedirs(save_directory, exist_ok=True) if push_to_hub: commit_message = kwargs.pop("commit_message", None) repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1]) repo_id = self._create_repo(repo_id, **kwargs) files_timestamps = self._get_files_timestamps(save_directory) # If we have a custom config, we copy the file defining it in the folder and set the attributes so it can be # loaded from the Hub. if self._auto_class is not None: custom_object_save(self, save_directory, config=self) # If we save using the predefined names, we can load using `from_pretrained` output_feature_extractor_file = os.path.join(save_directory, FEATURE_EXTRACTOR_NAME) self.to_json_file(output_feature_extractor_file) logger.info(f"Feature extractor saved in {output_feature_extractor_file}") if push_to_hub: self._upload_modified_files( save_directory, repo_id, files_timestamps, commit_message=commit_message, token=kwargs.get("token"), ) return [output_feature_extractor_file] @classmethod def get_feature_extractor_dict( cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs ) -> Tuple[Dict[str, Any], Dict[str, Any]]: """ From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a feature extractor of type [`~feature_extraction_utils.FeatureExtractionMixin`] using `from_dict`. Parameters: pretrained_model_name_or_path (`str` or `os.PathLike`): The identifier of the pre-trained checkpoint from which we want the dictionary of parameters. Returns: `Tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the feature extractor object. """ cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) resume_download = kwargs.pop("resume_download", None) proxies = kwargs.pop("proxies", None) subfolder = kwargs.pop("subfolder", None) token = kwargs.pop("token", None) use_auth_token = kwargs.pop("use_auth_token", None) local_files_only = kwargs.pop("local_files_only", False) revision = kwargs.pop("revision", None) if use_auth_token is not None: warnings.warn( "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.", FutureWarning, ) if token is not None: raise ValueError( "`token` and `use_auth_token` are both specified. Please set only the argument `token`." 
) token = use_auth_token from_pipeline = kwargs.pop("_from_pipeline", None) from_auto_class = kwargs.pop("_from_auto", False) user_agent = {"file_type": "feature extractor", "from_auto_class": from_auto_class} if from_pipeline is not None: user_agent["using_pipeline"] = from_pipeline if is_offline_mode() and not local_files_only: logger.info("Offline mode: forcing local_files_only=True") local_files_only = True pretrained_model_name_or_path = str(pretrained_model_name_or_path) is_local = os.path.isdir(pretrained_model_name_or_path) if os.path.isdir(pretrained_model_name_or_path): feature_extractor_file = os.path.join(pretrained_model_name_or_path, FEATURE_EXTRACTOR_NAME) if os.path.isfile(pretrained_model_name_or_path): resolved_feature_extractor_file = pretrained_model_name_or_path is_local = True elif is_remote_url(pretrained_model_name_or_path): feature_extractor_file = pretrained_model_name_or_path resolved_feature_extractor_file = download_url(pretrained_model_name_or_path) else: feature_extractor_file = FEATURE_EXTRACTOR_NAME try: # Load from local folder or from cache or download from model Hub and cache resolved_feature_extractor_file = cached_file( pretrained_model_name_or_path, feature_extractor_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, subfolder=subfolder, token=token, user_agent=user_agent, revision=revision, ) except EnvironmentError: # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to # the original exception. raise except Exception: # For any other exception, we throw a generic error. raise EnvironmentError( f"Can't load feature extractor for '{pretrained_model_name_or_path}'. If you were trying to load" " it from 'https://huggingface.co/models', make sure you don't have a local directory with the" f" same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a" f" directory containing a {FEATURE_EXTRACTOR_NAME} file" ) try: # Load feature_extractor dict with open(resolved_feature_extractor_file, "r", encoding="utf-8") as reader: text = reader.read() feature_extractor_dict = json.loads(text) except json.JSONDecodeError: raise EnvironmentError( f"It looks like the config file at '{resolved_feature_extractor_file}' is not a valid JSON file." ) if is_local: logger.info(f"loading configuration file {resolved_feature_extractor_file}") else: logger.info( f"loading configuration file {feature_extractor_file} from cache at {resolved_feature_extractor_file}" ) if not is_local: if "auto_map" in feature_extractor_dict: feature_extractor_dict["auto_map"] = add_model_info_to_auto_map( feature_extractor_dict["auto_map"], pretrained_model_name_or_path ) if "custom_pipelines" in feature_extractor_dict: feature_extractor_dict["custom_pipelines"] = add_model_info_to_custom_pipelines( feature_extractor_dict["custom_pipelines"], pretrained_model_name_or_path ) return feature_extractor_dict, kwargs @classmethod def from_dict(cls, feature_extractor_dict: Dict[str, Any], **kwargs) -> PreTrainedFeatureExtractor: """ Instantiates a type of [`~feature_extraction_utils.FeatureExtractionMixin`] from a Python dictionary of parameters. Args: feature_extractor_dict (`Dict[str, Any]`): Dictionary that will be used to instantiate the feature extractor object. Such a dictionary can be retrieved from a pretrained checkpoint by leveraging the [`~feature_extraction_utils.FeatureExtractionMixin.to_dict`] method. 
kwargs (`Dict[str, Any]`): Additional parameters from which to initialize the feature extractor object. Returns: [`~feature_extraction_utils.FeatureExtractionMixin`]: The feature extractor object instantiated from those parameters. """ return_unused_kwargs = kwargs.pop("return_unused_kwargs", False) # Update feature_extractor with kwargs if needed to_remove = [] for key, value in kwargs.items(): if key in feature_extractor_dict: feature_extractor_dict[key] = value to_remove.append(key) for key in to_remove: kwargs.pop(key, None) feature_extractor = cls(**feature_extractor_dict) logger.info(f"Feature extractor {feature_extractor}") if return_unused_kwargs: return feature_extractor, kwargs else: return feature_extractor def to_dict(self) -> Dict[str, Any]: """ Serializes this instance to a Python dictionary. Returns: `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance. """ output = copy.deepcopy(self.__dict__) output["feature_extractor_type"] = self.__class__.__name__ if "mel_filters" in output: del output["mel_filters"] if "window" in output: del output["window"] return output @classmethod def from_json_file(cls, json_file: Union[str, os.PathLike]) -> PreTrainedFeatureExtractor: """ Instantiates a feature extractor of type [`~feature_extraction_utils.FeatureExtractionMixin`] from the path to a JSON file of parameters. Args: json_file (`str` or `os.PathLike`): Path to the JSON file containing the parameters. Returns: A feature extractor of type [`~feature_extraction_utils.FeatureExtractionMixin`]: The feature_extractor object instantiated from that JSON file. """ with open(json_file, "r", encoding="utf-8") as reader: text = reader.read() feature_extractor_dict = json.loads(text) return cls(**feature_extractor_dict) def to_json_string(self) -> str: """ Serializes this instance to a JSON string. Returns: `str`: String containing all the attributes that make up this feature_extractor instance in JSON format. """ dictionary = self.to_dict() for key, value in dictionary.items(): if isinstance(value, np.ndarray): dictionary[key] = value.tolist() # make sure private name "_processor_class" is correctly # saved as "processor_class" _processor_class = dictionary.pop("_processor_class", None) if _processor_class is not None: dictionary["processor_class"] = _processor_class return json.dumps(dictionary, indent=2, sort_keys=True) + "\n" def to_json_file(self, json_file_path: Union[str, os.PathLike]): """ Save this instance to a JSON file. Args: json_file_path (`str` or `os.PathLike`): Path to the JSON file in which this feature_extractor instance's parameters will be saved. """ with open(json_file_path, "w", encoding="utf-8") as writer: writer.write(self.to_json_string()) def __repr__(self): return f"{self.__class__.__name__} {self.to_json_string()}" @classmethod def register_for_auto_class(cls, auto_class="AutoFeatureExtractor"): """ Register this class with a given auto class. This should only be used for custom feature extractors as the ones in the library are already mapped with `AutoFeatureExtractor`. <Tip warning={true}> This API is experimental and may have some slight breaking changes in the next releases. </Tip> Args: auto_class (`str` or `type`, *optional*, defaults to `"AutoFeatureExtractor"`): The auto class to register this new feature extractor with. 
""" if not isinstance(auto_class, str): auto_class = auto_class.__name__ import transformers.models.auto as auto_module if not hasattr(auto_module, auto_class): raise ValueError(f"{auto_class} is not a valid auto class.") cls._auto_class = auto_class FeatureExtractionMixin.push_to_hub = copy_func(FeatureExtractionMixin.push_to_hub) if FeatureExtractionMixin.push_to_hub.__doc__ is not None: FeatureExtractionMixin.push_to_hub.__doc__ = FeatureExtractionMixin.push_to_hub.__doc__.format( object="feature extractor", object_class="AutoFeatureExtractor", object_files="feature extractor file" )
transformers/src/transformers/feature_extraction_utils.py/0
{ "file_path": "transformers/src/transformers/feature_extraction_utils.py", "repo_id": "transformers", "token_count": 13140 }
325
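A minimal sketch of the two classes defined above: `BatchFeature` tensor conversion and the `from_pretrained`/`save_pretrained` round trip provided by `FeatureExtractionMixin`. The Wav2Vec2 checkpoint name is the one used in the docstring; the `input_values` payload and the output directory are made-up examples, and PyTorch must be installed for `tensor_type="pt"`.

```python
from transformers import BatchFeature, Wav2Vec2FeatureExtractor

# Dict of equal-length lists converted to PyTorch tensors at construction time.
batch = BatchFeature({"input_values": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]}, tensor_type="pt")
print(type(batch["input_values"]), batch.input_values.shape)  # attribute access goes through __getattr__

# from_pretrained / save_pretrained round trip on a concrete subclass of FeatureExtractionMixin.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
feature_extractor.save_pretrained("./saved_feature_extractor")
reloaded = Wav2Vec2FeatureExtractor.from_pretrained("./saved_feature_extractor")
print(reloaded.to_json_string())
```

`save_pretrained` writes a `preprocessor_config.json` into the target directory, which is what `from_pretrained` resolves when given a local folder instead of a Hub model id.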
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import dataclasses import json import os import sys import types from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentTypeError from copy import copy from enum import Enum from inspect import isclass from pathlib import Path from typing import Any, Callable, Dict, Iterable, List, Literal, NewType, Optional, Tuple, Union, get_type_hints import yaml DataClass = NewType("DataClass", Any) DataClassType = NewType("DataClassType", Any) # From https://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse def string_to_bool(v): if isinstance(v, bool): return v if v.lower() in ("yes", "true", "t", "y", "1"): return True elif v.lower() in ("no", "false", "f", "n", "0"): return False else: raise ArgumentTypeError( f"Truthy value expected: got {v} but expected one of yes/no, true/false, t/f, y/n, 1/0 (case insensitive)." ) def make_choice_type_function(choices: list) -> Callable[[str], Any]: """ Creates a mapping function from each choices string representation to the actual value. Used to support multiple value types for a single argument. Args: choices (list): List of choices. Returns: Callable[[str], Any]: Mapping function from string representation to actual value for each choice. """ str_to_choice = {str(choice): choice for choice in choices} return lambda arg: str_to_choice.get(arg, arg) def HfArg( *, aliases: Union[str, List[str]] = None, help: str = None, default: Any = dataclasses.MISSING, default_factory: Callable[[], Any] = dataclasses.MISSING, metadata: dict = None, **kwargs, ) -> dataclasses.Field: """Argument helper enabling a concise syntax to create dataclass fields for parsing with `HfArgumentParser`. Example comparing the use of `HfArg` and `dataclasses.field`: ``` @dataclass class Args: regular_arg: str = dataclasses.field(default="Huggingface", metadata={"aliases": ["--example", "-e"], "help": "This syntax could be better!"}) hf_arg: str = HfArg(default="Huggingface", aliases=["--example", "-e"], help="What a nice syntax!") ``` Args: aliases (Union[str, List[str]], optional): Single string or list of strings of aliases to pass on to argparse, e.g. `aliases=["--example", "-e"]`. Defaults to None. help (str, optional): Help string to pass on to argparse that can be displayed with --help. Defaults to None. default (Any, optional): Default value for the argument. If not default or default_factory is specified, the argument is required. Defaults to dataclasses.MISSING. default_factory (Callable[[], Any], optional): The default_factory is a 0-argument function called to initialize a field's value. It is useful to provide default values for mutable types, e.g. lists: `default_factory=list`. Mutually exclusive with `default=`. Defaults to dataclasses.MISSING. metadata (dict, optional): Further metadata to pass on to `dataclasses.field`. Defaults to None. Returns: Field: A `dataclasses.Field` with the desired properties. 
""" if metadata is None: # Important, don't use as default param in function signature because dict is mutable and shared across function calls metadata = {} if aliases is not None: metadata["aliases"] = aliases if help is not None: metadata["help"] = help return dataclasses.field(metadata=metadata, default=default, default_factory=default_factory, **kwargs) class HfArgumentParser(ArgumentParser): """ This subclass of `argparse.ArgumentParser` uses type hints on dataclasses to generate arguments. The class is designed to play well with the native argparse. In particular, you can add more (non-dataclass backed) arguments to the parser after initialization and you'll get the output back after parsing as an additional namespace. Optional: To create sub argument groups use the `_argument_group_name` attribute in the dataclass. """ dataclass_types: Iterable[DataClassType] def __init__(self, dataclass_types: Union[DataClassType, Iterable[DataClassType]], **kwargs): """ Args: dataclass_types: Dataclass type, or list of dataclass types for which we will "fill" instances with the parsed args. kwargs (`Dict[str, Any]`, *optional*): Passed to `argparse.ArgumentParser()` in the regular way. """ # To make the default appear when using --help if "formatter_class" not in kwargs: kwargs["formatter_class"] = ArgumentDefaultsHelpFormatter super().__init__(**kwargs) if dataclasses.is_dataclass(dataclass_types): dataclass_types = [dataclass_types] self.dataclass_types = list(dataclass_types) for dtype in self.dataclass_types: self._add_dataclass_arguments(dtype) @staticmethod def _parse_dataclass_field(parser: ArgumentParser, field: dataclasses.Field): field_name = f"--{field.name}" kwargs = field.metadata.copy() # field.metadata is not used at all by Data Classes, # it is provided as a third-party extension mechanism. if isinstance(field.type, str): raise RuntimeError( "Unresolved type detected, which should have been done with the help of " "`typing.get_type_hints` method by default" ) aliases = kwargs.pop("aliases", []) if isinstance(aliases, str): aliases = [aliases] origin_type = getattr(field.type, "__origin__", field.type) if origin_type is Union or (hasattr(types, "UnionType") and isinstance(origin_type, types.UnionType)): if str not in field.type.__args__ and ( len(field.type.__args__) != 2 or type(None) not in field.type.__args__ ): raise ValueError( "Only `Union[X, NoneType]` (i.e., `Optional[X]`) is allowed for `Union` because" " the argument parser only supports one type per argument." f" Problem encountered in field '{field.name}'." 
) if type(None) not in field.type.__args__: # filter `str` in Union field.type = field.type.__args__[0] if field.type.__args__[1] is str else field.type.__args__[1] origin_type = getattr(field.type, "__origin__", field.type) elif bool not in field.type.__args__: # filter `NoneType` in Union (except for `Union[bool, NoneType]`) field.type = ( field.type.__args__[0] if isinstance(None, field.type.__args__[1]) else field.type.__args__[1] ) origin_type = getattr(field.type, "__origin__", field.type) # A variable to store kwargs for a boolean field, if needed # so that we can init a `no_*` complement argument (see below) bool_kwargs = {} if origin_type is Literal or (isinstance(field.type, type) and issubclass(field.type, Enum)): if origin_type is Literal: kwargs["choices"] = field.type.__args__ else: kwargs["choices"] = [x.value for x in field.type] kwargs["type"] = make_choice_type_function(kwargs["choices"]) if field.default is not dataclasses.MISSING: kwargs["default"] = field.default else: kwargs["required"] = True elif field.type is bool or field.type == Optional[bool]: # Copy the currect kwargs to use to instantiate a `no_*` complement argument below. # We do not initialize it here because the `no_*` alternative must be instantiated after the real argument bool_kwargs = copy(kwargs) # Hack because type=bool in argparse does not behave as we want. kwargs["type"] = string_to_bool if field.type is bool or (field.default is not None and field.default is not dataclasses.MISSING): # Default value is False if we have no default when of type bool. default = False if field.default is dataclasses.MISSING else field.default # This is the value that will get picked if we don't include --field_name in any way kwargs["default"] = default # This tells argparse we accept 0 or 1 value after --field_name kwargs["nargs"] = "?" # This is the value that will get picked if we do --field_name (without value) kwargs["const"] = True elif isclass(origin_type) and issubclass(origin_type, list): kwargs["type"] = field.type.__args__[0] kwargs["nargs"] = "+" if field.default_factory is not dataclasses.MISSING: kwargs["default"] = field.default_factory() elif field.default is dataclasses.MISSING: kwargs["required"] = True else: kwargs["type"] = field.type if field.default is not dataclasses.MISSING: kwargs["default"] = field.default elif field.default_factory is not dataclasses.MISSING: kwargs["default"] = field.default_factory() else: kwargs["required"] = True parser.add_argument(field_name, *aliases, **kwargs) # Add a complement `no_*` argument for a boolean field AFTER the initial field has already been added. # Order is important for arguments with the same destination! # We use a copy of earlier kwargs because the original kwargs have changed a lot before reaching down # here and we do not need those changes/additional keys. if field.default is True and (field.type is bool or field.type == Optional[bool]): bool_kwargs["default"] = False parser.add_argument(f"--no_{field.name}", action="store_false", dest=field.name, **bool_kwargs) def _add_dataclass_arguments(self, dtype: DataClassType): if hasattr(dtype, "_argument_group_name"): parser = self.add_argument_group(dtype._argument_group_name) else: parser = self try: type_hints: Dict[str, type] = get_type_hints(dtype) except NameError: raise RuntimeError( f"Type resolution failed for {dtype}. 
Try declaring the class in global scope or " "removing line of `from __future__ import annotations` which opts in Postponed " "Evaluation of Annotations (PEP 563)" ) except TypeError as ex: # Remove this block when we drop Python 3.9 support if sys.version_info[:2] < (3, 10) and "unsupported operand type(s) for |" in str(ex): python_version = ".".join(map(str, sys.version_info[:3])) raise RuntimeError( f"Type resolution failed for {dtype} on Python {python_version}. Try removing " "line of `from __future__ import annotations` which opts in union types as " "`X | Y` (PEP 604) via Postponed Evaluation of Annotations (PEP 563). To " "support Python versions that lower than 3.10, you need to use " "`typing.Union[X, Y]` instead of `X | Y` and `typing.Optional[X]` instead of " "`X | None`." ) from ex raise for field in dataclasses.fields(dtype): if not field.init: continue field.type = type_hints[field.name] self._parse_dataclass_field(parser, field) def parse_args_into_dataclasses( self, args=None, return_remaining_strings=False, look_for_args_file=True, args_filename=None, args_file_flag=None, ) -> Tuple[DataClass, ...]: """ Parse command-line args into instances of the specified dataclass types. This relies on argparse's `ArgumentParser.parse_known_args`. See the doc at: docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.parse_args Args: args: List of strings to parse. The default is taken from sys.argv. (same as argparse.ArgumentParser) return_remaining_strings: If true, also return a list of remaining argument strings. look_for_args_file: If true, will look for a ".args" file with the same base name as the entry point script for this process, and will append its potential content to the command line args. args_filename: If not None, will uses this file instead of the ".args" file specified in the previous argument. args_file_flag: If not None, will look for a file in the command-line args specified with this flag. The flag can be specified multiple times and precedence is determined by the order (last one wins). Returns: Tuple consisting of: - the dataclass instances in the same order as they were passed to the initializer.abspath - if applicable, an additional namespace for more (non-dataclass backed) arguments added to the parser after initialization. - The potential list of remaining argument strings. 
(same as argparse.ArgumentParser.parse_known_args) """ if args_file_flag or args_filename or (look_for_args_file and len(sys.argv)): args_files = [] if args_filename: args_files.append(Path(args_filename)) elif look_for_args_file and len(sys.argv): args_files.append(Path(sys.argv[0]).with_suffix(".args")) # args files specified via command line flag should overwrite default args files so we add them last if args_file_flag: # Create special parser just to extract the args_file_flag values args_file_parser = ArgumentParser() args_file_parser.add_argument(args_file_flag, type=str, action="append") # Use only remaining args for further parsing (remove the args_file_flag) cfg, args = args_file_parser.parse_known_args(args=args) cmd_args_file_paths = vars(cfg).get(args_file_flag.lstrip("-"), None) if cmd_args_file_paths: args_files.extend([Path(p) for p in cmd_args_file_paths]) file_args = [] for args_file in args_files: if args_file.exists(): file_args += args_file.read_text().split() # in case of duplicate arguments the last one has precedence # args specified via the command line should overwrite args from files, so we add them last args = file_args + args if args is not None else file_args + sys.argv[1:] namespace, remaining_args = self.parse_known_args(args=args) outputs = [] for dtype in self.dataclass_types: keys = {f.name for f in dataclasses.fields(dtype) if f.init} inputs = {k: v for k, v in vars(namespace).items() if k in keys} for k in keys: delattr(namespace, k) obj = dtype(**inputs) outputs.append(obj) if len(namespace.__dict__) > 0: # additional namespace. outputs.append(namespace) if return_remaining_strings: return (*outputs, remaining_args) else: if remaining_args: raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}") return (*outputs,) def parse_dict(self, args: Dict[str, Any], allow_extra_keys: bool = False) -> Tuple[DataClass, ...]: """ Alternative helper method that does not use `argparse` at all, instead uses a dict and populating the dataclass types. Args: args (`dict`): dict containing config values allow_extra_keys (`bool`, *optional*, defaults to `False`): Defaults to False. If False, will raise an exception if the dict contains keys that are not parsed. Returns: Tuple consisting of: - the dataclass instances in the same order as they were passed to the initializer. """ unused_keys = set(args.keys()) outputs = [] for dtype in self.dataclass_types: keys = {f.name for f in dataclasses.fields(dtype) if f.init} inputs = {k: v for k, v in args.items() if k in keys} unused_keys.difference_update(inputs.keys()) obj = dtype(**inputs) outputs.append(obj) if not allow_extra_keys and unused_keys: raise ValueError(f"Some keys are not used by the HfArgumentParser: {sorted(unused_keys)}") return tuple(outputs) def parse_json_file( self, json_file: Union[str, os.PathLike], allow_extra_keys: bool = False ) -> Tuple[DataClass, ...]: """ Alternative helper method that does not use `argparse` at all, instead loading a json file and populating the dataclass types. Args: json_file (`str` or `os.PathLike`): File name of the json file to parse allow_extra_keys (`bool`, *optional*, defaults to `False`): Defaults to False. If False, will raise an exception if the json file contains keys that are not parsed. Returns: Tuple consisting of: - the dataclass instances in the same order as they were passed to the initializer. 
""" with open(Path(json_file), encoding="utf-8") as open_json_file: data = json.loads(open_json_file.read()) outputs = self.parse_dict(data, allow_extra_keys=allow_extra_keys) return tuple(outputs) def parse_yaml_file( self, yaml_file: Union[str, os.PathLike], allow_extra_keys: bool = False ) -> Tuple[DataClass, ...]: """ Alternative helper method that does not use `argparse` at all, instead loading a yaml file and populating the dataclass types. Args: yaml_file (`str` or `os.PathLike`): File name of the yaml file to parse allow_extra_keys (`bool`, *optional*, defaults to `False`): Defaults to False. If False, will raise an exception if the json file contains keys that are not parsed. Returns: Tuple consisting of: - the dataclass instances in the same order as they were passed to the initializer. """ outputs = self.parse_dict(yaml.safe_load(Path(yaml_file).read_text()), allow_extra_keys=allow_extra_keys) return tuple(outputs)
transformers/src/transformers/hf_argparser.py/0
{ "file_path": "transformers/src/transformers/hf_argparser.py", "repo_id": "transformers", "token_count": 8251 }
326
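A usage sketch for `HfArgumentParser` and the `HfArg` helper above. The `TrainingConfig` dataclass, its field names, and the argument values are invented for illustration; the parser normally reads `sys.argv`, so an explicit argument list is passed here instead.

```python
from dataclasses import dataclass
from typing import Optional

from transformers import HfArgumentParser
from transformers.hf_argparser import HfArg


@dataclass
class TrainingConfig:
    # All fields below are hypothetical; HfArg is a thin wrapper around dataclasses.field.
    model_name: str = HfArg(default="google-bert/bert-base-uncased", aliases=["-m"], help="Checkpoint to fine-tune.")
    learning_rate: float = HfArg(default=5e-5, help="Peak learning rate.")
    fp16: bool = HfArg(default=False, help="Enable mixed-precision training.")
    run_name: Optional[str] = HfArg(default=None, help="Optional experiment name.")


parser = HfArgumentParser(TrainingConfig)

# Parse an explicit argv-style list instead of sys.argv; bool fields accept a bare --fp16 flag.
(config,) = parser.parse_args_into_dataclasses(args=["-m", "roberta-base", "--learning_rate", "3e-5", "--fp16"])
print(config)

# The same dataclasses can also be filled without argparse, from a plain dict (or a JSON/YAML file).
(config_from_dict,) = parser.parse_dict({"model_name": "roberta-base", "learning_rate": 1e-4, "fp16": True})
print(config_from_dict)
```

Fields without a `default` or `default_factory` become required command-line arguments, and a field whose default is `True` additionally gets a `--no_<name>` complement, as handled in `_parse_dataclass_field` above.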
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Integrations with other Python libraries. """ import functools import importlib.metadata import importlib.util import json import numbers import os import pickle import shutil import sys import tempfile from dataclasses import asdict, fields from enum import Enum from pathlib import Path from typing import TYPE_CHECKING, Any, Dict, Literal, Optional, Union import numpy as np import packaging.version from .. import PreTrainedModel, TFPreTrainedModel from .. import __version__ as version from ..utils import ( PushToHubMixin, flatten_dict, is_datasets_available, is_pandas_available, is_tf_available, is_torch_available, logging, ) logger = logging.get_logger(__name__) if is_torch_available(): import torch # comet_ml requires to be imported before any ML frameworks _MIN_COMET_VERSION = "3.43.2" try: _comet_version = importlib.metadata.version("comet_ml") _is_comet_installed = True _is_comet_recent_enough = packaging.version.parse(_comet_version) >= packaging.version.parse(_MIN_COMET_VERSION) # Check if the Comet API Key is set import comet_ml if comet_ml.config.get_config("comet.api_key") is not None: _is_comet_configured = True else: _is_comet_configured = False except (importlib.metadata.PackageNotFoundError, ImportError, ValueError, TypeError, AttributeError, KeyError): _comet_version = None _is_comet_installed = False _is_comet_recent_enough = False _is_comet_configured = False _has_neptune = ( importlib.util.find_spec("neptune") is not None or importlib.util.find_spec("neptune-client") is not None ) if TYPE_CHECKING and _has_neptune: try: _neptune_version = importlib.metadata.version("neptune") logger.info(f"Neptune version {_neptune_version} available.") except importlib.metadata.PackageNotFoundError: try: _neptune_version = importlib.metadata.version("neptune-client") logger.info(f"Neptune-client version {_neptune_version} available.") except importlib.metadata.PackageNotFoundError: _has_neptune = False from .. import modelcard # noqa: E402 from ..trainer_callback import ProgressCallback, TrainerCallback # noqa: E402 from ..trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun, IntervalStrategy # noqa: E402 from ..training_args import ParallelMode # noqa: E402 from ..utils import ENV_VARS_TRUE_VALUES, is_torch_xla_available # noqa: E402 # Integration functions: def is_wandb_available(): # any value of WANDB_DISABLED disables wandb if os.getenv("WANDB_DISABLED", "").upper() in ENV_VARS_TRUE_VALUES: logger.warning( "Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the " "--report_to flag to control the integrations used for logging result (for instance --report_to none)." 
) return False return importlib.util.find_spec("wandb") is not None def is_clearml_available(): return importlib.util.find_spec("clearml") is not None def is_comet_available(): if os.getenv("COMET_MODE", "").upper() == "DISABLED": logger.warning( "Using the `COMET_MODE=DISABLED` environment variable is deprecated and will be removed in v5. Use the " "--report_to flag to control the integrations used for logging result (for instance --report_to none)." ) return False if _is_comet_installed is False: return False if _is_comet_recent_enough is False: logger.warning( "comet_ml version %s is installed, but version %s or higher is required. " "Please update comet_ml to the latest version to enable Comet logging with pip install 'comet-ml>=%s'.", _comet_version, _MIN_COMET_VERSION, _MIN_COMET_VERSION, ) return False if _is_comet_configured is False: logger.warning( "comet_ml is installed but the Comet API Key is not configured. " "Please set the `COMET_API_KEY` environment variable to enable Comet logging. " "Check out the documentation for other ways of configuring it: " "https://www.comet.com/docs/v2/guides/experiment-management/configure-sdk/#set-the-api-key" ) return False return True def is_tensorboard_available(): return importlib.util.find_spec("tensorboard") is not None or importlib.util.find_spec("tensorboardX") is not None def is_optuna_available(): return importlib.util.find_spec("optuna") is not None def is_ray_available(): return importlib.util.find_spec("ray") is not None def is_ray_tune_available(): if not is_ray_available(): return False return importlib.util.find_spec("ray.tune") is not None def is_sigopt_available(): return importlib.util.find_spec("sigopt") is not None def is_azureml_available(): if importlib.util.find_spec("azureml") is None: return False if importlib.util.find_spec("azureml.core") is None: return False return importlib.util.find_spec("azureml.core.run") is not None def is_mlflow_available(): if os.getenv("DISABLE_MLFLOW_INTEGRATION", "FALSE").upper() == "TRUE": return False return importlib.util.find_spec("mlflow") is not None def is_dagshub_available(): return None not in [importlib.util.find_spec("dagshub"), importlib.util.find_spec("mlflow")] def is_neptune_available(): return _has_neptune def is_codecarbon_available(): return importlib.util.find_spec("codecarbon") is not None def is_flytekit_available(): return importlib.util.find_spec("flytekit") is not None def is_flyte_deck_standard_available(): if not is_flytekit_available(): return False return importlib.util.find_spec("flytekitplugins.deck") is not None def is_dvclive_available(): return importlib.util.find_spec("dvclive") is not None def hp_params(trial): if is_optuna_available(): import optuna if isinstance(trial, optuna.Trial): return trial.params if is_ray_tune_available(): if isinstance(trial, dict): return trial if is_sigopt_available(): if isinstance(trial, dict): return trial if is_wandb_available(): if isinstance(trial, dict): return trial raise RuntimeError(f"Unknown type for trial {trial.__class__}") def run_hp_search_optuna(trainer, n_trials: int, direction: str, **kwargs) -> BestRun: import optuna if trainer.args.process_index == 0: def _objective(trial, checkpoint_dir=None): checkpoint = None if checkpoint_dir: for subdir in os.listdir(checkpoint_dir): if subdir.startswith(PREFIX_CHECKPOINT_DIR): checkpoint = os.path.join(checkpoint_dir, subdir) trainer.objective = None if trainer.args.world_size > 1: if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED: raise RuntimeError("only 
support DDP optuna HPO for ParallelMode.DISTRIBUTED currently.") trainer._hp_search_setup(trial) torch.distributed.broadcast_object_list(pickle.dumps(trainer.args), src=0) trainer.train(resume_from_checkpoint=checkpoint) else: trainer.train(resume_from_checkpoint=checkpoint, trial=trial) # If there hasn't been any evaluation during the training loop. if getattr(trainer, "objective", None) is None: metrics = trainer.evaluate() trainer.objective = trainer.compute_objective(metrics) return trainer.objective timeout = kwargs.pop("timeout", None) n_jobs = kwargs.pop("n_jobs", 1) gc_after_trial = kwargs.pop("gc_after_trial", False) directions = direction if isinstance(direction, list) else None direction = None if directions is not None else direction study = optuna.create_study(direction=direction, directions=directions, **kwargs) study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs, gc_after_trial=gc_after_trial) if not study._is_multi_objective(): best_trial = study.best_trial return BestRun(str(best_trial.number), best_trial.value, best_trial.params) else: best_trials = study.best_trials return [BestRun(str(best.number), best.values, best.params) for best in best_trials] else: for i in range(n_trials): trainer.objective = None args_main_rank = list(pickle.dumps(trainer.args)) if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED: raise RuntimeError("only support DDP optuna HPO for ParallelMode.DISTRIBUTED currently.") torch.distributed.broadcast_object_list(args_main_rank, src=0) args = pickle.loads(bytes(args_main_rank)) for key, value in asdict(args).items(): if key != "local_rank": setattr(trainer.args, key, value) trainer.train(resume_from_checkpoint=None) # If there hasn't been any evaluation during the training loop. if getattr(trainer, "objective", None) is None: metrics = trainer.evaluate() trainer.objective = trainer.compute_objective(metrics) return None def run_hp_search_ray(trainer, n_trials: int, direction: str, **kwargs) -> BestRun: import ray import ray.train def _objective(trial: dict, local_trainer): try: from transformers.utils.notebook import NotebookProgressCallback if local_trainer.pop_callback(NotebookProgressCallback): local_trainer.add_callback(ProgressCallback) except ModuleNotFoundError: pass local_trainer.objective = None checkpoint = ray.train.get_checkpoint() if checkpoint: # Upon trial resume, the local_trainer's objective gets reset to None. # If `local_trainer.train` is a noop (training has already reached # the target number of epochs/steps), then this would # trigger an unnecessary extra checkpoint at the end of training. # -> Set the objective to a dummy value upon resume as a workaround. local_trainer.objective = "objective" with checkpoint.as_directory() as checkpoint_dir: checkpoint_path = next(Path(checkpoint_dir).glob(f"{PREFIX_CHECKPOINT_DIR}*")).as_posix() local_trainer.train(resume_from_checkpoint=checkpoint_path, trial=trial) else: local_trainer.train(trial=trial) # If there hasn't been any evaluation during the training loop. 
if getattr(local_trainer, "objective", None) is None: metrics = local_trainer.evaluate() local_trainer.objective = local_trainer.compute_objective(metrics) metrics.update({"objective": local_trainer.objective, "done": True}) with tempfile.TemporaryDirectory() as temp_checkpoint_dir: local_trainer._tune_save_checkpoint(checkpoint_dir=temp_checkpoint_dir) checkpoint = ray.train.Checkpoint.from_directory(temp_checkpoint_dir) ray.train.report(metrics, checkpoint=checkpoint) if not trainer._memory_tracker.skip_memory_metrics: from ..trainer_utils import TrainerMemoryTracker logger.warning( "Memory tracking for your Trainer is currently " "enabled. Automatically disabling the memory tracker " "since the memory tracker is not serializable." ) trainer._memory_tracker = TrainerMemoryTracker(skip_memory_metrics=True) # The model and TensorBoard writer do not pickle so we have to remove them (if they exists) # while doing the ray hp search. _tb_writer = trainer.pop_callback(TensorBoardCallback) trainer.model = None # Setup default `resources_per_trial`. if "resources_per_trial" not in kwargs: # Default to 1 CPU and 1 GPU (if applicable) per trial. kwargs["resources_per_trial"] = {"cpu": 1} if trainer.args.n_gpu > 0: kwargs["resources_per_trial"]["gpu"] = 1 resource_msg = "1 CPU" + (" and 1 GPU" if trainer.args.n_gpu > 0 else "") logger.info( "No `resources_per_trial` arg was passed into " "`hyperparameter_search`. Setting it to a default value " f"of {resource_msg} for each trial." ) # Make sure each trainer only uses GPUs that were allocated per trial. gpus_per_trial = kwargs["resources_per_trial"].get("gpu", 0) trainer.args._n_gpu = gpus_per_trial # Setup default `progress_reporter`. if "progress_reporter" not in kwargs: from ray.tune import CLIReporter kwargs["progress_reporter"] = CLIReporter(metric_columns=["objective"]) if "scheduler" in kwargs: from ray.tune.schedulers import ASHAScheduler, HyperBandForBOHB, MedianStoppingRule, PopulationBasedTraining # Check for `do_eval` and `eval_during_training` for schedulers that require intermediate reporting. if isinstance( kwargs["scheduler"], (ASHAScheduler, MedianStoppingRule, HyperBandForBOHB, PopulationBasedTraining) ) and (not trainer.args.do_eval or trainer.args.eval_strategy == IntervalStrategy.NO): raise RuntimeError( "You are using {cls} as a scheduler but you haven't enabled evaluation during training. " "This means your trials will not report intermediate results to Ray Tune, and " "can thus not be stopped early or used to exploit other trials parameters. " "If this is what you want, do not use {cls}. If you would like to use {cls}, " "make sure you pass `do_eval=True` and `eval_strategy='steps'` in the " "Trainer `args`.".format(cls=type(kwargs["scheduler"]).__name__) ) trainable = ray.tune.with_parameters(_objective, local_trainer=trainer) @functools.wraps(trainable) def dynamic_modules_import_trainable(*args, **kwargs): """ Wrapper around `tune.with_parameters` to ensure datasets_modules are loaded on each Actor. Without this, an ImportError will be thrown. See https://github.com/huggingface/transformers/issues/11565. Assumes that `_objective`, defined above, is a function. 
""" if is_datasets_available(): import datasets.load dynamic_modules_path = os.path.join(datasets.load.init_dynamic_modules(), "__init__.py") # load dynamic_modules from path spec = importlib.util.spec_from_file_location("datasets_modules", dynamic_modules_path) datasets_modules = importlib.util.module_from_spec(spec) sys.modules[spec.name] = datasets_modules spec.loader.exec_module(datasets_modules) return trainable(*args, **kwargs) # special attr set by tune.with_parameters if hasattr(trainable, "__mixins__"): dynamic_modules_import_trainable.__mixins__ = trainable.__mixins__ analysis = ray.tune.run( dynamic_modules_import_trainable, config=trainer.hp_space(None), num_samples=n_trials, **kwargs, ) best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3], scope=trainer.args.ray_scope) best_run = BestRun(best_trial.trial_id, best_trial.last_result["objective"], best_trial.config, analysis) if _tb_writer is not None: trainer.add_callback(_tb_writer) return best_run def run_hp_search_sigopt(trainer, n_trials: int, direction: str, **kwargs) -> BestRun: import sigopt if trainer.args.process_index == 0: if importlib.metadata.version("sigopt") >= "8.0.0": sigopt.set_project("huggingface") experiment = sigopt.create_experiment( name="huggingface-tune", type="offline", parameters=trainer.hp_space(None), metrics=[{"name": "objective", "objective": direction, "strategy": "optimize"}], parallel_bandwidth=1, budget=n_trials, ) logger.info(f"created experiment: https://app.sigopt.com/experiment/{experiment.id}") for run in experiment.loop(): with run: trainer.objective = None if trainer.args.world_size > 1: if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED: raise RuntimeError("only support DDP Sigopt HPO for ParallelMode.DISTRIBUTED currently.") trainer._hp_search_setup(run.run) torch.distributed.broadcast_object_list(pickle.dumps(trainer.args), src=0) trainer.train(resume_from_checkpoint=None) else: trainer.train(resume_from_checkpoint=None, trial=run.run) # If there hasn't been any evaluation during the training loop. if getattr(trainer, "objective", None) is None: metrics = trainer.evaluate() trainer.objective = trainer.compute_objective(metrics) run.log_metric("objective", trainer.objective) best = list(experiment.get_best_runs())[0] best_run = BestRun(best.id, best.values["objective"].value, best.assignments) else: from sigopt import Connection conn = Connection() proxies = kwargs.pop("proxies", None) if proxies is not None: conn.set_proxies(proxies) experiment = conn.experiments().create( name="huggingface-tune", parameters=trainer.hp_space(None), metrics=[{"name": "objective", "objective": direction, "strategy": "optimize"}], parallel_bandwidth=1, observation_budget=n_trials, project="huggingface", ) logger.info(f"created experiment: https://app.sigopt.com/experiment/{experiment.id}") while experiment.progress.observation_count < experiment.observation_budget: suggestion = conn.experiments(experiment.id).suggestions().create() trainer.objective = None if trainer.args.world_size > 1: if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED: raise RuntimeError("only support DDP Sigopt HPO for ParallelMode.DISTRIBUTED currently.") trainer._hp_search_setup(suggestion) torch.distributed.broadcast_object_list(pickle.dumps(trainer.args), src=0) trainer.train(resume_from_checkpoint=None) else: trainer.train(resume_from_checkpoint=None, trial=suggestion) # If there hasn't been any evaluation during the training loop. 
if getattr(trainer, "objective", None) is None: metrics = trainer.evaluate() trainer.objective = trainer.compute_objective(metrics) values = [{"name": "objective", "value": trainer.objective}] obs = conn.experiments(experiment.id).observations().create(suggestion=suggestion.id, values=values) logger.info(f"[suggestion_id, observation_id]: [{suggestion.id}, {obs.id}]") experiment = conn.experiments(experiment.id).fetch() best = list(conn.experiments(experiment.id).best_assignments().fetch().iterate_pages())[0] best_run = BestRun(best.id, best.value, best.assignments) return best_run else: for i in range(n_trials): trainer.objective = None args_main_rank = list(pickle.dumps(trainer.args)) if trainer.args.parallel_mode != ParallelMode.DISTRIBUTED: raise RuntimeError("only support DDP Sigopt HPO for ParallelMode.DISTRIBUTED currently.") torch.distributed.broadcast_object_list(args_main_rank, src=0) args = pickle.loads(bytes(args_main_rank)) for key, value in asdict(args).items(): if key != "local_rank": setattr(trainer.args, key, value) trainer.train(resume_from_checkpoint=None) # If there hasn't been any evaluation during the training loop. if getattr(trainer, "objective", None) is None: metrics = trainer.evaluate() trainer.objective = trainer.compute_objective(metrics) return None def run_hp_search_wandb(trainer, n_trials: int, direction: str, **kwargs) -> BestRun: from ..integrations import is_wandb_available if not is_wandb_available(): raise ImportError("This function needs wandb installed: `pip install wandb`") import wandb # add WandbCallback if not already added in trainer callbacks reporting_to_wandb = False for callback in trainer.callback_handler.callbacks: if isinstance(callback, WandbCallback): reporting_to_wandb = True break if not reporting_to_wandb: trainer.add_callback(WandbCallback()) trainer.args.report_to = ["wandb"] best_trial = {"run_id": None, "objective": None, "hyperparameters": None} sweep_id = kwargs.pop("sweep_id", None) project = kwargs.pop("project", None) name = kwargs.pop("name", None) entity = kwargs.pop("entity", None) metric = kwargs.pop("metric", "eval/loss") sweep_config = trainer.hp_space(None) sweep_config["metric"]["goal"] = direction sweep_config["metric"]["name"] = metric if name: sweep_config["name"] = name def _objective(): run = wandb.run if wandb.run else wandb.init() trainer.state.trial_name = run.name run.config.update({"assignments": {}, "metric": metric}) config = wandb.config trainer.objective = None trainer.train(resume_from_checkpoint=None, trial=vars(config)["_items"]) # If there hasn't been any evaluation during the training loop. if getattr(trainer, "objective", None) is None: metrics = trainer.evaluate() trainer.objective = trainer.compute_objective(metrics) format_metrics = rewrite_logs(metrics) if metric not in format_metrics: logger.warning( f"Provided metric {metric} not found. This might result in unexpected sweeps charts. 
The available" f" metrics are {format_metrics.keys()}" ) best_score = False if best_trial["run_id"] is not None: if direction == "minimize": best_score = trainer.objective < best_trial["objective"] elif direction == "maximize": best_score = trainer.objective > best_trial["objective"] if best_score or best_trial["run_id"] is None: best_trial["run_id"] = run.id best_trial["objective"] = trainer.objective best_trial["hyperparameters"] = dict(config) return trainer.objective sweep_id = wandb.sweep(sweep_config, project=project, entity=entity) if not sweep_id else sweep_id logger.info(f"wandb sweep id - {sweep_id}") wandb.agent(sweep_id, function=_objective, count=n_trials) return BestRun(best_trial["run_id"], best_trial["objective"], best_trial["hyperparameters"]) def get_available_reporting_integrations(): integrations = [] if is_azureml_available() and not is_mlflow_available(): integrations.append("azure_ml") if is_comet_available(): integrations.append("comet_ml") if is_dagshub_available(): integrations.append("dagshub") if is_dvclive_available(): integrations.append("dvclive") if is_mlflow_available(): integrations.append("mlflow") if is_neptune_available(): integrations.append("neptune") if is_tensorboard_available(): integrations.append("tensorboard") if is_wandb_available(): integrations.append("wandb") if is_codecarbon_available(): integrations.append("codecarbon") if is_clearml_available(): integrations.append("clearml") return integrations def rewrite_logs(d): new_d = {} eval_prefix = "eval_" eval_prefix_len = len(eval_prefix) test_prefix = "test_" test_prefix_len = len(test_prefix) for k, v in d.items(): if k.startswith(eval_prefix): new_d["eval/" + k[eval_prefix_len:]] = v elif k.startswith(test_prefix): new_d["test/" + k[test_prefix_len:]] = v else: new_d["train/" + k] = v return new_d class TensorBoardCallback(TrainerCallback): """ A [`TrainerCallback`] that sends the logs to [TensorBoard](https://www.tensorflow.org/tensorboard). Args: tb_writer (`SummaryWriter`, *optional*): The writer to use. Will instantiate one if not set. """ def __init__(self, tb_writer=None): has_tensorboard = is_tensorboard_available() if not has_tensorboard: raise RuntimeError( "TensorBoardCallback requires tensorboard to be installed. Either update your PyTorch version or" " install tensorboardX." 
) if has_tensorboard: try: from torch.utils.tensorboard import SummaryWriter # noqa: F401 self._SummaryWriter = SummaryWriter except ImportError: try: from tensorboardX import SummaryWriter self._SummaryWriter = SummaryWriter except ImportError: self._SummaryWriter = None else: self._SummaryWriter = None self.tb_writer = tb_writer def _init_summary_writer(self, args, log_dir=None): log_dir = log_dir or args.logging_dir if self._SummaryWriter is not None: self.tb_writer = self._SummaryWriter(log_dir=log_dir) def on_train_begin(self, args, state, control, **kwargs): if not state.is_world_process_zero: return log_dir = None if state.is_hyper_param_search: trial_name = state.trial_name if trial_name is not None: log_dir = os.path.join(args.logging_dir, trial_name) if self.tb_writer is None: self._init_summary_writer(args, log_dir) if self.tb_writer is not None: self.tb_writer.add_text("args", args.to_json_string()) if "model" in kwargs: model = kwargs["model"] if hasattr(model, "config") and model.config is not None: model_config_json = model.config.to_json_string() self.tb_writer.add_text("model_config", model_config_json) def on_log(self, args, state, control, logs=None, **kwargs): if not state.is_world_process_zero: return if self.tb_writer is None: self._init_summary_writer(args) if self.tb_writer is not None: logs = rewrite_logs(logs) for k, v in logs.items(): if isinstance(v, (int, float)): self.tb_writer.add_scalar(k, v, state.global_step) else: logger.warning( "Trainer is attempting to log a value of " f'"{v}" of type {type(v)} for key "{k}" as a scalar. ' "This invocation of Tensorboard's writer.add_scalar() " "is incorrect so we dropped this attribute." ) self.tb_writer.flush() def on_train_end(self, args, state, control, **kwargs): if self.tb_writer: self.tb_writer.close() self.tb_writer = None def save_model_architecture_to_file(model: Any, output_dir: str): with open(f"{output_dir}/model_architecture.txt", "w+") as f: if isinstance(model, PreTrainedModel): print(model, file=f) elif is_tf_available() and isinstance(model, TFPreTrainedModel): def print_to_file(s): print(s, file=f) model.summary(print_fn=print_to_file) elif is_torch_available() and ( isinstance(model, (torch.nn.Module, PushToHubMixin)) and hasattr(model, "base_model") ): print(model, file=f) class WandbLogModel(str, Enum): """Enum of possible log model values in W&B.""" CHECKPOINT = "checkpoint" END = "end" FALSE = "false" @property def is_enabled(self) -> bool: """Check if the value corresponds to a state where the `WANDB_LOG_MODEL` setting is enabled.""" return self in (WandbLogModel.CHECKPOINT, WandbLogModel.END) @classmethod def _missing_(cls, value: Any) -> "WandbLogModel": if not isinstance(value, str): raise ValueError(f"Expecting to have a string `WANDB_LOG_MODEL` setting, but got {type(value)}") if value.upper() in ENV_VARS_TRUE_VALUES: raise DeprecationWarning( f"Setting `WANDB_LOG_MODEL` as {os.getenv('WANDB_LOG_MODEL')} is deprecated and will be removed in " "version 5 of transformers. Use one of `'end'` or `'checkpoint'` instead." ) logger.info(f"Setting `WANDB_LOG_MODEL` from {os.getenv('WANDB_LOG_MODEL')} to `end` instead") return WandbLogModel.END logger.warning( f"Received unrecognized `WANDB_LOG_MODEL` setting value={value}; so disabling `WANDB_LOG_MODEL`" ) return WandbLogModel.FALSE class WandbCallback(TrainerCallback): """ A [`TrainerCallback`] that logs metrics, media, model checkpoints to [Weight and Biases](https://www.wandb.com/). 
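    Example:

    ```python
    # A minimal sketch (assumes `wandb` is installed and you are logged in; `model` and
    # `train_dataset` are placeholders for your own objects).
    from transformers import Trainer, TrainingArguments

    args = TrainingArguments(output_dir="out", report_to=["wandb"], run_name="my-run")
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()  # metrics stream to W&B; set WANDB_LOG_MODEL="end" to also upload the final model
    ```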
""" def __init__(self): has_wandb = is_wandb_available() if not has_wandb: raise RuntimeError("WandbCallback requires wandb to be installed. Run `pip install wandb`.") if has_wandb: import wandb self._wandb = wandb self._initialized = False self._log_model = WandbLogModel(os.getenv("WANDB_LOG_MODEL", "false")) def setup(self, args, state, model, **kwargs): """ Setup the optional Weights & Biases (*wandb*) integration. One can subclass and override this method to customize the setup if needed. Find more information [here](https://docs.wandb.ai/guides/integrations/huggingface). You can also override the following environment variables: Environment: - **WANDB_LOG_MODEL** (`str`, *optional*, defaults to `"false"`): Whether to log model and checkpoints during training. Can be `"end"`, `"checkpoint"` or `"false"`. If set to `"end"`, the model will be uploaded at the end of training. If set to `"checkpoint"`, the checkpoint will be uploaded every `args.save_steps` . If set to `"false"`, the model will not be uploaded. Use along with [`~transformers.TrainingArguments.load_best_model_at_end`] to upload best model. <Deprecated version="5.0"> Setting `WANDB_LOG_MODEL` as `bool` will be deprecated in version 5 of 🀗 Transformers. </Deprecated> - **WANDB_WATCH** (`str`, *optional* defaults to `"false"`): Can be `"gradients"`, `"all"`, `"parameters"`, or `"false"`. Set to `"all"` to log gradients and parameters. - **WANDB_PROJECT** (`str`, *optional*, defaults to `"huggingface"`): Set this to a custom string to store results in a different project. - **WANDB_DISABLED** (`bool`, *optional*, defaults to `False`): Whether to disable wandb entirely. Set `WANDB_DISABLED=true` to disable. """ if self._wandb is None: return self._initialized = True if state.is_world_process_zero: logger.info( 'Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"' ) combined_dict = {**args.to_dict()} if hasattr(model, "config") and model.config is not None: model_config = model.config if isinstance(model.config, dict) else model.config.to_dict() combined_dict = {**model_config, **combined_dict} if hasattr(model, "peft_config") and model.peft_config is not None: peft_config = model.peft_config combined_dict = {**{"peft_config": peft_config}, **combined_dict} trial_name = state.trial_name init_args = {} if trial_name is not None: init_args["name"] = trial_name init_args["group"] = args.run_name elif args.run_name is not None: init_args["name"] = args.run_name if args.run_name == args.output_dir: self._wandb.termwarn( "The `run_name` is currently set to the same value as `TrainingArguments.output_dir`. 
If this was " "not intended, please specify a different run name by setting the `TrainingArguments.run_name` parameter.", repeat=False, ) if self._wandb.run is None: self._wandb.init( project=os.getenv("WANDB_PROJECT", "huggingface"), **init_args, ) # add config parameters (run may have been created manually) self._wandb.config.update(combined_dict, allow_val_change=True) # define default x-axis (for latest wandb versions) if getattr(self._wandb, "define_metric", None): self._wandb.define_metric("train/global_step") self._wandb.define_metric("*", step_metric="train/global_step", step_sync=True) # keep track of model topology and gradients, unsupported on TPU _watch_model = os.getenv("WANDB_WATCH", "false") if not is_torch_xla_available() and _watch_model in ("all", "parameters", "gradients"): self._wandb.watch(model, log=_watch_model, log_freq=max(100, state.logging_steps)) self._wandb.run._label(code="transformers_trainer") # add number of model parameters to wandb config try: self._wandb.config["model/num_parameters"] = model.num_parameters() except AttributeError: logger.info("Could not log the number of model parameters in Weights & Biases.") # log the initial model architecture to an artifact if self._log_model.is_enabled: with tempfile.TemporaryDirectory() as temp_dir: model_name = ( f"model-{self._wandb.run.id}" if (args.run_name is None or args.run_name == args.output_dir) else f"model-{self._wandb.run.name}" ) model_artifact = self._wandb.Artifact( name=model_name, type="model", metadata={ "model_config": model.config.to_dict() if hasattr(model, "config") else None, "num_parameters": self._wandb.config.get("model/num_parameters"), "initial_model": True, }, ) # add the architecture to a separate text file save_model_architecture_to_file(model, temp_dir) for f in Path(temp_dir).glob("*"): if f.is_file(): with model_artifact.new_file(f.name, mode="wb") as fa: fa.write(f.read_bytes()) self._wandb.run.log_artifact(model_artifact, aliases=["base_model"]) badge_markdown = ( f'[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge' f'-28.svg" alt="Visualize in Weights & Biases" width="20' f'0" height="32"/>]({self._wandb.run.get_url()})' ) modelcard.AUTOGENERATED_TRAINER_COMMENT += f"\n{badge_markdown}" def on_train_begin(self, args, state, control, model=None, **kwargs): if self._wandb is None: return hp_search = state.is_hyper_param_search if hp_search: self._wandb.finish() self._initialized = False args.run_name = None if not self._initialized: self.setup(args, state, model, **kwargs) def on_train_end(self, args, state, control, model=None, tokenizer=None, **kwargs): if self._wandb is None: return if self._log_model.is_enabled and self._initialized and state.is_world_process_zero: from ..trainer import Trainer fake_trainer = Trainer(args=args, model=model, tokenizer=tokenizer) with tempfile.TemporaryDirectory() as temp_dir: fake_trainer.save_model(temp_dir) metadata = ( { k: v for k, v in dict(self._wandb.summary).items() if isinstance(v, numbers.Number) and not k.startswith("_") } if not args.load_best_model_at_end else { f"eval/{args.metric_for_best_model}": state.best_metric, "train/total_floss": state.total_flos, "model/num_parameters": self._wandb.config.get("model/num_parameters"), } ) metadata["final_model"] = True logger.info("Logging model artifacts. 
...") model_name = ( f"model-{self._wandb.run.id}" if (args.run_name is None or args.run_name == args.output_dir) else f"model-{self._wandb.run.name}" ) # add the model architecture to a separate text file save_model_architecture_to_file(model, temp_dir) artifact = self._wandb.Artifact(name=model_name, type="model", metadata=metadata) for f in Path(temp_dir).glob("*"): if f.is_file(): with artifact.new_file(f.name, mode="wb") as fa: fa.write(f.read_bytes()) self._wandb.run.log_artifact(artifact, aliases=["final_model"]) def on_log(self, args, state, control, model=None, logs=None, **kwargs): single_value_scalars = [ "train_runtime", "train_samples_per_second", "train_steps_per_second", "train_loss", "total_flos", ] if self._wandb is None: return if not self._initialized: self.setup(args, state, model) if state.is_world_process_zero: for k, v in logs.items(): if k in single_value_scalars: self._wandb.run.summary[k] = v non_scalar_logs = {k: v for k, v in logs.items() if k not in single_value_scalars} non_scalar_logs = rewrite_logs(non_scalar_logs) self._wandb.log({**non_scalar_logs, "train/global_step": state.global_step}) def on_save(self, args, state, control, **kwargs): if self._log_model == WandbLogModel.CHECKPOINT and self._initialized and state.is_world_process_zero: checkpoint_metadata = { k: v for k, v in dict(self._wandb.summary).items() if isinstance(v, numbers.Number) and not k.startswith("_") } checkpoint_metadata["model/num_parameters"] = self._wandb.config.get("model/num_parameters") ckpt_dir = f"checkpoint-{state.global_step}" artifact_path = os.path.join(args.output_dir, ckpt_dir) logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. ...") checkpoint_name = ( f"model-{self._wandb.run.id}" if (args.run_name is None or args.run_name == args.output_dir) else f"model-{self._wandb.run.name}" ) artifact = self._wandb.Artifact(name=checkpoint_name, type="model", metadata=checkpoint_metadata) artifact.add_dir(artifact_path) self._wandb.log_artifact( artifact, aliases=[f"epoch_{round(state.epoch, 2)}", f"checkpoint_global_step_{state.global_step}"] ) def on_predict(self, args, state, control, metrics, **kwargs): if self._wandb is None: return if not self._initialized: self.setup(args, state, **kwargs) if state.is_world_process_zero: metrics = rewrite_logs(metrics) self._wandb.log(metrics) class CometCallback(TrainerCallback): """ A [`TrainerCallback`] that sends the logs to [Comet ML](https://www.comet.com/site/). """ def __init__(self): if _is_comet_installed is False or _is_comet_recent_enough is False: raise RuntimeError( f"CometCallback requires comet-ml>={_MIN_COMET_VERSION} to be installed. Run `pip install comet-ml>={_MIN_COMET_VERSION}`." ) self._initialized = False self._log_assets = False self._experiment = None def setup(self, args, state, model): """ Setup the optional Comet integration. Environment: - **COMET_MODE** (`str`, *optional*, default to `get_or_create`): Control whether to create and log to a new Comet experiment or append to an existing experiment. It accepts the following values: * `get_or_create`: Decides automatically depending if `COMET_EXPERIMENT_KEY` is set and whether an Experiment with that key already exists or not. * `create`: Always create a new Comet Experiment. * `get`: Always try to append to an Existing Comet Experiment. Requires `COMET_EXPERIMENT_KEY` to be set. * `ONLINE`: **deprecated**, used to create an online Experiment. Use `COMET_START_ONLINE=1` instead. * `OFFLINE`: **deprecated**, used to created an offline Experiment. 
Use `COMET_START_ONLINE=0` instead. * `DISABLED`: **deprecated**, used to disable Comet logging. Use the `--report_to` flag to control the integrations used for logging result instead. - **COMET_PROJECT_NAME** (`str`, *optional*): Comet project name for experiments. - **COMET_LOG_ASSETS** (`str`, *optional*, defaults to `TRUE`): Whether or not to log training assets (tf event logs, checkpoints, etc), to Comet. Can be `TRUE`, or `FALSE`. For a number of configurable items in the environment, see [here](https://www.comet.com/docs/v2/guides/experiment-management/configure-sdk/#explore-comet-configuration-options). """ self._initialized = True log_assets = os.getenv("COMET_LOG_ASSETS", "FALSE").upper() if log_assets in {"TRUE", "1"}: self._log_assets = True if state.is_world_process_zero: comet_old_mode = os.getenv("COMET_MODE") mode = None online = None if comet_old_mode is not None: comet_old_mode = comet_old_mode.lower() if comet_old_mode == "online": online = True elif comet_old_mode == "offline": online = False elif comet_old_mode in ("get", "get_or_create", "create"): mode = comet_old_mode elif comet_old_mode: logger.warning("Invalid COMET_MODE env value %r, Comet logging is disabled", comet_old_mode) return # For HPO, we always create a new experiment for each trial if state.is_hyper_param_search: if mode is not None: logger.warning( "Hyperparameter Search is enabled, forcing the creation of new experimetns, COMET_MODE value %r is ignored", comet_old_mode, ) mode = "create" import comet_ml # Do not use the default run_name as the experiment name if args.run_name is not None and args.run_name != args.output_dir: experiment_config = comet_ml.ExperimentConfig(name=args.run_name) else: experiment_config = comet_ml.ExperimentConfig() self._experiment = comet_ml.start(online=online, mode=mode, experiment_config=experiment_config) self._experiment.__internal_api__set_model_graph__(model, framework="transformers") params = {"args": args.to_dict()} if hasattr(model, "config") and model.config is not None: model_config = model.config.to_dict() params["config"] = model_config if hasattr(model, "peft_config") and model.peft_config is not None: peft_config = model.peft_config params["peft_config"] = peft_config self._experiment.__internal_api__log_parameters__( params, framework="transformers", source="manual", flatten_nested=True ) if state.is_hyper_param_search: optimization_id = getattr(state, "trial_name", None) optimization_params = getattr(state, "trial_params", None) self._experiment.log_optimization(optimization_id=optimization_id, parameters=optimization_params) def on_train_begin(self, args, state, control, model=None, **kwargs): if not self._initialized: self.setup(args, state, model) def on_log(self, args, state, control, model=None, logs=None, **kwargs): if not self._initialized: self.setup(args, state, model) if state.is_world_process_zero: if self._experiment is not None: rewritten_logs = rewrite_logs(logs) self._experiment.__internal_api__log_metrics__( rewritten_logs, step=state.global_step, epoch=state.epoch, framework="transformers" ) def on_train_end(self, args, state, control, **kwargs): if self._initialized and state.is_world_process_zero: if self._experiment is not None: if self._log_assets is True: logger.info("Logging checkpoints. 
This may take time.") self._experiment.log_asset_folder( args.output_dir, recursive=True, log_file_name=True, step=state.global_step ) # We create one experiment per trial in HPO mode if state.is_hyper_param_search: self._experiment.clean() self._initialized = False def on_predict(self, args, state, control, metrics, **kwargs): if not self._initialized: self.setup(args, state, model=None) if state.is_world_process_zero and self._experiment is not None: rewritten_metrics = rewrite_logs(metrics) self._experiment.__internal_api__log_metrics__( rewritten_metrics, step=state.global_step, epoch=state.epoch, framework="transformers" ) class AzureMLCallback(TrainerCallback): """ A [`TrainerCallback`] that sends the logs to [AzureML](https://pypi.org/project/azureml-sdk/). """ def __init__(self, azureml_run=None): if not is_azureml_available(): raise RuntimeError("AzureMLCallback requires azureml to be installed. Run `pip install azureml-sdk`.") self.azureml_run = azureml_run def on_init_end(self, args, state, control, **kwargs): from azureml.core.run import Run if self.azureml_run is None and state.is_world_process_zero: self.azureml_run = Run.get_context() def on_log(self, args, state, control, logs=None, **kwargs): if self.azureml_run and state.is_world_process_zero: for k, v in logs.items(): if isinstance(v, (int, float)): self.azureml_run.log(k, v, description=k) class MLflowCallback(TrainerCallback): """ A [`TrainerCallback`] that sends the logs to [MLflow](https://www.mlflow.org/). Can be disabled by setting environment variable `DISABLE_MLFLOW_INTEGRATION = TRUE`. """ def __init__(self): if not is_mlflow_available(): raise RuntimeError("MLflowCallback requires mlflow to be installed. Run `pip install mlflow`.") import mlflow self._MAX_PARAM_VAL_LENGTH = mlflow.utils.validation.MAX_PARAM_VAL_LENGTH self._MAX_PARAMS_TAGS_PER_BATCH = mlflow.utils.validation.MAX_PARAMS_TAGS_PER_BATCH self._initialized = False self._auto_end_run = False self._log_artifacts = False self._ml_flow = mlflow def setup(self, args, state, model): """ Setup the optional MLflow integration. Environment: - **HF_MLFLOW_LOG_ARTIFACTS** (`str`, *optional*): Whether to use MLflow `.log_artifact()` facility to log artifacts. This only makes sense if logging to a remote server, e.g. s3 or GCS. If set to `True` or *1*, will copy each saved checkpoint on each save in [`TrainingArguments`]'s `output_dir` to the local or remote artifact storage. Using it without a remote storage will just copy the files to your artifact location. - **MLFLOW_TRACKING_URI** (`str`, *optional*): Whether to store runs at a specific path or remote server. Unset by default, which skips setting the tracking URI entirely. - **MLFLOW_EXPERIMENT_NAME** (`str`, *optional*, defaults to `None`): Whether to use an MLflow experiment_name under which to launch the run. Default to `None` which will point to the `Default` experiment in MLflow. Otherwise, it is a case sensitive name of the experiment to be activated. If an experiment with this name does not exist, a new experiment with this name is created. - **MLFLOW_TAGS** (`str`, *optional*): A string dump of a dictionary of key/value pair to be added to the MLflow run as tags. Example: `os.environ['MLFLOW_TAGS']='{"release.candidate": "RC1", "release.version": "2.2.0"}'`. - **MLFLOW_NESTED_RUN** (`str`, *optional*): Whether to use MLflow nested runs. If set to `True` or *1*, will create a nested run inside the current run. 
- **MLFLOW_RUN_ID** (`str`, *optional*): Allow to reattach to an existing run which can be usefull when resuming training from a checkpoint. When `MLFLOW_RUN_ID` environment variable is set, `start_run` attempts to resume a run with the specified run ID and other parameters are ignored. - **MLFLOW_FLATTEN_PARAMS** (`str`, *optional*, defaults to `False`): Whether to flatten the parameters dictionary before logging. """ self._log_artifacts = os.getenv("HF_MLFLOW_LOG_ARTIFACTS", "FALSE").upper() in ENV_VARS_TRUE_VALUES self._nested_run = os.getenv("MLFLOW_NESTED_RUN", "FALSE").upper() in ENV_VARS_TRUE_VALUES self._tracking_uri = os.getenv("MLFLOW_TRACKING_URI", None) self._experiment_name = os.getenv("MLFLOW_EXPERIMENT_NAME", None) self._flatten_params = os.getenv("MLFLOW_FLATTEN_PARAMS", "FALSE").upper() in ENV_VARS_TRUE_VALUES self._run_id = os.getenv("MLFLOW_RUN_ID", None) # "synchronous" flag is only available with mlflow version >= 2.8.0 # https://github.com/mlflow/mlflow/pull/9705 # https://github.com/mlflow/mlflow/releases/tag/v2.8.0 self._async_log = packaging.version.parse(self._ml_flow.__version__) >= packaging.version.parse("2.8.0") logger.debug( f"MLflow experiment_name={self._experiment_name}, run_name={args.run_name}, nested={self._nested_run}," f" tags={self._nested_run}, tracking_uri={self._tracking_uri}" ) if state.is_world_process_zero: if not self._ml_flow.is_tracking_uri_set(): if self._tracking_uri: self._ml_flow.set_tracking_uri(self._tracking_uri) logger.debug(f"MLflow tracking URI is set to {self._tracking_uri}") else: logger.debug( "Environment variable `MLFLOW_TRACKING_URI` is not provided and therefore will not be" " explicitly set." ) else: logger.debug(f"MLflow tracking URI is set to {self._ml_flow.get_tracking_uri()}") if self._ml_flow.active_run() is None or self._nested_run or self._run_id: if self._experiment_name: # Use of set_experiment() ensure that Experiment is created if not exists self._ml_flow.set_experiment(self._experiment_name) self._ml_flow.start_run(run_name=args.run_name, nested=self._nested_run) logger.debug(f"MLflow run started with run_id={self._ml_flow.active_run().info.run_id}") self._auto_end_run = True combined_dict = args.to_dict() if hasattr(model, "config") and model.config is not None: model_config = model.config.to_dict() combined_dict = {**model_config, **combined_dict} combined_dict = flatten_dict(combined_dict) if self._flatten_params else combined_dict # remove params that are too long for MLflow for name, value in list(combined_dict.items()): # internally, all values are converted to str in MLflow if len(str(value)) > self._MAX_PARAM_VAL_LENGTH: logger.warning( f'Trainer is attempting to log a value of "{value}" for key "{name}" as a parameter. MLflow\'s' " log_param() only accepts values no longer than 250 characters so we dropped this attribute." " You can use `MLFLOW_FLATTEN_PARAMS` environment variable to flatten the parameters and" " avoid this message." 
) del combined_dict[name] # MLflow cannot log more than 100 values in one go, so we have to split it combined_dict_items = list(combined_dict.items()) for i in range(0, len(combined_dict_items), self._MAX_PARAMS_TAGS_PER_BATCH): if self._async_log: self._ml_flow.log_params( dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH]), synchronous=False ) else: self._ml_flow.log_params(dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH])) mlflow_tags = os.getenv("MLFLOW_TAGS", None) if mlflow_tags: mlflow_tags = json.loads(mlflow_tags) self._ml_flow.set_tags(mlflow_tags) self._initialized = True def on_train_begin(self, args, state, control, model=None, **kwargs): if not self._initialized: self.setup(args, state, model) def on_log(self, args, state, control, logs, model=None, **kwargs): if not self._initialized: self.setup(args, state, model) if state.is_world_process_zero: metrics = {} for k, v in logs.items(): if isinstance(v, (int, float)): metrics[k] = v elif isinstance(v, torch.Tensor) and v.numel() == 1: metrics[k] = v.item() else: logger.warning( f'Trainer is attempting to log a value of "{v}" of type {type(v)} for key "{k}" as a metric. ' "MLflow's log_metric() only accepts float and int types so we dropped this attribute." ) if self._async_log: self._ml_flow.log_metrics(metrics=metrics, step=state.global_step, synchronous=False) else: self._ml_flow.log_metrics(metrics=metrics, step=state.global_step) def on_train_end(self, args, state, control, **kwargs): if self._initialized and state.is_world_process_zero: if self._auto_end_run and self._ml_flow.active_run(): self._ml_flow.end_run() def on_save(self, args, state, control, **kwargs): if self._initialized and state.is_world_process_zero and self._log_artifacts: ckpt_dir = f"checkpoint-{state.global_step}" artifact_path = os.path.join(args.output_dir, ckpt_dir) logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time.") self._ml_flow.pyfunc.log_model( ckpt_dir, artifacts={"model_path": artifact_path}, python_model=self._ml_flow.pyfunc.PythonModel(), ) def __del__(self): # if the previous run is not terminated correctly, the fluent API will # not let you start a new run before the previous one is killed if ( self._auto_end_run and callable(getattr(self._ml_flow, "active_run", None)) and self._ml_flow.active_run() is not None ): self._ml_flow.end_run() class DagsHubCallback(MLflowCallback): """ A [`TrainerCallback`] that logs to [DagsHub](https://dagshub.com/). Extends [`MLflowCallback`] """ def __init__(self): super().__init__() if not is_dagshub_available(): raise ImportError("DagsHubCallback requires dagshub to be installed. Run `pip install dagshub`.") from dagshub.upload import Repo self.Repo = Repo def setup(self, *args, **kwargs): """ Setup the DagsHub's Logging integration. Environment: - **HF_DAGSHUB_LOG_ARTIFACTS** (`str`, *optional*): Whether to save the data and model artifacts for the experiment. Default to `False`. """ self.log_artifacts = os.getenv("HF_DAGSHUB_LOG_ARTIFACTS", "FALSE").upper() in ENV_VARS_TRUE_VALUES self.name = os.getenv("HF_DAGSHUB_MODEL_NAME") or "main" self.remote = os.getenv("MLFLOW_TRACKING_URI") self.repo = self.Repo( owner=self.remote.split(os.sep)[-2], name=self.remote.split(os.sep)[-1].split(".")[0], branch=os.getenv("BRANCH") or "main", ) self.path = Path("artifacts") if self.remote is None: raise RuntimeError( "DagsHubCallback requires the `MLFLOW_TRACKING_URI` environment variable to be set. Did you run" " `dagshub.init()`?" 
) super().setup(*args, **kwargs) def on_train_end(self, args, state, control, **kwargs): if self.log_artifacts: if getattr(self, "train_dataloader", None): torch.save(self.train_dataloader.dataset, os.path.join(args.output_dir, "dataset.pt")) self.repo.directory(str(self.path)).add_dir(args.output_dir) class NeptuneMissingConfiguration(Exception): def __init__(self): super().__init__( """ ------ Unsupported ---- We were not able to create new runs. You provided a custom Neptune run to `NeptuneCallback` with the `run` argument. For the integration to work fully, provide your `api_token` and `project` by saving them as environment variables or passing them to the callback. """ ) class NeptuneCallback(TrainerCallback): """TrainerCallback that sends the logs to [Neptune](https://app.neptune.ai). Args: api_token (`str`, *optional*): Neptune API token obtained upon registration. You can leave this argument out if you have saved your token to the `NEPTUNE_API_TOKEN` environment variable (strongly recommended). See full setup instructions in the [docs](https://docs.neptune.ai/setup/installation). project (`str`, *optional*): Name of an existing Neptune project, in the form "workspace-name/project-name". You can find and copy the name in Neptune from the project settings -> Properties. If None (default), the value of the `NEPTUNE_PROJECT` environment variable is used. name (`str`, *optional*): Custom name for the run. base_namespace (`str`, *optional*, defaults to "finetuning"): In the Neptune run, the root namespace that will contain all of the metadata logged by the callback. log_parameters (`bool`, *optional*, defaults to `True`): If True, logs all Trainer arguments and model parameters provided by the Trainer. log_checkpoints (`str`, *optional*): If "same", uploads checkpoints whenever they are saved by the Trainer. If "last", uploads only the most recently saved checkpoint. If "best", uploads the best checkpoint (among the ones saved by the Trainer). If `None`, does not upload checkpoints. run (`Run`, *optional*): Pass a Neptune run object if you want to continue logging to an existing run. Read more about resuming runs in the [docs](https://docs.neptune.ai/logging/to_existing_object). **neptune_run_kwargs (*optional*): Additional keyword arguments to be passed directly to the [`neptune.init_run()`](https://docs.neptune.ai/api/neptune#init_run) function when a new run is created. For instructions and examples, see the [Transformers integration guide](https://docs.neptune.ai/integrations/transformers) in the Neptune documentation. """ integration_version_key = "source_code/integrations/transformers" model_parameters_key = "model_parameters" trial_name_key = "trial" trial_params_key = "trial_params" trainer_parameters_key = "trainer_parameters" flat_metrics = {"train/epoch"} def __init__( self, *, api_token: Optional[str] = None, project: Optional[str] = None, name: Optional[str] = None, base_namespace: str = "finetuning", run=None, log_parameters: bool = True, log_checkpoints: Optional[str] = None, **neptune_run_kwargs, ): if not is_neptune_available(): raise ValueError( "NeptuneCallback requires the Neptune client library to be installed. " "To install the library, run `pip install neptune`." 
) try: from neptune import Run from neptune.internal.utils import verify_type except ImportError: from neptune.new.internal.utils import verify_type from neptune.new.metadata_containers.run import Run verify_type("api_token", api_token, (str, type(None))) verify_type("project", project, (str, type(None))) verify_type("name", name, (str, type(None))) verify_type("base_namespace", base_namespace, str) verify_type("run", run, (Run, type(None))) verify_type("log_parameters", log_parameters, bool) verify_type("log_checkpoints", log_checkpoints, (str, type(None))) self._base_namespace_path = base_namespace self._log_parameters = log_parameters self._log_checkpoints = log_checkpoints self._initial_run: Optional[Run] = run self._run = None self._is_monitoring_run = False self._run_id = None self._force_reset_monitoring_run = False self._init_run_kwargs = {"api_token": api_token, "project": project, "name": name, **neptune_run_kwargs} self._volatile_checkpoints_dir = None self._should_upload_checkpoint = self._log_checkpoints is not None self._recent_checkpoint_path = None if self._log_checkpoints in {"last", "best"}: self._target_checkpoints_namespace = f"checkpoints/{self._log_checkpoints}" self._should_clean_recently_uploaded_checkpoint = True else: self._target_checkpoints_namespace = "checkpoints" self._should_clean_recently_uploaded_checkpoint = False def _stop_run_if_exists(self): if self._run: self._run.stop() del self._run self._run = None def _initialize_run(self, **additional_neptune_kwargs): try: from neptune import init_run from neptune.exceptions import NeptuneMissingApiTokenException, NeptuneMissingProjectNameException except ImportError: from neptune.new import init_run from neptune.new.exceptions import NeptuneMissingApiTokenException, NeptuneMissingProjectNameException self._stop_run_if_exists() try: run_params = additional_neptune_kwargs.copy() run_params.update(self._init_run_kwargs) self._run = init_run(**run_params) self._run_id = self._run["sys/id"].fetch() except (NeptuneMissingProjectNameException, NeptuneMissingApiTokenException) as e: raise NeptuneMissingConfiguration() from e def _use_initial_run(self): self._run = self._initial_run self._is_monitoring_run = True self._run_id = self._run["sys/id"].fetch() self._initial_run = None def _ensure_run_with_monitoring(self): if self._initial_run is not None: self._use_initial_run() else: if not self._force_reset_monitoring_run and self._is_monitoring_run: return if self._run and not self._is_monitoring_run and not self._force_reset_monitoring_run: self._initialize_run(with_id=self._run_id) self._is_monitoring_run = True else: self._initialize_run() self._force_reset_monitoring_run = False def _ensure_at_least_run_without_monitoring(self): if self._initial_run is not None: self._use_initial_run() else: if not self._run: self._initialize_run( with_id=self._run_id, capture_stdout=False, capture_stderr=False, capture_hardware_metrics=False, capture_traceback=False, ) self._is_monitoring_run = False @property def run(self): if self._run is None: self._ensure_at_least_run_without_monitoring() return self._run @property def _metadata_namespace(self): return self.run[self._base_namespace_path] def _log_integration_version(self): self.run[NeptuneCallback.integration_version_key] = version def _log_trainer_parameters(self, args): self._metadata_namespace[NeptuneCallback.trainer_parameters_key] = args.to_sanitized_dict() def _log_model_parameters(self, model): from neptune.utils import stringify_unsupported if model and hasattr(model, 
"config") and model.config is not None: self._metadata_namespace[NeptuneCallback.model_parameters_key] = stringify_unsupported( model.config.to_dict() ) def _log_hyper_param_search_parameters(self, state): if state and hasattr(state, "trial_name"): self._metadata_namespace[NeptuneCallback.trial_name_key] = state.trial_name if state and hasattr(state, "trial_params") and state.trial_params is not None: self._metadata_namespace[NeptuneCallback.trial_params_key] = state.trial_params def _log_model_checkpoint(self, source_directory: str, checkpoint: str): target_path = relative_path = os.path.join(source_directory, checkpoint) if self._volatile_checkpoints_dir is not None: consistent_checkpoint_path = os.path.join(self._volatile_checkpoints_dir, checkpoint) try: # Remove leading ../ from a relative path. cpkt_path = relative_path.replace("..", "").lstrip(os.path.sep) copy_path = os.path.join(consistent_checkpoint_path, cpkt_path) shutil.copytree(relative_path, copy_path) target_path = consistent_checkpoint_path except IOError as e: logger.warning( "NeptuneCallback was unable to made a copy of checkpoint due to I/O exception: '{}'. " "Could fail trying to upload.".format(e) ) self._metadata_namespace[self._target_checkpoints_namespace].upload_files(target_path) if self._should_clean_recently_uploaded_checkpoint and self._recent_checkpoint_path is not None: self._metadata_namespace[self._target_checkpoints_namespace].delete_files(self._recent_checkpoint_path) self._recent_checkpoint_path = relative_path def on_init_end(self, args, state, control, **kwargs): self._volatile_checkpoints_dir = None if self._log_checkpoints and (args.overwrite_output_dir or args.save_total_limit is not None): self._volatile_checkpoints_dir = tempfile.TemporaryDirectory().name if self._log_checkpoints == "best" and not args.load_best_model_at_end: raise ValueError("To save the best model checkpoint, the load_best_model_at_end argument must be enabled.") def on_train_begin(self, args, state, control, model=None, **kwargs): if not state.is_world_process_zero: return self._ensure_run_with_monitoring() self._force_reset_monitoring_run = True self._log_integration_version() if self._log_parameters: self._log_trainer_parameters(args) self._log_model_parameters(model) if state.is_hyper_param_search: self._log_hyper_param_search_parameters(state) def on_train_end(self, args, state, control, **kwargs): self._stop_run_if_exists() def __del__(self): if self._volatile_checkpoints_dir is not None: shutil.rmtree(self._volatile_checkpoints_dir, ignore_errors=True) self._stop_run_if_exists() def on_save(self, args, state, control, **kwargs): if self._should_upload_checkpoint: self._log_model_checkpoint(args.output_dir, f"checkpoint-{state.global_step}") def on_evaluate(self, args, state, control, metrics=None, **kwargs): if self._log_checkpoints == "best": best_metric_name = args.metric_for_best_model if not best_metric_name.startswith("eval_"): best_metric_name = f"eval_{best_metric_name}" metric_value = metrics.get(best_metric_name) operator = np.greater if args.greater_is_better else np.less self._should_upload_checkpoint = state.best_metric is None or operator(metric_value, state.best_metric) @classmethod def get_run(cls, trainer): for callback in trainer.callback_handler.callbacks: if isinstance(callback, cls): return callback.run raise Exception("The trainer doesn't have a NeptuneCallback configured.") def on_log(self, args, state, control, logs: Optional[Dict[str, float]] = None, **kwargs): if not state.is_world_process_zero: 
return if logs is not None: for name, value in rewrite_logs(logs).items(): if isinstance(value, (int, float)): if name in NeptuneCallback.flat_metrics: self._metadata_namespace[name] = value else: self._metadata_namespace[name].log(value, step=state.global_step) class CodeCarbonCallback(TrainerCallback): """ A [`TrainerCallback`] that tracks the CO2 emission of training. """ def __init__(self): if not is_codecarbon_available(): raise RuntimeError( "CodeCarbonCallback requires `codecarbon` to be installed. Run `pip install codecarbon`." ) elif torch.version.hip: raise RuntimeError( "CodeCarbonCallback requires `codecarbon` package, which is not compatible with AMD ROCm (https://github.com/mlco2/codecarbon/pull/490). When using the Trainer, please specify the `report_to` argument (https://huggingface.co/docs/transformers/v4.39.3/en/main_classes/trainer#transformers.TrainingArguments.report_to) to disable CodeCarbonCallback." ) import codecarbon self._codecarbon = codecarbon self.tracker = None def on_init_end(self, args, state, control, **kwargs): if self.tracker is None and state.is_local_process_zero: # CodeCarbon will automatically handle environment variables for configuration self.tracker = self._codecarbon.EmissionsTracker(output_dir=args.output_dir) def on_train_begin(self, args, state, control, model=None, **kwargs): if self.tracker and state.is_local_process_zero: self.tracker.start() def on_train_end(self, args, state, control, **kwargs): if self.tracker and state.is_local_process_zero: self.tracker.stop() class ClearMLCallback(TrainerCallback): """ A [`TrainerCallback`] that sends the logs to [ClearML](https://clear.ml/). Environment: - **CLEARML_PROJECT** (`str`, *optional*, defaults to `HuggingFace Transformers`): ClearML project name. - **CLEARML_TASK** (`str`, *optional*, defaults to `Trainer`): ClearML task name. - **CLEARML_LOG_MODEL** (`bool`, *optional*, defaults to `False`): Whether to log models as artifacts during training. """ log_suffix = "" _hparams_section = "Transformers" _model_config_section = "Model Configuration" _ignore_hparams_overrides = "_ignore_hparams_ui_overrides_" _ignoge_model_config_overrides = "_ignore_model_config_ui_overrides_" _model_config_description = "The configuration of model number {}." _model_config_description_note = ( "Note that, when cloning this task and running it remotely," " the configuration might be applied to another model instead of this one." " To avoid this, initialize the task externally by calling `Task.init`" " before the `ClearMLCallback` is instantiated." ) _train_run_counter = 0 _model_connect_counter = 0 _task_created_in_callback = False _should_close_on_train_end = None def __init__(self): if is_clearml_available(): import clearml self._clearml = clearml else: raise RuntimeError("ClearMLCallback requires 'clearml' to be installed. 
Run `pip install clearml`.") self._initialized = False self._clearml_task = None self._log_model = False self._checkpoints_saved = [] def setup(self, args, state, model, tokenizer, **kwargs): if self._clearml is None: return if self._initialized: return ClearMLCallback._train_run_counter += 1 ClearMLCallback._model_connect_counter += 1 ClearMLCallback.log_suffix = ( "" if ClearMLCallback._train_run_counter == 1 else "_" + str(ClearMLCallback._train_run_counter) ) if state.is_world_process_zero: logger.info("Automatic ClearML logging enabled.") if self._clearml_task is None: if ClearMLCallback._should_close_on_train_end is None: if not self._clearml.Task.running_locally() or self._clearml.Task.current_task(): ClearMLCallback._should_close_on_train_end = False else: ClearMLCallback._should_close_on_train_end = True # This might happen when running inside of a pipeline, where the task is already initialized # from outside of Hugging Face if self._clearml.Task.running_locally() and self._clearml.Task.current_task(): self._clearml_task = self._clearml.Task.current_task() self._log_model = os.getenv( "CLEARML_LOG_MODEL", "FALSE" if not ClearMLCallback._task_created_in_callback else "TRUE", ).upper() in ENV_VARS_TRUE_VALUES.union({"TRUE"}) logger.info("External ClearML Task has been connected.") else: self._clearml_task = self._clearml.Task.init( project_name=os.getenv("CLEARML_PROJECT", "HuggingFace Transformers"), task_name=os.getenv("CLEARML_TASK", "Trainer"), auto_connect_frameworks={"tensorboard": False, "pytorch": False}, output_uri=True, ) self._log_model = os.getenv("CLEARML_LOG_MODEL", "TRUE").upper() in ENV_VARS_TRUE_VALUES.union( {"TRUE"} ) ClearMLCallback._task_created_in_callback = True logger.info("ClearML Task has been initialized.") self._initialized = True suffixed_hparams_section = ClearMLCallback._hparams_section + ClearMLCallback.log_suffix ignore_hparams_config_section = suffixed_hparams_section + "/" + ClearMLCallback._ignore_hparams_overrides if self._clearml.Task.running_locally(): self._copy_training_args_as_hparams(args, suffixed_hparams_section) self._clearml_task.set_parameter( name=ignore_hparams_config_section, value=True, value_type=bool, description=( "If True, ignore Transformers hyperparameters overrides done in the UI/backend " + "when running remotely. Otherwise, the overrides will be applied when running remotely" ), ) elif not self._clearml_task.get_parameter(ignore_hparams_config_section, default=True, cast=True): self._clearml_task.connect(args, suffixed_hparams_section) else: self._copy_training_args_as_hparams( args, ClearMLCallback._hparams_section + ClearMLCallback.log_suffix ) if getattr(model, "config", None) is not None: ignore_model_config_section = ( suffixed_hparams_section + "/" + ClearMLCallback._ignoge_model_config_overrides ) configuration_object_description = ClearMLCallback._model_config_description.format( ClearMLCallback._model_connect_counter ) if ClearMLCallback._model_connect_counter != ClearMLCallback._train_run_counter: configuration_object_description += " " + ClearMLCallback._model_config_description_note if self._clearml.Task.running_locally(): self._clearml_task.set_parameter( name=ignore_model_config_section, value=True, value_type=bool, description=( "If True, ignore Transformers model configuration overrides done in the UI/backend " + "when running remotely. 
Otherwise, the overrides will be applied when running remotely" ), ) self._clearml_task.set_configuration_object( name=ClearMLCallback._model_config_section + ClearMLCallback.log_suffix, config_dict=model.config.to_dict(), description=configuration_object_description, ) elif not self._clearml_task.get_parameter(ignore_model_config_section, default=True, cast=True): model.config = model.config.from_dict( self._clearml_task.get_configuration_object_as_dict( ClearMLCallback._model_config_section + ClearMLCallback.log_suffix ) ) else: self._clearml_task.set_configuration_object( name=ClearMLCallback._model_config_section + ClearMLCallback.log_suffix, config_dict=model.config.to_dict(), description=configuration_object_description, ) def on_train_begin(self, args, state, control, model=None, tokenizer=None, **kwargs): if self._clearml is None: return self._checkpoints_saved = [] if state.is_hyper_param_search: self._initialized = False if not self._initialized: self.setup(args, state, model, tokenizer, **kwargs) def on_train_end(self, args, state, control, **kwargs): if ClearMLCallback._should_close_on_train_end: self._clearml_task.close() ClearMLCallback._train_run_counter = 0 def on_log(self, args, state, control, model=None, tokenizer=None, logs=None, **kwargs): if self._clearml is None: return if not self._initialized: self.setup(args, state, model, tokenizer, **kwargs) if state.is_world_process_zero: eval_prefix = "eval_" eval_prefix_len = len(eval_prefix) test_prefix = "test_" test_prefix_len = len(test_prefix) single_value_scalars = [ "train_runtime", "train_samples_per_second", "train_steps_per_second", "train_loss", "total_flos", "epoch", ] for k, v in logs.items(): if isinstance(v, (int, float)): if k in single_value_scalars: self._clearml_task.get_logger().report_single_value( name=k + ClearMLCallback.log_suffix, value=v ) elif k.startswith(eval_prefix): self._clearml_task.get_logger().report_scalar( title="eval" + ClearMLCallback.log_suffix, series=k[eval_prefix_len:], value=v, iteration=state.global_step, ) elif k.startswith(test_prefix): self._clearml_task.get_logger().report_scalar( title="test" + ClearMLCallback.log_suffix, series=k[test_prefix_len:], value=v, iteration=state.global_step, ) else: self._clearml_task.get_logger().report_scalar( title="train" + ClearMLCallback.log_suffix, series=k, value=v, iteration=state.global_step, ) else: logger.warning( "Trainer is attempting to log a value of " f'"{v}" of type {type(v)} for key "{k}" as a scalar. ' "This invocation of ClearML logger's report_scalar() " "is incorrect so we dropped this attribute." ) def on_save(self, args, state, control, **kwargs): if self._log_model and self._clearml_task and state.is_world_process_zero: ckpt_dir = f"checkpoint-{state.global_step}" artifact_path = os.path.join(args.output_dir, ckpt_dir) name = ckpt_dir + ClearMLCallback.log_suffix logger.info(f"Logging checkpoint artifact `{name}`. 
This may take some time.") output_model = self._clearml.OutputModel(task=self._clearml_task, name=name) output_model.connect(task=self._clearml_task, name=name) output_model.update_weights_package( weights_path=artifact_path, target_filename=ckpt_dir, iteration=state.global_step, auto_delete_file=False, ) self._checkpoints_saved.append(output_model) while args.save_total_limit and args.save_total_limit < len(self._checkpoints_saved): try: self._clearml.model.Model.remove( self._checkpoints_saved[0], delete_weights_file=True, force=True, raise_on_errors=True, ) except Exception as e: logger.warning( "Could not remove checkpoint `{}` after going over the `save_total_limit`. Error is: {}".format( self._checkpoints_saved[0].name, e ) ) break self._checkpoints_saved = self._checkpoints_saved[1:] def _copy_training_args_as_hparams(self, training_args, prefix): as_dict = { field.name: getattr(training_args, field.name) for field in fields(training_args) if field.init and not field.name.endswith("_token") } flat_dict = {str(k): v for k, v in self._clearml.utilities.proxy_object.flatten_dictionary(as_dict).items()} self._clearml_task._arguments.copy_from_dict(flat_dict, prefix=prefix) class FlyteCallback(TrainerCallback): """A [`TrainerCallback`] that sends the logs to [Flyte](https://flyte.org/). NOTE: This callback only works within a Flyte task. Args: save_log_history (`bool`, *optional*, defaults to `True`): When set to True, the training logs are saved as a Flyte Deck. sync_checkpoints (`bool`, *optional*, defaults to `True`): When set to True, checkpoints are synced with Flyte and can be used to resume training in the case of an interruption. Example: ```python # Note: This example skips over some setup steps for brevity. from flytekit import current_context, task @task def train_hf_transformer(): cp = current_context().checkpoint trainer = Trainer(..., callbacks=[FlyteCallback()]) output = trainer.train(resume_from_checkpoint=cp.restore()) ``` """ def __init__(self, save_log_history: bool = True, sync_checkpoints: bool = True): super().__init__() if not is_flytekit_available(): raise ImportError("FlyteCallback requires flytekit to be installed. Run `pip install flytekit`.") if not is_flyte_deck_standard_available() or not is_pandas_available(): logger.warning( "Syncing log history requires both flytekitplugins-deck-standard and pandas to be installed. " "Run `pip install flytekitplugins-deck-standard pandas` to enable this feature." ) save_log_history = False from flytekit import current_context self.cp = current_context().checkpoint self.save_log_history = save_log_history self.sync_checkpoints = sync_checkpoints def on_save(self, args, state, control, **kwargs): if self.sync_checkpoints and state.is_world_process_zero: ckpt_dir = f"checkpoint-{state.global_step}" artifact_path = os.path.join(args.output_dir, ckpt_dir) logger.info(f"Syncing checkpoint in {ckpt_dir} to Flyte. This may take time.") self.cp.save(artifact_path) def on_train_end(self, args, state, control, **kwargs): if self.save_log_history: import pandas as pd from flytekit import Deck from flytekitplugins.deck.renderer import TableRenderer log_history_df = pd.DataFrame(state.log_history) Deck("Log History", TableRenderer().to_html(log_history_df)) class DVCLiveCallback(TrainerCallback): """ A [`TrainerCallback`] that sends the logs to [DVCLive](https://www.dvc.org/doc/dvclive). Use the environment variables below in `setup` to configure the integration. 
To customize this callback beyond those environment variables, see [here](https://dvc.org/doc/dvclive/ml-frameworks/huggingface). Args: live (`dvclive.Live`, *optional*, defaults to `None`): Optional Live instance. If None, a new instance will be created using **kwargs. log_model (Union[Literal["all"], bool], *optional*, defaults to `None`): Whether to use `dvclive.Live.log_artifact()` to log checkpoints created by [`Trainer`]. If set to `True`, the final checkpoint is logged at the end of training. If set to `"all"`, the entire [`TrainingArguments`]'s `output_dir` is logged at each checkpoint. """ def __init__( self, live: Optional[Any] = None, log_model: Optional[Union[Literal["all"], bool]] = None, **kwargs, ): if not is_dvclive_available(): raise RuntimeError("DVCLiveCallback requires dvclive to be installed. Run `pip install dvclive`.") from dvclive import Live self._initialized = False self.live = None if isinstance(live, Live): self.live = live elif live is not None: raise RuntimeError(f"Found class {live.__class__} for live, expected dvclive.Live") self._log_model = log_model if self._log_model is None: log_model_env = os.getenv("HF_DVCLIVE_LOG_MODEL", "FALSE") if log_model_env.upper() in ENV_VARS_TRUE_VALUES: self._log_model = True elif log_model_env.lower() == "all": self._log_model = "all" def setup(self, args, state, model): """ Setup the optional DVCLive integration. To customize this callback beyond the environment variables below, see [here](https://dvc.org/doc/dvclive/ml-frameworks/huggingface). Environment: - **HF_DVCLIVE_LOG_MODEL** (`str`, *optional*): Whether to use `dvclive.Live.log_artifact()` to log checkpoints created by [`Trainer`]. If set to `True` or *1*, the final checkpoint is logged at the end of training. If set to `all`, the entire [`TrainingArguments`]'s `output_dir` is logged at each checkpoint. """ from dvclive import Live self._initialized = True if state.is_world_process_zero: if not self.live: self.live = Live() self.live.log_params(args.to_dict()) def on_train_begin(self, args, state, control, model=None, **kwargs): if not self._initialized: self.setup(args, state, model) def on_log(self, args, state, control, model=None, logs=None, **kwargs): if not self._initialized: self.setup(args, state, model) if state.is_world_process_zero: from dvclive.plots import Metric from dvclive.utils import standardize_metric_name for key, value in logs.items(): if Metric.could_log(value): self.live.log_metric(standardize_metric_name(key, "dvclive.huggingface"), value) else: logger.warning( "Trainer is attempting to log a value of " f'"{value}" of type {type(value)} for key "{key}" as a scalar. ' "This invocation of DVCLive's Live.log_metric() " "is incorrect so we dropped this attribute." 
                    )
            self.live.next_step()

    def on_save(self, args, state, control, **kwargs):
        if self._log_model == "all" and self._initialized and state.is_world_process_zero:
            self.live.log_artifact(args.output_dir)

    def on_train_end(self, args, state, control, **kwargs):
        if self._initialized and state.is_world_process_zero:
            from transformers.trainer import Trainer

            if self._log_model is True:
                fake_trainer = Trainer(args=args, model=kwargs.get("model"), tokenizer=kwargs.get("tokenizer"))
                name = "best" if args.load_best_model_at_end else "last"
                output_dir = os.path.join(args.output_dir, name)
                fake_trainer.save_model(output_dir)
                self.live.log_artifact(output_dir, name=name, type="model", copy=True)
            self.live.end()


INTEGRATION_TO_CALLBACK = {
    "azure_ml": AzureMLCallback,
    "comet_ml": CometCallback,
    "mlflow": MLflowCallback,
    "neptune": NeptuneCallback,
    "tensorboard": TensorBoardCallback,
    "wandb": WandbCallback,
    "codecarbon": CodeCarbonCallback,
    "clearml": ClearMLCallback,
    "dagshub": DagsHubCallback,
    "flyte": FlyteCallback,
    "dvclive": DVCLiveCallback,
}


def get_reporting_integration_callbacks(report_to):
    for integration in report_to:
        if integration not in INTEGRATION_TO_CALLBACK:
            raise ValueError(
                f"{integration} is not supported, only {', '.join(INTEGRATION_TO_CALLBACK.keys())} are supported."
            )
    return [INTEGRATION_TO_CALLBACK[integration] for integration in report_to]
transformers/src/transformers/integrations/integration_utils.py/0
{ "file_path": "transformers/src/transformers/integrations/integration_utils.py", "repo_id": "transformers", "token_count": 43327 }
327
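The integration file recorded above ends with the `INTEGRATION_TO_CALLBACK` registry and the `get_reporting_integration_callbacks` helper, which translate the strings accepted by `TrainingArguments.report_to` into the callback classes defined earlier in the file. The sketch below only illustrates that mapping; it assumes the module import path mirrors the `file_path` in the metadata above, and `Trainer` normally performs this resolution itself.

```python
# Minimal sketch of the report_to -> callback-class resolution shown in the file above.
# Assumes the module is importable as transformers.integrations.integration_utils,
# matching the file_path recorded in the metadata.
from transformers.integrations.integration_utils import (
    INTEGRATION_TO_CALLBACK,
    get_reporting_integration_callbacks,
)

# Keys a user may pass via TrainingArguments(report_to=[...]).
print(sorted(INTEGRATION_TO_CALLBACK))

# Known integrations resolve to callback classes (not instances); Trainer instantiates them later.
print(get_reporting_integration_callbacks(["tensorboard", "clearml"]))

# Unknown integrations raise a ValueError that lists the supported keys.
try:
    get_reporting_integration_callbacks(["not-a-logger"])
except ValueError as err:
    print(err)
```

Note that the helper returns classes rather than instances, so a missing optional dependency (for example `clearml`) only surfaces when `Trainer` instantiates the callback.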
// File from https://github.com/mlpen/YOSO/blob/main/encoders/backbones/efficient_attentions/yoso/yoso_v1/cuda/fast_lsh_cumulation.cu #include <torch/extension.h> #include <ATen/ATen.h> #include "fast_lsh_cumulation.h" #include "fast_lsh_cumulation_cuda.h" #include "common_cuda.h" #include "common.h" #include <vector> ////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////////////////////// std::vector<at::Tensor> fast_hash_ver1_kernel( at::Tensor query_mask, at::Tensor query_vector, at::Tensor key_mask, at::Tensor key_vector, int num_hash_f, int hash_code_len, bool use_cuda ) { int batch_size = query_vector.size(0); int num_query = query_vector.size(1); int num_key = key_vector.size(1); int vector_dim = query_vector.size(2); int num_hash_per_part = vector_dim / hash_code_len; int num_part = max(1, ceil_divide(num_hash_f, num_hash_per_part)); at::Tensor Dmat = 2 * at::randint(0, 2, {batch_size, 3, num_part, vector_dim}, query_mask.options()) - 1; at::Tensor query_hash_code = at::zeros({batch_size, num_query, num_hash_f}, query_mask.options()); at::Tensor key_hash_code = at::zeros({batch_size, num_key, num_hash_f}, key_mask.options()); int *query_mask_ptr = query_mask.data_ptr<int>(); float *query_vector_ptr = query_vector.data_ptr<float>(); int *key_mask_ptr = key_mask.data_ptr<int>(); float *key_vector_ptr = key_vector.data_ptr<float>(); int *Dmat_ptr = Dmat.data_ptr<int>(); int *query_hash_code_ptr = query_hash_code.data_ptr<int>(); int *key_hash_code_ptr = key_hash_code.data_ptr<int>(); if (use_cuda) { { dim3 threads(vector_dim); dim3 blocks(num_part, num_query, batch_size); int shared_mem = vector_dim * sizeof(float); fast_hash_ver1_cuda_kernel<<<blocks, threads, shared_mem>>>( query_mask_ptr, query_vector_ptr, Dmat_ptr, query_hash_code_ptr, batch_size, num_query, vector_dim, num_part, num_hash_f, hash_code_len ); } { dim3 threads(vector_dim); dim3 blocks(num_part, num_key, batch_size); int shared_mem = vector_dim * sizeof(float); fast_hash_ver1_cuda_kernel<<<blocks, threads, shared_mem>>>( key_mask_ptr, key_vector_ptr, Dmat_ptr, key_hash_code_ptr, batch_size, num_key, vector_dim, num_part, num_hash_f, hash_code_len ); } } return {query_hash_code, key_hash_code}; } at::Tensor lsh_cumulation_ver1_kernel( at::Tensor query_mask, at::Tensor query_hash_code, at::Tensor key_mask, at::Tensor key_hash_code, at::Tensor value, int hashtable_capacity, bool use_cuda ) { int batch_size = query_hash_code.size(0); int num_hash_f = query_hash_code.size(2); int num_query = query_hash_code.size(1); int num_key = key_hash_code.size(1); int value_dim = value.size(2); at::Tensor hashtable_value = at::empty({batch_size, num_hash_f, hashtable_capacity, WARP_SIZE}, value.options()); at::Tensor cumulation_value = at::zeros({batch_size, num_query, value_dim}, value.options()); if (use_cuda) { int threads_x = WARP_SIZE; int threads_y = OPTIMAL_THREADS_PER_BLOCK / WARP_SIZE; int block_x_step1 = num_key / threads_y; int block_x_step2 = num_query / threads_y; int block_y = batch_size; dim3 threads(threads_x, threads_y); dim3 blocks_step1(block_x_step1, block_y); dim3 blocks_step2(block_x_step2, block_y); int *query_mask_ptr = query_mask.data_ptr<int>(); int *query_hash_code_ptr = query_hash_code.data_ptr<int>(); int *key_mask_ptr = key_mask.data_ptr<int>(); int *key_hash_code_ptr = key_hash_code.data_ptr<int>(); float *value_ptr = value.data_ptr<float>(); float *hashtable_value_ptr = 
hashtable_value.data_ptr<float>(); float *cumulation_value_ptr = cumulation_value.data_ptr<float>(); for (int value_offset = 0; value_offset < value_dim; value_offset = value_offset + WARP_SIZE) { cudaMemset(hashtable_value_ptr, 0, (batch_size * num_hash_f * hashtable_capacity * WARP_SIZE) * sizeof(float)); lsh_cumulation_ver1_step1_cuda_kernel<<<blocks_step1, threads>>>( key_mask_ptr, key_hash_code_ptr, value_ptr, hashtable_value_ptr, batch_size, num_hash_f, hashtable_capacity, num_key, value_dim, value_offset ); lsh_cumulation_ver1_step2_cuda_kernel<<<blocks_step2, threads>>>( query_mask_ptr, query_hash_code_ptr, hashtable_value_ptr, cumulation_value_ptr, batch_size, num_hash_f, hashtable_capacity, num_query, value_dim, value_offset ); } } return cumulation_value; } at::Tensor lsh_weighted_cumulation_ver1_kernel( at::Tensor query_mask, at::Tensor query_hash_code, at::Tensor query_weight, at::Tensor key_mask, at::Tensor key_hash_code, at::Tensor key_weight, at::Tensor value, int hashtable_capacity, bool use_cuda ) { int batch_size = query_hash_code.size(0); int num_hash_f = query_hash_code.size(2); int num_query = query_hash_code.size(1); int num_key = key_hash_code.size(1); int value_dim = value.size(2); int weight_dim = query_weight.size(2); at::Tensor hashtable_value = at::zeros({batch_size, num_hash_f, hashtable_capacity, WARP_SIZE}, value.options()); at::Tensor cumulation_value = at::zeros({batch_size, num_query, value_dim}, value.options()); if (use_cuda) { int threads_x = WARP_SIZE; int threads_y = OPTIMAL_THREADS_PER_BLOCK / WARP_SIZE; int block_x_step1 = num_key / threads_y; int block_x_step2 = num_query / threads_y; int block_y = batch_size; dim3 threads(threads_x, threads_y); dim3 blocks_step1(block_x_step1, block_y); dim3 blocks_step2(block_x_step2, block_y); int *query_mask_ptr = query_mask.data_ptr<int>(); int *query_hash_code_ptr = query_hash_code.data_ptr<int>(); float *query_weight_ptr = query_weight.data_ptr<float>(); int *key_mask_ptr = key_mask.data_ptr<int>(); int *key_hash_code_ptr = key_hash_code.data_ptr<int>(); float *key_weight_ptr = key_weight.data_ptr<float>(); float *value_ptr = value.data_ptr<float>(); float *hashtable_value_ptr = hashtable_value.data_ptr<float>(); float *cumulation_value_ptr = cumulation_value.data_ptr<float>(); for (int value_offset = 0; value_offset < value_dim; value_offset = value_offset + WARP_SIZE) { for (int weight_idx = 0; weight_idx < weight_dim; weight_idx++) { cudaMemset(hashtable_value_ptr, 0, (batch_size * num_hash_f * hashtable_capacity * WARP_SIZE) * sizeof(float)); lsh_weighted_cumulation_ver1_step1_cuda_kernel<<<blocks_step1, threads>>>( key_mask_ptr, key_hash_code_ptr, key_weight_ptr, value_ptr, hashtable_value_ptr, batch_size, num_hash_f, hashtable_capacity, num_key, value_dim, weight_dim, value_offset, weight_idx ); lsh_weighted_cumulation_ver1_step2_cuda_kernel<<<blocks_step2, threads>>>( query_mask_ptr, query_hash_code_ptr, query_weight_ptr, hashtable_value_ptr, cumulation_value_ptr, batch_size, num_hash_f, hashtable_capacity, num_query, value_dim, weight_dim, value_offset, weight_idx ); } } } return cumulation_value; } at::Tensor lsh_weighted_cumulation_ver2_kernel( at::Tensor query_mask, at::Tensor query_hash_code, at::Tensor query_weight, at::Tensor key_mask, at::Tensor key_hash_code, at::Tensor key_weight, at::Tensor value, int hashtable_capacity, bool use_cuda ) { int batch_size = query_hash_code.size(0); int num_hash_f = query_hash_code.size(2); int num_query = query_hash_code.size(1); int num_key = 
key_hash_code.size(1); int value_dim = value.size(2); int weight_dim = query_weight.size(2); at::Tensor count_sort_table = at::zeros({batch_size, num_hash_f, hashtable_capacity}, query_hash_code.options()); at::Tensor key_sorted_idxes = at::zeros({batch_size, num_hash_f, num_key}, query_hash_code.options()); at::Tensor query_info = at::zeros({batch_size, num_query, 2, num_hash_f}, query_hash_code.options()); at::Tensor cumulation_value = at::zeros({batch_size, num_query, value_dim}, value.options()); if (use_cuda) { int *query_mask_ptr = query_mask.data_ptr<int>(); int *query_hash_code_ptr = query_hash_code.data_ptr<int>(); float *query_weight_ptr = query_weight.data_ptr<float>(); int *key_mask_ptr = key_mask.data_ptr<int>(); int *key_hash_code_ptr = key_hash_code.data_ptr<int>(); float *key_weight_ptr = key_weight.data_ptr<float>(); float *value_ptr = value.data_ptr<float>(); int *count_sort_table_ptr = count_sort_table.data_ptr<int>(); int *key_sorted_idxes_ptr = key_sorted_idxes.data_ptr<int>(); int *query_info_ptr = query_info.data_ptr<int>(); float *cumulation_value_ptr = cumulation_value.data_ptr<float>(); { dim3 threads_step13(num_hash_f, max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f)); dim3 blocks_step13(num_key / max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f), batch_size); dim3 threads_step2(min(hashtable_capacity, OPTIMAL_THREADS_PER_BLOCK)); dim3 blocks_step2(num_hash_f, batch_size); int shared_mem = hashtable_capacity * sizeof(float); count_sort_step1_cuda_kernel<<<blocks_step13, threads_step13>>>( key_mask_ptr, key_hash_code_ptr, count_sort_table_ptr, batch_size, num_hash_f, hashtable_capacity, num_key ); count_sort_step2_cuda_kernel<<<blocks_step2, threads_step2, shared_mem>>>( count_sort_table_ptr, batch_size, num_hash_f, hashtable_capacity ); count_sort_step3_cuda_kernel<<<blocks_step13, threads_step13>>>( key_mask_ptr, key_hash_code_ptr, count_sort_table_ptr, key_sorted_idxes_ptr, batch_size, num_hash_f, hashtable_capacity, num_key ); } { dim3 threads(num_hash_f, max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f)); dim3 blocks(num_query / max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f), batch_size); extract_query_info_cuda_kernel<<<blocks, threads>>>( query_mask_ptr, query_hash_code_ptr, count_sort_table_ptr, query_info_ptr, batch_size, num_hash_f, hashtable_capacity, num_query ); } { dim3 threads(WARP_SIZE, OPTIMAL_THREADS_PER_BLOCK / WARP_SIZE); dim3 blocks(num_query, num_hash_f, batch_size); int shared_mem = (weight_dim + WARP_SIZE) * sizeof(float); lsh_weighted_cumulation_ver2_step2_cuda_kernel<<<blocks, threads, shared_mem>>>( query_mask_ptr, query_info_ptr, key_sorted_idxes_ptr, query_weight_ptr, key_weight_ptr, value_ptr, cumulation_value_ptr, batch_size, num_hash_f, num_query, num_key, value_dim, weight_dim ); } } return cumulation_value; } at::Tensor lsh_weighted_cumulation_ver3_kernel( at::Tensor query_mask, at::Tensor query_hash_code, at::Tensor query_weight, at::Tensor key_mask, at::Tensor key_hash_code, at::Tensor key_weight, at::Tensor value, int hashtable_capacity, bool use_cuda ) { int batch_size = query_hash_code.size(0); int num_hash_f = query_hash_code.size(2); int num_query = query_hash_code.size(1); int num_key = key_hash_code.size(1); int value_dim = value.size(2); int weight_dim = query_weight.size(2); at::Tensor count_sort_table = at::zeros({batch_size, num_hash_f, hashtable_capacity}, query_hash_code.options()); at::Tensor query_sorted_idxes = at::zeros({batch_size, num_hash_f, num_query}, query_hash_code.options()); at::Tensor key_info = 
at::zeros({batch_size, num_key, 2, num_hash_f}, query_hash_code.options()); at::Tensor cumulation_value = at::zeros({batch_size, num_query, value_dim}, value.options()); if (use_cuda) { int *query_mask_ptr = query_mask.data_ptr<int>(); int *query_hash_code_ptr = query_hash_code.data_ptr<int>(); float *query_weight_ptr = query_weight.data_ptr<float>(); int *key_mask_ptr = key_mask.data_ptr<int>(); int *key_hash_code_ptr = key_hash_code.data_ptr<int>(); float *key_weight_ptr = key_weight.data_ptr<float>(); float *value_ptr = value.data_ptr<float>(); int *count_sort_table_ptr = count_sort_table.data_ptr<int>(); int *query_sorted_idxes_ptr = query_sorted_idxes.data_ptr<int>(); int *key_info_ptr = key_info.data_ptr<int>(); float *cumulation_value_ptr = cumulation_value.data_ptr<float>(); { dim3 threads_step13(num_hash_f, max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f)); dim3 blocks_step13(num_query / max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f), batch_size); dim3 threads_step2(min(hashtable_capacity, OPTIMAL_THREADS_PER_BLOCK)); dim3 blocks_step2(num_hash_f, batch_size); int shared_mem = hashtable_capacity * sizeof(float); count_sort_step1_cuda_kernel<<<blocks_step13, threads_step13>>>( query_mask_ptr, query_hash_code_ptr, count_sort_table_ptr, batch_size, num_hash_f, hashtable_capacity, num_query ); count_sort_step2_cuda_kernel<<<blocks_step2, threads_step2, shared_mem>>>( count_sort_table_ptr, batch_size, num_hash_f, hashtable_capacity ); count_sort_step3_cuda_kernel<<<blocks_step13, threads_step13>>>( query_mask_ptr, query_hash_code_ptr, count_sort_table_ptr, query_sorted_idxes_ptr, batch_size, num_hash_f, hashtable_capacity, num_query ); } { dim3 threads(num_hash_f, max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f)); dim3 blocks(num_key / max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f), batch_size); extract_query_info_cuda_kernel<<<blocks, threads>>>( key_mask_ptr, key_hash_code_ptr, count_sort_table_ptr, key_info_ptr, batch_size, num_hash_f, hashtable_capacity, num_key ); } { dim3 threads(WARP_SIZE, OPTIMAL_THREADS_PER_BLOCK / WARP_SIZE); dim3 blocks(num_key, num_hash_f, batch_size); int shared_mem = (weight_dim + value_dim + WARP_SIZE) * sizeof(float); lsh_weighted_cumulation_ver3_step2_cuda_kernel<<<blocks, threads, shared_mem>>>( query_sorted_idxes_ptr, key_mask_ptr, key_info_ptr, query_weight_ptr, key_weight_ptr, value_ptr, cumulation_value_ptr, batch_size, num_hash_f, num_query, num_key, value_dim, weight_dim ); } } return cumulation_value; } at::Tensor lsh_weighted_cumulation_ver4_kernel( at::Tensor query_mask, at::Tensor query_hash_code, at::Tensor query_weight, at::Tensor key_mask, at::Tensor key_hash_code, at::Tensor key_weight, at::Tensor value, int hashtable_capacity, bool use_cuda ) { int batch_size = query_hash_code.size(0); int num_hash_f = query_hash_code.size(2); int num_query = query_hash_code.size(1); int num_key = key_hash_code.size(1); int value_dim = value.size(2); int weight_dim = query_weight.size(2); at::Tensor count_sort_table = at::zeros({batch_size, num_hash_f, hashtable_capacity}, query_hash_code.options()); at::Tensor query_sorted_idxes = at::zeros({batch_size, num_hash_f, num_query}, query_hash_code.options()); at::Tensor key_info = at::zeros({batch_size, num_key, 2, num_hash_f}, query_hash_code.options()); at::Tensor cumulation_value = at::zeros({batch_size, num_query, value_dim}, value.options()); if (use_cuda) { int *query_mask_ptr = query_mask.data_ptr<int>(); int *query_hash_code_ptr = query_hash_code.data_ptr<int>(); float *query_weight_ptr = 
query_weight.data_ptr<float>(); int *key_mask_ptr = key_mask.data_ptr<int>(); int *key_hash_code_ptr = key_hash_code.data_ptr<int>(); float *key_weight_ptr = key_weight.data_ptr<float>(); float *value_ptr = value.data_ptr<float>(); int *count_sort_table_ptr = count_sort_table.data_ptr<int>(); int *query_sorted_idxes_ptr = query_sorted_idxes.data_ptr<int>(); int *key_info_ptr = key_info.data_ptr<int>(); float *cumulation_value_ptr = cumulation_value.data_ptr<float>(); { dim3 threads_step13(num_hash_f, max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f)); dim3 blocks_step13(num_query / max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f), batch_size); dim3 threads_step2(min(hashtable_capacity, OPTIMAL_THREADS_PER_BLOCK)); dim3 blocks_step2(num_hash_f, batch_size); int shared_mem = hashtable_capacity * sizeof(float); count_sort_step1_cuda_kernel<<<blocks_step13, threads_step13>>>( query_mask_ptr, query_hash_code_ptr, count_sort_table_ptr, batch_size, num_hash_f, hashtable_capacity, num_query ); count_sort_step2_cuda_kernel<<<blocks_step2, threads_step2, shared_mem>>>( count_sort_table_ptr, batch_size, num_hash_f, hashtable_capacity ); count_sort_step3_cuda_kernel<<<blocks_step13, threads_step13>>>( query_mask_ptr, query_hash_code_ptr, count_sort_table_ptr, query_sorted_idxes_ptr, batch_size, num_hash_f, hashtable_capacity, num_query ); } { dim3 threads(num_hash_f, max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f)); dim3 blocks(num_key / max(1, OPTIMAL_THREADS_PER_BLOCK / num_hash_f), batch_size); extract_query_info_cuda_kernel<<<blocks, threads>>>( key_mask_ptr, key_hash_code_ptr, count_sort_table_ptr, key_info_ptr, batch_size, num_hash_f, hashtable_capacity, num_key ); } { dim3 threads(WARP_SIZE, OPTIMAL_THREADS_PER_BLOCK / WARP_SIZE); dim3 blocks(num_key, batch_size); int shared_mem = (weight_dim + value_dim + 2 * num_hash_f) * sizeof(float); lsh_weighted_cumulation_ver4_step2_cuda_kernel<<<blocks, threads, shared_mem>>>( query_sorted_idxes_ptr, key_mask_ptr, key_info_ptr, query_weight_ptr, key_weight_ptr, value_ptr, cumulation_value_ptr, batch_size, num_hash_f, num_query, num_key, value_dim, weight_dim ); } } return cumulation_value; }
transformers/src/transformers/kernels/yoso/fast_lsh_cumulation.cu/0
{ "file_path": "transformers/src/transformers/kernels/yoso/fast_lsh_cumulation.cu", "repo_id": "transformers", "token_count": 8662 }
328
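The `.cu` source above only contains the host-side launch wrappers (`fast_hash_ver1_kernel`, `lsh_cumulation_ver1_kernel`, and the weighted variants); on its own it is not importable from Python. The sketch below shows one common way such sources are JIT-compiled with `torch.utils.cpp_extension.load`. The binding file name and the attribute name used for the hash function are assumptions for illustration, not taken from the file above, and a CUDA toolchain plus GPU are required.

```python
# Sketch: JIT-compiling the YOSO LSH sources as a PyTorch extension.
# The pybind11 entry point "fast_lsh_cumulation_torch.cpp" and the exposed attribute name
# "fast_hash_ver1_kernel" are assumed here; the actual binding may register different names.
from pathlib import Path

import torch
from torch.utils.cpp_extension import load

kernel_dir = Path("src/transformers/kernels/yoso")  # adjust to your checkout
lsh_cumulation = load(
    name="fast_lsh_cumulation",
    sources=[
        str(kernel_dir / "fast_lsh_cumulation_torch.cpp"),  # assumed binding file
        str(kernel_dir / "fast_lsh_cumulation.cu"),
        str(kernel_dir / "fast_lsh_cumulation_cuda.cu"),
    ],
    verbose=True,
)

# fast_hash_ver1_kernel(query_mask, query_vector, key_mask, key_vector, num_hash_f, hash_code_len, use_cuda)
# expects int32 masks of shape (batch, seq) and float32 vectors of shape (batch, seq, dim),
# with dim a multiple of hash_code_len; it returns int32 hash codes of shape (batch, seq, num_hash_f).
batch, seq, dim, num_hash_f, hash_code_len = 2, 8, 64, 16, 8
query_mask = torch.ones(batch, seq, dtype=torch.int32, device="cuda")
query_vec = torch.randn(batch, seq, dim, device="cuda")
key_mask = torch.ones(batch, seq, dtype=torch.int32, device="cuda")
key_vec = torch.randn(batch, seq, dim, device="cuda")

query_codes, key_codes = lsh_cumulation.fast_hash_ver1_kernel(
    query_mask, query_vec, key_mask, key_vec, num_hash_f, hash_code_len, True
)
print(query_codes.shape, key_codes.shape)
```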
# coding=utf-8 # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """TF general model utils.""" from __future__ import annotations import functools import gc import inspect import json import os import pickle import re import warnings from collections.abc import Mapping from pathlib import Path from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union import h5py import numpy as np import tensorflow as tf from packaging.version import parse from . import DataCollatorWithPadding, DefaultDataCollator from .activations_tf import get_tf_activation from .configuration_utils import PretrainedConfig from .dynamic_module_utils import custom_object_save from .generation import GenerationConfig, TFGenerationMixin from .tf_utils import ( convert_batch_encoding, expand_1d, load_attributes_from_hdf5_group, save_attributes_to_hdf5_group, shape_list, ) from .utils import ( SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, TF2_WEIGHTS_INDEX_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME, ModelOutput, PushToHubMixin, cached_file, download_url, find_labels, has_file, is_offline_mode, is_remote_url, is_safetensors_available, is_tf_symbolic_tensor, logging, requires_backends, working_or_temp_dir, ) from .utils.hub import convert_file_size_to_int, get_checkpoint_shard_files if is_safetensors_available(): from safetensors import safe_open from safetensors.tensorflow import save_file as safe_save_file if TYPE_CHECKING: from . import PreTrainedTokenizerBase logger = logging.get_logger(__name__) if "TF_USE_LEGACY_KERAS" not in os.environ: os.environ["TF_USE_LEGACY_KERAS"] = "1" # Compatibility fix to make sure tf.keras stays at Keras 2 elif os.environ["TF_USE_LEGACY_KERAS"] != "1": logger.warning( "Transformers is only compatible with Keras 2, but you have explicitly set `TF_USE_LEGACY_KERAS` to `0`. " "This may result in unexpected behaviour or errors if Keras 3 objects are passed to Transformers models." ) try: import tf_keras as keras from tf_keras import backend as K except (ModuleNotFoundError, ImportError): import keras from keras import backend as K if parse(keras.__version__).major > 2: raise ValueError( "Your currently installed version of Keras is Keras 3, but this is not yet supported in " "Transformers. Please install the backwards-compatible tf-keras package with " "`pip install tf-keras`." ) tf_logger = tf.get_logger() TFModelInputType = Union[ List[tf.Tensor], List[np.ndarray], Dict[str, tf.Tensor], Dict[str, np.ndarray], tf.Tensor, np.ndarray, ] def dummy_loss(y_true, y_pred): if y_pred.shape.rank <= 1: return y_pred else: reduction_axes = list(range(1, y_pred.shape.rank)) return tf.reduce_mean(y_pred, axis=reduction_axes) class TFModelUtilsMixin: """ A few utilities for `keras.Model`, to be used as a mixin. 
""" def num_parameters(self, only_trainable: bool = False) -> int: """ Get the number of (optionally, trainable) parameters in the model. Args: only_trainable (`bool`, *optional*, defaults to `False`): Whether or not to return only the number of trainable parameters Returns: `int`: The number of parameters. """ if only_trainable: return int(sum(np.prod(w.shape.as_list()) for w in self.trainable_variables)) else: return self.count_params() def keras_serializable(cls): """ Decorate a Keras Layer class to support Keras serialization. This is done by: 1. Adding a `transformers_config` dict to the Keras config dictionary in `get_config` (called by Keras at serialization time. 2. Wrapping `__init__` to accept that `transformers_config` dict (passed by Keras at deserialization time) and convert it to a config object for the actual layer initializer. 3. Registering the class as a custom object in Keras (if the Tensorflow version supports this), so that it does not need to be supplied in `custom_objects` in the call to `keras.models.load_model`. Args: cls (a `keras.layers.Layers subclass`): Typically a `TF.MainLayer` class in this project, in general must accept a `config` argument to its initializer. Returns: The same class object, with modifications for Keras deserialization. """ initializer = cls.__init__ config_class = getattr(cls, "config_class", None) if config_class is None: raise AttributeError("Must set `config_class` to use @keras_serializable") @functools.wraps(initializer) def wrapped_init(self, *args, **kwargs): config = args[0] if args and isinstance(args[0], PretrainedConfig) else kwargs.pop("config", None) if isinstance(config, dict): config = config_class.from_dict(config) initializer(self, config, *args, **kwargs) elif isinstance(config, PretrainedConfig): if len(args) > 0: initializer(self, *args, **kwargs) else: initializer(self, config, *args, **kwargs) else: raise ValueError("Must pass either `config` (PretrainedConfig) or `config` (dict)") self._config = config self._kwargs = kwargs cls.__init__ = wrapped_init if not hasattr(cls, "get_config"): raise TypeError("Only use @keras_serializable on keras.layers.Layer subclasses") if hasattr(cls.get_config, "_is_default"): def get_config(self): cfg = super(cls, self).get_config() cfg["config"] = self._config.to_dict() cfg.update(self._kwargs) return cfg cls.get_config = get_config cls._keras_serializable = True if hasattr(keras.utils, "register_keras_serializable"): cls = keras.utils.register_keras_serializable()(cls) return cls class TFCausalLanguageModelingLoss: """ Loss function suitable for causal language modeling (CLM), that is, the task of guessing the next token. <Tip> Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. 
</Tip> """ def hf_compute_loss(self, labels, logits): loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction=keras.losses.Reduction.NONE) if self.config.tf_legacy_loss: # make sure only labels that are not equal to -100 affect the loss active_loss = tf.not_equal(tf.reshape(labels, (-1,)), -100) reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss) return loss_fn(labels, reduced_logits) # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway unmasked_loss = loss_fn(tf.nn.relu(labels), logits) # make sure only labels that are not equal to -100 affect the loss loss_mask = tf.cast(labels != -100, dtype=unmasked_loss.dtype) masked_loss = unmasked_loss * loss_mask reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(loss_mask) return tf.reshape(reduced_masked_loss, (1,)) class TFQuestionAnsweringLoss: """ Loss function suitable for question answering. """ def hf_compute_loss(self, labels, logits): loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction=keras.losses.Reduction.NONE) start_loss = loss_fn(labels["start_position"], logits[0]) end_loss = loss_fn(labels["end_position"], logits[1]) return (start_loss + end_loss) / 2.0 class TFTokenClassificationLoss: """ Loss function suitable for token classification. <Tip> Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. </Tip> """ def hf_compute_loss(self, labels, logits): loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction=keras.losses.Reduction.NONE) if tf.executing_eagerly(): # Data-dependent conditionals are forbidden in XLA if tf.math.reduce_any(labels == -1): tf.print("Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.") if self.config.tf_legacy_loss: # make sure only labels that are not equal to -100 # are taken into account as loss if tf.math.reduce_any(labels == -1): tf.print("Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.") active_loss = tf.reshape(labels, (-1,)) != -1 else: active_loss = tf.reshape(labels, (-1,)) != -100 reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss) return loss_fn(labels, reduced_logits) # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway unmasked_loss = loss_fn(tf.nn.relu(labels), logits) # make sure only labels that are not equal to -100 or -1 # are taken into account as loss loss_mask = tf.cast(labels >= 0, dtype=unmasked_loss.dtype) # Avoid possible division by zero later # Masked positions will have a loss of NaN because -100 and -1 are not valid labels masked_loss = unmasked_loss * loss_mask reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(loss_mask) return tf.reshape(reduced_masked_loss, (1,)) class TFSequenceClassificationLoss: """ Loss function suitable for sequence classification. 
""" def hf_compute_loss(self, labels, logits): if logits.shape.rank == 1 or logits.shape[1] == 1: loss_fn = keras.losses.MeanSquaredError(reduction=keras.losses.Reduction.NONE) if labels.shape.rank == 1: # MeanSquaredError returns a scalar loss if the labels are 1D, so avoid that labels = tf.expand_dims(labels, axis=-1) else: loss_fn = keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction=keras.losses.Reduction.NONE ) return loss_fn(labels, logits) class TFMultipleChoiceLoss: """Loss function suitable for multiple choice tasks.""" def hf_compute_loss(self, labels, logits): loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction=keras.losses.Reduction.NONE) return loss_fn(labels, logits) class TFMaskedLanguageModelingLoss(TFCausalLanguageModelingLoss): """ Loss function suitable for masked language modeling (MLM), that is, the task of guessing the masked tokens. <Tip> Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. </Tip> """ class TFNextSentencePredictionLoss: """ Loss function suitable for next sentence prediction (NSP), that is, the task of guessing the next sentence. <Tip> Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. </Tip> """ def hf_compute_loss(self, labels, logits): loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction=keras.losses.Reduction.NONE) if self.config.tf_legacy_loss: # make sure only labels that are not equal to -100 # are taken into account as loss next_sentence_active_loss = tf.not_equal(tf.reshape(labels, (-1,)), -100) next_sentence_reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, 2)), next_sentence_active_loss) next_sentence_label = tf.boolean_mask(tf.reshape(labels, (-1,)), next_sentence_active_loss) return loss_fn(next_sentence_label, next_sentence_reduced_logits) # make sure only labels that are not equal to -100 # are taken into account as loss # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway unmasked_ns_loss = loss_fn(y_true=tf.nn.relu(labels), y_pred=logits) ns_loss_mask = tf.cast(labels != -100, dtype=unmasked_ns_loss.dtype) # Just zero out samples where label is -100, no reduction masked_ns_loss = unmasked_ns_loss * ns_loss_mask return masked_ns_loss def booleans_processing(config, **kwargs): """ Process the input booleans of each model. Args: config ([`PretrainedConfig`]): The config of the running model. **kwargs: The boolean parameters Returns: A dictionary with the proper values for each boolean """ final_booleans = {} # Pure conv models (such as ConvNext) do not have `output_attentions`. 
If the signature has # `output_attentions`, it will be present here in `kwargs`, even if unset (in that case, as `None`) if "output_attentions" in kwargs: final_booleans["output_attentions"] = ( kwargs["output_attentions"] if kwargs["output_attentions"] is not None else config.output_attentions ) final_booleans["output_hidden_states"] = ( kwargs["output_hidden_states"] if kwargs["output_hidden_states"] is not None else config.output_hidden_states ) final_booleans["return_dict"] = kwargs["return_dict"] if kwargs["return_dict"] is not None else config.return_dict if "use_cache" in kwargs: final_booleans["use_cache"] = ( kwargs["use_cache"] if kwargs["use_cache"] is not None else getattr(config, "use_cache", None) ) return final_booleans def unpack_inputs(func): """ Decorator that processes the inputs to a Keras layer, passing them to the layer as keyword arguments. This enables downstream use of the inputs by their variable name, even if they arrive packed as a dictionary in the first input (common case in Keras). Args: func (`callable`): The callable function of the TensorFlow model. Returns: A callable that wraps the original `func` with the behavior described above. """ original_signature = inspect.signature(func) @functools.wraps(func) def run_call_with_unpacked_inputs(self, *args, **kwargs): # isolates the actual `**kwargs` for the decorated function kwargs_call = {key: val for key, val in kwargs.items() if key not in dict(original_signature.parameters)} fn_args_and_kwargs = {key: val for key, val in kwargs.items() if key not in kwargs_call} fn_args_and_kwargs.update({"kwargs_call": kwargs_call}) # move any arg into kwargs, if they exist fn_args_and_kwargs.update(dict(zip(func.__code__.co_varnames[1:], args))) # Encoder Decoder models delegate the application of the configuration options to their inner models. if "EncoderDecoder" in self.__class__.__name__: config = None else: config = self.config unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs) return func(self, **unpacked_inputs) # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This # function does not follow wrapper chains (i.e. ignores `functools.wraps()`), meaning that without the line below # Keras would attempt to check the first argument against the literal signature of the wrapper. run_call_with_unpacked_inputs.__signature__ = original_signature return run_call_with_unpacked_inputs def input_processing(func, config, **kwargs): """ Process the input of each TensorFlow model including the booleans. In case of a list of symbolic inputs, each input has to be named accordingly to the parameters name, i.e. `input_ids = keras.Input(shape=(128,), dtype='int32', name="input_ids")` otherwise the order of the tensors will not be guaranteed during the training. Args: func (`callable`): The callable function of the TensorFlow model. config ([`PretrainedConfig`]): The config of the running model. **kwargs: The inputs of the model. Returns: Two lists, one for the missing layers, and another one for the unexpected layers. 
""" signature = dict(inspect.signature(func).parameters) has_kwargs = bool(signature.pop("kwargs", None)) signature.pop("self", None) parameter_names = list(signature.keys()) main_input_name = parameter_names[0] main_input = kwargs.pop(main_input_name, None) output = {} allowed_types = (tf.Tensor, bool, int, ModelOutput, tuple, list, dict, np.ndarray) if "inputs" in kwargs["kwargs_call"]: warnings.warn( "The `inputs` argument is deprecated and will be removed in a future version, use `input_ids` instead.", FutureWarning, ) output["input_ids"] = kwargs["kwargs_call"].pop("inputs") if "decoder_cached_states" in kwargs["kwargs_call"]: warnings.warn( "The `decoder_cached_states` argument is deprecated and will be removed in a future version, use" " `past_key_values` instead.", FutureWarning, ) output["past_key_values"] = kwargs["kwargs_call"].pop("decoder_cached_states") if "past" in kwargs["kwargs_call"] and "past_key_values" in parameter_names: warnings.warn( "The `past` argument is deprecated and will be removed in a future version, use `past_key_values`" " instead.", FutureWarning, ) kwargs["past_key_values"] = kwargs["kwargs_call"].pop("past") elif "past_key_values" in kwargs["kwargs_call"] and "past" in parameter_names: kwargs["past"] = kwargs["kwargs_call"].pop("past_key_values") if has_kwargs: output["kwargs"] = kwargs.pop("kwargs_call", {}) else: if len(kwargs["kwargs_call"]) > 0: raise ValueError( "The following keyword arguments are not supported by this model:" f" {list(kwargs['kwargs_call'].keys())}." ) kwargs.pop("kwargs_call") for k, v in kwargs.items(): if isinstance(v, allowed_types) or tf.is_tensor(v) or v is None: output[k] = v else: raise ValueError(f"Data of type {type(v)} is not allowed only {allowed_types} is accepted for {k}.") if isinstance(main_input, (tuple, list)): for i, input in enumerate(main_input): # EagerTensors don't allow to use the .name property so we check for a real Tensor if is_tf_symbolic_tensor(input): # Tensor names have always the pattern `name:id` then we check only the # `name` part tensor_name = input.name.split(":")[0] if tensor_name in parameter_names: output[tensor_name] = input else: output[parameter_names[i]] = input elif isinstance(input, allowed_types) or input is None: output[parameter_names[i]] = input else: raise ValueError( f"Data of type {type(input)} is not allowed only {allowed_types} is accepted for" f" {parameter_names[i]}." ) elif isinstance(main_input, Mapping): if "inputs" in main_input: warnings.warn( "The `inputs` argument is deprecated and will be removed in a future version, use `input_ids`" " instead.", FutureWarning, ) output["input_ids"] = main_input.pop("inputs") if "decoder_cached_states" in main_input: warnings.warn( "The `decoder_cached_states` argument is deprecated and will be removed in a future version, use" " `past_key_values` instead.", FutureWarning, ) output["past_key_values"] = main_input.pop("decoder_cached_states") for k, v in dict(main_input).items(): if isinstance(v, allowed_types) or v is None: output[k] = v elif k not in parameter_names and "args" not in parameter_names: logger.warning( f"The parameter {k} does not belongs to the parameter list {parameter_names} and will be ignored." 
) continue else: raise ValueError(f"Data of type {type(v)} is not allowed only {allowed_types} is accepted for {k}.") else: if tf.is_tensor(main_input) or main_input is None: output[main_input_name] = main_input else: raise ValueError( f"Data of type {type(main_input)} is not allowed only {allowed_types} is accepted for" f" {main_input_name}." ) # Populates any unspecified argument with their default value, according to the signature. for name in parameter_names: if name not in list(output.keys()) and name != "args": output[name] = kwargs.pop(name, signature[name].default) # When creating a SavedModel TF calls the method with LayerCall.__call__(args, **kwargs) # So to respect the proper output we have to add this exception if "args" in output: if output["args"] is not None and is_tf_symbolic_tensor(output["args"]): tensor_name = output["args"].name.split(":")[0] output[tensor_name] = output["args"] else: # `args` in this case is always the first parameter, then `input_ids` output["input_ids"] = output["args"] del output["args"] if "kwargs" in output: del output["kwargs"] cast_output = {} for key, val in output.items(): if isinstance(val, tf.Tensor) and val.dtype == tf.int64: cast_output[key] = tf.cast(val, tf.int32) elif isinstance(val, np.ndarray) and val.dtype == np.int64: cast_output[key] = val.astype(np.int32) else: cast_output[key] = val output = cast_output del cast_output if config is not None: boolean_dict = { k: v for k, v in output.items() if k in ["return_dict", "output_attentions", "output_hidden_states", "use_cache"] } output.update( booleans_processing( config=config, **boolean_dict, ) ) return output def dtype_byte_size(dtype): """ Returns the size (in bytes) occupied by one parameter of type `dtype`. Example: ```py >>> dtype_byte_size(tf.float32) 4 ``` """ if dtype == tf.bool: return 1 / 8 bit_search = re.search(r"[^\d](\d+)$", dtype.name) if bit_search is None: raise ValueError(f"`dtype` is not a valid dtype: {dtype}.") bit_size = int(bit_search.groups()[0]) return bit_size // 8 def strip_model_name_and_prefix(name, _prefix=None): if _prefix is not None and name.startswith(_prefix): name = name[len(_prefix) :] if name.startswith("/"): name = name[1:] if "model." not in name and len(name.split("/")) > 1: name = "/".join(name.split("/")[1:]) return name def tf_shard_checkpoint(weights, max_shard_size="10GB", weights_name: str = TF2_WEIGHTS_NAME): """ Splits a model state dictionary in sub-checkpoints so that the final size of each sub-checkpoint does not exceed a given size. The sub-checkpoints are determined by iterating through the `state_dict` in the order of its keys, so there is no optimization made to make each sub-checkpoint as close as possible to the maximum size passed. For example, if the limit is 10GB and we have weights of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB]. <Tip warning={true}> If one of the model's weight is bigger that `max_shard_size`, it will end up in its own sub-checkpoint which will have a size greater than `max_shard_size`. </Tip> Args: weights (`Dict[str, tf.RessourceVariable]`): The list of tf.RessourceVariable of a model to save. max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`): The maximum size of each sub-checkpoint. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). 
""" max_shard_size = convert_file_size_to_int(max_shard_size) sharded_state_dicts = [] current_block = [] current_block_size = 0 total_size = 0 for item in weights: weight_size = item.numpy().size * dtype_byte_size(item.dtype) # If this weight is going to tip up over the maximal size, we split. if current_block_size + weight_size > max_shard_size: sharded_state_dicts.append(current_block) current_block = [] current_block_size = 0 current_block.append(item) current_block_size += weight_size total_size += weight_size # Add the last block sharded_state_dicts.append(current_block) # If we only have one shard, we return it if len(sharded_state_dicts) == 1: return {weights_name: sharded_state_dicts[0]}, None # Otherwise, let's build the index weight_map = {} shards = {} for idx, shard in enumerate(sharded_state_dicts): shard_file = weights_name.replace(".h5", f"-{idx+1:05d}-of-{len(sharded_state_dicts):05d}.h5") shard_file = shard_file.replace( ".safetensors", f"-{idx + 1:05d}-of-{len(sharded_state_dicts):05d}.safetensors" ) shards[shard_file] = shard for weight in shard: weight_name = weight.name weight_map[weight_name] = shard_file # Add the metadata metadata = {"total_size": total_size} index = {"metadata": metadata, "weight_map": weight_map} return shards, index def load_tf_sharded_weights(model, shard_files, ignore_mismatched_sizes=False, strict=False, _prefix=None): """ This is the same as `load_tf_weights` but for a sharded checkpoint. Detect missing and unexpected layers and load the TF weights from the shard file accordingly to their names and shapes. This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being loaded in the model. Args: model (`keras.models.Model`): The model in which to load the checkpoint. shard_files (`str` or `os.PathLike`): A list containing the sharded checkpoint names. ignore_mismatched_sizes`bool`, *optional`, defaults to `True`): Whether or not to ignore the mismatch between the sizes strict (`bool`, *optional*, defaults to `True`): Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint. Returns: Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the mismatched layers. """ # Load the index unexpected_keys = set() saved_keys = set() mismatched_keys = set() # Since TF adds the name of the class to its weights, and uses the index and not the name of the layer to load # the weight, we have to get rid of the first prefix of the name of the layer. model_keys = set() model_layer_map = {} for i, k in enumerate(model.weights): layer_name = k.name if _prefix is not None and layer_name.startswith(_prefix): layer_name = layer_name[len(_prefix) :] layer_name = layer_name.lstrip("/") if not ("model." 
in layer_name or len(layer_name.split("/")) == 1): layer_name = "/".join(layer_name.split("/")[1:]) model_keys.add(layer_name) model_layer_map[layer_name] = i for shard_file in shard_files: saved_weight_names_set, unexpected_keys_set, mismatched_keys_set = load_tf_shard( model, model_layer_map, shard_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=_prefix, ) saved_keys.update(saved_weight_names_set) unexpected_keys.update(unexpected_keys_set) mismatched_keys.update(mismatched_keys_set) gc.collect() missing_keys = model_keys - saved_keys if strict and (len(missing_keys) > 0 or len(unexpected_keys) > 0): error_message = f"Error(s) in loading state_dict for {model.__class__.__name__}" if len(missing_keys) > 0: str_missing_keys = ",".join([f'"{k}"' for k in missing_keys]) error_message += f"\nMissing key(s): {str_missing_keys}." if len(unexpected_keys) > 0: str_unexpected_keys = ",".join([f'"{k}"' for k in unexpected_keys]) error_message += f"\nMissing key(s): {str_unexpected_keys}." raise RuntimeError(error_message) return missing_keys, unexpected_keys, mismatched_keys def load_tf_shard(model, model_layer_map, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): """ Loads a shard from a sharded checkpoint file. Can be either H5 or Safetensors. Handles missing keys and unexpected keys. Args: model (`keras.models.Model`): Model in which the weights are loaded model_layer_map (`Dict`): A dictionary mapping the layer name to the index of the layer in the model. resolved_archive_file (`str`): Path to the checkpoint file from which the weights will be loaded ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`): Whether to ignore the mismatched keys Returns: `keras.models.Model`: Three lists, one for the layers that were found and succesfully restored (from the shard file), one for the mismatched layers, and another one for the unexpected layers. """ saved_weight_names_set = set() saved_weights = {} mismatched_keys = set() unexpected_keys = set() # Read the H5 file try: with h5py.File(resolved_archive_file, "r") as sharded_checkpoint_file: # Retrieve the name of each layer from the H5 file saved_h5_model_layers_name = set(load_attributes_from_hdf5_group(sharded_checkpoint_file, "layer_names")) weight_value_tuples = [] # Compute missing and unexpected sub layers # Store the weights in list of tuples that looks like [(weight_object, value_of_weight),...] 
for layer_name in saved_h5_model_layers_name: h5_layer_object = sharded_checkpoint_file[layer_name] saved_weights[layer_name] = np.asarray(h5_layer_object) saved_weight_names_set.add(layer_name) if layer_name not in model_layer_map: unexpected_keys.add(layer_name) else: symbolic_weight = model.weights[model_layer_map[layer_name]] saved_weight_value = saved_weights[layer_name] # If the current weight is found if saved_weight_value is not None: # Check if the shape of the current weight and the one from the H5 file are different if K.int_shape(symbolic_weight) != saved_weight_value.shape: # If yes we reshape the weight from the H5 file accordingly to the current weight # If the two shapes are not compatible we raise an issue try: array = np.reshape(saved_weight_value, K.int_shape(symbolic_weight)) except ValueError as e: if ignore_mismatched_sizes: mismatched_keys.add( (layer_name, saved_weight_value.shape, K.int_shape(symbolic_weight)) ) continue else: raise e else: array = saved_weight_value # We create the tuple that will be loaded and add it to the final list weight_value_tuples.append((symbolic_weight, array)) K.batch_set_value(weight_value_tuples) return saved_weight_names_set, unexpected_keys, mismatched_keys except Exception as e: try: with open(resolved_archive_file) as f: if f.read().startswith("version"): raise OSError( "You seem to have cloned a repository without having git-lfs installed. Please install " "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " "you cloned." ) else: raise ValueError( f"Unable to locate the file {resolved_archive_file} which is necessary to load this pretrained" " model. Make sure you have saved the model properly." ) from e except (UnicodeDecodeError, ValueError): raise OSError( f"Unable to load weights from TF checkpoint file for '{resolved_archive_file}' " f"at '{resolved_archive_file}'. " "If you tried to load a TF model from a sharded checkpoint, you should try converting the model " "by loading it in pytorch and saving it localy. A convertion script should be realeased soon." ) def load_tf_sharded_weights_from_safetensors( model, shard_files, ignore_mismatched_sizes=False, strict=False, _prefix=None ): """ This is the same as `load_tf_weights_from_safetensors` but for a sharded TF-format safetensors checkpoint. Detect missing and unexpected layers and load the TF weights from the shard file accordingly to their names and shapes. This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being loaded in the model. Args: model (`keras.models.Model`): The model in which to load the checkpoint. shard_files (`str` or `os.PathLike`): A list containing the sharded checkpoint names. ignore_mismatched_sizes`bool`, *optional`, defaults to `True`): Whether or not to ignore the mismatch between the sizes strict (`bool`, *optional*, defaults to `True`): Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint. Returns: Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the mismatched layers. 
""" # Load the index unexpected_keys = set() all_missing_keys = [] mismatched_keys = set() for shard_file in shard_files: missing_layers, unexpected_layers, mismatched_layers = load_tf_weights_from_safetensors( model, shard_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=_prefix, ) all_missing_keys.append(set(missing_layers)) unexpected_keys.update(unexpected_layers) mismatched_keys.update(mismatched_layers) gc.collect() missing_keys = set.intersection(*all_missing_keys) if strict and (len(missing_keys) > 0 or len(unexpected_keys) > 0): error_message = f"Error(s) in loading state_dict for {model.__class__.__name__}" if len(missing_keys) > 0: str_missing_keys = ",".join([f'"{k}"' for k in missing_keys]) error_message += f"\nMissing key(s): {str_missing_keys}." if len(unexpected_keys) > 0: str_unexpected_keys = ",".join([f'"{k}"' for k in unexpected_keys]) error_message += f"\nMissing key(s): {str_unexpected_keys}." raise RuntimeError(error_message) return missing_keys, unexpected_keys, mismatched_keys def load_tf_weights(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): """ Detect missing and unexpected layers and load the TF weights from the shard file accordingly to their names and shapes. Args: model (`keras.models.Model`): The model to load the weights into. resolved_archive_file (`str`): The location of the H5 file. ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`): Whether or not to ignore weights with shapes that don't match between the checkpoint of the model. Returns: Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the mismatched layers. """ if resolved_archive_file.endswith(".safetensors"): load_function = load_tf_weights_from_safetensors else: load_function = load_tf_weights_from_h5 return load_function( model, resolved_archive_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=_prefix ) def load_tf_weights_from_h5(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): mismatched_layers = [] # Read the H5 file with h5py.File(resolved_archive_file, "r") as sharded_checkpoint_file: # Retrieve the name of each layer from the H5 file saved_h5_model_layers_name = set(load_attributes_from_hdf5_group(sharded_checkpoint_file, "layer_names")) # Find the missing layers from the high level list of layers missing_layers = list({layer.name for layer in model.layers} - saved_h5_model_layers_name) # Find the unexpected layers from the high level list of layers unexpected_layers = list(saved_h5_model_layers_name - {layer.name for layer in model.layers}) saved_weight_names_set = set() symbolic_weights_names = set() weight_value_tuples = [] # Compute missing and unexpected sub layers # Store the weights in list of tuples that looks like [(weight_object, value_of_weight),...] 
for layer in model.layers: # if layer_name from the H5 file belongs to the layers from the instantiated model if layer.name in saved_h5_model_layers_name: # Get the H5 layer object from its name h5_layer_object = sharded_checkpoint_file[layer.name] # Get all the weights as a list from the layer object symbolic_weights = layer.trainable_weights + layer.non_trainable_weights saved_weights = {} # Create a dict from the H5 saved model that looks like {"weight_name": weight_value} # And a set with only the names for weight_name in load_attributes_from_hdf5_group(h5_layer_object, "weight_names"): # TF names always start with the model name so we ignore it name = "/".join(weight_name.split("/")[1:]) if _prefix is not None: name = _prefix + "/" + name saved_weights[name] = np.asarray(h5_layer_object[weight_name]) # Add the updated name to the final list for computing missing/unexpected values saved_weight_names_set.add(name) # Loop over each weights from the instantiated model and compare with the weights from the H5 file for symbolic_weight in symbolic_weights: # TF names always start with the model name so we ignore it if _prefix is not None: delimeter = len(_prefix.split("/")) symbolic_weight_name = "/".join( symbolic_weight.name.split("/")[:delimeter] + symbolic_weight.name.split("/")[delimeter + 1 :] ) else: symbolic_weight_name = "/".join(symbolic_weight.name.split("/")[1:]) # here we check if the current weight is among the weights from the H5 file # If yes, get the weight_value of the corresponding weight from the H5 file # If not, make the value to None saved_weight_value = saved_weights.get(symbolic_weight_name, None) # Retrocompatibility patch: some embeddings are stored with the weights name (e.g. Bart's # `model.shared/embeddings:0` are stored as `model.shared/weights:0`) if saved_weight_value is None and symbolic_weight_name.endswith("embeddings:0"): symbolic_weight_name = symbolic_weight_name[:-12] + "weight:0" saved_weight_value = saved_weights.get(symbolic_weight_name, None) # Add the updated name to the final list for computing missing/unexpected values symbolic_weights_names.add(symbolic_weight_name) # If the current weight is found if saved_weight_value is not None: # Check if the shape of the current weight and the one from the H5 file are different if K.int_shape(symbolic_weight) != saved_weight_value.shape: # If yes we reshape the weight from the H5 file accordingly to the current weight # If the two shapes are not compatible we raise an issue try: array = np.reshape(saved_weight_value, K.int_shape(symbolic_weight)) except ValueError as e: if ignore_mismatched_sizes: mismatched_layers.append( (symbolic_weight_name, saved_weight_value.shape, K.int_shape(symbolic_weight)) ) continue else: raise e else: array = saved_weight_value # We create the tuple that will be loaded and add it to the final list weight_value_tuples.append((symbolic_weight, array)) # Load all the weights K.batch_set_value(weight_value_tuples) # Compute the missing and unexpected layers missing_layers.extend(list(symbolic_weights_names - saved_weight_names_set)) unexpected_layers.extend(list(saved_weight_names_set - symbolic_weights_names)) return missing_layers, unexpected_layers, mismatched_layers def load_tf_weights_from_safetensors(model, resolved_archive_file, ignore_mismatched_sizes=False, _prefix=None): # Read the safetensors file with safe_open(resolved_archive_file, framework="tf") as safetensors_archive: mismatched_layers = [] weight_names = [strip_model_name_and_prefix(w.name, _prefix=_prefix) for 
w in model.weights] loaded_weight_names = list(safetensors_archive.keys()) # Find the missing layers from the high level list of layers missing_layers = list(set(weight_names) - set(loaded_weight_names)) # Find the unexpected layers from the high level list of layers unexpected_layers = list(set(loaded_weight_names) - set(weight_names)) for weight in model.weights: weight_name = strip_model_name_and_prefix(weight.name, _prefix=_prefix) if weight_name in loaded_weight_names: weight_value = safetensors_archive.get_tensor(weight_name) # Check if the shape of the current weight and the one from the H5 file are different if K.int_shape(weight) != weight_value.shape: # If yes we reshape the weight from the H5 file accordingly to the current weight # If the two shapes are not compatible we raise an issue try: weight_value = tf.reshape(weight_value, K.int_shape(weight)) except (ValueError, tf.errors.InvalidArgumentError) as e: if ignore_mismatched_sizes: mismatched_layers.append((weight_name, weight_value.shape, K.int_shape(weight))) continue else: raise e K.set_value(weight, weight_value) # weight.assign() might break if weight is a DTensor return missing_layers, unexpected_layers, mismatched_layers def init_copy_embeddings(old_embeddings, new_num_tokens): r""" This function aims to reduce the embeddings in case new_num_tokens < old_num_tokens or to pad with -1 in case new_num_tokens > old_num_tokens. A mask is also computed in order to know which weight in the embeddings should be kept or not. Example: - if new_num_tokens=5 and old_num_tokens=4 and old_embeddings=[w1,w2,w3,w4] - mask=[True,True,True,True,False] and current_weights=[w1,w2,w3,w4,-1] - if new_num_tokens=4 and old_num_tokens=5 and old_embeddings=[w1,w2,w3,w4,w5] - mask=[True,True,True,True] and current_weights=[w1,w2,w3,w4] """ old_num_tokens, old_embedding_dim = shape_list(old_embeddings) size_diff = new_num_tokens - old_num_tokens # initialize new embeddings # Copy token embeddings from the previous ones if tf.math.greater(size_diff, 0): # if the new size is greater than the old one, we extend the current embeddings with a padding until getting new size # and we create a mask to properly identify the padded values and be replaced by the values of the newly created # embeddings current_weights = tf.pad( old_embeddings.value(), tf.convert_to_tensor([[0, size_diff], [0, 0]]), constant_values=-1 ) num_tokens_to_copy = min(old_num_tokens, new_num_tokens) mask = tf.fill(tf.convert_to_tensor([num_tokens_to_copy, 1]), True) mask = tf.pad(mask, tf.convert_to_tensor([[0, size_diff], [0, 0]]), constant_values=False) else: # if the new size if lower than the old one, we take the current embeddings until the new size current_weights = tf.slice( old_embeddings.value(), tf.convert_to_tensor([0, 0]), tf.convert_to_tensor([new_num_tokens, old_embedding_dim]), ) mask = tf.fill(tf.convert_to_tensor([new_num_tokens, 1]), True) return mask, current_weights class TFPreTrainedModel(keras.Model, TFModelUtilsMixin, TFGenerationMixin, PushToHubMixin): r""" Base class for all TF models. [`TFPreTrainedModel`] takes care of storing the configuration of the models and handles methods for loading, downloading and saving models as well as a few methods common to all models to: - resize the input embeddings, - prune heads in the self-attention heads. Class attributes (overridden by derived classes): - **config_class** ([`PretrainedConfig`]) -- A subclass of [`PretrainedConfig`] to use as configuration class for this model architecture. 
- **base_model_prefix** (`str`) -- A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model. - **main_input_name** (`str`) -- The name of the principal input to the model (often `input_ids` for NLP models, `pixel_values` for vision models and `input_values` for speech models). """ config_class = None base_model_prefix = "" main_input_name = "input_ids" _auto_class = None _using_dummy_loss = None _label_to_output_map = None # a list of re pattern of tensor names to ignore from the model when loading the model weights # (and avoid unnecessary warnings). _keys_to_ignore_on_load_missing = None # a list of re pattern of tensor names to ignore from the weights when loading the model weights # (and avoid unnecessary warnings). _keys_to_ignore_on_load_unexpected = None _requires_load_weight_prefix = False @property def dummy_inputs(self) -> Dict[str, tf.Tensor]: """ Dummy inputs to build the network. Returns: `Dict[str, tf.Tensor]`: The dummy inputs. """ dummies = {} for key, spec in self.input_signature.items(): # 2 is the most correct arbitrary size. I will not be taking questions dummy_shape = [dim if dim is not None else 2 for dim in spec.shape] if spec.shape[0] is None: # But let's make the batch size 1 to save memory anyway dummy_shape[0] = 1 dummies[key] = tf.ones(shape=dummy_shape, dtype=spec.dtype) if key == "token_type_ids": # Some models have token_type_ids but with a vocab_size of 1 dummies[key] = tf.zeros_like(dummies[key]) if self.config.add_cross_attention and "encoder_hidden_states" in inspect.signature(self.call).parameters: if "encoder_hidden_states" not in dummies: if self.main_input_name == "input_ids": dummies["encoder_hidden_states"] = tf.ones( shape=(1, 2, self.config.hidden_size), dtype=tf.float32, name="encoder_hidden_states" ) else: raise NotImplementedError( "Model has cross-attention but we couldn't infer the shape for the encoder hidden states. Please manually override dummy_inputs!" ) return dummies def build_in_name_scope(self): with tf.name_scope(self.name): self.build(input_shape=None) @property def framework(self) -> str: """ :str: Identifies that this is a TensorFlow model. """ return "tf" def build(self, input_shape=None): pass # This is just here to make sure we don't call the superclass build() def __init__(self, config, *inputs, **kwargs): super().__init__(*inputs, **kwargs) if not isinstance(config, PretrainedConfig): raise TypeError( f"Parameter config in `{self.__class__.__name__}(config)` should be an instance of class " "`PretrainedConfig`. 
To create a model from a pretrained model use " f"`model = {self.__class__.__name__}.from_pretrained(PRETRAINED_MODEL_NAME)`" ) # Save config and origin of the pretrained weights if given in model self.config = config self.name_or_path = config.name_or_path self.generation_config = GenerationConfig.from_model_config(config) if self.can_generate() else None self._set_save_spec(self.input_signature) def get_config(self): return self.config.to_dict() @functools.wraps(keras.Model.fit) def fit(self, *args, **kwargs): args, kwargs = convert_batch_encoding(*args, **kwargs) return super().fit(*args, **kwargs) @functools.wraps(keras.Model.train_on_batch) def train_on_batch(self, *args, **kwargs): args, kwargs = convert_batch_encoding(*args, **kwargs) return super().train_on_batch(*args, **kwargs) @functools.wraps(keras.Model.test_on_batch) def test_on_batch(self, *args, **kwargs): args, kwargs = convert_batch_encoding(*args, **kwargs) return super().test_on_batch(*args, **kwargs) @functools.wraps(keras.Model.predict_on_batch) def predict_on_batch(self, *args, **kwargs): args, kwargs = convert_batch_encoding(*args, **kwargs) return super().predict_on_batch(*args, **kwargs) @functools.wraps(keras.Model.predict) def predict(self, *args, **kwargs): args, kwargs = convert_batch_encoding(*args, **kwargs) return super().predict(*args, **kwargs) @functools.wraps(keras.Model.evaluate) def evaluate(self, *args, **kwargs): args, kwargs = convert_batch_encoding(*args, **kwargs) return super().evaluate(*args, **kwargs) @classmethod def from_config(cls, config, **kwargs): if isinstance(config, PretrainedConfig): return cls._from_config(config, **kwargs) return cls._from_config(cls.config_class.from_dict(config, **kwargs)) @classmethod def _from_config(cls, config, **kwargs): """ All context managers that the model should be initialized under go here. """ return cls(config, **kwargs) def get_head_mask(self, head_mask: tf.Tensor | None, num_hidden_layers: int) -> tf.Tensor: """ Prepare the head mask if needed. Args: head_mask (`tf.Tensor` with shape `[num_heads]` or `[num_hidden_layers x num_heads]`, *optional*): The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard). num_hidden_layers (`int`): The number of hidden layers in the model. Returns: `tf.Tensor` with shape `[num_hidden_layers x batch x num_heads x seq_length x seq_length]` or list with `[None]` for each layer. """ if head_mask is not None: head_mask = self._convert_head_mask_to_5d(head_mask, num_hidden_layers) else: head_mask = [None] * num_hidden_layers return head_mask def _convert_head_mask_to_5d(self, head_mask, num_hidden_layers): """-> [num_hidden_layers x batch x num_heads x seq_length x seq_length]""" if head_mask.shape.rank == 1: head_mask = head_mask[None, None, :, None, None] head_mask = tf.repeat(head_mask, repeats=num_hidden_layers, axis=0) elif head_mask.shape.rank == 2: head_mask = head_mask[:, None, :, None, None] assert head_mask.shape.rank == 5, f"head_mask.dim != 5, instead {head_mask.dim()}" head_mask = tf.cast(head_mask, tf.float32) # switch to float if need + fp16 compatibility return head_mask @tf.function def serving(self, inputs): """ Args: Method used for serving the model. Does not have a specific signature, but will be specialized as concrete functions when saving with `save_pretrained`. inputs (`Dict[str, tf.Tensor]`): The input of the saved model as a dictionary of tensors. 
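        Example (a minimal sketch; `model` is assumed to be an instantiated `TFPreTrainedModel` subclass and
        `./export` a writable directory):

        ```python
        >>> import tensorflow as tf

        >>> # The serving function is specialized into concrete signatures when exporting a SavedModel
        >>> model.save_pretrained("./export", saved_model=True)
        >>> reloaded = tf.saved_model.load("./export/saved_model/1")
        >>> list(reloaded.signatures.keys())  # "serving_default" (plus "int64_serving" when int32 inputs exist)
        ```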
""" output = self.call(inputs) return self.serving_output(output) @property def input_signature(self) -> Dict[str, tf.TensorSpec]: """ This property should return a dict mapping input names to tf.TensorSpec objects, representing the expected shape and dtype for model inputs. It is used for both serving and for generating dummy inputs. """ model_inputs = list(inspect.signature(self.call).parameters) sig = {} if "input_ids" in model_inputs: if self.__class__.__name__.endswith("ForMultipleChoice"): text_dims = 3 else: text_dims = 2 for input_name in ( "input_ids", "attention_mask", "token_type_ids", "decoder_input_ids", "decoder_attention_mask", ): if input_name in model_inputs: sig[input_name] = tf.TensorSpec([None] * text_dims, tf.int32, name=input_name) if "pixel_values" in model_inputs: pixel_values_shape = [None, None, None, None] if hasattr(self.config, "vision_config"): vision_config = self.config.vision_config else: vision_config = self.config if hasattr(vision_config, "num_channels"): pixel_values_shape[1] = vision_config.num_channels else: raise NotImplementedError( "Could not infer number of channels from config, please override input_signature to specify input shapes." ) if hasattr(vision_config, "image_size"): pixel_values_shape[2] = pixel_values_shape[3] = vision_config.image_size elif hasattr(vision_config, "input_size"): pixel_values_shape[2] = pixel_values_shape[3] = vision_config.input_size else: raise NotImplementedError( "Could not infer input image shape from config, please override input_signature to specify input shapes." ) sig["pixel_values"] = tf.TensorSpec(pixel_values_shape, tf.float32, name="pixel_values") if "input_features" in model_inputs: raise NotImplementedError("Audio models need a manually defined input_signature") return sig def serving_output(self, output): """ Prepare the output of the saved model. Can be overridden if specific serving modifications are required. """ if not isinstance(output, ModelOutput): return output for key in output: if key.endswith("hidden_states") and not getattr(self.config, "output_hidden_states", False): output[key] = None elif key.endswith("attentions") and not getattr(self.config, "output_attentions", False): output[key] = None elif key == "past_key_values" and not getattr(self.config, "use_cache", False): output[key] = None elif key == "cross_attentions" and not ( getattr(self.config, "output_attentions", False) and getattr(self.config, "add_cross_attention", False) ): output[key] = None if isinstance(output[key], (tuple, list)): try: output[key] = tf.convert_to_tensor(output[key]) except (ValueError, tf.errors.InvalidArgumentError): pass # Layers may not have the same dimensions return output @classmethod def can_generate(cls) -> bool: """ Returns whether this model can generate sequences with `.generate()`. Returns: `bool`: Whether this model can generate sequences with `.generate()`. """ # Detects whether `prepare_inputs_for_generation` has been overwritten, which is a requirement for generation. # Alternativelly, the model can also have a custom `generate` function. if "GenerationMixin" in str(cls.prepare_inputs_for_generation) and "GenerationMixin" in str(cls.generate): return False return True def get_input_embeddings(self) -> keras.layers.Layer: """ Returns the model's input embeddings layer. Returns: `tf.Variable`: The embeddings layer mapping vocabulary to hidden states. 
""" main_layer = getattr(self, self.base_model_prefix, self) if main_layer is not self: return main_layer.get_input_embeddings() else: raise NotImplementedError def _save_checkpoint(self, checkpoint_dir, epoch): if not os.path.isdir(checkpoint_dir): os.mkdir(checkpoint_dir) # We avoid tf.train.checkpoint or saving weights in TF format, even though that includes optimizer # state for us, because it requires special handling for objects like custom losses, which we use # internally and which users are likely to use too weights_path = os.path.join(checkpoint_dir, "weights.h5") self.save_weights(weights_path) extra_data = {"epoch": epoch, "optimizer_state": self.optimizer.get_weights()} extra_data_path = os.path.join(checkpoint_dir, "extra_data.pickle") with open(extra_data_path, "wb") as f: pickle.dump(extra_data, f) def prepare_tf_dataset( self, dataset: "datasets.Dataset", # noqa:F821 batch_size: int = 8, shuffle: bool = True, tokenizer: Optional["PreTrainedTokenizerBase"] = None, collate_fn: Optional[Callable] = None, collate_fn_args: Optional[Dict[str, Any]] = None, drop_remainder: Optional[bool] = None, prefetch: bool = True, ): """ Wraps a HuggingFace [`~datasets.Dataset`] as a `tf.data.Dataset` with collation and batching. This method is designed to create a "ready-to-use" dataset that can be passed directly to Keras methods like `fit()` without further modification. The method will drop columns from the dataset if they don't match input names for the model. If you want to specify the column names to return rather than using the names that match this model, we recommend using `Dataset.to_tf_dataset()` instead. Args: dataset (`Any`): A [~`datasets.Dataset`] to be wrapped as a `tf.data.Dataset`. batch_size (`int`, *optional*, defaults to 8): The size of batches to return. shuffle (`bool`, defaults to `True`): Whether to return samples from the dataset in random order. Usually `True` for training datasets and `False` for validation/test datasets. tokenizer ([`PreTrainedTokenizerBase`], *optional*): A `PreTrainedTokenizer` that will be used to pad samples to create batches. Has no effect if a specific `collate_fn` is passed instead. collate_fn (`Callable`, *optional*): A function that collates samples from the dataset into a single batch. Defaults to `DefaultDataCollator` if no `tokenizer` is supplied or `DataCollatorWithPadding` if a `tokenizer` is passed. collate_fn_args (`Dict[str, Any]`, *optional*): A dict of arguments to pass to the `collate_fn` alongside the list of samples. drop_remainder (`bool`, *optional*): Whether to drop the final batch, if the batch_size does not evenly divide the dataset length. Defaults to the same setting as `shuffle`. prefetch (`bool`, defaults to `True`): Whether to add prefetching to the end of the `tf.data` pipeline. This is almost always beneficial for performance, but can be disabled in edge cases. Returns: `Dataset`: A `tf.data.Dataset` which is ready to pass to the Keras API. 
""" requires_backends(self, ["datasets"]) import datasets if collate_fn is None: if tokenizer is None: collate_fn = DefaultDataCollator(return_tensors="np") else: collate_fn = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="np") if collate_fn_args is None: collate_fn_args = {} if not isinstance(dataset, datasets.Dataset): raise TypeError("Dataset argument should be a datasets.Dataset!") model_inputs = list(inspect.signature(self.call).parameters) model_labels = find_labels(self.__class__) if "cols_to_retain" in list(inspect.signature(dataset._get_output_signature).parameters.keys()): output_signature, _ = dataset._get_output_signature( dataset, batch_size=None, collate_fn=collate_fn, collate_fn_args=collate_fn_args, cols_to_retain=model_inputs, ) else: # TODO Matt: This is a workaround for older versions of datasets that are missing the `cols_to_retain` # argument. We should remove this once the minimum supported version of datasets is > 2.3.2 unwanted_columns = [ feature for feature in dataset.features if feature not in model_inputs and feature not in ("label_ids", "label") ] dataset = dataset.remove_columns(unwanted_columns) output_signature, _ = dataset._get_output_signature( dataset, batch_size=None, collate_fn=collate_fn, collate_fn_args=collate_fn_args ) output_columns = list(output_signature.keys()) feature_cols = [col for col in output_columns if col in model_inputs and col not in model_labels] label_cols = [col for col in output_columns if col in model_labels] # Backwards compatibility for older versions of datasets. Previously, if `columns` or `label_cols` # were a single element list, the returned element spec would be a single element. Now, passing [feature] # will return a dict structure {"feature": feature}, and passing a single string will return a single element. feature_cols = feature_cols[0] if len(feature_cols) == 1 else feature_cols label_cols = label_cols[0] if len(label_cols) == 1 else label_cols if drop_remainder is None: drop_remainder = shuffle tf_dataset = dataset.to_tf_dataset( columns=feature_cols, label_cols=label_cols, batch_size=batch_size, shuffle=shuffle, drop_remainder=drop_remainder, collate_fn=collate_fn, collate_fn_args=collate_fn_args, prefetch=prefetch, ) return tf_dataset def compile( self, optimizer="rmsprop", loss="auto_with_warning", metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs, ): """ This is a thin wrapper that sets the model's loss output head as the loss if the user does not specify a loss function themselves. """ if loss in ("auto_with_warning", "passthrough"): # "passthrough" for workflow backward compatibility logger.info( "No loss specified in compile() - the model's internal loss computation will be used as the " "loss. Don't panic - this is a common way to train TensorFlow models in Transformers! " "To disable this behaviour please pass a loss argument, or explicitly pass " "`loss=None` if you do not want your model to compute a loss. You can also specify `loss='auto'` to " "get the internal loss without printing this info string." 
) loss = "auto" if loss == "auto": loss = dummy_loss self._using_dummy_loss = True else: self._using_dummy_loss = False parent_args = list(inspect.signature(keras.Model.compile).parameters.keys()) # This argument got renamed, we need to support both versions if "steps_per_execution" in parent_args: super().compile( optimizer=optimizer, loss=loss, metrics=metrics, loss_weights=loss_weights, weighted_metrics=weighted_metrics, run_eagerly=run_eagerly, steps_per_execution=steps_per_execution, **kwargs, ) else: super().compile( optimizer=optimizer, loss=loss, metrics=metrics, loss_weights=loss_weights, weighted_metrics=weighted_metrics, run_eagerly=run_eagerly, experimental_steps_per_execution=steps_per_execution, **kwargs, ) def compute_loss(self, *args, **kwargs): if hasattr(keras.Model, "compute_loss"): # This will be true in TF 2.8 or greater return super().compute_loss(*args, **kwargs) else: warnings.warn( "The old compute_loss method is deprecated as it conflicts with the Keras compute_loss " "method added in TF 2.8. If you want the original HF compute_loss, please call " "hf_compute_loss() instead. From TF versions >= 2.8, or Transformers versions >= 5, " "calling compute_loss() will get the Keras method instead.", FutureWarning, ) return self.hf_compute_loss(*args, **kwargs) def get_label_to_output_name_mapping(self): arg_names = list(inspect.signature(self.call).parameters) if self._label_to_output_map is not None: return self._label_to_output_map elif "start_positions" in arg_names: return {"start_positions": "start_logits", "end_positions": "end_logits"} elif "sentence_order_label" in arg_names: return {"labels": "prediction_logits", "sentence_order_label": "sop_logits"} elif "next_sentence_label" in arg_names: return {"labels": "prediction_logits", "next_sentence_label": "seq_relationship_logits"} elif "mc_labels" in arg_names: return {"labels": "logits", "mc_labels": "mc_logits"} else: return {} def train_step(self, data): """ A modification of Keras's default `train_step` that correctly handles matching outputs to labels for our models and supports directly training on the loss output head. In addition, it ensures input keys are copied to the labels where appropriate. It will also copy label keys into the input dict when using the dummy loss, to ensure that they are available to the model during the forward pass. """ # We hardcode the most common renamings; models with weirder names can set `self._label_to_output_map` arg_names = list(inspect.signature(self.call).parameters) label_kwargs = find_labels(self.__class__) label_to_output = self.get_label_to_output_name_mapping() output_to_label = {val: key for key, val in label_to_output.items()} if not self._using_dummy_loss and parse(tf.__version__) < parse("2.11.0"): # Newer TF train steps leave this out data = expand_1d(data) x, y, sample_weight = keras.utils.unpack_x_y_sample_weight(data) # If the inputs are mutable dictionaries, make a shallow copy of them because we will modify # them during input/label pre-processing. This avoids surprising the user by wrecking their data. # In addition, modifying mutable Python inputs makes XLA compilation impossible. 
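        # (A shallow copy is enough here: only the dict container is duplicated, not the tensors it holds.)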
if isinstance(x, dict): x = x.copy() if isinstance(y, dict): y = y.copy() # When using a dummy loss, we ensure that separate labels are copied to the correct model arguments, # if those keys are not already present in the input dict if self._using_dummy_loss and y is not None: # If y is a tensor and the model only has one label-like input, map y to that input if len(label_kwargs) == 1 and isinstance(y, tf.Tensor): if isinstance(x, tf.Tensor): x = {arg_names[0]: x} label_kwarg = next(iter(label_kwargs)) if label_kwarg not in x: x[label_kwarg] = y # Otherwise, copy keys from y to x as long as they weren't already present in x elif isinstance(y, dict): if isinstance(x, tf.Tensor): x = {arg_names[0]: x} for key, val in y.items(): if key in arg_names and key not in x: x[key] = val elif output_to_label.get(key, None) in arg_names and key not in x: x[output_to_label[key]] = val if y is None: y = {key: val for key, val in x.items() if key in label_kwargs} if not y and not self._using_dummy_loss: raise ValueError("Could not find label column(s) in input dict and no separate labels were provided!") if isinstance(y, dict): # Rename labels at this point to match output heads y = {label_to_output.get(key, key): val for key, val in y.items()} # Run forward pass. with tf.GradientTape() as tape: if self._using_dummy_loss and "return_loss" in arg_names: y_pred = self(x, training=True, return_loss=True) else: y_pred = self(x, training=True) if self._using_dummy_loss: loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses) else: loss = None # This next block matches outputs to label keys. Tensorflow's standard method for doing this # can get very confused if any of the keys contain nested values (e.g. lists/tuples of Tensors) if isinstance(y, dict) and len(y) == 1: if list(y.keys())[0] in y_pred.keys(): y_pred = y_pred[list(y.keys())[0]] elif list(y_pred.keys())[0] == "loss": y_pred = y_pred[1] else: y_pred = y_pred[0] _, y = y.popitem() elif isinstance(y, dict): # If the labels are a dict, match keys from the output by name y_pred = {key: val for key, val in y_pred.items() if key in y} elif isinstance(y, tuple) or isinstance(y, list): # If the labels are a tuple/list, match keys to the output by order, skipping the loss. if list(y_pred.keys())[0] == "loss": y_pred = y_pred.to_tuple()[1:] else: y_pred = y_pred.to_tuple() y_pred = y_pred[: len(y)] # Remove unused fields in case those cause problems else: # If the labels are a single tensor, match them to the first non-loss tensor in the output if list(y_pred.keys())[0] == "loss": y_pred = y_pred[1] else: y_pred = y_pred[0] if loss is None: loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses) # Run backwards pass. self.optimizer.minimize(loss, self.trainable_variables, tape=tape) self.compiled_metrics.update_state(y, y_pred, sample_weight) # Collect metrics to return return_metrics = {} for metric in self.metrics: result = metric.result() if isinstance(result, dict): return_metrics.update(result) else: return_metrics[metric.name] = result return return_metrics def test_step(self, data): """ A modification of Keras's default `train_step` that correctly handles matching outputs to labels for our models and supports directly training on the loss output head. In addition, it ensures input keys are copied to the labels where appropriate. It will also copy label keys into the input dict when using the dummy loss, to ensure that they are available to the model during the forward pass. 
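        Example (a sketch; `tf_eval_dataset` is assumed to be a `tf.data.Dataset` yielding the model's input names):

        ```python
        >>> model.compile(optimizer="adam")  # no explicit loss: the internal loss head is used here too
        >>> metrics = model.evaluate(tf_eval_dataset)
        ```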
""" # We hardcode the most common renamings; models with weirder names can set `self._label_to_output_map` arg_names = list(inspect.signature(self.call).parameters) label_kwargs = find_labels(self.__class__) label_to_output = self.get_label_to_output_name_mapping() output_to_label = {val: key for key, val in label_to_output.items()} if not self._using_dummy_loss and parse(tf.__version__) < parse("2.11.0"): # Newer versions leave this out data = expand_1d(data) x, y, sample_weight = keras.utils.unpack_x_y_sample_weight(data) # If the inputs are mutable dictionaries, make a shallow copy of them because we will modify # them during input/label pre-processing. This avoids surprising the user by wrecking their data. # In addition, modifying mutable Python inputs makes XLA compilation impossible. if isinstance(x, dict): x = x.copy() if isinstance(y, dict): y = y.copy() # When using a dummy loss, we ensure that separate labels are copied to the correct model arguments, # if those keys are not already present in the input dict if self._using_dummy_loss and y is not None: arg_names = list(inspect.signature(self.call).parameters) # If y is a tensor and the model only has one label-like input, map y to that input if len(label_kwargs) == 1 and isinstance(y, tf.Tensor): if isinstance(x, tf.Tensor): x = {arg_names[0]: x} label_kwarg = next(iter(label_kwargs)) if label_kwarg not in x: x[label_kwarg] = y # Otherwise, copy keys from y to x as long as they weren't already present in x elif isinstance(y, dict): if isinstance(x, tf.Tensor): x = {arg_names[0]: x} for key, val in y.items(): if key in arg_names and key not in x: x[key] = val elif output_to_label.get(key, None) in arg_names and key not in x: x[output_to_label[key]] = val if y is None: y = {key: val for key, val in x.items() if key in label_kwargs} if not y and not self._using_dummy_loss: raise ValueError("Could not find label column(s) in input dict and no separate labels were provided!") if isinstance(y, dict): # Rename labels at this point to match output heads y = {label_to_output.get(key, key): val for key, val in y.items()} # Run forward pass. if self._using_dummy_loss and "return_loss" in arg_names: y_pred = self(x, return_loss=True, training=False) else: y_pred = self(x, training=False) if self._using_dummy_loss: loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses) else: loss = None # This next block matches outputs to label keys. Tensorflow's standard method for doing this # can get very confused if any of the keys contain nested values (e.g. lists/tuples of Tensors) if isinstance(y, dict) and len(y) == 1: if list(y.keys())[0] in y_pred.keys(): y_pred = y_pred[list(y.keys())[0]] elif list(y_pred.keys())[0] == "loss": y_pred = y_pred[1] else: y_pred = y_pred[0] _, y = y.popitem() elif isinstance(y, dict): # If the labels are a dict, match keys from the output by name y_pred = {key: val for key, val in y_pred.items() if key in y} elif isinstance(y, tuple) or isinstance(y, list): # If the labels are a tuple/list, match keys to the output by order, skipping the loss. 
if list(y_pred.keys())[0] == "loss": y_pred = y_pred.to_tuple()[1:] else: y_pred = y_pred.to_tuple() y_pred = y_pred[: len(y)] # Remove unused fields in case those cause problems else: # If the labels are a single tensor, match them to the first non-loss tensor in the output if list(y_pred.keys())[0] == "loss": y_pred = y_pred[1] else: y_pred = y_pred[0] if loss is None: loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses) self.compiled_metrics.update_state(y, y_pred, sample_weight) # Collect metrics to return return_metrics = {} for metric in self.metrics: result = metric.result() if isinstance(result, dict): return_metrics.update(result) else: return_metrics[metric.name] = result return return_metrics def create_model_card( self, output_dir, model_name: str, language: Optional[str] = None, license: Optional[str] = None, tags: Optional[str] = None, finetuned_from: Optional[str] = None, tasks: Optional[str] = None, dataset_tags: Optional[Union[str, List[str]]] = None, dataset: Optional[Union[str, List[str]]] = None, dataset_args: Optional[Union[str, List[str]]] = None, ): """ Creates a draft of a model card using the information available to the `Trainer`. Args: output_dir (`str` or `os.PathLike`): The folder in which to create the model card. model_name (`str`, *optional*): The name of the model. language (`str`, *optional*): The language of the model (if applicable) license (`str`, *optional*): The license of the model. Will default to the license of the pretrained model used, if the original model given to the `Trainer` comes from a repo on the Hub. tags (`str` or `List[str]`, *optional*): Some tags to be included in the metadata of the model card. finetuned_from (`str`, *optional*): The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo of the original model given to the `Trainer` (if it comes from the Hub). tasks (`str` or `List[str]`, *optional*): One or several task identifiers, to be included in the metadata of the model card. dataset_tags (`str` or `List[str]`, *optional*): One or several dataset tags, to be included in the metadata of the model card. dataset (`str` or `List[str]`, *optional*): One or several dataset identifiers, to be included in the metadata of the model card. dataset_args (`str` or `List[str]`, *optional*): One or several dataset arguments, to be included in the metadata of the model card. """ # Avoids a circular import by doing this when necessary. from .modelcard import TrainingSummary # tests_ignore training_summary = TrainingSummary.from_keras( self, keras_history=self.history, language=language, license=license, tags=tags, model_name=model_name, finetuned_from=finetuned_from, tasks=tasks, dataset_tags=dataset_tags, dataset=dataset, dataset_args=dataset_args, ) model_card = training_summary.to_model_card() with open(os.path.join(output_dir, "README.md"), "w") as f: f.write(model_card) def set_input_embeddings(self, value): """ Set model's input embeddings Args: value (`tf.Variable`): The new weights mapping hidden states to vocabulary. 
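        Example (a sketch; in practice this setter is usually reached through `resize_token_embeddings` after new
        tokens have been added to the tokenizer):

        ```python
        >>> tokenizer.add_tokens(["<new_token>"])
        >>> model.resize_token_embeddings(len(tokenizer))  # resizes and re-assigns the input embeddings
        ```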
""" main_layer = getattr(self, self.base_model_prefix) if main_layer is None: raise NotImplementedError("The model does not implements the base_model_prefix attribute.") try: main_layer.set_input_embeddings(value) except AttributeError: logger.info("Building the model") self.build_in_name_scope() main_layer.set_input_embeddings(value) def get_output_embeddings(self) -> Union[None, keras.layers.Layer]: """ Returns the model's output embeddings Returns: `tf.Variable`: The new weights mapping vocabulary to hidden states. """ if self.get_lm_head() is not None: lm_head = self.get_lm_head() try: return lm_head.get_output_embeddings() except AttributeError: logger.info("Building the model") self.build_in_name_scope() return lm_head().get_output_embeddings() return None # Overwrite for models with output embeddings def set_output_embeddings(self, value): """ Set model's output embeddings Args: value (`tf.Variable`): The new weights mapping hidden states to vocabulary. """ if self.get_lm_head() is not None: lm_head = self.get_lm_head() try: lm_head.set_output_embeddings(value) except AttributeError: logger.info("Building the model") self.build_in_name_scope() lm_head.set_output_embeddings(value) def get_output_layer_with_bias(self) -> Union[None, keras.layers.Layer]: """ Get the layer that handles a bias attribute in case the model has an LM head with weights tied to the embeddings Return: `keras.layers.Layer`: The layer that handles the bias, None if not an LM model. """ warnings.warn( "The method get_output_layer_with_bias is deprecated. Please use `get_lm_head` instead.", FutureWarning ) return self.get_lm_head() def get_prefix_bias_name(self) -> Union[None, str]: """ Get the concatenated _prefix name of the bias from the model name to the parent layer Return: `str`: The _prefix name of the bias. """ warnings.warn("The method get_prefix_bias_name is deprecated. Please use `get_bias` instead.", FutureWarning) return None def get_bias(self) -> Union[None, Dict[str, tf.Variable]]: """ Dict of bias attached to an LM head. The key represents the name of the bias attribute. Return: `tf.Variable`: The weights representing the bias, None if not an LM model. """ if self.get_lm_head() is not None: lm_head = self.get_lm_head() try: return lm_head.get_bias() except AttributeError: self.build_in_name_scope() return lm_head.get_bias() return None def set_bias(self, value): """ Set all the bias in the LM head. Args: value (`Dict[tf.Variable]`): All the new bias attached to an LM head. """ if self.get_lm_head() is not None: lm_head = self.get_lm_head() try: lm_head.set_bias(value) except AttributeError: self.build_in_name_scope() lm_head.set_bias(value) def get_lm_head(self) -> keras.layers.Layer: """ The LM Head layer. This method must be overwritten by all the models that have a lm head. Return: `keras.layers.Layer`: The LM head layer if the model has one, None if not. """ return None def resize_token_embeddings( self, new_num_tokens: Optional[int] = None ) -> Union[keras.layers.Embedding, tf.Variable]: """ Resizes input token embeddings matrix of the model if `new_num_tokens != config.vocab_size`. Takes care of tying weights embeddings afterwards if the model class has a `tie_weights()` method. Arguments: new_num_tokens (`int`, *optional*): The number of new tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just returns a pointer to the input tokens without doing anything. 
Return: `tf.Variable` or `keras.layers.Embedding`: Pointer to the input tokens of the model. """ # TODO (joao): flagged for replacement (by `_v2_resized_token_embeddings`) due to embeddings refactor # Run the new code path if the model has a keras embeddings layer if isinstance(self.get_input_embeddings(), keras.layers.Embedding): return self._v2_resized_token_embeddings(new_num_tokens) if new_num_tokens is None or new_num_tokens == self.config.vocab_size: return self._get_word_embedding_weight(self.get_input_embeddings()) model_embeds = self._resize_token_embeddings(new_num_tokens) # Update base model and current model config self.config.vocab_size = new_num_tokens return model_embeds def _v2_resized_token_embeddings(self, new_num_tokens: Optional[int] = None) -> keras.layers.Embedding: """ Resizes input token embeddings matrix of the model if `new_num_tokens != config.vocab_size`. Arguments: new_num_tokens (`int`, *optional*): The number of new tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just returns a pointer to the input tokens without doing anything. Return: `keras.layers.Embedding`: Pointer to the input tokens of the model. """ if new_num_tokens is None or new_num_tokens == self.config.vocab_size: return self.get_input_embeddings() model_embeds = self._v2_resize_token_embeddings(new_num_tokens) # Update base model and current model config self.config.vocab_size = new_num_tokens return model_embeds def _get_word_embedding_weight(model, embedding_layer): # TODO (joao): flagged for delection due to embeddings refactor # If the variable holds the weights themselves, return them if isinstance(embedding_layer, tf.Tensor): return embedding_layer # Otherwise, try to get them from the layer's attributes embeds = getattr(embedding_layer, "weight", None) if embeds is not None: return embeds embeds = getattr(embedding_layer, "decoder", None) if embeds is not None: return embeds # The reason why the attributes don't exist might be # because the model is not built, so retry getting # the argument after building the model model.build_in_name_scope() embeds = getattr(embedding_layer, "weight", None) if embeds is not None: return embeds embeds = getattr(embedding_layer, "decoder", None) if embeds is not None: return embeds return None def _resize_token_embeddings(self, new_num_tokens): # TODO (joao): flagged for replacement (by `_v2_resize_token_embeddings`) due to embeddings refactor old_embeddings = self._get_word_embedding_weight(self.get_input_embeddings()) new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens) # if word embeddings are not tied, make sure that lm head bias is resized as well if self.get_bias() is not None: old_lm_head_bias = self.get_bias() new_lm_head_bias = self._get_resized_lm_head_bias(old_lm_head_bias, new_num_tokens) self.set_bias(new_lm_head_bias) # if word embeddings are not tied, make sure that lm head decoder is resized as well if self.get_output_embeddings() is not None: old_lm_head_decoder = self._get_word_embedding_weight(self.get_output_embeddings()) new_lm_head_decoder = self._get_resized_lm_head_decoder(old_lm_head_decoder, new_num_tokens) self.set_output_embeddings(new_lm_head_decoder) self.set_input_embeddings(new_embeddings) return self.get_input_embeddings() def _v2_resize_token_embeddings(self, new_num_tokens): old_embeddings = self.get_input_embeddings() new_embeddings = 
self._v2_get_resized_embeddings(old_embeddings, new_num_tokens) self.set_input_embeddings(new_embeddings) # If word embeddings are not tied, make sure that lm head bias is resized as well if self.get_bias() is not None: old_lm_head_bias = self.get_bias() new_lm_head_bias = self._v2_get_resized_lm_head_bias(old_lm_head_bias, new_num_tokens) self.set_bias(new_lm_head_bias) # If word embeddings are not tied, make sure that lm head decoder is resized as well. tied_weights = self.get_input_embeddings() == self.get_output_embeddings() if self.get_output_embeddings() is not None and not tied_weights: old_lm_head_decoder = self._get_word_embedding_weight(self.get_output_embeddings()) # TODO (joao): this one probably needs a v2 version with other models new_lm_head_decoder = self._get_resized_lm_head_decoder(old_lm_head_decoder, new_num_tokens) self.set_output_embeddings(new_lm_head_decoder) return self.get_input_embeddings() def _get_resized_lm_head_bias(self, old_lm_head_bias, new_num_tokens): """ Build a resized bias from the old ones. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end Args: old_lm_head_bias (`tf.Variable`): Old lm head bias to be resized. new_num_tokens (`int`, *optional*): New number of tokens in the linear matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just returns None Return: `tf.Variable`: Pointer to the resized bias. """ # TODO (joao): flagged for replacement (by `_v2_get_resized_lm_head_bias`) due to embeddings refactor new_lm_head_bias = {} for attr, weight in old_lm_head_bias.items(): first_dim, old_num_tokens = (None, shape_list(weight)[0]) if tf.rank(weight) == 1 else shape_list(weight) size_diff = new_num_tokens - old_num_tokens final_shape = [new_num_tokens] if first_dim is None else [first_dim, new_num_tokens] # initialize new bias if tf.math.greater(size_diff, 0): padding_shape = [[0, size_diff]] if first_dim is None else [[0, 0], [0, size_diff]] current_bias = tf.pad(weight.value(), tf.convert_to_tensor(padding_shape), constant_values=-1) num_tokens_to_copy = min(old_num_tokens, new_num_tokens) mask_shape = [num_tokens_to_copy] if first_dim is None else [1, num_tokens_to_copy] bias_mask = tf.fill(tf.convert_to_tensor(mask_shape), True) bias_mask = tf.pad(bias_mask, tf.convert_to_tensor(padding_shape), constant_values=False) else: slice_from = [0] if first_dim is None else [0, 0] current_bias = tf.slice( weight.value(), tf.convert_to_tensor(slice_from), tf.convert_to_tensor(final_shape) ) bias_mask = tf.fill(tf.convert_to_tensor(final_shape), True) new_bias = self.add_weight( shape=final_shape, initializer="zeros", trainable=True, name=weight.name.split(":")[0], ) init_bias = tf.where(bias_mask, current_bias, new_bias.value()) new_bias.assign(init_bias) new_lm_head_bias[attr] = new_bias return new_lm_head_bias def _v2_get_resized_lm_head_bias( self, old_lm_head_bias: Dict[str, tf.Variable], new_num_tokens: int ) -> Dict[str, tf.Tensor]: """ Build a resized bias from the old ones. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end Args: old_lm_head_bias (`Dict[str, tf.Variable]`): Old lm head bias to be resized. new_num_tokens (`int`): New number of tokens in the linear matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. 
Return: `tf.Tensor`: Values for the resized bias. """ new_lm_head_bias = {} for attr, weight in old_lm_head_bias.items(): # Determine the size difference (depending on the shape) first_dim, old_num_tokens = (None, shape_list(weight)[0]) if tf.rank(weight) == 1 else shape_list(weight) size_diff = new_num_tokens - old_num_tokens # Copy the old bias values to the new bias if old_num_tokens > new_num_tokens: new_bias = weight.value()[..., :new_num_tokens] else: padding_shape = [[0, size_diff]] if first_dim is None else [[0, 0], [0, size_diff]] new_bias = tf.pad(weight.value(), tf.convert_to_tensor(padding_shape)) new_lm_head_bias[attr] = new_bias return new_lm_head_bias def _get_resized_lm_head_decoder(self, old_lm_head_decoder, new_num_tokens): """ Build a resized decoder from the old ones. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end Args: old_lm_head_decoder (`tf.Variable`): Old lm head decoder to be resized. new_num_tokens (`int`, *optional*): New number of tokens in the linear matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just returns None Return: `tf.Variable`: Pointer to the resized decoder or None if the output embeddings are different from the input ones. """ new_lm_head_decoder = old_lm_head_decoder is_input_output_equals = tf.reduce_any( self._get_word_embedding_weight(self.get_input_embeddings()) == old_lm_head_decoder ) if old_lm_head_decoder is not None and not is_input_output_equals: old_embedding_dim = shape_list(old_lm_head_decoder)[1] decoder_mask, current_decoder = init_copy_embeddings(old_lm_head_decoder, new_num_tokens) new_lm_head_decoder = self.add_weight( shape=(new_num_tokens, old_embedding_dim), initializer="zeros", trainable=True, name=old_lm_head_decoder.name.split(":")[0], ) init_decoder = tf.where(decoder_mask, current_decoder, new_lm_head_decoder.value()) new_lm_head_decoder.assign(init_decoder) return new_lm_head_decoder def _get_resized_embeddings(self, old_embeddings, new_num_tokens=None) -> tf.Variable: """ Build a resized Embedding weights from a provided token Embedding weights. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end Args: old_embeddings (`tf.Variable`): Old embeddings to be resized. new_num_tokens (`int`, *optional*): New number of tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just returns a pointer to the input tokens `tf.Variable` module of the model without doing anything. 
Return: `tf.Variable`: Pointer to the resized Embedding Module or the old Embedding Module if `new_num_tokens` is `None` """ # TODO (joao): flagged for replacement (by `_v2_get_resized_embeddings`) due to embeddings refactor old_embedding_dim = shape_list(old_embeddings)[1] init_range = getattr(self.config, "initializer_range", 0.02) embeddings_mask, current_embeddings = init_copy_embeddings(old_embeddings, new_num_tokens) new_embeddings = self.add_weight( name=old_embeddings.name.split(":")[0], shape=[new_num_tokens, old_embedding_dim], initializer=get_initializer(init_range), dtype=tf.float32, ) init_embeddings = tf.where(embeddings_mask, current_embeddings, new_embeddings.value()) new_embeddings.assign(init_embeddings) return new_embeddings def _v2_get_resized_embeddings( self, old_embeddings: keras.layers.Embedding, new_num_tokens: int ) -> keras.layers.Embedding: """ Build a resized Embedding layer from a provided Embedding layer. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. Args: old_embeddings (`keras.layers.Embedding`): Old embeddings to be resized. new_num_tokens (`int`, *optional*): New number of tokens in the embedding matrix. Return: `keras.layers.Embedding`: Resized Embedding layer. """ # Get the initialization range for the embeddings init_range = 0.02 # default value potential_initialization_variable_names = [ "initializer_range", # most common "initializer_factor", # e.g. T5 "init_std", # e.g BART ] for var_name in potential_initialization_variable_names: if hasattr(self.config, var_name): init_range = getattr(self.config, var_name) # Get a new (initialized) embeddings layer new_embeddings = keras.layers.Embedding( input_dim=new_num_tokens, output_dim=old_embeddings.output_dim, embeddings_initializer=keras.initializers.TruncatedNormal(stddev=init_range), name=old_embeddings.embeddings.name[:-13], # exact same scoped name except "/embeddings:0" ) new_embeddings(tf.constant([[0]])) # Copy the old embeddings to the new embeddings if old_embeddings.input_dim >= new_num_tokens: init_embeddings = old_embeddings.embeddings[:new_num_tokens] else: init_embeddings = tf.concat( [old_embeddings.embeddings, new_embeddings.embeddings[old_embeddings.input_dim :]], axis=0 ) new_embeddings.embeddings.assign(init_embeddings) return new_embeddings def prune_heads(self, heads_to_prune): """ Prunes heads of the base model. Arguments: heads_to_prune (`Dict[int, List[int]]`): Dictionary with keys being selected layer indices (`int`) and associated values being the list of heads to prune in said layer (list of `int`). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2. """ raise NotImplementedError def save_pretrained( self, save_directory, saved_model=False, version=1, push_to_hub=False, signatures=None, max_shard_size: Union[int, str] = "5GB", create_pr: bool = False, safe_serialization: bool = False, token: Optional[Union[str, bool]] = None, **kwargs, ): """ Save a model and its configuration file to a directory, so that it can be re-loaded using the [`~TFPreTrainedModel.from_pretrained`] class method. Arguments: save_directory (`str`): Directory to which to save. Will be created if it doesn't exist. saved_model (`bool`, *optional*, defaults to `False`): If the model has to be saved in saved model format as well or not. version (`int`, *optional*, defaults to 1): The version of the saved model. 
                A saved model needs to be versioned in order to be properly loaded by TensorFlow Serving as detailed
                in the official documentation https://www.tensorflow.org/tfx/serving/serving_basic
            push_to_hub (`bool`, *optional*, defaults to `False`):
                Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
                repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
                namespace).
            signatures (`dict` or `tf.function`, *optional*):
                Model's signature used for serving. This will be passed to the `signatures` argument of model.save().
            max_shard_size (`int` or `str`, *optional*, defaults to `"5GB"`):
                The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller
                than this size. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`).

                <Tip warning={true}>

                If a single weight of the model is bigger than `max_shard_size`, it will be in its own checkpoint
                shard which will be bigger than `max_shard_size`.

                </Tip>

            create_pr (`bool`, *optional*, defaults to `False`):
                Whether or not to create a PR with the uploaded files or directly commit.
            safe_serialization (`bool`, *optional*, defaults to `False`):
                Whether to save the model using `safetensors` or the traditional TensorFlow way (that uses `h5`).
            token (`str` or `bool`, *optional*):
                The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
                the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
            kwargs (`Dict[str, Any]`, *optional*):
                Additional keyword arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
        """
        use_auth_token = kwargs.pop("use_auth_token", None)

        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            token = use_auth_token

        if token is not None:
            kwargs["token"] = token

        if os.path.isfile(save_directory):
            logger.error(f"Provided path ({save_directory}) should be a directory, not a file")
            return

        os.makedirs(save_directory, exist_ok=True)

        if push_to_hub:
            commit_message = kwargs.pop("commit_message", None)
            repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1])
            repo_id = self._create_repo(repo_id, **kwargs)
            files_timestamps = self._get_files_timestamps(save_directory)

        if saved_model:
            # If `torch_dtype` is in the config with a torch dtype class as the value, we need to change it to string.
            # (Although TF doesn't care about this attribute, we can't just remove it or set it to `None`.)
if getattr(self.config, "torch_dtype", None) is not None and not isinstance(self.config.torch_dtype, str): self.config.torch_dtype = str(self.config.torch_dtype).split(".")[1] if signatures is None: serving_default = self.serving.get_concrete_function(self.input_signature) if any(spec.dtype == tf.int32 for spec in self.input_signature.values()): int64_spec = { key: tf.TensorSpec( shape=spec.shape, dtype=tf.int64 if spec.dtype == tf.int32 else spec.dtype, name=spec.name ) for key, spec in self.input_signature.items() } int64_serving = self.serving.get_concrete_function(int64_spec) signatures = {"serving_default": serving_default, "int64_serving": int64_serving} else: signatures = serving_default saved_model_dir = os.path.join(save_directory, "saved_model", str(version)) self.save(saved_model_dir, include_optimizer=False, signatures=signatures) logger.info(f"Saved model created in {saved_model_dir}") # Save configuration file self.config.architectures = [self.__class__.__name__[2:]] # If we have a custom model, we copy the file defining it in the folder and set the attributes so it can be # loaded from the Hub. if self._auto_class is not None: custom_object_save(self, save_directory, config=self.config) self.config.save_pretrained(save_directory) if self.can_generate(): self.generation_config.save_pretrained(save_directory) # If we save using the predefined names, we can load using `from_pretrained` weights_name = SAFE_WEIGHTS_NAME if safe_serialization else TF2_WEIGHTS_NAME output_model_file = os.path.join(save_directory, weights_name) shards, index = tf_shard_checkpoint(self.weights, max_shard_size, weights_name=weights_name) # Clean the folder from a previous save for filename in os.listdir(save_directory): full_filename = os.path.join(save_directory, filename) # If we have a shard file that is not going to be replaced, we delete it, but only from the main process # in distributed settings to avoid race conditions. weights_no_suffix = weights_name.replace(".bin", "").replace(".safetensors", "") if ( filename.startswith(weights_no_suffix) and os.path.isfile(full_filename) and filename not in shards.keys() ): os.remove(full_filename) if index is None: if safe_serialization: state_dict = {strip_model_name_and_prefix(w.name): w.value() for w in self.weights} safe_save_file(state_dict, output_model_file, metadata={"format": "tf"}) else: self.save_weights(output_model_file) logger.info(f"Model weights saved in {output_model_file}") else: save_index_file = SAFE_WEIGHTS_INDEX_NAME if safe_serialization else TF2_WEIGHTS_INDEX_NAME save_index_file = os.path.join(save_directory, save_index_file) # Save the index as well with open(save_index_file, "w", encoding="utf-8") as index_file: content = json.dumps(index, indent=2, sort_keys=True) + "\n" index_file.write(content) logger.info( f"The model is bigger than the maximum size per checkpoint ({max_shard_size}) and is going to be " f"split in {len(shards)} checkpoint shards. You can find where each parameters has been saved in the " f"index located at {save_index_file}." ) for shard_file, shard in shards.items(): if safe_serialization: shard_state_dict = {strip_model_name_and_prefix(w.name): w.value() for w in shard} safe_save_file( shard_state_dict, os.path.join(save_directory, shard_file), metadata={"format": "tf"} ) else: with h5py.File(os.path.join(save_directory, shard_file), mode="w") as shard_file: layers = [] for layer in sorted(shard, key=lambda x: x.name): if "model." 
in layer.name or len(layer.name.split("/")) == 1: layer_name = layer.name else: layer_name = "/".join(layer.name.split("/")[1:]) param_dset = shard_file.create_dataset( layer_name, layer.numpy().shape, dtype=layer.numpy().dtype ) param_dset[:] = layer.numpy() layers.append(layer_name.encode("utf8")) save_attributes_to_hdf5_group(shard_file, "layer_names", layers) if push_to_hub: self._upload_modified_files( save_directory, repo_id, files_timestamps, commit_message=commit_message, token=token, ) @classmethod def from_pretrained( cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, config: Optional[Union[PretrainedConfig, str, os.PathLike]] = None, cache_dir: Optional[Union[str, os.PathLike]] = None, ignore_mismatched_sizes: bool = False, force_download: bool = False, local_files_only: bool = False, token: Optional[Union[str, bool]] = None, revision: str = "main", use_safetensors: bool = None, **kwargs, ): r""" Instantiate a pretrained TF 2.0 model from a pre-trained model configuration. The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task. The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those weights are discarded. Parameters: pretrained_model_name_or_path (`str`, *optional*): Can be either: - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [`~TFPreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`. - A path or url to a *PyTorch state_dict save file* (e.g, `./pt_model/pytorch_model.bin`). In this case, `from_pt` should be set to `True` and a configuration object should be provided as `config` argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. - `None` if you are both providing the configuration and state dictionary (resp. with keyword arguments `config` and `state_dict`). model_args (sequence of positional arguments, *optional*): All remaining positional arguments will be passed to the underlying model's `__init__` method. config (`Union[PretrainedConfig, str]`, *optional*): Can be either: - an instance of a class derived from [`PretrainedConfig`], - a string valid as input to [`~PretrainedConfig.from_pretrained`]. Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [`~TFPreTrainedModel.save_pretrained`] and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory. from_pt (`bool`, *optional*, defaults to `False`): Load the model weights from a PyTorch state_dict save file (see docstring of `pretrained_model_name_or_path` argument). ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`): Whether or not to raise an error if some of the weights from the checkpoint do not have the same size as the weights of the model (if for instance, you are instantiating a model with 10 labels from a checkpoint with 3 labels). 
cache_dir (`str`, *optional*): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download: Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies: (`Dict[str, str], `optional`): A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. output_loading_info(`bool`, *optional*, defaults to `False`): Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(`bool`, *optional*, defaults to `False`): Whether or not to only look at local files (e.g., not try downloading the model). token (`str` or `bool`, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git. <Tip> To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>". </Tip> mirror (`str`, *optional*): Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information. subfolder (`str`, *optional*, defaults to `""`): In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here. tf_to_pt_weight_rename (`Callable`, *optional*): A function that is called to transform the names of weights during the PyTorch to TensorFlow crossloading process. This is not necessary for most models, but is useful to allow composite models to be crossloaded correctly. use_safetensors (`bool`, *optional*, defaults to `None`): Whether or not to use `safetensors` checkpoints. Defaults to `None`. If not specified and `safetensors` is not installed, it will be set to `False`. kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded: - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will be first passed to the configuration class initialization function ([`~PretrainedConfig.from_pretrained`]). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function. 
Examples: ```python >>> from transformers import BertConfig, TFBertModel >>> # Download model and configuration from huggingface.co and cache. >>> model = TFBertModel.from_pretrained("google-bert/bert-base-uncased") >>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). >>> model = TFBertModel.from_pretrained("./test/saved_model/") >>> # Update configuration during loading. >>> model = TFBertModel.from_pretrained("google-bert/bert-base-uncased", output_attentions=True) >>> assert model.config.output_attentions == True >>> # Loading from a Pytorch model file instead of a TensorFlow checkpoint (slower, for example purposes, not runnable). >>> config = BertConfig.from_json_file("./pt_model/my_pt_model_config.json") >>> model = TFBertModel.from_pretrained("./pt_model/my_pytorch_model.bin", from_pt=True, config=config) ```""" from_pt = kwargs.pop("from_pt", False) resume_download = kwargs.pop("resume_download", None) proxies = kwargs.pop("proxies", None) output_loading_info = kwargs.pop("output_loading_info", False) use_auth_token = kwargs.pop("use_auth_token", None) trust_remote_code = kwargs.pop("trust_remote_code", None) _ = kwargs.pop("mirror", None) load_weight_prefix = kwargs.pop("load_weight_prefix", None) from_pipeline = kwargs.pop("_from_pipeline", None) from_auto_class = kwargs.pop("_from_auto", False) subfolder = kwargs.pop("subfolder", "") commit_hash = kwargs.pop("_commit_hash", None) tf_to_pt_weight_rename = kwargs.pop("tf_to_pt_weight_rename", None) # Not relevant for TF models _ = kwargs.pop("adapter_kwargs", None) if use_auth_token is not None: warnings.warn( "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.", FutureWarning, ) if token is not None: raise ValueError( "`token` and `use_auth_token` are both specified. Please set only the argument `token`." ) token = use_auth_token if trust_remote_code is True: logger.warning( "The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is" " ignored." ) user_agent = {"file_type": "model", "framework": "tensorflow", "from_auto_class": from_auto_class} if from_pipeline is not None: user_agent["using_pipeline"] = from_pipeline if is_offline_mode() and not local_files_only: logger.info("Offline mode: forcing local_files_only=True") local_files_only = True if use_safetensors is None and not is_safetensors_available(): use_safetensors = False # Load config if we don't provide a configuration if not isinstance(config, PretrainedConfig): config_path = config if config is not None else pretrained_model_name_or_path config, model_kwargs = cls.config_class.from_pretrained( config_path, cache_dir=cache_dir, return_unused_kwargs=True, force_download=force_download, resume_download=resume_download, proxies=proxies, local_files_only=local_files_only, token=token, revision=revision, _from_auto=from_auto_class, _from_pipeline=from_pipeline, _commit_hash=commit_hash, **kwargs, ) else: model_kwargs = kwargs if commit_hash is None: commit_hash = getattr(config, "_commit_hash", None) # This variable will flag if we're loading a sharded checkpoint. In this case the archive file is just the # index of the files. 
is_sharded = False # Load model if pretrained_model_name_or_path is not None: pretrained_model_name_or_path = str(pretrained_model_name_or_path) is_local = os.path.isdir(pretrained_model_name_or_path) if is_local: if from_pt and os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)): # Load from a PyTorch checkpoint in priority if from_pt archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME) elif from_pt and os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_INDEX_NAME)): # Load from a sharded PyTorch checkpoint archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_INDEX_NAME) is_sharded = True elif use_safetensors is not False and os.path.isfile( os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_NAME) ): # Load from a safetensors checkpoint archive_file = os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_NAME) elif use_safetensors is not False and os.path.isfile( os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_INDEX_NAME) ): # Load from a sharded safetensors checkpoint archive_file = os.path.join(pretrained_model_name_or_path, SAFE_WEIGHTS_INDEX_NAME) is_sharded = True elif os.path.isfile(os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)): # Load from a TF 2.0 checkpoint archive_file = os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME) elif os.path.isfile(os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_INDEX_NAME)): # Load from a sharded TF 2.0 checkpoint archive_file = os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_INDEX_NAME) is_sharded = True # At this stage we don't have a weight file so we will raise an error. elif use_safetensors: raise EnvironmentError( f"Error no file named {SAFE_WEIGHTS_NAME} or {SAFE_WEIGHTS_INDEX_NAME} found in directory {pretrained_model_name_or_path}. " f"Please make sure that the model has been saved with `safe_serialization=True` or do not " f"set `use_safetensors=True`." ) elif os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)) or os.path.isfile( os.path.join(pretrained_model_name_or_path, WEIGHTS_INDEX_NAME) ): raise EnvironmentError( f"Error no file named {TF2_WEIGHTS_NAME} or {SAFE_WEIGHTS_NAME} found in directory {pretrained_model_name_or_path} " "but there is a file for PyTorch weights. Use `from_pt=True` to load this model from those " "weights." ) else: raise EnvironmentError( f"Error no file named {TF2_WEIGHTS_NAME}, {SAFE_WEIGHTS_NAME} or {WEIGHTS_NAME} found in directory " f"{pretrained_model_name_or_path}." 
) elif os.path.isfile(pretrained_model_name_or_path): archive_file = pretrained_model_name_or_path is_local = True elif os.path.isfile(pretrained_model_name_or_path + ".index"): archive_file = pretrained_model_name_or_path + ".index" is_local = True elif is_remote_url(pretrained_model_name_or_path): filename = pretrained_model_name_or_path resolved_archive_file = download_url(pretrained_model_name_or_path) else: # set correct filename if from_pt: filename = WEIGHTS_NAME elif use_safetensors is not False: filename = SAFE_WEIGHTS_NAME else: filename = TF2_WEIGHTS_NAME try: # Load from URL or cache if already cached cached_file_kwargs = { "cache_dir": cache_dir, "force_download": force_download, "proxies": proxies, "resume_download": resume_download, "local_files_only": local_files_only, "token": token, "user_agent": user_agent, "revision": revision, "subfolder": subfolder, "_raise_exceptions_for_gated_repo": False, "_raise_exceptions_for_missing_entries": False, "_commit_hash": commit_hash, } resolved_archive_file = cached_file(pretrained_model_name_or_path, filename, **cached_file_kwargs) # Since we set _raise_exceptions_for_missing_entries=False, we don't get an exception but a None # result when internet is up, the repo and revision exist, but the file does not. if resolved_archive_file is None and filename == SAFE_WEIGHTS_NAME: # Did not find the safetensors file, let's fallback to TF. # No support for sharded safetensors yet, so we'll raise an error if that's all we find. filename = TF2_WEIGHTS_NAME resolved_archive_file = cached_file( pretrained_model_name_or_path, TF2_WEIGHTS_NAME, **cached_file_kwargs ) if resolved_archive_file is None and filename == TF2_WEIGHTS_NAME: # Maybe the checkpoint is sharded, we try to grab the index name in this case. resolved_archive_file = cached_file( pretrained_model_name_or_path, TF2_WEIGHTS_INDEX_NAME, **cached_file_kwargs ) if resolved_archive_file is not None: is_sharded = True if resolved_archive_file is None and filename == WEIGHTS_NAME: # Maybe the checkpoint is sharded, we try to grab the index name in this case. resolved_archive_file = cached_file( pretrained_model_name_or_path, WEIGHTS_INDEX_NAME, **cached_file_kwargs ) if resolved_archive_file is not None: is_sharded = True if resolved_archive_file is None: # Otherwise, maybe there is a PyTorch or Flax model file. We try those to give a helpful error # message. has_file_kwargs = { "revision": revision, "proxies": proxies, "token": token, "cache_dir": cache_dir, "local_files_only": local_files_only, } if has_file(pretrained_model_name_or_path, SAFE_WEIGHTS_INDEX_NAME, **has_file_kwargs): is_sharded = True elif has_file(pretrained_model_name_or_path, WEIGHTS_NAME, **has_file_kwargs): raise EnvironmentError( f"{pretrained_model_name_or_path} does not appear to have a file named" f" {TF2_WEIGHTS_NAME} but there is a file for PyTorch weights. Use `from_pt=True` to" " load this model from those weights." ) else: raise EnvironmentError( f"{pretrained_model_name_or_path} does not appear to have a file named {WEIGHTS_NAME}," f" {TF2_WEIGHTS_NAME} or {TF_WEIGHTS_NAME}" ) except EnvironmentError: # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted # to the original exception. raise except Exception: # For any other exception, we throw a generic error. raise EnvironmentError( f"Can't load the model for '{pretrained_model_name_or_path}'. 
If you were trying to load it" " from 'https://huggingface.co/models', make sure you don't have a local directory with the" f" same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a" f" directory containing a file named {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME} or {TF_WEIGHTS_NAME}" ) if is_local: logger.info(f"loading weights file {archive_file}") resolved_archive_file = archive_file filename = resolved_archive_file.split(os.path.sep)[-1] else: logger.info(f"loading weights file {filename} from cache at {resolved_archive_file}") else: resolved_archive_file = None # We'll need to download and cache each checkpoint shard if the checkpoint is sharded. if is_sharded: # resolved_archive_file becomes a list of files that point to the different checkpoint shards in this case. resolved_archive_file, sharded_metadata = get_checkpoint_shard_files( pretrained_model_name_or_path, resolved_archive_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, token=token, user_agent=user_agent, revision=revision, _commit_hash=commit_hash, ) safetensors_from_pt = False if filename == SAFE_WEIGHTS_NAME: with safe_open(resolved_archive_file, framework="tf") as f: safetensors_metadata = f.metadata() if safetensors_metadata is None or safetensors_metadata.get("format") not in ["pt", "tf", "flax", "mlx"]: raise OSError( f"The safetensors archive passed at {resolved_archive_file} does not contain the valid metadata." " Make sure you save your model with the `save_pretrained` method." ) safetensors_from_pt = safetensors_metadata.get("format") == "pt" elif filename == SAFE_WEIGHTS_INDEX_NAME: with safe_open(resolved_archive_file[0], framework="tf") as f: safetensors_metadata = f.metadata() if safetensors_metadata is None or safetensors_metadata.get("format") not in ["pt", "tf", "flax", "mlx"]: raise OSError( f"The safetensors archive passed at {resolved_archive_file} does not contain the valid metadata." " Make sure you save your model with the `save_pretrained` method." ) safetensors_from_pt = safetensors_metadata.get("format") == "pt" config.name_or_path = pretrained_model_name_or_path # composed models, *e.g.* TFRag, require special treatment when it comes to loading # pre-trained weights. if cls._requires_load_weight_prefix and model_kwargs.get("name") is not None: model_kwargs["load_weight_prefix"] = load_weight_prefix + "/" + model_kwargs.get("name") # Instantiate model. model = cls(config, *model_args, **model_kwargs) if tf_to_pt_weight_rename is None and hasattr(model, "tf_to_pt_weight_rename"): # TODO Matt: This is a temporary workaround to allow weight renaming, but requires a method # to be defined for each class that requires a rename. 
We can probably just have a class-level # dict and a single top-level method or something and cut down a lot of boilerplate code tf_to_pt_weight_rename = model.tf_to_pt_weight_rename if from_pt: from .modeling_tf_pytorch_utils import load_pytorch_checkpoint_in_tf2_model # Load from a PyTorch checkpoint return load_pytorch_checkpoint_in_tf2_model( model, resolved_archive_file, allow_missing_keys=True, output_loading_info=output_loading_info, _prefix=load_weight_prefix, tf_to_pt_weight_rename=tf_to_pt_weight_rename, ) # we might need to extend the variable scope for composite models if load_weight_prefix is not None: with tf.compat.v1.variable_scope(load_weight_prefix): model.build_in_name_scope() # build the network with dummy inputs else: model.build_in_name_scope() # build the network with dummy inputs if safetensors_from_pt and not is_sharded: from .modeling_tf_pytorch_utils import load_pytorch_state_dict_in_tf2_model with safe_open(resolved_archive_file, framework="tf") as safetensors_archive: # Load from a PyTorch safetensors checkpoint # We load in TF format here because PT weights often need to be transposed, and this is much # faster on GPU. Loading as numpy and transposing on CPU adds several seconds to load times. return load_pytorch_state_dict_in_tf2_model( model, safetensors_archive, tf_inputs=False, # No need to build the model again allow_missing_keys=True, output_loading_info=output_loading_info, _prefix=load_weight_prefix, ignore_mismatched_sizes=ignore_mismatched_sizes, tf_to_pt_weight_rename=tf_to_pt_weight_rename, ) elif safetensors_from_pt: from .modeling_tf_pytorch_utils import load_sharded_pytorch_safetensors_in_tf2_model return load_sharded_pytorch_safetensors_in_tf2_model( model, resolved_archive_file, tf_inputs=False, allow_missing_keys=True, output_loading_info=output_loading_info, _prefix=load_weight_prefix, ignore_mismatched_sizes=ignore_mismatched_sizes, tf_to_pt_weight_rename=tf_to_pt_weight_rename, ) # 'by_name' allow us to do transfer learning by skipping/adding layers # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357 try: if is_sharded: for file in resolved_archive_file: os.path.isfile(file), f"Error retrieving files {file}" if filename == SAFE_WEIGHTS_INDEX_NAME: missing_keys, unexpected_keys, mismatched_keys = load_tf_sharded_weights_from_safetensors( model, resolved_archive_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=load_weight_prefix, ) else: missing_keys, unexpected_keys, mismatched_keys = load_tf_sharded_weights( model, resolved_archive_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=load_weight_prefix, ) else: # Handles both H5 and safetensors missing_keys, unexpected_keys, mismatched_keys = load_tf_weights( model, resolved_archive_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=load_weight_prefix, ) except OSError as e: try: with open(resolved_archive_file) as f: if f.read().startswith("version"): raise OSError( "You seem to have cloned a repository without having git-lfs installed. Please install " "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " "you cloned." ) else: raise ValueError from e except (UnicodeDecodeError, ValueError): raise OSError( "Unable to load weights from h5 file. " "If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True. 
" ) if cls._keys_to_ignore_on_load_missing is not None: for pat in cls._keys_to_ignore_on_load_missing: missing_keys = [k for k in missing_keys if re.search(pat, k) is None] if cls._keys_to_ignore_on_load_unexpected is not None: for pat in cls._keys_to_ignore_on_load_unexpected: unexpected_keys = [k for k in unexpected_keys if re.search(pat, k) is None] if len(unexpected_keys) > 0: logger.warning( f"Some layers from the model checkpoint at {pretrained_model_name_or_path} were not used when" f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are" f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task or" " with another architecture (e.g. initializing a BertForSequenceClassification model from a" " BertForPreTraining model).\n- This IS NOT expected if you are initializing" f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly identical" " (initializing a BertForSequenceClassification model from a BertForSequenceClassification model)." ) else: logger.warning(f"All model checkpoint layers were used when initializing {model.__class__.__name__}.\n") if len(missing_keys) > 0: logger.warning( f"Some layers of {model.__class__.__name__} were not initialized from the model checkpoint at" f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably" " TRAIN this model on a down-stream task to be able to use it for predictions and inference." ) elif len(mismatched_keys) == 0: logger.warning( f"All the layers of {model.__class__.__name__} were initialized from the model checkpoint at" f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the checkpoint" f" was trained on, you can already use {model.__class__.__name__} for predictions without further" " training." ) if len(mismatched_keys) > 0: mismatched_warning = "\n".join( [ f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated" for key, shape1, shape2 in mismatched_keys ] ) logger.warning( f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not" f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be able" " to use it for predictions and inference." ) # If it is a model with generation capabilities, attempt to load the generation config if model.can_generate(): try: model.generation_config = GenerationConfig.from_pretrained( pretrained_model_name_or_path, cache_dir=cache_dir, force_download=force_download, resume_download=resume_download, proxies=proxies, local_files_only=local_files_only, token=token, revision=revision, subfolder=subfolder, _from_auto=from_auto_class, _from_pipeline=from_pipeline, **kwargs, ) except OSError: logger.info( "Generation config file not found, using a generation config created from the model config." 
) pass if output_loading_info: loading_info = { "missing_keys": missing_keys, "unexpected_keys": unexpected_keys, "mismatched_keys": mismatched_keys, } return model, loading_info return model def push_to_hub( self, repo_id: str, use_temp_dir: Optional[bool] = None, commit_message: Optional[str] = None, private: Optional[bool] = None, max_shard_size: Optional[Union[int, str]] = "10GB", token: Optional[Union[bool, str]] = None, # (`use_auth_token` is deprecated: we have to keep it here as we don't have **kwargs) use_auth_token: Optional[Union[bool, str]] = None, create_pr: bool = False, **base_model_card_args, ) -> str: """ Upload the model files to the 🀗 Model Hub while synchronizing a local clone of the repo in `repo_path_or_name`. Parameters: repo_id (`str`): The name of the repository you want to push your model to. It should contain your organization name when pushing to a given organization. use_temp_dir (`bool`, *optional*): Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to `True` if there is no directory named like `repo_id`, `False` otherwise. commit_message (`str`, *optional*): Message to commit while pushing. Will default to `"Upload model"`. private (`bool`, *optional*): Whether or not the repository created should be private. token (`bool` or `str`, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url` is not specified. max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`): Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). create_pr (`bool`, *optional*, defaults to `False`): Whether or not to create a PR with the uploaded files or directly commit. Examples: ```python from transformers import TFAutoModel model = TFAutoModel.from_pretrained("google-bert/bert-base-cased") # Push the model to your namespace with the name "my-finetuned-bert". model.push_to_hub("my-finetuned-bert") # Push the model to an organization with the name "my-finetuned-bert". model.push_to_hub("huggingface/my-finetuned-bert") ``` """ if use_auth_token is not None: warnings.warn( "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.", FutureWarning, ) if token is not None: raise ValueError( "`token` and `use_auth_token` are both specified. Please set only the argument `token`." ) token = use_auth_token if "repo_path_or_name" in base_model_card_args: warnings.warn( "The `repo_path_or_name` argument is deprecated and will be removed in v5 of Transformers. Use " "`repo_id` instead." 
) repo_id = base_model_card_args.pop("repo_path_or_name") # Deprecation warning will be sent after for repo_url and organization repo_url = base_model_card_args.pop("repo_url", None) organization = base_model_card_args.pop("organization", None) if os.path.isdir(repo_id): working_dir = repo_id repo_id = repo_id.split(os.path.sep)[-1] else: working_dir = repo_id.split("/")[-1] repo_id = self._create_repo( repo_id, private=private, token=token, repo_url=repo_url, organization=organization ) if use_temp_dir is None: use_temp_dir = not os.path.isdir(working_dir) with working_or_temp_dir(working_dir=working_dir, use_temp_dir=use_temp_dir) as work_dir: files_timestamps = self._get_files_timestamps(work_dir) # Save all files. self.save_pretrained(work_dir, max_shard_size=max_shard_size) if hasattr(self, "history") and hasattr(self, "create_model_card"): # This is a Keras model and we might be able to fish out its History and make a model card out of it base_model_card_args = { "output_dir": work_dir, "model_name": Path(repo_id).name, } base_model_card_args.update(base_model_card_args) self.create_model_card(**base_model_card_args) self._upload_modified_files( work_dir, repo_id, files_timestamps, commit_message=commit_message, token=token, create_pr=create_pr, ) @classmethod def register_for_auto_class(cls, auto_class="TFAutoModel"): """ Register this class with a given auto class. This should only be used for custom models as the ones in the library are already mapped with an auto class. <Tip warning={true}> This API is experimental and may have some slight breaking changes in the next releases. </Tip> Args: auto_class (`str` or `type`, *optional*, defaults to `"TFAutoModel"`): The auto class to register this new model with. """ if not isinstance(auto_class, str): auto_class = auto_class.__name__ import transformers.models.auto as auto_module if not hasattr(auto_module, auto_class): raise ValueError(f"{auto_class} is not a valid auto class.") cls._auto_class = auto_class class TFConv1D(keras.layers.Layer): """ 1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2). Basically works like a linear layer but the weights are transposed. Args: nf (`int`): The number of output features. nx (`int`): The number of input features. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation to use to initialize the weights. kwargs (`Dict[str, Any]`, *optional*): Additional keyword arguments passed along to the `__init__` of `keras.layers.Layer`. """ def __init__(self, nf, nx, initializer_range=0.02, **kwargs): super().__init__(**kwargs) self.nf = nf self.nx = nx self.initializer_range = initializer_range def build(self, input_shape): if self.built: return self.built = True self.weight = self.add_weight( "weight", shape=[self.nx, self.nf], initializer=get_initializer(self.initializer_range) ) self.bias = self.add_weight("bias", shape=[1, self.nf], initializer=tf.zeros_initializer()) def call(self, x): bz, sl = shape_list(x)[:2] x = tf.reshape(x, [-1, self.nx]) x = tf.matmul(x, self.weight) + self.bias x = tf.reshape(x, [bz, sl, self.nf]) return x class TFSharedEmbeddings(keras.layers.Layer): r""" Construct shared token embeddings. The weights of the embedding layer is usually shared with the weights of the linear decoder when doing language modeling. Args: vocab_size (`int`): The size of the vocabulary, e.g., the number of unique tokens. hidden_size (`int`): The size of the embedding vectors. 
initializer_range (`float`, *optional*): The standard deviation to use when initializing the weights. If no value is provided, it will default to \\(1/\sqrt{hidden\_size}\\). kwargs (`Dict[str, Any]`, *optional*): Additional keyword arguments passed along to the `__init__` of `keras.layers.Layer`. """ # TODO (joao): flagged for delection due to embeddings refactor def __init__(self, vocab_size: int, hidden_size: int, initializer_range: Optional[float] = None, **kwargs): super().__init__(**kwargs) self.vocab_size = vocab_size self.hidden_size = hidden_size self.initializer_range = hidden_size**-0.5 if initializer_range is None else initializer_range warnings.warn( "`TFSharedEmbeddings` is scheduled for deletion in v4.32, use `keras.layers.Embedding` instead.", DeprecationWarning, ) def build(self, input_shape): """ Build shared token embedding layer Shared weights logic adapted from https://github.com/tensorflow/models/blob/a009f4fb9d2fc4949e32192a944688925ef78659/official/transformer/v2/embedding_layer.py#L24 """ self.weight = self.add_weight( "weight", shape=[self.vocab_size, self.hidden_size], initializer=get_initializer(self.initializer_range) ) super().build(input_shape) def get_config(self): config = { "vocab_size": self.vocab_size, "hidden_size": self.hidden_size, "initializer_range": self.initializer_range, } base_config = super().get_config() return dict(list(base_config.items()) + list(config.items())) def call(self, inputs: tf.Tensor, mode: str = "embedding") -> tf.Tensor: """ Get token embeddings of inputs or decode final hidden state. Args: inputs (`tf.Tensor`): In embedding mode, should be an int64 tensor with shape `[batch_size, length]`. In linear mode, should be a float tensor with shape `[batch_size, length, hidden_size]`. mode (`str`, defaults to `"embedding"`): A valid value is either `"embedding"` or `"linear"`, the first one indicates that the layer should be used as an embedding layer, the second one that the layer should be used as a linear decoder. Returns: `tf.Tensor`: In embedding mode, the output is a float32 embedding tensor, with shape `[batch_size, length, embedding_size]`. In linear mode, the output is a float32 with shape `[batch_size, length, vocab_size]`. Raises: ValueError: if `mode` is not valid. Shared weights logic is adapted from [here](https://github.com/tensorflow/models/blob/a009f4fb9d2fc4949e32192a944688925ef78659/official/transformer/v2/embedding_layer.py#L24). """ if mode == "embedding": return self._embedding(inputs) elif mode == "linear": return self._linear(inputs) else: raise ValueError(f"mode {mode} is not valid.") def _embedding(self, input_ids): """Applies embedding based on inputs tensor.""" return tf.gather(self.weight, input_ids) def _linear(self, inputs): """ Computes logits by running inputs through a linear layer. Args: inputs: A float32 tensor with shape [..., hidden_size] Returns: float32 tensor with shape [..., vocab_size]. """ first_dims = shape_list(inputs)[:-1] x = tf.reshape(inputs, [-1, self.hidden_size]) logits = tf.matmul(x, self.weight, transpose_b=True) return tf.reshape(logits, first_dims + [self.vocab_size]) class TFSequenceSummary(keras.layers.Layer): """ Compute a single vector summary of a sequence hidden states. Args: config ([`PretrainedConfig`]): The config used by the model. Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses): - **summary_type** (`str`) -- The method to use to make this summary. 
Accepted values are: - `"last"` -- Take the last token hidden state (like XLNet) - `"first"` -- Take the first token hidden state (like Bert) - `"mean"` -- Take the mean of all tokens hidden states - `"cls_index"` -- Supply a Tensor of classification token position (GPT/GPT-2) - `"attn"` -- Not implemented now, use multi-head attention - **summary_use_proj** (`bool`) -- Add a projection after the vector extraction. - **summary_proj_to_labels** (`bool`) -- If `True`, the projection outputs to `config.num_labels` classes (otherwise to `config.hidden_size`). - **summary_activation** (`Optional[str]`) -- Set to `"tanh"` to add a tanh activation to the output, another string or `None` will add no activation. - **summary_first_dropout** (`float`) -- Optional dropout probability before the projection and activation. - **summary_last_dropout** (`float`)-- Optional dropout probability after the projection and activation. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation to use to initialize the weights. kwargs (`Dict[str, Any]`, *optional*): Additional keyword arguments passed along to the `__init__` of `keras.layers.Layer`. """ def __init__(self, config: PretrainedConfig, initializer_range: float = 0.02, **kwargs): super().__init__(**kwargs) self.summary_type = config.summary_type if hasattr(config, "summary_use_proj") else "last" if self.summary_type == "attn": # We should use a standard multi-head attention module with absolute positional embedding for that. # Cf. https://github.com/zihangdai/xlnet/blob/master/modeling.py#L253-L276 # We can probably just use the multi-head attention module of PyTorch >=1.1.0 raise NotImplementedError self.has_summary = hasattr(config, "summary_use_proj") and config.summary_use_proj if self.has_summary: if hasattr(config, "summary_proj_to_labels") and config.summary_proj_to_labels and config.num_labels > 0: num_classes = config.num_labels else: num_classes = config.hidden_size self.summary = keras.layers.Dense( num_classes, kernel_initializer=get_initializer(initializer_range), name="summary" ) self.has_activation = False activation_string = getattr(config, "summary_activation", None) if activation_string is not None: self.has_activation = True self.activation = get_tf_activation(activation_string) self.has_first_dropout = hasattr(config, "summary_first_dropout") and config.summary_first_dropout > 0 if self.has_first_dropout: self.first_dropout = keras.layers.Dropout(config.summary_first_dropout) self.has_last_dropout = hasattr(config, "summary_last_dropout") and config.summary_last_dropout > 0 if self.has_last_dropout: self.last_dropout = keras.layers.Dropout(config.summary_last_dropout) self.hidden_size = config.hidden_size def call(self, inputs, cls_index=None, training=False): if not isinstance(inputs, (dict, tuple, list)): hidden_states = inputs elif isinstance(inputs, (tuple, list)): hidden_states = inputs[0] cls_index = inputs[1] if len(inputs) > 1 else None assert len(inputs) <= 2, "Too many inputs." else: hidden_states = inputs.get("hidden_states") cls_index = inputs.get("cls_index", None) if self.summary_type == "last": output = hidden_states[:, -1] elif self.summary_type == "first": output = hidden_states[:, 0] elif self.summary_type == "mean": output = tf.reduce_mean(hidden_states, axis=1) elif self.summary_type == "cls_index": hidden_shape = shape_list(hidden_states) # e.g. 
[batch, num choices, seq length, hidden dims] if cls_index is None: cls_index = tf.fill( hidden_shape[:-2], hidden_shape[-2] - 1 ) # A tensor full of shape [batch] or [batch, num choices] full of sequence length cls_shape = shape_list(cls_index) if len(cls_shape) <= len(hidden_shape) - 2: cls_index = tf.expand_dims(cls_index, axis=-1) # else: # cls_index = cls_index[..., tf.newaxis] # cls_index = cls_index.expand((-1,) * (cls_index.dim()-1) + (hidden_states.size(-1),)) # shape of cls_index: (bsz, XX, 1, hidden_size) where XX are optional leading dim of hidden_states output = tf.gather(hidden_states, cls_index, batch_dims=len(hidden_shape) - 2) output = tf.squeeze( output, axis=len(hidden_shape) - 2 ) # shape of output: (batch, num choices, hidden_size) elif self.summary_type == "attn": raise NotImplementedError if self.has_first_dropout: output = self.first_dropout(output, training=training) if self.has_summary: output = self.summary(output) if self.has_activation: output = self.activation(output) if self.has_last_dropout: output = self.last_dropout(output, training=training) return output def build(self, input_shape): if self.built: return self.built = True if getattr(self, "summary", None) is not None: with tf.name_scope("summary"): self.summary.build(self.hidden_size) def get_initializer(initializer_range: float = 0.02) -> keras.initializers.TruncatedNormal: """ Creates a `keras.initializers.TruncatedNormal` with the given range. Args: initializer_range (*float*, defaults to 0.02): Standard deviation of the initializer range. Returns: `keras.initializers.TruncatedNormal`: The truncated normal initializer. """ return keras.initializers.TruncatedNormal(stddev=initializer_range)
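For reference, a minimal sketch (not part of the library source) of the save/load round trip implemented above. It assumes TensorFlow is installed and the `google-bert/bert-base-uncased` checkpoint is reachable; the shard size is deliberately small so the sharding branch is exercised.

```python
import tempfile

from transformers import TFBertModel

model = TFBertModel.from_pretrained("google-bert/bert-base-uncased")

with tempfile.TemporaryDirectory() as tmp_dir:
    # Write safetensors shards of at most ~200MB each; an index file is created
    # alongside the shards, as handled by the sharding branch of save_pretrained.
    model.save_pretrained(tmp_dir, max_shard_size="200MB", safe_serialization=True)

    # from_pretrained detects the sharded safetensors index in the directory and
    # reloads every shard.
    reloaded = TFBertModel.from_pretrained(tmp_dir)
    print(type(reloaded).__name__)
```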
# coding=utf-8 # Copyright 2018 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Auto Model class.""" import warnings from collections import OrderedDict from ...utils import logging from .auto_factory import _BaseAutoModelClass, _LazyAutoMapping, auto_class_update from .configuration_auto import CONFIG_MAPPING_NAMES logger = logging.get_logger(__name__) TF_MODEL_MAPPING_NAMES = OrderedDict( [ # Base model mapping ("albert", "TFAlbertModel"), ("bart", "TFBartModel"), ("bert", "TFBertModel"), ("blenderbot", "TFBlenderbotModel"), ("blenderbot-small", "TFBlenderbotSmallModel"), ("blip", "TFBlipModel"), ("camembert", "TFCamembertModel"), ("clip", "TFCLIPModel"), ("convbert", "TFConvBertModel"), ("convnext", "TFConvNextModel"), ("convnextv2", "TFConvNextV2Model"), ("ctrl", "TFCTRLModel"), ("cvt", "TFCvtModel"), ("data2vec-vision", "TFData2VecVisionModel"), ("deberta", "TFDebertaModel"), ("deberta-v2", "TFDebertaV2Model"), ("deit", "TFDeiTModel"), ("distilbert", "TFDistilBertModel"), ("dpr", "TFDPRQuestionEncoder"), ("efficientformer", "TFEfficientFormerModel"), ("electra", "TFElectraModel"), ("esm", "TFEsmModel"), ("flaubert", "TFFlaubertModel"), ("funnel", ("TFFunnelModel", "TFFunnelBaseModel")), ("gpt-sw3", "TFGPT2Model"), ("gpt2", "TFGPT2Model"), ("gptj", "TFGPTJModel"), ("groupvit", "TFGroupViTModel"), ("hubert", "TFHubertModel"), ("idefics", "TFIdeficsModel"), ("layoutlm", "TFLayoutLMModel"), ("layoutlmv3", "TFLayoutLMv3Model"), ("led", "TFLEDModel"), ("longformer", "TFLongformerModel"), ("lxmert", "TFLxmertModel"), ("marian", "TFMarianModel"), ("mbart", "TFMBartModel"), ("mistral", "TFMistralModel"), ("mobilebert", "TFMobileBertModel"), ("mobilevit", "TFMobileViTModel"), ("mpnet", "TFMPNetModel"), ("mt5", "TFMT5Model"), ("openai-gpt", "TFOpenAIGPTModel"), ("opt", "TFOPTModel"), ("pegasus", "TFPegasusModel"), ("regnet", "TFRegNetModel"), ("rembert", "TFRemBertModel"), ("resnet", "TFResNetModel"), ("roberta", "TFRobertaModel"), ("roberta-prelayernorm", "TFRobertaPreLayerNormModel"), ("roformer", "TFRoFormerModel"), ("sam", "TFSamModel"), ("segformer", "TFSegformerModel"), ("speech_to_text", "TFSpeech2TextModel"), ("swiftformer", "TFSwiftFormerModel"), ("swin", "TFSwinModel"), ("t5", "TFT5Model"), ("tapas", "TFTapasModel"), ("transfo-xl", "TFTransfoXLModel"), ("vision-text-dual-encoder", "TFVisionTextDualEncoderModel"), ("vit", "TFViTModel"), ("vit_mae", "TFViTMAEModel"), ("wav2vec2", "TFWav2Vec2Model"), ("whisper", "TFWhisperModel"), ("xglm", "TFXGLMModel"), ("xlm", "TFXLMModel"), ("xlm-roberta", "TFXLMRobertaModel"), ("xlnet", "TFXLNetModel"), ] ) TF_MODEL_FOR_PRETRAINING_MAPPING_NAMES = OrderedDict( [ # Model for pre-training mapping ("albert", "TFAlbertForPreTraining"), ("bart", "TFBartForConditionalGeneration"), ("bert", "TFBertForPreTraining"), ("camembert", "TFCamembertForMaskedLM"), ("ctrl", "TFCTRLLMHeadModel"), ("distilbert", "TFDistilBertForMaskedLM"), ("electra", "TFElectraForPreTraining"), ("flaubert", "TFFlaubertWithLMHeadModel"), ("funnel", 
"TFFunnelForPreTraining"), ("gpt-sw3", "TFGPT2LMHeadModel"), ("gpt2", "TFGPT2LMHeadModel"), ("idefics", "TFIdeficsForVisionText2Text"), ("layoutlm", "TFLayoutLMForMaskedLM"), ("lxmert", "TFLxmertForPreTraining"), ("mobilebert", "TFMobileBertForPreTraining"), ("mpnet", "TFMPNetForMaskedLM"), ("openai-gpt", "TFOpenAIGPTLMHeadModel"), ("roberta", "TFRobertaForMaskedLM"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForMaskedLM"), ("t5", "TFT5ForConditionalGeneration"), ("tapas", "TFTapasForMaskedLM"), ("transfo-xl", "TFTransfoXLLMHeadModel"), ("vit_mae", "TFViTMAEForPreTraining"), ("xlm", "TFXLMWithLMHeadModel"), ("xlm-roberta", "TFXLMRobertaForMaskedLM"), ("xlnet", "TFXLNetLMHeadModel"), ] ) TF_MODEL_WITH_LM_HEAD_MAPPING_NAMES = OrderedDict( [ # Model with LM heads mapping ("albert", "TFAlbertForMaskedLM"), ("bart", "TFBartForConditionalGeneration"), ("bert", "TFBertForMaskedLM"), ("camembert", "TFCamembertForMaskedLM"), ("convbert", "TFConvBertForMaskedLM"), ("ctrl", "TFCTRLLMHeadModel"), ("distilbert", "TFDistilBertForMaskedLM"), ("electra", "TFElectraForMaskedLM"), ("esm", "TFEsmForMaskedLM"), ("flaubert", "TFFlaubertWithLMHeadModel"), ("funnel", "TFFunnelForMaskedLM"), ("gpt-sw3", "TFGPT2LMHeadModel"), ("gpt2", "TFGPT2LMHeadModel"), ("gptj", "TFGPTJForCausalLM"), ("layoutlm", "TFLayoutLMForMaskedLM"), ("led", "TFLEDForConditionalGeneration"), ("longformer", "TFLongformerForMaskedLM"), ("marian", "TFMarianMTModel"), ("mobilebert", "TFMobileBertForMaskedLM"), ("mpnet", "TFMPNetForMaskedLM"), ("openai-gpt", "TFOpenAIGPTLMHeadModel"), ("rembert", "TFRemBertForMaskedLM"), ("roberta", "TFRobertaForMaskedLM"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForMaskedLM"), ("roformer", "TFRoFormerForMaskedLM"), ("speech_to_text", "TFSpeech2TextForConditionalGeneration"), ("t5", "TFT5ForConditionalGeneration"), ("tapas", "TFTapasForMaskedLM"), ("transfo-xl", "TFTransfoXLLMHeadModel"), ("whisper", "TFWhisperForConditionalGeneration"), ("xlm", "TFXLMWithLMHeadModel"), ("xlm-roberta", "TFXLMRobertaForMaskedLM"), ("xlnet", "TFXLNetLMHeadModel"), ] ) TF_MODEL_FOR_CAUSAL_LM_MAPPING_NAMES = OrderedDict( [ # Model for Causal LM mapping ("bert", "TFBertLMHeadModel"), ("camembert", "TFCamembertForCausalLM"), ("ctrl", "TFCTRLLMHeadModel"), ("gpt-sw3", "TFGPT2LMHeadModel"), ("gpt2", "TFGPT2LMHeadModel"), ("gptj", "TFGPTJForCausalLM"), ("mistral", "TFMistralForCausalLM"), ("openai-gpt", "TFOpenAIGPTLMHeadModel"), ("opt", "TFOPTForCausalLM"), ("rembert", "TFRemBertForCausalLM"), ("roberta", "TFRobertaForCausalLM"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForCausalLM"), ("roformer", "TFRoFormerForCausalLM"), ("transfo-xl", "TFTransfoXLLMHeadModel"), ("xglm", "TFXGLMForCausalLM"), ("xlm", "TFXLMWithLMHeadModel"), ("xlm-roberta", "TFXLMRobertaForCausalLM"), ("xlnet", "TFXLNetLMHeadModel"), ] ) TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING_NAMES = OrderedDict( [ ("deit", "TFDeiTForMaskedImageModeling"), ("swin", "TFSwinForMaskedImageModeling"), ] ) TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES = OrderedDict( [ # Model for Image-classsification ("convnext", "TFConvNextForImageClassification"), ("convnextv2", "TFConvNextV2ForImageClassification"), ("cvt", "TFCvtForImageClassification"), ("data2vec-vision", "TFData2VecVisionForImageClassification"), ("deit", ("TFDeiTForImageClassification", "TFDeiTForImageClassificationWithTeacher")), ( "efficientformer", ("TFEfficientFormerForImageClassification", "TFEfficientFormerForImageClassificationWithTeacher"), ), ("mobilevit", "TFMobileViTForImageClassification"), 
("regnet", "TFRegNetForImageClassification"), ("resnet", "TFResNetForImageClassification"), ("segformer", "TFSegformerForImageClassification"), ("swiftformer", "TFSwiftFormerForImageClassification"), ("swin", "TFSwinForImageClassification"), ("vit", "TFViTForImageClassification"), ] ) TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES = OrderedDict( [ # Model for Zero Shot Image Classification mapping ("blip", "TFBlipModel"), ("clip", "TFCLIPModel"), ] ) TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING_NAMES = OrderedDict( [ # Model for Semantic Segmentation mapping ("data2vec-vision", "TFData2VecVisionForSemanticSegmentation"), ("mobilevit", "TFMobileViTForSemanticSegmentation"), ("segformer", "TFSegformerForSemanticSegmentation"), ] ) TF_MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES = OrderedDict( [ ("blip", "TFBlipForConditionalGeneration"), ("vision-encoder-decoder", "TFVisionEncoderDecoderModel"), ] ) TF_MODEL_FOR_MASKED_LM_MAPPING_NAMES = OrderedDict( [ # Model for Masked LM mapping ("albert", "TFAlbertForMaskedLM"), ("bert", "TFBertForMaskedLM"), ("camembert", "TFCamembertForMaskedLM"), ("convbert", "TFConvBertForMaskedLM"), ("deberta", "TFDebertaForMaskedLM"), ("deberta-v2", "TFDebertaV2ForMaskedLM"), ("distilbert", "TFDistilBertForMaskedLM"), ("electra", "TFElectraForMaskedLM"), ("esm", "TFEsmForMaskedLM"), ("flaubert", "TFFlaubertWithLMHeadModel"), ("funnel", "TFFunnelForMaskedLM"), ("layoutlm", "TFLayoutLMForMaskedLM"), ("longformer", "TFLongformerForMaskedLM"), ("mobilebert", "TFMobileBertForMaskedLM"), ("mpnet", "TFMPNetForMaskedLM"), ("rembert", "TFRemBertForMaskedLM"), ("roberta", "TFRobertaForMaskedLM"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForMaskedLM"), ("roformer", "TFRoFormerForMaskedLM"), ("tapas", "TFTapasForMaskedLM"), ("xlm", "TFXLMWithLMHeadModel"), ("xlm-roberta", "TFXLMRobertaForMaskedLM"), ] ) TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES = OrderedDict( [ # Model for Seq2Seq Causal LM mapping ("bart", "TFBartForConditionalGeneration"), ("blenderbot", "TFBlenderbotForConditionalGeneration"), ("blenderbot-small", "TFBlenderbotSmallForConditionalGeneration"), ("encoder-decoder", "TFEncoderDecoderModel"), ("led", "TFLEDForConditionalGeneration"), ("marian", "TFMarianMTModel"), ("mbart", "TFMBartForConditionalGeneration"), ("mt5", "TFMT5ForConditionalGeneration"), ("pegasus", "TFPegasusForConditionalGeneration"), ("t5", "TFT5ForConditionalGeneration"), ] ) TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES = OrderedDict( [ ("speech_to_text", "TFSpeech2TextForConditionalGeneration"), ("whisper", "TFWhisperForConditionalGeneration"), ] ) TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES = OrderedDict( [ # Model for Sequence Classification mapping ("albert", "TFAlbertForSequenceClassification"), ("bart", "TFBartForSequenceClassification"), ("bert", "TFBertForSequenceClassification"), ("camembert", "TFCamembertForSequenceClassification"), ("convbert", "TFConvBertForSequenceClassification"), ("ctrl", "TFCTRLForSequenceClassification"), ("deberta", "TFDebertaForSequenceClassification"), ("deberta-v2", "TFDebertaV2ForSequenceClassification"), ("distilbert", "TFDistilBertForSequenceClassification"), ("electra", "TFElectraForSequenceClassification"), ("esm", "TFEsmForSequenceClassification"), ("flaubert", "TFFlaubertForSequenceClassification"), ("funnel", "TFFunnelForSequenceClassification"), ("gpt-sw3", "TFGPT2ForSequenceClassification"), ("gpt2", "TFGPT2ForSequenceClassification"), ("gptj", "TFGPTJForSequenceClassification"), ("layoutlm", 
"TFLayoutLMForSequenceClassification"), ("layoutlmv3", "TFLayoutLMv3ForSequenceClassification"), ("longformer", "TFLongformerForSequenceClassification"), ("mistral", "TFMistralForSequenceClassification"), ("mobilebert", "TFMobileBertForSequenceClassification"), ("mpnet", "TFMPNetForSequenceClassification"), ("openai-gpt", "TFOpenAIGPTForSequenceClassification"), ("rembert", "TFRemBertForSequenceClassification"), ("roberta", "TFRobertaForSequenceClassification"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForSequenceClassification"), ("roformer", "TFRoFormerForSequenceClassification"), ("tapas", "TFTapasForSequenceClassification"), ("transfo-xl", "TFTransfoXLForSequenceClassification"), ("xlm", "TFXLMForSequenceClassification"), ("xlm-roberta", "TFXLMRobertaForSequenceClassification"), ("xlnet", "TFXLNetForSequenceClassification"), ] ) TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict( [ # Model for Question Answering mapping ("albert", "TFAlbertForQuestionAnswering"), ("bert", "TFBertForQuestionAnswering"), ("camembert", "TFCamembertForQuestionAnswering"), ("convbert", "TFConvBertForQuestionAnswering"), ("deberta", "TFDebertaForQuestionAnswering"), ("deberta-v2", "TFDebertaV2ForQuestionAnswering"), ("distilbert", "TFDistilBertForQuestionAnswering"), ("electra", "TFElectraForQuestionAnswering"), ("flaubert", "TFFlaubertForQuestionAnsweringSimple"), ("funnel", "TFFunnelForQuestionAnswering"), ("gptj", "TFGPTJForQuestionAnswering"), ("layoutlmv3", "TFLayoutLMv3ForQuestionAnswering"), ("longformer", "TFLongformerForQuestionAnswering"), ("mobilebert", "TFMobileBertForQuestionAnswering"), ("mpnet", "TFMPNetForQuestionAnswering"), ("rembert", "TFRemBertForQuestionAnswering"), ("roberta", "TFRobertaForQuestionAnswering"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForQuestionAnswering"), ("roformer", "TFRoFormerForQuestionAnswering"), ("xlm", "TFXLMForQuestionAnsweringSimple"), ("xlm-roberta", "TFXLMRobertaForQuestionAnswering"), ("xlnet", "TFXLNetForQuestionAnsweringSimple"), ] ) TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES = OrderedDict([("wav2vec2", "TFWav2Vec2ForSequenceClassification")]) TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict( [ ("layoutlm", "TFLayoutLMForQuestionAnswering"), ("layoutlmv3", "TFLayoutLMv3ForQuestionAnswering"), ] ) TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict( [ # Model for Table Question Answering mapping ("tapas", "TFTapasForQuestionAnswering"), ] ) TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES = OrderedDict( [ # Model for Token Classification mapping ("albert", "TFAlbertForTokenClassification"), ("bert", "TFBertForTokenClassification"), ("camembert", "TFCamembertForTokenClassification"), ("convbert", "TFConvBertForTokenClassification"), ("deberta", "TFDebertaForTokenClassification"), ("deberta-v2", "TFDebertaV2ForTokenClassification"), ("distilbert", "TFDistilBertForTokenClassification"), ("electra", "TFElectraForTokenClassification"), ("esm", "TFEsmForTokenClassification"), ("flaubert", "TFFlaubertForTokenClassification"), ("funnel", "TFFunnelForTokenClassification"), ("layoutlm", "TFLayoutLMForTokenClassification"), ("layoutlmv3", "TFLayoutLMv3ForTokenClassification"), ("longformer", "TFLongformerForTokenClassification"), ("mobilebert", "TFMobileBertForTokenClassification"), ("mpnet", "TFMPNetForTokenClassification"), ("rembert", "TFRemBertForTokenClassification"), ("roberta", "TFRobertaForTokenClassification"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForTokenClassification"), ("roformer", 
"TFRoFormerForTokenClassification"), ("xlm", "TFXLMForTokenClassification"), ("xlm-roberta", "TFXLMRobertaForTokenClassification"), ("xlnet", "TFXLNetForTokenClassification"), ] ) TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES = OrderedDict( [ # Model for Multiple Choice mapping ("albert", "TFAlbertForMultipleChoice"), ("bert", "TFBertForMultipleChoice"), ("camembert", "TFCamembertForMultipleChoice"), ("convbert", "TFConvBertForMultipleChoice"), ("deberta-v2", "TFDebertaV2ForMultipleChoice"), ("distilbert", "TFDistilBertForMultipleChoice"), ("electra", "TFElectraForMultipleChoice"), ("flaubert", "TFFlaubertForMultipleChoice"), ("funnel", "TFFunnelForMultipleChoice"), ("longformer", "TFLongformerForMultipleChoice"), ("mobilebert", "TFMobileBertForMultipleChoice"), ("mpnet", "TFMPNetForMultipleChoice"), ("rembert", "TFRemBertForMultipleChoice"), ("roberta", "TFRobertaForMultipleChoice"), ("roberta-prelayernorm", "TFRobertaPreLayerNormForMultipleChoice"), ("roformer", "TFRoFormerForMultipleChoice"), ("xlm", "TFXLMForMultipleChoice"), ("xlm-roberta", "TFXLMRobertaForMultipleChoice"), ("xlnet", "TFXLNetForMultipleChoice"), ] ) TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES = OrderedDict( [ ("bert", "TFBertForNextSentencePrediction"), ("mobilebert", "TFMobileBertForNextSentencePrediction"), ] ) TF_MODEL_FOR_MASK_GENERATION_MAPPING_NAMES = OrderedDict( [ ("sam", "TFSamModel"), ] ) TF_MODEL_FOR_TEXT_ENCODING_MAPPING_NAMES = OrderedDict( [ ("albert", "TFAlbertModel"), ("bert", "TFBertModel"), ("convbert", "TFConvBertModel"), ("deberta", "TFDebertaModel"), ("deberta-v2", "TFDebertaV2Model"), ("distilbert", "TFDistilBertModel"), ("electra", "TFElectraModel"), ("flaubert", "TFFlaubertModel"), ("longformer", "TFLongformerModel"), ("mobilebert", "TFMobileBertModel"), ("mt5", "TFMT5EncoderModel"), ("rembert", "TFRemBertModel"), ("roberta", "TFRobertaModel"), ("roberta-prelayernorm", "TFRobertaPreLayerNormModel"), ("roformer", "TFRoFormerModel"), ("t5", "TFT5EncoderModel"), ("xlm", "TFXLMModel"), ("xlm-roberta", "TFXLMRobertaModel"), ] ) TF_MODEL_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, TF_MODEL_MAPPING_NAMES) TF_MODEL_FOR_PRETRAINING_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, TF_MODEL_FOR_PRETRAINING_MAPPING_NAMES) TF_MODEL_WITH_LM_HEAD_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, TF_MODEL_WITH_LM_HEAD_MAPPING_NAMES) TF_MODEL_FOR_CAUSAL_LM_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, TF_MODEL_FOR_CAUSAL_LM_MAPPING_NAMES) TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING_NAMES ) TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES ) TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES ) TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING_NAMES ) TF_MODEL_FOR_VISION_2_SEQ_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, TF_MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES) TF_MODEL_FOR_MASKED_LM_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, TF_MODEL_FOR_MASKED_LM_MAPPING_NAMES) TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES ) TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES ) 
TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES ) TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES ) TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES ) TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES ) TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES ) TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES ) TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES ) TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES ) TF_MODEL_FOR_MASK_GENERATION_MAPPING = _LazyAutoMapping( CONFIG_MAPPING_NAMES, TF_MODEL_FOR_MASK_GENERATION_MAPPING_NAMES ) TF_MODEL_FOR_TEXT_ENCODING_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, TF_MODEL_FOR_TEXT_ENCODING_MAPPING_NAMES) class TFAutoModelForMaskGeneration(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_MASK_GENERATION_MAPPING class TFAutoModelForTextEncoding(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_TEXT_ENCODING_MAPPING class TFAutoModel(_BaseAutoModelClass): _model_mapping = TF_MODEL_MAPPING TFAutoModel = auto_class_update(TFAutoModel) class TFAutoModelForAudioClassification(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING TFAutoModelForAudioClassification = auto_class_update( TFAutoModelForAudioClassification, head_doc="audio classification" ) class TFAutoModelForPreTraining(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_PRETRAINING_MAPPING TFAutoModelForPreTraining = auto_class_update(TFAutoModelForPreTraining, head_doc="pretraining") # Private on purpose, the public class will add the deprecation warnings. 
class _TFAutoModelWithLMHead(_BaseAutoModelClass): _model_mapping = TF_MODEL_WITH_LM_HEAD_MAPPING _TFAutoModelWithLMHead = auto_class_update(_TFAutoModelWithLMHead, head_doc="language modeling") class TFAutoModelForCausalLM(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_CAUSAL_LM_MAPPING TFAutoModelForCausalLM = auto_class_update(TFAutoModelForCausalLM, head_doc="causal language modeling") class TFAutoModelForMaskedImageModeling(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING TFAutoModelForMaskedImageModeling = auto_class_update( TFAutoModelForMaskedImageModeling, head_doc="masked image modeling" ) class TFAutoModelForImageClassification(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING TFAutoModelForImageClassification = auto_class_update( TFAutoModelForImageClassification, head_doc="image classification" ) class TFAutoModelForZeroShotImageClassification(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING TFAutoModelForZeroShotImageClassification = auto_class_update( TFAutoModelForZeroShotImageClassification, head_doc="zero-shot image classification" ) class TFAutoModelForSemanticSegmentation(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING TFAutoModelForSemanticSegmentation = auto_class_update( TFAutoModelForSemanticSegmentation, head_doc="semantic segmentation" ) class TFAutoModelForVision2Seq(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_VISION_2_SEQ_MAPPING TFAutoModelForVision2Seq = auto_class_update(TFAutoModelForVision2Seq, head_doc="vision-to-text modeling") class TFAutoModelForMaskedLM(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_MASKED_LM_MAPPING TFAutoModelForMaskedLM = auto_class_update(TFAutoModelForMaskedLM, head_doc="masked language modeling") class TFAutoModelForSeq2SeqLM(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING TFAutoModelForSeq2SeqLM = auto_class_update( TFAutoModelForSeq2SeqLM, head_doc="sequence-to-sequence language modeling", checkpoint_for_example="google-t5/t5-base", ) class TFAutoModelForSequenceClassification(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING TFAutoModelForSequenceClassification = auto_class_update( TFAutoModelForSequenceClassification, head_doc="sequence classification" ) class TFAutoModelForQuestionAnswering(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING TFAutoModelForQuestionAnswering = auto_class_update(TFAutoModelForQuestionAnswering, head_doc="question answering") class TFAutoModelForDocumentQuestionAnswering(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING TFAutoModelForDocumentQuestionAnswering = auto_class_update( TFAutoModelForDocumentQuestionAnswering, head_doc="document question answering", checkpoint_for_example='impira/layoutlm-document-qa", revision="52e01b3', ) class TFAutoModelForTableQuestionAnswering(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING TFAutoModelForTableQuestionAnswering = auto_class_update( TFAutoModelForTableQuestionAnswering, head_doc="table question answering", checkpoint_for_example="google/tapas-base-finetuned-wtq", ) class TFAutoModelForTokenClassification(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING TFAutoModelForTokenClassification = auto_class_update( TFAutoModelForTokenClassification, head_doc="token classification" ) class 
TFAutoModelForMultipleChoice(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING TFAutoModelForMultipleChoice = auto_class_update(TFAutoModelForMultipleChoice, head_doc="multiple choice") class TFAutoModelForNextSentencePrediction(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING TFAutoModelForNextSentencePrediction = auto_class_update( TFAutoModelForNextSentencePrediction, head_doc="next sentence prediction" ) class TFAutoModelForSpeechSeq2Seq(_BaseAutoModelClass): _model_mapping = TF_MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING TFAutoModelForSpeechSeq2Seq = auto_class_update( TFAutoModelForSpeechSeq2Seq, head_doc="sequence-to-sequence speech-to-text modeling" ) class TFAutoModelWithLMHead(_TFAutoModelWithLMHead): @classmethod def from_config(cls, config): warnings.warn( "The class `TFAutoModelWithLMHead` is deprecated and will be removed in a future version. Please use" " `TFAutoModelForCausalLM` for causal language models, `TFAutoModelForMaskedLM` for masked language models" " and `TFAutoModelForSeq2SeqLM` for encoder-decoder models.", FutureWarning, ) return super().from_config(config) @classmethod def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs): warnings.warn( "The class `TFAutoModelWithLMHead` is deprecated and will be removed in a future version. Please use" " `TFAutoModelForCausalLM` for causal language models, `TFAutoModelForMaskedLM` for masked language models" " and `TFAutoModelForSeq2SeqLM` for encoder-decoder models.", FutureWarning, ) return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
transformers/src/transformers/models/auto/modeling_tf_auto.py/0
{ "file_path": "transformers/src/transformers/models/auto/modeling_tf_auto.py", "repo_id": "transformers", "token_count": 12310 }
330
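A brief usage sketch for the TF auto classes defined in the file above; it is not part of that file, and the checkpoint name is only an illustrative assumption (any TF-compatible BERT checkpoint behaves the same way).

```python
# Illustrative sketch only (not from modeling_tf_auto.py); the checkpoint name
# "bert-base-uncased" is an assumption for the example.
from transformers import AutoConfig, TFAutoModelForSequenceClassification

config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=3)

# TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING is a _LazyAutoMapping, so the
# BertConfig -> TFBertForSequenceClassification lookup (and the import of the
# TF BERT module) only happens at this call.
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", config=config)
print(type(model).__name__)  # TFBertForSequenceClassification
```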
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import TYPE_CHECKING from ...utils import ( OptionalDependencyNotAvailable, _LazyModule, is_flax_available, is_tensorflow_text_available, is_tf_available, is_tokenizers_available, is_torch_available, ) _import_structure = { "configuration_bert": ["BertConfig", "BertOnnxConfig"], "tokenization_bert": ["BasicTokenizer", "BertTokenizer", "WordpieceTokenizer"], } try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["tokenization_bert_fast"] = ["BertTokenizerFast"] try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_bert"] = [ "BertForMaskedLM", "BertForMultipleChoice", "BertForNextSentencePrediction", "BertForPreTraining", "BertForQuestionAnswering", "BertForSequenceClassification", "BertForTokenClassification", "BertLayer", "BertLMHeadModel", "BertModel", "BertPreTrainedModel", "load_tf_weights_in_bert", ] try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_tf_bert"] = [ "TFBertEmbeddings", "TFBertForMaskedLM", "TFBertForMultipleChoice", "TFBertForNextSentencePrediction", "TFBertForPreTraining", "TFBertForQuestionAnswering", "TFBertForSequenceClassification", "TFBertForTokenClassification", "TFBertLMHeadModel", "TFBertMainLayer", "TFBertModel", "TFBertPreTrainedModel", ] try: if not is_tensorflow_text_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["tokenization_bert_tf"] = ["TFBertTokenizer"] try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_flax_bert"] = [ "FlaxBertForCausalLM", "FlaxBertForMaskedLM", "FlaxBertForMultipleChoice", "FlaxBertForNextSentencePrediction", "FlaxBertForPreTraining", "FlaxBertForQuestionAnswering", "FlaxBertForSequenceClassification", "FlaxBertForTokenClassification", "FlaxBertModel", "FlaxBertPreTrainedModel", ] if TYPE_CHECKING: from .configuration_bert import BertConfig, BertOnnxConfig from .tokenization_bert import BasicTokenizer, BertTokenizer, WordpieceTokenizer try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .tokenization_bert_fast import BertTokenizerFast try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_bert import ( BertForMaskedLM, BertForMultipleChoice, BertForNextSentencePrediction, BertForPreTraining, BertForQuestionAnswering, BertForSequenceClassification, BertForTokenClassification, BertLayer, BertLMHeadModel, BertModel, BertPreTrainedModel, load_tf_weights_in_bert, ) try: if not is_tf_available(): raise 
OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_tf_bert import ( TFBertEmbeddings, TFBertForMaskedLM, TFBertForMultipleChoice, TFBertForNextSentencePrediction, TFBertForPreTraining, TFBertForQuestionAnswering, TFBertForSequenceClassification, TFBertForTokenClassification, TFBertLMHeadModel, TFBertMainLayer, TFBertModel, TFBertPreTrainedModel, ) try: if not is_tensorflow_text_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .tokenization_bert_tf import TFBertTokenizer try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_flax_bert import ( FlaxBertForCausalLM, FlaxBertForMaskedLM, FlaxBertForMultipleChoice, FlaxBertForNextSentencePrediction, FlaxBertForPreTraining, FlaxBertForQuestionAnswering, FlaxBertForSequenceClassification, FlaxBertForTokenClassification, FlaxBertModel, FlaxBertPreTrainedModel, ) else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
transformers/src/transformers/models/bert/__init__.py/0
{ "file_path": "transformers/src/transformers/models/bert/__init__.py", "repo_id": "transformers", "token_count": 2490 }
331
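The `_import_structure` / `_LazyModule` pattern in the `__init__.py` above can be hard to follow from the file alone; below is a deliberately simplified sketch of the same idea. It is not the real `transformers._LazyModule`, which handles many more cases (type checking, module specs, missing optional dependencies).

```python
# Simplified sketch of the lazy-import mechanism; illustration only.
import importlib


class SimpleLazyModule:
    """Resolve an attribute such as "BertModel" to its submodule only on first access."""

    def __init__(self, package_name, import_structure):
        self._package_name = package_name
        # invert {"modeling_bert": ["BertModel", ...]} into {"BertModel": "modeling_bert", ...}
        self._attr_to_module = {
            attr: module for module, attrs in import_structure.items() for attr in attrs
        }

    def __getattr__(self, name):
        if name not in self._attr_to_module:
            raise AttributeError(name)
        # the heavy import (torch, tensorflow, flax, ...) is deferred until this line runs
        module = importlib.import_module(f"{self._package_name}.{self._attr_to_module[name]}")
        return getattr(module, name)
```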
# coding=utf-8 # Copyright 2022 The HuggingFace Team and Microsoft Research AI4Science All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """BioGPT model configuration""" from ...configuration_utils import PretrainedConfig from ...utils import logging logger = logging.get_logger(__name__) class BioGptConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`BioGptModel`]. It is used to instantiate an BioGPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BioGPT [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 42384): Vocabulary size of the BioGPT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`BioGptModel`]. hidden_size (`int`, *optional*, defaults to 1024): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 4096): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. scale_embedding (`bool`, *optional*, defaults to `True`): Scale embeddings by diving by sqrt(d_model). use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. 
layerdrop (`float`, *optional*, defaults to 0.0): Please refer to the paper about LayerDrop: https://arxiv.org/abs/1909.11556 for further details activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. pad_token_id (`int`, *optional*, defaults to 1): Padding token id. bos_token_id (`int`, *optional*, defaults to 0): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 2): End of stream token id. Example: ```python >>> from transformers import BioGptModel, BioGptConfig >>> # Initializing a BioGPT microsoft/biogpt style configuration >>> configuration = BioGptConfig() >>> # Initializing a model from the microsoft/biogpt style configuration >>> model = BioGptModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "biogpt" def __init__( self, vocab_size=42384, hidden_size=1024, num_hidden_layers=24, num_attention_heads=16, intermediate_size=4096, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=1024, initializer_range=0.02, layer_norm_eps=1e-12, scale_embedding=True, use_cache=True, layerdrop=0.0, activation_dropout=0.0, pad_token_id=1, bos_token_id=0, eos_token_id=2, **kwargs, ): self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.initializer_range = initializer_range self.layer_norm_eps = layer_norm_eps self.scale_embedding = scale_embedding self.use_cache = use_cache self.layerdrop = layerdrop self.activation_dropout = activation_dropout super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
transformers/src/transformers/models/biogpt/configuration_biogpt.py/0
{ "file_path": "transformers/src/transformers/models/biogpt/configuration_biogpt.py", "repo_id": "transformers", "token_count": 2300 }
332
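As a quick complement to the docstring example in the configuration file above, a smaller-than-default configuration is often handy for smoke tests; the sizes below are illustrative assumptions, not values used by any released BioGPT checkpoint.

```python
# Illustrative only: a deliberately tiny BioGPT configuration for quick tests.
from transformers import BioGptConfig, BioGptForCausalLM

tiny_config = BioGptConfig(
    hidden_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=256,
)
model = BioGptForCausalLM(tiny_config)  # randomly initialised weights
print(sum(p.numel() for p in model.parameters()))
```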
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Processor class for Blip. """ from typing import List, Optional, Union from ...image_utils import ImageInput from ...processing_utils import ProcessorMixin from ...tokenization_utils_base import BatchEncoding, PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy from ...utils import TensorType class BlipProcessor(ProcessorMixin): r""" Constructs a BLIP processor which wraps a BERT tokenizer and BLIP image processor into a single processor. [`BlipProcessor`] offers all the functionalities of [`BlipImageProcessor`] and [`BertTokenizerFast`]. See the docstring of [`~BlipProcessor.__call__`] and [`~BlipProcessor.decode`] for more information. Args: image_processor (`BlipImageProcessor`): An instance of [`BlipImageProcessor`]. The image processor is a required input. tokenizer (`BertTokenizerFast`): An instance of ['BertTokenizerFast`]. The tokenizer is a required input. """ attributes = ["image_processor", "tokenizer"] valid_kwargs = [] image_processor_class = "BlipImageProcessor" tokenizer_class = ("BertTokenizer", "BertTokenizerFast") def __init__(self, image_processor, tokenizer, **kwargs): tokenizer.return_token_type_ids = False super().__init__(image_processor, tokenizer) self.current_processor = self.image_processor def __call__( self, images: ImageInput = None, text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_token_type_ids: bool = False, return_length: bool = False, verbose: bool = True, return_tensors: Optional[Union[str, TensorType]] = None, **kwargs, ) -> BatchEncoding: """ This method uses [`BlipImageProcessor.__call__`] method to prepare image(s) for the model, and [`BertTokenizerFast.__call__`] to prepare text for the model. Please refer to the docstring of the above two methods for more information. 
""" if images is None and text is None: raise ValueError("You have to specify either images or text.") # Get only text if images is None: self.current_processor = self.tokenizer text_encoding = self.tokenizer( text=text, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_token_type_ids=return_token_type_ids, return_length=return_length, verbose=verbose, return_tensors=return_tensors, **kwargs, ) return text_encoding # add pixel_values encoding_image_processor = self.image_processor(images, return_tensors=return_tensors) if text is not None: text_encoding = self.tokenizer( text=text, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_token_type_ids=return_token_type_ids, return_length=return_length, verbose=verbose, return_tensors=return_tensors, **kwargs, ) else: text_encoding = None if text_encoding is not None: encoding_image_processor.update(text_encoding) return encoding_image_processor def batch_decode(self, *args, **kwargs): """ This method forwards all its arguments to BertTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please refer to the docstring of this method for more information. """ return self.tokenizer.batch_decode(*args, **kwargs) def decode(self, *args, **kwargs): """ This method forwards all its arguments to BertTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to the docstring of this method for more information. """ return self.tokenizer.decode(*args, **kwargs) @property def model_input_names(self): tokenizer_input_names = self.tokenizer.model_input_names image_processor_input_names = self.image_processor.model_input_names return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
transformers/src/transformers/models/blip/processing_blip.py/0
{ "file_path": "transformers/src/transformers/models/blip/processing_blip.py", "repo_id": "transformers", "token_count": 2629 }
333
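A hedged end-to-end sketch of how the processor above is typically called; the checkpoint name and the image URL are assumptions for illustration and do not come from `processing_blip.py`.

```python
# Sketch only; "Salesforce/blip-image-captioning-base" and the COCO image URL
# are illustrative assumptions.
import requests
from PIL import Image
from transformers import BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)

# images go through BlipImageProcessor, text through the BERT tokenizer, and the
# text encoding is merged into the image encoding, as __call__ above shows.
inputs = processor(images=image, text="a photo of", return_tensors="pt")
print(sorted(inputs.keys()))  # ['attention_mask', 'input_ids', 'pixel_values']
```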
# coding=utf-8 # Copyright Google AI and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """CANINE model configuration""" from ...configuration_utils import PretrainedConfig from ...utils import logging logger = logging.get_logger(__name__) class CanineConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`CanineModel`]. It is used to instantiate an CANINE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CANINE [google/canine-s](https://huggingface.co/google/canine-s) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the deep Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoders. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoders. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoders, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 16384): The maximum sequence length that this model might ever be used with. type_vocab_size (`int`, *optional*, defaults to 16): The vocabulary size of the `token_type_ids` passed when calling [`CanineModel`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. bos_token_id (`int`, *optional*, defaults to 57344): Beginning of stream token id. eos_token_id (`int`, *optional*, defaults to 57345): End of stream token id. downsampling_rate (`int`, *optional*, defaults to 4): The rate at which to downsample the original character sequence length before applying the deep Transformer encoder. upsampling_kernel_size (`int`, *optional*, defaults to 4): The kernel size (i.e. the number of characters in each window) of the convolutional projection layer when projecting back from `hidden_size`*2 to `hidden_size`. 
num_hash_functions (`int`, *optional*, defaults to 8): The number of hash functions to use. Each hash function has its own embedding matrix. num_hash_buckets (`int`, *optional*, defaults to 16384): The number of hash buckets to use. local_transformer_stride (`int`, *optional*, defaults to 128): The stride of the local attention of the first shallow Transformer encoder. Defaults to 128 for good TPU/XLA memory alignment. Example: ```python >>> from transformers import CanineConfig, CanineModel >>> # Initializing a CANINE google/canine-s style configuration >>> configuration = CanineConfig() >>> # Initializing a model (with random weights) from the google/canine-s style configuration >>> model = CanineModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "canine" def __init__( self, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=16384, type_vocab_size=16, initializer_range=0.02, layer_norm_eps=1e-12, pad_token_id=0, bos_token_id=0xE000, eos_token_id=0xE001, downsampling_rate=4, upsampling_kernel_size=4, num_hash_functions=8, num_hash_buckets=16384, local_transformer_stride=128, # Good TPU/XLA memory alignment. **kwargs, ): super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs) self.max_position_embeddings = max_position_embeddings self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.initializer_range = initializer_range self.type_vocab_size = type_vocab_size self.layer_norm_eps = layer_norm_eps # Character config: self.downsampling_rate = downsampling_rate self.upsampling_kernel_size = upsampling_kernel_size self.num_hash_functions = num_hash_functions self.num_hash_buckets = num_hash_buckets self.local_transformer_stride = local_transformer_stride
transformers/src/transformers/models/canine/configuration_canine.py/0
{ "file_path": "transformers/src/transformers/models/canine/configuration_canine.py", "repo_id": "transformers", "token_count": 2446 }
334
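A small numerical illustration of the `downsampling_rate` parameter documented in the configuration above; the character sequence length is an arbitrary example value.

```python
# Illustrative only: relation between the character-level sequence and the
# shorter sequence seen by the deep encoder, controlled by downsampling_rate.
from transformers import CanineConfig

config = CanineConfig()
char_positions = 2048  # arbitrary example length (CANINE operates on characters)
deep_positions = char_positions // config.downsampling_rate
print(config.downsampling_rate, deep_positions)  # 4 512
```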
# coding=utf-8 # Copyright 2021 The Open AI Team Authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tokenization classes for CLIP.""" import json import os import unicodedata from functools import lru_cache from typing import List, Optional, Tuple import regex as re from ...tokenization_utils import AddedToken, PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace from ...utils import logging logger = logging.get_logger(__name__) VOCAB_FILES_NAMES = { "vocab_file": "vocab.json", "merges_file": "merges.txt", } @lru_cache() def bytes_to_unicode(): """ Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control characters the bpe code barfs on. The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings. """ bs = ( list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1)) ) cs = bs[:] n = 0 for b in range(2**8): if b not in bs: bs.append(b) cs.append(2**8 + n) n += 1 cs = [chr(n) for n in cs] return dict(zip(bs, cs)) def get_pairs(word): """ Return set of symbol pairs in a word. Word is represented as tuple of symbols (symbols being variable-length strings). """ pairs = set() prev_char = word[0] for char in word[1:]: pairs.add((prev_char, char)) prev_char = char return pairs def whitespace_clean(text): text = re.sub(r"\s+", " ", text) text = text.strip() return text # Copied from transformers.models.bert.tokenization_bert.whitespace_tokenize def whitespace_tokenize(text): """Runs basic whitespace cleaning and splitting on a piece of text.""" text = text.strip() if not text: return [] tokens = text.split() return tokens # Copied from transformers.models.bert.tokenization_bert.BasicTokenizer class BasicTokenizer: """ Constructs a BasicTokenizer that will run basic tokenization (punctuation splitting, lower casing, etc.). Args: do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True` tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). 
do_split_on_punc (`bool`, *optional*, defaults to `True`): In some instances we want to skip the basic punctuation splitting so that later tokenization can capture the full context of the words, such as contractions. """ def __init__( self, do_lower_case=True, never_split=None, tokenize_chinese_chars=True, strip_accents=None, do_split_on_punc=True, ): if never_split is None: never_split = [] self.do_lower_case = do_lower_case self.never_split = set(never_split) self.tokenize_chinese_chars = tokenize_chinese_chars self.strip_accents = strip_accents self.do_split_on_punc = do_split_on_punc def tokenize(self, text, never_split=None): """ Basic Tokenization of a piece of text. For sub-word tokenization, see WordPieceTokenizer. Args: never_split (`List[str]`, *optional*) Kept for backward compatibility purposes. Now implemented directly at the base class level (see [`PreTrainedTokenizer.tokenize`]) List of token not to split. """ # union() returns a new set by concatenating the two sets. never_split = self.never_split.union(set(never_split)) if never_split else self.never_split text = self._clean_text(text) # This was added on November 1st, 2018 for the multilingual and Chinese # models. This is also applied to the English models now, but it doesn't # matter since the English models were not trained on any Chinese data # and generally don't have any Chinese data in them (there are Chinese # characters in the vocabulary because Wikipedia does have some Chinese # words in the English Wikipedia.). if self.tokenize_chinese_chars: text = self._tokenize_chinese_chars(text) # prevents treating the same character with different unicode codepoints as different characters unicode_normalized_text = unicodedata.normalize("NFC", text) orig_tokens = whitespace_tokenize(unicode_normalized_text) split_tokens = [] for token in orig_tokens: if token not in never_split: if self.do_lower_case: token = token.lower() if self.strip_accents is not False: token = self._run_strip_accents(token) elif self.strip_accents: token = self._run_strip_accents(token) split_tokens.extend(self._run_split_on_punc(token, never_split)) output_tokens = whitespace_tokenize(" ".join(split_tokens)) return output_tokens def _run_strip_accents(self, text): """Strips accents from a piece of text.""" text = unicodedata.normalize("NFD", text) output = [] for char in text: cat = unicodedata.category(char) if cat == "Mn": continue output.append(char) return "".join(output) def _run_split_on_punc(self, text, never_split=None): """Splits punctuation on a piece of text.""" if not self.do_split_on_punc or (never_split is not None and text in never_split): return [text] chars = list(text) i = 0 start_new_word = True output = [] while i < len(chars): char = chars[i] if _is_punctuation(char): output.append([char]) start_new_word = True else: if start_new_word: output.append([]) start_new_word = False output[-1].append(char) i += 1 return ["".join(x) for x in output] def _tokenize_chinese_chars(self, text): """Adds whitespace around any CJK character.""" output = [] for char in text: cp = ord(char) if self._is_chinese_char(cp): output.append(" ") output.append(char) output.append(" ") else: output.append(char) return "".join(output) def _is_chinese_char(self, cp): """Checks whether CP is the codepoint of a CJK character.""" # This defines a "chinese character" as anything in the CJK Unicode block: # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) # # Note that the CJK Unicode block is NOT all Japanese and Korean characters, # 
despite its name. The modern Korean Hangul alphabet is a different block, # as is Japanese Hiragana and Katakana. Those alphabets are used to write # space-separated words, so they are not treated specially and handled # like the all of the other languages. if ( (cp >= 0x4E00 and cp <= 0x9FFF) or (cp >= 0x3400 and cp <= 0x4DBF) # or (cp >= 0x20000 and cp <= 0x2A6DF) # or (cp >= 0x2A700 and cp <= 0x2B73F) # or (cp >= 0x2B740 and cp <= 0x2B81F) # or (cp >= 0x2B820 and cp <= 0x2CEAF) # or (cp >= 0xF900 and cp <= 0xFAFF) or (cp >= 0x2F800 and cp <= 0x2FA1F) # ): # return True return False def _clean_text(self, text): """Performs invalid character removal and whitespace cleanup on text.""" output = [] for char in text: cp = ord(char) if cp == 0 or cp == 0xFFFD or _is_control(char): continue if _is_whitespace(char): output.append(" ") else: output.append(char) return "".join(output) class CLIPTokenizer(PreTrainedTokenizer): """ Construct a CLIP tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`): The beginning of sequence token. eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The token used for padding, for example when batching sequences of different lengths. 
""" vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] def __init__( self, vocab_file, merges_file, errors="replace", unk_token="<|endoftext|>", bos_token="<|startoftext|>", eos_token="<|endoftext|>", pad_token="<|endoftext|>", # hack to enable padding **kwargs, ): bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token try: import ftfy self.fix_text = ftfy.fix_text except ImportError: logger.info("ftfy or spacy is not installed using custom BasicTokenizer instead of ftfy.") self.nlp = BasicTokenizer(strip_accents=False, do_split_on_punc=False) self.fix_text = None with open(vocab_file, encoding="utf-8") as vocab_handle: self.encoder = json.load(vocab_handle) self.decoder = {v: k for k, v in self.encoder.items()} self.errors = errors # how to handle errors in decoding self.byte_encoder = bytes_to_unicode() self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} with open(merges_file, encoding="utf-8") as merges_handle: bpe_merges = merges_handle.read().strip().split("\n")[1 : 49152 - 256 - 2 + 1] bpe_merges = [tuple(merge.split()) for merge in bpe_merges] self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) self.cache = {"<|startoftext|>": "<|startoftext|>", "<|endoftext|>": "<|endoftext|>"} self.pat = re.compile( r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE, ) super().__init__( errors=errors, unk_token=unk_token, bos_token=bos_token, eos_token=eos_token, pad_token=pad_token, **kwargs, ) @property def vocab_size(self): return len(self.encoder) def get_vocab(self): return dict(self.encoder, **self.added_tokens_encoder) def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A CLIP sequence has the following format: - single sequence: `<|startoftext|> X <|endoftext|>` Pairs of sequences are not the expected use case, but they will be handled without a separator. Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. """ bos_token = [self.bos_token_id] eos_token = [self.eos_token_id] if token_ids_1 is None: return bos_token + token_ids_0 + eos_token return bos_token + token_ids_0 + eos_token + eos_token + token_ids_1 + eos_token def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. 
Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) if token_ids_1 is None: return [1] + ([0] * len(token_ids_0)) + [1] return [1] + ([0] * len(token_ids_0)) + [1] + [1] + ([0] * len(token_ids_1)) + [1] def create_token_type_ids_from_sequences( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Create a mask from the two sequences passed. CLIP does not make use of token type ids, therefore a list of zeros is returned. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros. """ bos_token = [self.bos_token_id] eos_token = [self.eos_token_id] if token_ids_1 is None: return len(bos_token + token_ids_0 + eos_token) * [0] return len(bos_token + token_ids_0 + eos_token + eos_token + token_ids_1 + eos_token) * [0] def bpe(self, token): if token in self.cache: return self.cache[token] word = tuple(token[:-1]) + (token[-1] + "</w>",) pairs = get_pairs(word) if not pairs: return token + "</w>" while True: bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) if bigram not in self.bpe_ranks: break first, second = bigram new_word = [] i = 0 while i < len(word): try: j = word.index(first, i) except ValueError: new_word.extend(word[i:]) break else: new_word.extend(word[i:j]) i = j if word[i] == first and i < len(word) - 1 and word[i + 1] == second: new_word.append(first + second) i += 2 else: new_word.append(word[i]) i += 1 new_word = tuple(new_word) word = new_word if len(word) == 1: break else: pairs = get_pairs(word) word = " ".join(word) self.cache[token] = word return word def _tokenize(self, text): """Tokenize a string.""" bpe_tokens = [] if self.fix_text is None: text = " ".join(self.nlp.tokenize(text)) else: text = whitespace_clean(self.fix_text(text)).lower() for token in re.findall(self.pat, text): token = "".join( self.byte_encoder[b] for b in token.encode("utf-8") ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case) bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" ")) return bpe_tokens def _convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" return self.encoder.get(token, self.encoder.get(self.unk_token)) def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" return self.decoder.get(index) def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" text = "".join(tokens) byte_array = bytearray([self.byte_decoder[c] for c in text]) text = byte_array.decode("utf-8", errors=self.errors).replace("</w>", " ").strip() return text def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: if not os.path.isdir(save_directory): logger.error("Vocabulary path ({}) should be a directory".format(save_directory)) return vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] ) merge_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] ) with open(vocab_file, "w", encoding="utf-8") as f: f.write(json.dumps(self.encoder, indent=2, 
sort_keys=True, ensure_ascii=False) + "\n") index = 0 with open(merge_file, "w", encoding="utf-8") as writer: writer.write("#version: 0.2\n") for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): if index != token_index: logger.warning( "Saving vocabulary to {}: BPE merge indices are not consecutive." " Please check that the tokenizer is not corrupted!".format(merge_file) ) index = token_index writer.write(" ".join(bpe_tokens) + "\n") index += 1 return vocab_file, merge_file
transformers/src/transformers/models/clip/tokenization_clip.py/0
{ "file_path": "transformers/src/transformers/models/clip/tokenization_clip.py", "repo_id": "transformers", "token_count": 9211 }
335
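A short usage sketch for the tokenizer above; the checkpoint name is an assumption, and the printed tokens are indicative of the `</w>` end-of-word convention rather than guaranteed output.

```python
# Sketch only; "openai/clip-vit-base-patch32" is an assumed checkpoint name.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
encoding = tokenizer("a photo of a cat")

# build_inputs_with_special_tokens wraps the sequence in <|startoftext|> ... <|endoftext|>,
# and the byte-level BPE marks word boundaries with "</w>".
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# e.g. ['<|startoftext|>', 'a</w>', 'photo</w>', 'of</w>', 'a</w>', 'cat</w>', '<|endoftext|>']
```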
# coding=utf-8 # Copyright 2023 MetaAI and the HuggingFace Inc. team. All rights reserved. # # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tokenization classes for Code LLaMA.""" import os from shutil import copyfile from typing import Any, Dict, List, Optional, Tuple import sentencepiece as spm from ...convert_slow_tokenizer import import_protobuf from ...tokenization_utils import AddedToken, PreTrainedTokenizer from ...utils import logging, requires_backends logger = logging.get_logger(__name__) VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"} SPIECE_UNDERLINE = "▁" B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" # fmt: off DEFAULT_SYSTEM_PROMPT = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your \ answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure\ that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not \ correct. If you don't know the answer to a question, please don't share false information.""" # fmt: on class CodeLlamaTokenizer(PreTrainedTokenizer): """ Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is no padding token in the original model. The default configuration match that of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json) which supports prompt infilling. Args: vocab_file (`str`): Path to the vocabulary file. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> prefix_token (`str`, *optional*, defaults to `"▁<PRE>"`): Prefix token used for infilling. middle_token (`str`, *optional*, defaults to `"▁<MID>"`): Middle token used for infilling. suffix_token (`str`, *optional*, defaults to `"▁<SUF>"`): Suffix token used for infilling. eot_token (`str`, *optional*, defaults to `"▁<EOT>"`): End of text token used for infilling. fill_token (`str`, *optional*, defaults to `"<FILL_ME>"`): The token used to split the input between the prefix and suffix. suffix_first (`bool`, *optional*, defaults to `False`): Whether the input prompt and suffix should be formatted with the suffix first. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. 
The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. add_bos_token (`bool`, *optional*, defaults to `True`): Whether to add a beginning of sequence token at the start of sequences. add_eos_token (`bool`, *optional*, defaults to `False`): Whether to add an end of sequence token at the end of sequences. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`): Whether or not to clean up the tokenization spaces. additional_special_tokens (`List[str]`, *optional*): Additional special tokens used by the tokenizer. use_default_system_prompt (`bool`, *optional*, defaults to `False`): Whether or not the default system prompt for Llama should be used. """ vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] def __init__( self, vocab_file, unk_token="<unk>", bos_token="<s>", eos_token="</s>", prefix_token="▁<PRE>", middle_token="▁<MID>", suffix_token="▁<SUF>", eot_token="▁<EOT>", fill_token="<FILL_ME>", suffix_first=False, sp_model_kwargs: Optional[Dict[str, Any]] = None, add_bos_token=True, add_eos_token=False, clean_up_tokenization_spaces=False, additional_special_tokens=None, use_default_system_prompt=False, **kwargs, ): requires_backends(self, "protobuf") self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs bos_token = AddedToken(bos_token, normalized=False, special=True) if isinstance(bos_token, str) else bos_token eos_token = AddedToken(eos_token, normalized=False, special=True) if isinstance(eos_token, str) else eos_token unk_token = AddedToken(unk_token, normalized=False, special=True) if isinstance(unk_token, str) else unk_token self.use_default_system_prompt = use_default_system_prompt # mark tokens special to skip them additional_special_tokens = additional_special_tokens or [] for token in [prefix_token, middle_token, suffix_token, eot_token]: additional_special_tokens += [token] if token is not None else [] self.vocab_file = vocab_file self.add_bos_token = add_bos_token self.add_eos_token = add_eos_token self._prefix_token = prefix_token self._middle_token = middle_token self._suffix_token = suffix_token self._eot_token = eot_token self.fill_token = fill_token self.suffix_first = suffix_first self.sp_model = self.get_spm_processor() super().__init__( bos_token=bos_token, eos_token=eos_token, unk_token=unk_token, add_bos_token=add_bos_token, add_eos_token=add_eos_token, prefix_token=prefix_token, middle_token=middle_token, suffix_token=suffix_token, eot_token=eot_token, fill_token=fill_token, sp_model_kwargs=self.sp_model_kwargs, suffix_first=suffix_first, clean_up_tokenization_spaces=clean_up_tokenization_spaces, additional_special_tokens=additional_special_tokens, use_default_system_prompt=use_default_system_prompt, **kwargs, ) @property def unk_token_length(self): return len(self.sp_model.encode(str(self.unk_token))) def get_spm_processor(self): tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs) with 
open(self.vocab_file, "rb") as f: sp_model = f.read() model_pb2 = import_protobuf() model = model_pb2.ModelProto.FromString(sp_model) normalizer_spec = model_pb2.NormalizerSpec() normalizer_spec.add_dummy_prefix = False model.normalizer_spec.MergeFrom(normalizer_spec) sp_model = model.SerializeToString() tokenizer.LoadFromSerializedProto(sp_model) return tokenizer @property def prefix_token(self): return self._prefix_token @property def prefix_id(self): if self._prefix_token is None: return None return self.convert_tokens_to_ids(self.prefix_token) @property def middle_token(self): return self._middle_token @property def middle_id(self): if self._middle_token is None: return None return self.convert_tokens_to_ids(self.middle_token) @property def suffix_token(self): return self._suffix_token @property def suffix_id(self): if self._suffix_token is None: return None return self.convert_tokens_to_ids(self.suffix_token) @property def eot_token(self): return self._eot_token @property def eot_id(self): if self._eot_token is None: return None return self.convert_tokens_to_ids(self.eot_token) @property def vocab_size(self): """Returns vocab size""" return self.sp_model.get_piece_size() # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.get_vocab def get_vocab(self): """Returns vocab as a dict""" vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} vocab.update(self.added_tokens_encoder) return vocab def tokenize(self, prefix, suffix=None, suffix_first=False, **kwargs) -> List[int]: # add a prefix space to `prefix` if self.fill_token is not None and self.fill_token in prefix and suffix is None: prefix, suffix = prefix.split(self.fill_token) if len(prefix) > 0: prefix = SPIECE_UNDERLINE + prefix.replace(SPIECE_UNDERLINE, " ") if suffix is None or len(suffix) < 1: tokens = super().tokenize(prefix, **kwargs) if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE and tokens[1] in self.all_special_tokens: tokens = tokens[1:] return tokens prefix_tokens = self._tokenize(prefix) # prefix has an extra `SPIECE_UNDERLINE` if None in (self.prefix_id, self.middle_id, self.suffix_id): raise ValueError( "The input either includes a `prefix` and a `suffix` used for the infilling task," f" or can be split on the {self.fill_token} token, creating a suffix and prefix," " but the model does not support `infilling`." ) suffix_tokens = self._tokenize(suffix) # make sure CodeLlama sp model does not mess up suffix_first = suffix_first if suffix_first is not None else self.suffix_first if suffix_first: # format as " <PRE> <SUF>{suf} <MID> {pre}" return [self.prefix_token, self.suffix_token] + suffix_tokens + [self.middle_token] + prefix_tokens else: # format as " <PRE> {pre} <SUF>{suf} <MID>" return [self.prefix_token] + prefix_tokens + [self.suffix_token] + suffix_tokens + [self.middle_token] def _tokenize(self, text, **kwargs): """ Returns a tokenized string. We de-activated the `add_dummy_prefix` option, thus the sentencepiece internals will always strip any SPIECE_UNDERLINE. For example: `self.sp_model.encode(f"{SPIECE_UNDERLINE}Hey", out_type = str)` will give `['H', 'e', 'y']` instead of `['▁He', 'y']`. Thus we always encode `f"{unk_token}text"` and strip the `unk_token`. Here is an example with `unk_token = "<unk>"` and `unk_token_length = 4`. `self.tokenizer.sp_model.encode("<unk> Hey", out_type = str)[4:]`. """ tokens = self.sp_model.encode(text, out_type=str) if not text.startswith((SPIECE_UNDERLINE, " ")): return tokens # 1. 
Encode string + prefix ex: "<unk> Hey" tokens = self.sp_model.encode(self.unk_token + text, out_type=str) # 2. Remove self.unk_token from ['<','unk','>', '▁Hey'] return tokens[self.unk_token_length :] if len(tokens) >= self.unk_token_length else tokens # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer._convert_token_to_id def _convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" return self.sp_model.piece_to_id(token) # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer._convert_id_to_token def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" token = self.sp_model.IdToPiece(index) return token def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" # since we manually add the prefix space, we have to remove it when decoding if tokens[0].startswith(SPIECE_UNDERLINE): tokens[0] = tokens[0][1:] current_sub_tokens = [] out_string = "" for _, token in enumerate(tokens): # make sure that special tokens are not decoded using sentencepiece model if token in self.all_special_tokens: out_string += self.sp_model.decode(current_sub_tokens) + token current_sub_tokens = [] else: current_sub_tokens.append(token) out_string += self.sp_model.decode(current_sub_tokens) return out_string # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.save_vocabulary def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]: """ Save the vocabulary and special tokens file to a directory. Args: save_directory (`str`): The directory in which to save the vocabulary. Returns: `Tuple(str)`: Paths to the files saved. """ if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return out_vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] ) if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file): copyfile(self.vocab_file, out_vocab_file) elif not os.path.isfile(self.vocab_file): with open(out_vocab_file, "wb") as fi: content_spiece_model = self.sp_model.serialized_model_proto() fi.write(content_spiece_model) return (out_vocab_file,) # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.build_inputs_with_special_tokens def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): bos_token_id = [self.bos_token_id] if self.add_bos_token else [] eos_token_id = [self.eos_token_id] if self.add_eos_token else [] output = bos_token_id + token_ids_0 + eos_token_id if token_ids_1 is not None: output = output + bos_token_id + token_ids_1 + eos_token_id return output # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.get_special_tokens_mask def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. 
Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) bos_token_id = [1] if self.add_bos_token else [] eos_token_id = [1] if self.add_eos_token else [] if token_ids_1 is None: return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id return ( bos_token_id + ([0] * len(token_ids_0)) + eos_token_id + bos_token_id + ([0] * len(token_ids_1)) + eos_token_id ) # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.create_token_type_ids_from_sequences def create_token_type_ids_from_sequences( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT sequence pair mask has the following format: ``` 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | ``` if token_ids_1 is None, only returns the first portion of the mask (0s). Args: token_ids_0 (`List[int]`): List of ids. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). """ bos_token_id = [self.bos_token_id] if self.add_bos_token else [] eos_token_id = [self.eos_token_id] if self.add_eos_token else [] output = [0] * len(bos_token_id + token_ids_0 + eos_token_id) if token_ids_1 is not None: output += [1] * len(bos_token_id + token_ids_1 + eos_token_id) return output def __getstate__(self): state = self.__dict__.copy() state["sp_model"] = None state["sp_model_proto"] = self.sp_model.serialized_model_proto() return state def __setstate__(self, d): self.__dict__ = d self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.LoadFromSerializedProto(self.sp_model_proto)
transformers/src/transformers/models/code_llama/tokenization_code_llama.py/0
{ "file_path": "transformers/src/transformers/models/code_llama/tokenization_code_llama.py", "repo_id": "transformers", "token_count": 8236 }
336
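A hedged usage sketch of the infilling path implemented by `tokenize` in the CodeLlama tokenizer above, assuming the sentencepiece backend is installed; the `codellama/CodeLlama-7b-hf` checkpoint id is an illustrative assumption, and `<FILL_ME>` is the tokenizer's default `fill_token`, which gets split into the prefix/suffix pair handled by the code.

```python
# Hedged sketch of the infilling tokenization shown above; the checkpoint id
# is an assumption used for illustration only.
from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# A prompt containing the fill token is split on it into (prefix, suffix) and
# encoded as " <PRE> {prefix} <SUF>{suffix} <MID>" (or suffix-first, if set).
prompt = "def remove_non_ascii(s: str) -> str:\n    <FILL_ME>\n    return result"
tokens = tokenizer.tokenize(prompt)
print(tokens[:3])  # begins with the <PRE> special token
```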
#!/usr/bin/env python3 import argparse import json import torch from huggingface_hub import hf_hub_download from PIL import Image from timm.models import create_model from transformers import ( BeitImageProcessor, Data2VecVisionConfig, Data2VecVisionForImageClassification, Data2VecVisionModel, ) def create_rename_keys(config, has_lm_head=False, is_semantic=False, hf_prefix="data2vec."): prefix = "backbone." if is_semantic else "" rename_keys = [] for i in range(config.num_hidden_layers): # encoder layers: output projection, 2 feedforward neural networks and 2 layernorms rename_keys.append( (f"{prefix}blocks.{i}.norm1.weight", f"{hf_prefix}encoder.layer.{i}.layernorm_before.weight") ) rename_keys.append((f"{prefix}blocks.{i}.norm1.bias", f"{hf_prefix}encoder.layer.{i}.layernorm_before.bias")) rename_keys.append( (f"{prefix}blocks.{i}.attn.proj.weight", f"{hf_prefix}encoder.layer.{i}.attention.output.dense.weight") ) rename_keys.append( (f"{prefix}blocks.{i}.attn.proj.bias", f"{hf_prefix}encoder.layer.{i}.attention.output.dense.bias") ) rename_keys.append( (f"{prefix}blocks.{i}.norm2.weight", f"{hf_prefix}encoder.layer.{i}.layernorm_after.weight") ) rename_keys.append((f"{prefix}blocks.{i}.norm2.bias", f"{hf_prefix}encoder.layer.{i}.layernorm_after.bias")) rename_keys.append( (f"{prefix}blocks.{i}.mlp.fc1.weight", f"{hf_prefix}encoder.layer.{i}.intermediate.dense.weight") ) rename_keys.append( (f"{prefix}blocks.{i}.mlp.fc1.bias", f"{hf_prefix}encoder.layer.{i}.intermediate.dense.bias") ) rename_keys.append((f"{prefix}blocks.{i}.mlp.fc2.weight", f"{hf_prefix}encoder.layer.{i}.output.dense.weight")) rename_keys.append((f"{prefix}blocks.{i}.mlp.fc2.bias", f"{hf_prefix}encoder.layer.{i}.output.dense.bias")) # projection layer + position embeddings rename_keys.extend( [ (f"{prefix}cls_token", f"{hf_prefix}embeddings.cls_token"), (f"{prefix}patch_embed.proj.weight", f"{hf_prefix}embeddings.patch_embeddings.projection.weight"), (f"{prefix}patch_embed.proj.bias", f"{hf_prefix}embeddings.patch_embeddings.projection.bias"), ] ) if has_lm_head: # mask token + shared relative position bias + layernorm rename_keys.extend( [ ("mask_token", f"{hf_prefix}embeddings.mask_token"), ( "rel_pos_bias.relative_position_bias_table", f"{hf_prefix}encoder.relative_position_bias.relative_position_bias_table", ), ( "rel_pos_bias.relative_position_index", f"{hf_prefix}encoder.relative_position_bias.relative_position_index", ), ("norm.weight", "layernorm.weight"), ("norm.bias", "layernorm.bias"), ] ) elif is_semantic: # semantic segmentation classification heads rename_keys.extend( [ ("decode_head.conv_seg.weight", "decode_head.classifier.weight"), ("decode_head.conv_seg.bias", "decode_head.classifier.bias"), ("auxiliary_head.conv_seg.weight", "auxiliary_head.classifier.weight"), ("auxiliary_head.conv_seg.bias", "auxiliary_head.classifier.bias"), ] ) else: # layernorm + classification head rename_keys.extend( [ ("fc_norm.weight", f"{hf_prefix}pooler.layernorm.weight"), ("fc_norm.bias", f"{hf_prefix}pooler.layernorm.bias"), ("head.weight", "classifier.weight"), ("head.bias", "classifier.bias"), ] ) return rename_keys def read_in_q_k_v(state_dict, config, has_lm_head=False, is_semantic=False, hf_prefix="data2vec_vision."): for i in range(config.num_hidden_layers): prefix = "backbone." 
if is_semantic else "" # queries, keys and values in_proj_weight = state_dict.pop(f"{prefix}blocks.{i}.attn.qkv.weight") q_bias = state_dict.pop(f"{prefix}blocks.{i}.attn.q_bias") v_bias = state_dict.pop(f"{prefix}blocks.{i}.attn.v_bias") state_dict[f"{hf_prefix}encoder.layer.{i}.attention.attention.query.weight"] = in_proj_weight[ : config.hidden_size, : ] state_dict[f"{hf_prefix}encoder.layer.{i}.attention.attention.query.bias"] = q_bias state_dict[f"{hf_prefix}encoder.layer.{i}.attention.attention.key.weight"] = in_proj_weight[ config.hidden_size : config.hidden_size * 2, : ] state_dict[f"{hf_prefix}encoder.layer.{i}.attention.attention.value.weight"] = in_proj_weight[ -config.hidden_size :, : ] state_dict[f"{hf_prefix}encoder.layer.{i}.attention.attention.value.bias"] = v_bias # gamma_1 and gamma_2 # we call them lambda because otherwise they are renamed when using .from_pretrained gamma_1 = state_dict.pop(f"{prefix}blocks.{i}.gamma_1") gamma_2 = state_dict.pop(f"{prefix}blocks.{i}.gamma_2") state_dict[f"{hf_prefix}encoder.layer.{i}.lambda_1"] = gamma_1 state_dict[f"{hf_prefix}encoder.layer.{i}.lambda_2"] = gamma_2 # relative_position bias table + index if not has_lm_head: # each layer has its own relative position bias table = state_dict.pop(f"{prefix}blocks.{i}.attn.relative_position_bias_table") index = state_dict.pop(f"{prefix}blocks.{i}.attn.relative_position_index") state_dict[ f"{hf_prefix}encoder.layer.{i}.attention.attention.relative_position_bias.relative_position_bias_table" ] = table state_dict[ f"{hf_prefix}encoder.layer.{i}.attention.attention.relative_position_bias.relative_position_index" ] = index def get_args(): parser = argparse.ArgumentParser( "Convert Data2VecVision to HF for image classification and pretraining", add_help=False ) parser.add_argument("--hf_checkpoint_name", type=str) parser.add_argument("--input_size", default=224, type=int, help="images input size") parser.add_argument("--beit_checkpoint", default="", help="beit checkpoint") return parser.parse_args() def load_beit_model(args, is_finetuned, is_large): def load_state_dict(model, state_dict, prefix="", ignore_missing="relative_position_index"): missing_keys = [] unexpected_keys = [] error_msgs = [] # copy state_dict so _load_from_state_dict can modify it metadata = getattr(state_dict, "_metadata", None) state_dict = state_dict.copy() if metadata is not None: state_dict._metadata = metadata def load(module, prefix=""): local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {}) module._load_from_state_dict( state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs ) for name, child in module._modules.items(): if child is not None: load(child, prefix + name + ".") load(model, prefix=prefix) warn_missing_keys = [] ignore_missing_keys = [] for key in missing_keys: keep_flag = True for ignore_key in ignore_missing.split("|"): if ignore_key in key: keep_flag = False break if keep_flag: warn_missing_keys.append(key) else: ignore_missing_keys.append(key) missing_keys = warn_missing_keys if len(missing_keys) > 0: print( "Weights of {} not initialized from pretrained model: {}".format( model.__class__.__name__, missing_keys ) ) if len(unexpected_keys) > 0: print("Weights from pretrained model not used in {}: {}".format(model.__class__.__name__, unexpected_keys)) if len(ignore_missing_keys) > 0: print( "Ignored weights of {} not initialized from pretrained model: {}".format( model.__class__.__name__, ignore_missing_keys ) ) if len(error_msgs) > 0: 
print("\n".join(error_msgs)) model_kwargs = { "pretrained": False, "use_shared_rel_pos_bias": True, "use_abs_pos_emb": False, "init_values": 0.1, } if is_finetuned: model_kwargs.update( { "num_classes": 1000, "use_mean_pooling": True, "init_scale": 0.001, "use_rel_pos_bias": True, } ) model = create_model( "beit_large_patch16_224" if is_large else "beit_base_patch16_224", **model_kwargs, ) patch_size = model.patch_embed.patch_size args.window_size = (args.input_size // patch_size[0], args.input_size // patch_size[1]) checkpoint = torch.load(args.beit_checkpoint, map_location="cpu") print(f"Load ckpt from {args.beit_checkpoint}") checkpoint_model = None for model_key in ("model", "module"): if model_key in checkpoint: checkpoint_model = checkpoint[model_key] print(f"Load state_dict by model_key = {model_key}") break all_keys = list(checkpoint_model.keys()) for key in all_keys: if "relative_position_index" in key: checkpoint_model.pop(key) if "relative_position_bias_table" in key: rel_pos_bias = checkpoint_model[key] src_num_pos, num_attn_heads = rel_pos_bias.size() dst_num_pos, _ = model.state_dict()[key].size() dst_patch_shape = model.patch_embed.patch_shape if dst_patch_shape[0] != dst_patch_shape[1]: raise NotImplementedError() load_state_dict(model, checkpoint_model, prefix="") return model def main(): args = get_args() is_finetuned = "ft1k" in args.hf_checkpoint_name is_large = "large" in args.hf_checkpoint_name if is_finetuned: # To convert Beit's data2vec_vision to HF you need to copy # https://github.com/facebookresearch/data2vec_vision/blob/main/beit/modeling_finetune.py # into this folder. import modeling_finetune # noqa: F401 else: # To convert Beit's data2vec_vision to HF you need to copy # https://github.com/facebookresearch/data2vec_vision/blob/main/beit/modeling_cyclical.py # into this folder # IMPORTANT: Note that for now we've only converted the down-stream # model and not the full pretrained model. This means for the integration # test you need to add a `return x` after the following line: # https://github.com/facebookresearch/data2vec_vision/blob/af9a36349aaed59ae66e69b5dabeef2d62fdc5da/beit/modeling_cyclical.py#L197 # to make the integration test pass. import modeling_cyclical # noqa: F401 # 1. Create model config config = Data2VecVisionConfig() if is_finetuned: config.use_relative_position_bias = True config.use_shared_relative_position_bias = False config.use_mean_pooling = True config.num_labels = 1000 repo_id = "huggingface/label-files" filename = "imagenet-1k-id2label.json" id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) id2label = {int(k): v for k, v in id2label.items()} config.id2label = id2label config.label2id = {v: k for k, v in id2label.items()} else: config.use_relative_position_bias = False config.use_shared_relative_position_bias = True config.use_mean_pooling = False if is_large: config.hidden_size = 1024 config.intermediate_size = 4096 config.num_hidden_layers = 24 config.num_attention_heads = 16 # 2. Load Beit model orig_model = load_beit_model(args, is_finetuned, is_large) orig_model.eval() # 3. 
Forward Beit model image_processor = BeitImageProcessor(size=config.image_size, do_center_crop=False) image = Image.open("../../../../tests/fixtures/tests_samples/COCO/000000039769.png") encoding = image_processor(images=image, return_tensors="pt") pixel_values = encoding["pixel_values"] orig_args = (pixel_values,) if is_finetuned else (pixel_values, None) with torch.no_grad(): orig_model_output = orig_model(*orig_args) # 4. Load HF Data2VecVision model if is_finetuned: hf_model = Data2VecVisionForImageClassification(config) hf_model.eval() has_lm_head = False hf_prefix = "data2vec_vision." else: hf_model = Data2VecVisionModel(config) hf_model.eval() has_lm_head = True hf_prefix = "" rename_keys = create_rename_keys(config, hf_prefix=hf_prefix, has_lm_head=has_lm_head) state_dict = orig_model.state_dict() for src, dest in rename_keys: val = state_dict.pop(src) state_dict[dest] = val read_in_q_k_v(state_dict, config, hf_prefix=hf_prefix, has_lm_head=has_lm_head) missing_keys, unexpected_keys = hf_model.load_state_dict(state_dict, strict=False) print("HF missing", missing_keys) print("HF unexpected_keys", unexpected_keys) # 5. Forward HF Data2VecVision model with torch.no_grad(): hf_model_output = hf_model(pixel_values) hf_output = hf_model_output.logits if is_finetuned else hf_model_output.last_hidden_state # 6. Compare max_absolute_diff = torch.max(torch.abs(hf_output - orig_model_output)).item() print(f"max_absolute_diff = {max_absolute_diff}") success = torch.allclose(hf_output, orig_model_output, atol=1e-3) print("Do both models output the same tensors?", "🔥" if success else "💩") if not success: raise Exception("Something went wRoNg") # 7. Save print(f"Saving to {args.hf_checkpoint_name}") hf_model.save_pretrained(args.hf_checkpoint_name) image_processor.save_pretrained(args.hf_checkpoint_name) if __name__ == "__main__": main() # Run the following to convert checkpoints # python ./convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py \ # --beit_checkpoint ./pretrained_base.pt \ # --hf_checkpoint_name "./data2vec-vision-base" # python ./convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py \ # --beit_checkpoint ./finetuned_base.pt \ # --hf_checkpoint_name "./data2vec-vision-base-ft1k" # python ./convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py \ # --beit_checkpoint ./pretrained_large.pt \ # --hf_checkpoint_name "./data2vec-vision-large" # python ./convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py \ # --beit_checkpoint ./finetuned_large.pt \ # --hf_checkpoint_name "./data2vec-vision-large-ft1k"
transformers/src/transformers/models/data2vec/convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/data2vec/convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py", "repo_id": "transformers", "token_count": 7103 }
337
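The core of the Data2VecVision conversion above is the pop-and-reinsert renaming of timm-style keys into HF names. Below is a self-contained sketch of that pattern; the two key pairs mirror `create_rename_keys` (fine-tuned branch, `hf_prefix="data2vec_vision."`) and the tensors are placeholders, not real weights.

```python
# Self-contained sketch of the state-dict renaming used by the script above.
# Shapes are placeholders; only the pop/reinsert mechanics matter here.
import torch

state_dict = {
    "blocks.0.norm1.weight": torch.ones(768),
    "cls_token": torch.zeros(1, 1, 768),
}
rename_keys = [
    ("blocks.0.norm1.weight", "data2vec_vision.encoder.layer.0.layernorm_before.weight"),
    ("cls_token", "data2vec_vision.embeddings.cls_token"),
]
for src, dest in rename_keys:
    state_dict[dest] = state_dict.pop(src)

print(sorted(state_dict.keys()))
```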
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert DeiT distilled checkpoints from the timm library.""" import argparse import json from pathlib import Path import requests import timm import torch from huggingface_hub import hf_hub_download from PIL import Image from transformers import DeiTConfig, DeiTForImageClassificationWithTeacher, DeiTImageProcessor from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger(__name__) # here we list all keys to be renamed (original name on the left, our name on the right) def create_rename_keys(config, base_model=False): rename_keys = [] for i in range(config.num_hidden_layers): # encoder layers: output projection, 2 feedforward neural networks and 2 layernorms rename_keys.append((f"blocks.{i}.norm1.weight", f"deit.encoder.layer.{i}.layernorm_before.weight")) rename_keys.append((f"blocks.{i}.norm1.bias", f"deit.encoder.layer.{i}.layernorm_before.bias")) rename_keys.append((f"blocks.{i}.attn.proj.weight", f"deit.encoder.layer.{i}.attention.output.dense.weight")) rename_keys.append((f"blocks.{i}.attn.proj.bias", f"deit.encoder.layer.{i}.attention.output.dense.bias")) rename_keys.append((f"blocks.{i}.norm2.weight", f"deit.encoder.layer.{i}.layernorm_after.weight")) rename_keys.append((f"blocks.{i}.norm2.bias", f"deit.encoder.layer.{i}.layernorm_after.bias")) rename_keys.append((f"blocks.{i}.mlp.fc1.weight", f"deit.encoder.layer.{i}.intermediate.dense.weight")) rename_keys.append((f"blocks.{i}.mlp.fc1.bias", f"deit.encoder.layer.{i}.intermediate.dense.bias")) rename_keys.append((f"blocks.{i}.mlp.fc2.weight", f"deit.encoder.layer.{i}.output.dense.weight")) rename_keys.append((f"blocks.{i}.mlp.fc2.bias", f"deit.encoder.layer.{i}.output.dense.bias")) # projection layer + position embeddings rename_keys.extend( [ ("cls_token", "deit.embeddings.cls_token"), ("dist_token", "deit.embeddings.distillation_token"), ("patch_embed.proj.weight", "deit.embeddings.patch_embeddings.projection.weight"), ("patch_embed.proj.bias", "deit.embeddings.patch_embeddings.projection.bias"), ("pos_embed", "deit.embeddings.position_embeddings"), ] ) if base_model: # layernorm + pooler rename_keys.extend( [ ("norm.weight", "layernorm.weight"), ("norm.bias", "layernorm.bias"), ("pre_logits.fc.weight", "pooler.dense.weight"), ("pre_logits.fc.bias", "pooler.dense.bias"), ] ) # if just the base model, we should remove "deit" from all keys that start with "deit" rename_keys = [(pair[0], pair[1][4:]) if pair[1].startswith("deit") else pair for pair in rename_keys] else: # layernorm + classification heads rename_keys.extend( [ ("norm.weight", "deit.layernorm.weight"), ("norm.bias", "deit.layernorm.bias"), ("head.weight", "cls_classifier.weight"), ("head.bias", "cls_classifier.bias"), ("head_dist.weight", "distillation_classifier.weight"), ("head_dist.bias", "distillation_classifier.bias"), ] ) return rename_keys # we split up the matrix of each encoder layer into queries, keys and values def 
read_in_q_k_v(state_dict, config, base_model=False): for i in range(config.num_hidden_layers): if base_model: prefix = "" else: prefix = "deit." # read in weights + bias of input projection layer (in timm, this is a single matrix + bias) in_proj_weight = state_dict.pop(f"blocks.{i}.attn.qkv.weight") in_proj_bias = state_dict.pop(f"blocks.{i}.attn.qkv.bias") # next, add query, keys and values (in that order) to the state dict state_dict[f"{prefix}encoder.layer.{i}.attention.attention.query.weight"] = in_proj_weight[ : config.hidden_size, : ] state_dict[f"{prefix}encoder.layer.{i}.attention.attention.query.bias"] = in_proj_bias[: config.hidden_size] state_dict[f"{prefix}encoder.layer.{i}.attention.attention.key.weight"] = in_proj_weight[ config.hidden_size : config.hidden_size * 2, : ] state_dict[f"{prefix}encoder.layer.{i}.attention.attention.key.bias"] = in_proj_bias[ config.hidden_size : config.hidden_size * 2 ] state_dict[f"{prefix}encoder.layer.{i}.attention.attention.value.weight"] = in_proj_weight[ -config.hidden_size :, : ] state_dict[f"{prefix}encoder.layer.{i}.attention.attention.value.bias"] = in_proj_bias[-config.hidden_size :] def rename_key(dct, old, new): val = dct.pop(old) dct[new] = val # We will verify our results on an image of cute cats def prepare_img(): url = "http://images.cocodataset.org/val2017/000000039769.jpg" im = Image.open(requests.get(url, stream=True).raw) return im @torch.no_grad() def convert_deit_checkpoint(deit_name, pytorch_dump_folder_path): """ Copy/paste/tweak model's weights to our DeiT structure. """ # define default DeiT configuration config = DeiTConfig() # all deit models have fine-tuned heads base_model = False # dataset (fine-tuned on ImageNet 2012), patch_size and image_size config.num_labels = 1000 repo_id = "huggingface/label-files" filename = "imagenet-1k-id2label.json" id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) id2label = {int(k): v for k, v in id2label.items()} config.id2label = id2label config.label2id = {v: k for k, v in id2label.items()} config.patch_size = int(deit_name[-6:-4]) config.image_size = int(deit_name[-3:]) # size of the architecture if deit_name[9:].startswith("tiny"): config.hidden_size = 192 config.intermediate_size = 768 config.num_hidden_layers = 12 config.num_attention_heads = 3 elif deit_name[9:].startswith("small"): config.hidden_size = 384 config.intermediate_size = 1536 config.num_hidden_layers = 12 config.num_attention_heads = 6 if deit_name[9:].startswith("base"): pass elif deit_name[4:].startswith("large"): config.hidden_size = 1024 config.intermediate_size = 4096 config.num_hidden_layers = 24 config.num_attention_heads = 16 # load original model from timm timm_model = timm.create_model(deit_name, pretrained=True) timm_model.eval() # load state_dict of original model, remove and rename some keys state_dict = timm_model.state_dict() rename_keys = create_rename_keys(config, base_model) for src, dest in rename_keys: rename_key(state_dict, src, dest) read_in_q_k_v(state_dict, config, base_model) # load HuggingFace model model = DeiTForImageClassificationWithTeacher(config).eval() model.load_state_dict(state_dict) # Check outputs on an image, prepared by DeiTImageProcessor size = int( (256 / 224) * config.image_size ) # to maintain same ratio w.r.t. 
224 images, see https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L103 image_processor = DeiTImageProcessor(size=size, crop_size=config.image_size) encoding = image_processor(images=prepare_img(), return_tensors="pt") pixel_values = encoding["pixel_values"] outputs = model(pixel_values) timm_logits = timm_model(pixel_values) assert timm_logits.shape == outputs.logits.shape assert torch.allclose(timm_logits, outputs.logits, atol=1e-3) Path(pytorch_dump_folder_path).mkdir(exist_ok=True) print(f"Saving model {deit_name} to {pytorch_dump_folder_path}") model.save_pretrained(pytorch_dump_folder_path) print(f"Saving image processor to {pytorch_dump_folder_path}") image_processor.save_pretrained(pytorch_dump_folder_path) if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--deit_name", default="vit_deit_base_distilled_patch16_224", type=str, help="Name of the DeiT timm model you'd like to convert.", ) parser.add_argument( "--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model directory." ) args = parser.parse_args() convert_deit_checkpoint(args.deit_name, args.pytorch_dump_folder_path)
transformers/src/transformers/models/deit/convert_deit_timm_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/deit/convert_deit_timm_to_pytorch.py", "repo_id": "transformers", "token_count": 3875 }
338
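The comment in `convert_deit_checkpoint` above keeps the 256/224 resize-to-crop ratio of the original DeiT evaluation pipeline. A short worked example of what that size computation yields for the two common crop resolutions:

```python
# Worked example of the resize size used above: preserve the original
# 256/224 resize-to-crop ratio for any crop resolution.
for image_size in (224, 384):
    size = int((256 / 224) * image_size)
    print(f"crop {image_size} -> resize {size}")  # 224 -> 256, 384 -> 438
```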
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert EfficientFormer checkpoints from the original repository. URL: https://github.com/snap-research/EfficientFormer """ import argparse import re from pathlib import Path import requests import torch from PIL import Image from torchvision.transforms import CenterCrop, Compose, Normalize, Resize, ToTensor from transformers import ( EfficientFormerConfig, EfficientFormerForImageClassificationWithTeacher, EfficientFormerImageProcessor, ) from transformers.image_utils import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, PILImageResampling def rename_key(old_name, num_meta4D_last_stage): new_name = old_name if "patch_embed" in old_name: _, layer, param = old_name.split(".") if layer == "0": new_name = old_name.replace("0", "convolution1") elif layer == "1": new_name = old_name.replace("1", "batchnorm_before") elif layer == "3": new_name = old_name.replace("3", "convolution2") else: new_name = old_name.replace("4", "batchnorm_after") if "network" in old_name and re.search(r"\d\.\d", old_name): two_digit_num = r"\b\d{2}\b" if bool(re.search(two_digit_num, old_name)): match = re.search(r"\d\.\d\d.", old_name).group() else: match = re.search(r"\d\.\d.", old_name).group() if int(match[0]) < 6: trimmed_name = old_name.replace(match, "") trimmed_name = trimmed_name.replace("network", match[0] + ".meta4D_layers.blocks." + match[2:-1]) new_name = "intermediate_stages." + trimmed_name else: trimmed_name = old_name.replace(match, "") if int(match[2]) < num_meta4D_last_stage: trimmed_name = trimmed_name.replace("network", "meta4D_layers.blocks." + match[2]) else: layer_index = str(int(match[2]) - num_meta4D_last_stage) trimmed_name = trimmed_name.replace("network", "meta3D_layers.blocks." + layer_index) if "norm1" in old_name: trimmed_name = trimmed_name.replace("norm1", "layernorm1") elif "norm2" in old_name: trimmed_name = trimmed_name.replace("norm2", "layernorm2") elif "fc1" in old_name: trimmed_name = trimmed_name.replace("fc1", "linear_in") elif "fc2" in old_name: trimmed_name = trimmed_name.replace("fc2", "linear_out") new_name = "last_stage." + trimmed_name elif "network" in old_name and re.search(r".\d.", old_name): new_name = old_name.replace("network", "intermediate_stages") if "fc" in new_name: new_name = new_name.replace("fc", "convolution") elif ("norm1" in new_name) and ("layernorm1" not in new_name): new_name = new_name.replace("norm1", "batchnorm_before") elif ("norm2" in new_name) and ("layernorm2" not in new_name): new_name = new_name.replace("norm2", "batchnorm_after") if "proj" in new_name: new_name = new_name.replace("proj", "projection") if "dist_head" in new_name: new_name = new_name.replace("dist_head", "distillation_classifier") elif "head" in new_name: new_name = new_name.replace("head", "classifier") elif "patch_embed" in new_name: new_name = "efficientformer." 
+ new_name elif new_name == "norm.weight" or new_name == "norm.bias": new_name = new_name.replace("norm", "layernorm") new_name = "efficientformer." + new_name else: new_name = "efficientformer.encoder." + new_name return new_name def convert_torch_checkpoint(checkpoint, num_meta4D_last_stage): for key in checkpoint.copy().keys(): val = checkpoint.pop(key) checkpoint[rename_key(key, num_meta4D_last_stage)] = val return checkpoint # We will verify our results on a COCO image def prepare_img(): url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) return image def convert_efficientformer_checkpoint( checkpoint_path: Path, efficientformer_config_file: Path, pytorch_dump_path: Path, push_to_hub: bool ): orig_state_dict = torch.load(checkpoint_path, map_location="cpu")["model"] config = EfficientFormerConfig.from_json_file(efficientformer_config_file) model = EfficientFormerForImageClassificationWithTeacher(config) model_name = "_".join(checkpoint_path.split("/")[-1].split(".")[0].split("_")[:-1]) num_meta4D_last_stage = config.depths[-1] - config.num_meta3d_blocks + 1 new_state_dict = convert_torch_checkpoint(orig_state_dict, num_meta4D_last_stage) model.load_state_dict(new_state_dict) model.eval() pillow_resamplings = { "bilinear": PILImageResampling.BILINEAR, "bicubic": PILImageResampling.BICUBIC, "nearest": PILImageResampling.NEAREST, } # prepare image image = prepare_img() image_size = 256 crop_size = 224 processor = EfficientFormerImageProcessor( size={"shortest_edge": image_size}, crop_size={"height": crop_size, "width": crop_size}, resample=pillow_resamplings["bicubic"], ) pixel_values = processor(images=image, return_tensors="pt").pixel_values # original processing pipeline image_transforms = Compose( [ Resize(image_size, interpolation=pillow_resamplings["bicubic"]), CenterCrop(crop_size), ToTensor(), Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD), ] ) original_pixel_values = image_transforms(image).unsqueeze(0) assert torch.allclose(original_pixel_values, pixel_values) outputs = model(pixel_values) logits = outputs.logits expected_shape = (1, 1000) if "l1" in model_name: expected_logits = torch.Tensor( [-0.1312, 0.4353, -1.0499, -0.5124, 0.4183, -0.6793, -1.3777, -0.0893, -0.7358, -2.4328] ) assert torch.allclose(logits[0, :10], expected_logits, atol=1e-3) assert logits.shape == expected_shape elif "l3" in model_name: expected_logits = torch.Tensor( [-1.3150, -1.5456, -1.2556, -0.8496, -0.7127, -0.7897, -0.9728, -0.3052, 0.3751, -0.3127] ) assert torch.allclose(logits[0, :10], expected_logits, atol=1e-3) assert logits.shape == expected_shape elif "l7" in model_name: expected_logits = torch.Tensor( [-1.0283, -1.4131, -0.5644, -1.3115, -0.5785, -1.2049, -0.7528, 0.1992, -0.3822, -0.0878] ) assert logits.shape == expected_shape else: raise ValueError( f"Unknown model checkpoint: {checkpoint_path}. Supported version of efficientformer are l1, l3 and l7" ) # Save Checkpoints Path(pytorch_dump_path).mkdir(exist_ok=True) model.save_pretrained(pytorch_dump_path) print(f"Checkpoint successfuly converted. 
Model saved at {pytorch_dump_path}") processor.save_pretrained(pytorch_dump_path) print(f"Processor successfuly saved at {pytorch_dump_path}") if push_to_hub: print("Pushing model to the hub...") model.push_to_hub( repo_id=f"Bearnardd/{pytorch_dump_path}", commit_message="Add model", use_temp_dir=True, ) processor.push_to_hub( repo_id=f"Bearnardd/{pytorch_dump_path}", commit_message="Add image processor", use_temp_dir=True, ) if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--pytorch_model_path", default=None, type=str, required=True, help="Path to EfficientFormer pytorch checkpoint.", ) parser.add_argument( "--config_file", default=None, type=str, required=True, help="The json file for EfficientFormer model config.", ) parser.add_argument( "--pytorch_dump_path", default=None, type=str, required=True, help="Path to the output PyTorch model." ) parser.add_argument("--push_to_hub", action="store_true", help="Push model and image processor to the hub") parser.add_argument( "--no-push_to_hub", dest="push_to_hub", action="store_false", help="Do not push model and image processor to the hub", ) parser.set_defaults(push_to_hub=True) args = parser.parse_args() convert_efficientformer_checkpoint( checkpoint_path=args.pytorch_model_path, efficientformer_config_file=args.config_file, pytorch_dump_path=args.pytorch_dump_path, push_to_hub=args.push_to_hub, )
transformers/src/transformers/models/deprecated/efficientformer/convert_efficientformer_original_pytorch_checkpoint_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/deprecated/efficientformer/convert_efficientformer_original_pytorch_checkpoint_to_pytorch.py", "repo_id": "transformers", "token_count": 4066 }
339
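The trickiest part of `rename_key` in the EfficientFormer converter above is distinguishing a two-digit block index from a single-digit one before slicing out the matched `stage.block.` span. A self-contained sketch of just that matching step, reusing the same regexes on two hypothetical key names:

```python
# Self-contained sketch of the block-index matching inside `rename_key` above;
# the key names are hypothetical examples.
import re

for old_name in ("network.3.11.mlp.fc1.weight", "network.2.1.attn.proj.bias"):
    if re.search(r"\b\d{2}\b", old_name):          # two-digit block index
        match = re.search(r"\d\.\d\d.", old_name).group()
    else:                                          # single-digit block index
        match = re.search(r"\d\.\d.", old_name).group()
    print(old_name, "->", repr(match))             # '3.11.' and '2.1.'
```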
# coding=utf-8 # Copyright 2022 Microsoft, clefourrier and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Graphormer model configuration""" from ....configuration_utils import PretrainedConfig from ....utils import logging logger = logging.get_logger(__name__) class GraphormerConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`~GraphormerModel`]. It is used to instantiate an Graphormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Graphormer [graphormer-base-pcqm4mv1](https://huggingface.co/graphormer-base-pcqm4mv1) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: num_classes (`int`, *optional*, defaults to 1): Number of target classes or labels, set to n for binary classification of n tasks. num_atoms (`int`, *optional*, defaults to 512*9): Number of node types in the graphs. num_edges (`int`, *optional*, defaults to 512*3): Number of edges types in the graph. num_in_degree (`int`, *optional*, defaults to 512): Number of in degrees types in the input graphs. num_out_degree (`int`, *optional*, defaults to 512): Number of out degrees types in the input graphs. num_edge_dis (`int`, *optional*, defaults to 128): Number of edge dis in the input graphs. multi_hop_max_dist (`int`, *optional*, defaults to 20): Maximum distance of multi hop edges between two nodes. spatial_pos_max (`int`, *optional*, defaults to 1024): Maximum distance between nodes in the graph attention bias matrices, used during preprocessing and collation. edge_type (`str`, *optional*, defaults to multihop): Type of edge relation chosen. max_nodes (`int`, *optional*, defaults to 512): Maximum number of nodes which can be parsed for the input graphs. share_input_output_embed (`bool`, *optional*, defaults to `False`): Shares the embedding layer between encoder and decoder - careful, True is not implemented. num_layers (`int`, *optional*, defaults to 12): Number of layers. embedding_dim (`int`, *optional*, defaults to 768): Dimension of the embedding layer in encoder. ffn_embedding_dim (`int`, *optional*, defaults to 768): Dimension of the "intermediate" (often named feed-forward) layer in encoder. num_attention_heads (`int`, *optional*, defaults to 32): Number of attention heads in the encoder. self_attention (`bool`, *optional*, defaults to `True`): Model is self attentive (False not implemented). activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. 
attention_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the attention weights. activation_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the activation of the linear transformer layer. layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. bias (`bool`, *optional*, defaults to `True`): Uses bias in the attention module - unsupported at the moment. embed_scale(`float`, *optional*, defaults to None): Scaling factor for the node embeddings. num_trans_layers_to_freeze (`int`, *optional*, defaults to 0): Number of transformer layers to freeze. encoder_normalize_before (`bool`, *optional*, defaults to `False`): Normalize features before encoding the graph. pre_layernorm (`bool`, *optional*, defaults to `False`): Apply layernorm before self attention and the feed forward network. Without this, post layernorm will be used. apply_graphormer_init (`bool`, *optional*, defaults to `False`): Apply a custom graphormer initialisation to the model before training. freeze_embeddings (`bool`, *optional*, defaults to `False`): Freeze the embedding layer, or train it along the model. encoder_normalize_before (`bool`, *optional*, defaults to `False`): Apply the layer norm before each encoder block. q_noise (`float`, *optional*, defaults to 0.0): Amount of quantization noise (see "Training with Quantization Noise for Extreme Model Compression"). (For more detail, see fairseq's documentation on quant_noise). qn_block_size (`int`, *optional*, defaults to 8): Size of the blocks for subsequent quantization with iPQ (see q_noise). kdim (`int`, *optional*, defaults to None): Dimension of the key in the attention, if different from the other values. vdim (`int`, *optional*, defaults to None): Dimension of the value in the attention, if different from the other values. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). traceable (`bool`, *optional*, defaults to `False`): Changes return value of the encoder's inner_state to stacked tensors. 
Example: ```python >>> from transformers import GraphormerForGraphClassification, GraphormerConfig >>> # Initializing a Graphormer graphormer-base-pcqm4mv2 style configuration >>> configuration = GraphormerConfig() >>> # Initializing a model from the graphormer-base-pcqm4mv1 style configuration >>> model = GraphormerForGraphClassification(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` """ model_type = "graphormer" keys_to_ignore_at_inference = ["past_key_values"] def __init__( self, num_classes: int = 1, num_atoms: int = 512 * 9, num_edges: int = 512 * 3, num_in_degree: int = 512, num_out_degree: int = 512, num_spatial: int = 512, num_edge_dis: int = 128, multi_hop_max_dist: int = 5, # sometimes is 20 spatial_pos_max: int = 1024, edge_type: str = "multi_hop", max_nodes: int = 512, share_input_output_embed: bool = False, num_hidden_layers: int = 12, embedding_dim: int = 768, ffn_embedding_dim: int = 768, num_attention_heads: int = 32, dropout: float = 0.1, attention_dropout: float = 0.1, activation_dropout: float = 0.1, layerdrop: float = 0.0, encoder_normalize_before: bool = False, pre_layernorm: bool = False, apply_graphormer_init: bool = False, activation_fn: str = "gelu", embed_scale: float = None, freeze_embeddings: bool = False, num_trans_layers_to_freeze: int = 0, traceable: bool = False, q_noise: float = 0.0, qn_block_size: int = 8, kdim: int = None, vdim: int = None, bias: bool = True, self_attention: bool = True, pad_token_id=0, bos_token_id=1, eos_token_id=2, **kwargs, ): self.num_classes = num_classes self.num_atoms = num_atoms self.num_in_degree = num_in_degree self.num_out_degree = num_out_degree self.num_edges = num_edges self.num_spatial = num_spatial self.num_edge_dis = num_edge_dis self.edge_type = edge_type self.multi_hop_max_dist = multi_hop_max_dist self.spatial_pos_max = spatial_pos_max self.max_nodes = max_nodes self.num_hidden_layers = num_hidden_layers self.embedding_dim = embedding_dim self.hidden_size = embedding_dim self.ffn_embedding_dim = ffn_embedding_dim self.num_attention_heads = num_attention_heads self.dropout = dropout self.attention_dropout = attention_dropout self.activation_dropout = activation_dropout self.layerdrop = layerdrop self.encoder_normalize_before = encoder_normalize_before self.pre_layernorm = pre_layernorm self.apply_graphormer_init = apply_graphormer_init self.activation_fn = activation_fn self.embed_scale = embed_scale self.freeze_embeddings = freeze_embeddings self.num_trans_layers_to_freeze = num_trans_layers_to_freeze self.share_input_output_embed = share_input_output_embed self.traceable = traceable self.q_noise = q_noise self.qn_block_size = qn_block_size # These parameters are here for future extensions # atm, the model only supports self attention self.kdim = kdim self.vdim = vdim self.self_attention = self_attention self.bias = bias super().__init__( pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs, )
transformers/src/transformers/models/deprecated/graphormer/configuration_graphormer.py/0
{ "file_path": "transformers/src/transformers/models/deprecated/graphormer/configuration_graphormer.py", "repo_id": "transformers", "token_count": 4097 }
340
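Beyond the docstring example in the Graphormer configuration above, individual fields can be overridden at construction time. A minimal sketch, assuming a transformers version that still exposes the deprecated Graphormer classes at the top level (as the docstring example itself does):

```python
# Minimal sketch: overriding a few GraphormerConfig fields described above.
from transformers import GraphormerConfig

config = GraphormerConfig(
    num_classes=2,           # binary graph classification
    num_hidden_layers=6,
    embedding_dim=512,
    ffn_embedding_dim=512,
    num_attention_heads=16,
)
print(config.hidden_size)    # mirrors embedding_dim -> 512
```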
# coding=utf-8 # Copyright 2022 The REALM authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """REALM model configuration.""" from ....configuration_utils import PretrainedConfig from ....utils import logging logger = logging.get_logger(__name__) class RealmConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of 1. [`RealmEmbedder`] 2. [`RealmScorer`] 3. [`RealmKnowledgeAugEncoder`] 4. [`RealmRetriever`] 5. [`RealmReader`] 6. [`RealmForOpenQA`] It is used to instantiate an REALM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the REALM [google/realm-cc-news-pretrained-embedder](https://huggingface.co/google/realm-cc-news-pretrained-embedder) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the REALM model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`RealmEmbedder`], [`RealmScorer`], [`RealmKnowledgeAugEncoder`], or [`RealmReader`]. hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. retriever_proj_size (`int`, *optional*, defaults to 128): Dimension of the retriever(embedder) projection. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_candidates (`int`, *optional*, defaults to 8): Number of candidates inputted to the RealmScorer or RealmKnowledgeAugEncoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu_new"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`RealmEmbedder`], [`RealmScorer`], [`RealmKnowledgeAugEncoder`], or [`RealmReader`]. 
initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. span_hidden_size (`int`, *optional*, defaults to 256): Dimension of the reader's spans. max_span_width (`int`, *optional*, defaults to 10): Max span width of the reader. reader_layer_norm_eps (`float`, *optional*, defaults to 1e-3): The epsilon used by the reader's layer normalization layers. reader_beam_size (`int`, *optional*, defaults to 5): Beam size of the reader. reader_seq_len (`int`, *optional*, defaults to 288+32): Maximum sequence length of the reader. num_block_records (`int`, *optional*, defaults to 13353718): Number of block records. searcher_beam_size (`int`, *optional*, defaults to 5000): Beam size of the searcher. Note that when eval mode is enabled, *searcher_beam_size* will be the same as *reader_beam_size*. Example: ```python >>> from transformers import RealmConfig, RealmEmbedder >>> # Initializing a REALM realm-cc-news-pretrained-* style configuration >>> configuration = RealmConfig() >>> # Initializing a model (with random weights) from the google/realm-cc-news-pretrained-embedder style configuration >>> model = RealmEmbedder(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "realm" def __init__( self, vocab_size=30522, hidden_size=768, retriever_proj_size=128, num_hidden_layers=12, num_attention_heads=12, num_candidates=8, intermediate_size=3072, hidden_act="gelu_new", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, span_hidden_size=256, max_span_width=10, reader_layer_norm_eps=1e-3, reader_beam_size=5, reader_seq_len=320, # 288 + 32 num_block_records=13353718, searcher_beam_size=5000, pad_token_id=1, bos_token_id=0, eos_token_id=2, **kwargs, ): super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs) # Common config self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.hidden_size = hidden_size self.retriever_proj_size = retriever_proj_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.num_candidates = num_candidates self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.initializer_range = initializer_range self.type_vocab_size = type_vocab_size self.layer_norm_eps = layer_norm_eps # Reader config self.span_hidden_size = span_hidden_size self.max_span_width = max_span_width self.reader_layer_norm_eps = reader_layer_norm_eps self.reader_beam_size = reader_beam_size self.reader_seq_len = reader_seq_len # Retrieval config self.num_block_records = num_block_records self.searcher_beam_size = searcher_beam_size
transformers/src/transformers/models/deprecated/realm/configuration_realm.py/0
{ "file_path": "transformers/src/transformers/models/deprecated/realm/configuration_realm.py", "repo_id": "transformers", "token_count": 2947 }
341
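As with the docstring example in the REALM configuration above, the retriever- and reader-specific fields can be inspected or overridden directly on the config. A minimal sketch, assuming the deprecated REALM classes are still importable from the top-level package:

```python
# Minimal sketch: retriever/reader fields of RealmConfig described above.
from transformers import RealmConfig

config = RealmConfig(retriever_proj_size=128, reader_beam_size=5)
print(config.reader_seq_len)      # 320 == 288 + 32, as noted in __init__
print(config.searcher_beam_size)  # 5000; in eval mode the model falls back to reader_beam_size
```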
# coding=utf-8 # Copyright 2022 Microsoft Research and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tokenization classes for TAPEX.""" import json import os import random from functools import lru_cache from typing import Dict, List, Optional, Tuple, Union import regex as re from ....file_utils import ExplicitEnum, PaddingStrategy, TensorType, add_end_docstrings, is_pandas_available from ....tokenization_utils import AddedToken, PreTrainedTokenizer from ....tokenization_utils_base import ENCODE_KWARGS_DOCSTRING, BatchEncoding, TextInput, TruncationStrategy from ....utils import logging if is_pandas_available(): import pandas as pd logger = logging.get_logger(__name__) VOCAB_FILES_NAMES = {"vocab_file": "vocab.json", "merges_file": "merges.txt"} class TapexTruncationStrategy(ExplicitEnum): """ Possible values for the `truncation` argument in [`~TapasTokenizer.__call__`]. Useful for tab-completion in an IDE. """ DROP_ROWS_TO_FIT = "drop_rows_to_fit" TAPEX_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r""" add_special_tokens (`bool`, *optional*, defaults to `True`): Whether or not to encode the sequences with the special tokens relative to their model. padding (`bool`, `str` or [`~file_utils.PaddingStrategy`], *optional*, defaults to `False`): Activates and controls padding. Accepts the following values: - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). truncation (`bool`, `str`, [`TapexTruncationStrategy`] or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `False`): Activates and controls truncation. Accepts the following values: - `'drop_rows_to_fit'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate row by row, removing rows from the table. - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided. - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. 
This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size). max_length (`int`, *optional*): Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to `None`, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated. stride (`int`, *optional*, defaults to 0): If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens. pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta). return_tensors (`str` or [`~file_utils.TensorType`], *optional*): If set, will return tensors instead of list of python integers. Acceptable values are: - `'tf'`: Return TensorFlow `tf.constant` objects. - `'pt'`: Return PyTorch `torch.Tensor` objects. - `'np'`: Return Numpy `np.ndarray` objects. """ @lru_cache() def bytes_to_unicode(): """ Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control characters the bpe code barfs on. The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings. """ bs = ( list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1)) ) cs = bs[:] n = 0 for b in range(2**8): if b not in bs: bs.append(b) cs.append(2**8 + n) n += 1 cs = [chr(n) for n in cs] return dict(zip(bs, cs)) def get_pairs(word): """ Return set of symbol pairs in a word. Word is represented as tuple of symbols (symbols being variable-length strings). """ pairs = set() prev_char = word[0] for char in word[1:]: pairs.add((prev_char, char)) prev_char = char return pairs class IndexedRowTableLinearize: """ FORMAT: col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : ... """ def process_table(self, table_content: Dict): """ Given a table, TableLinearize aims at converting it into a flatten sequence with special symbols. """ assert "header" in table_content and "rows" in table_content, self.PROMPT_MESSAGE # process header table_str = self.process_header(table_content["header"]) + " " # process rows for i, row_example in enumerate(table_content["rows"]): # NOTE: the row should start from row 1 instead of 0 table_str += self.process_row(row_example, row_index=i + 1) + " " return table_str.strip() def process_header(self, headers: List): """ Given a list of headers, TableLinearize aims at converting it into a flatten sequence with special symbols. 
""" return "col : " + " | ".join(headers) def process_row(self, row: List, row_index: int): """ Given a row, TableLinearize aims at converting it into a flatten sequence with special symbols. """ row_str = "" row_cell_values = [] for cell_value in row: if isinstance(cell_value, int): row_cell_values.append(str(cell_value)) else: row_cell_values.append(cell_value) row_str += " | ".join(row_cell_values) return "row " + str(row_index) + " : " + row_str class TapexTokenizer(PreTrainedTokenizer): r""" Construct a TAPEX tokenizer. Based on byte-level Byte-Pair-Encoding (BPE). This tokenizer can be used to flatten one or more table(s) and concatenate them with one or more related sentences to be used by TAPEX models. The format that the TAPEX tokenizer creates is the following: sentence col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : ... The tokenizer supports a single table + single query, a single table and multiple queries (in which case the table will be duplicated for every query), a single query and multiple tables (in which case the query will be duplicated for every table), and multiple tables and queries. In other words, you can provide a batch of tables + questions to the tokenizer for instance to prepare them for the model. Tokenization itself is based on the BPE algorithm. It is identical to the one used by BART, RoBERTa and GPT-2. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. merges_file (`str`): Path to the merges file. do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. errors (`str`, *optional*, defaults to `"replace"`): Paradigm to follow when decoding bytes to UTF-8. See [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. <Tip> When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. </Tip> eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. 
This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. add_prefix_space (`bool`, *optional*, defaults to `False`): Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. (BART tokenizer detect beginning of words by the preceding space). max_cell_length (`int`, *optional*, defaults to 15): Maximum number of characters per cell when linearizing a table. If this number is exceeded, truncation takes place. """ vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] def __init__( self, vocab_file, merges_file, do_lower_case=True, errors="replace", bos_token="<s>", eos_token="</s>", sep_token="</s>", cls_token="<s>", unk_token="<unk>", pad_token="<pad>", mask_token="<mask>", add_prefix_space=False, max_cell_length=15, **kwargs, ): bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token # Mask token behave like a normal word, i.e. include the space before it mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token with open(vocab_file, encoding="utf-8") as vocab_handle: self.encoder = json.load(vocab_handle) self.decoder = {v: k for k, v in self.encoder.items()} self.errors = errors # how to handle errors in decoding self.byte_encoder = bytes_to_unicode() self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} with open(merges_file, encoding="utf-8") as merges_handle: bpe_merges = merges_handle.read().split("\n")[1:-1] bpe_merges = [tuple(merge.split()) for merge in bpe_merges] self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) self.cache = {} self.add_prefix_space = add_prefix_space self.do_lower_case = do_lower_case # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""") # additional properties super().__init__( vocab_file=vocab_file, merges_file=merges_file, do_lower_case=do_lower_case, errors=errors, bos_token=bos_token, eos_token=eos_token, unk_token=unk_token, sep_token=sep_token, cls_token=cls_token, pad_token=pad_token, mask_token=mask_token, add_prefix_space=add_prefix_space, max_cell_length=max_cell_length, **kwargs, ) self.max_cell_length = max_cell_length self.table_linearize = IndexedRowTableLinearize() def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A TAPEX sequence has the following format: - single sequence: `<s> X </s>` - pair of sequences: `<s> A </s></s> B </s>` Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. 
token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. """ if token_ids_1 is None: return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] cls = [self.cls_token_id] sep = [self.sep_token_id] return cls + token_ids_0 + sep + sep + token_ids_1 + sep def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) if token_ids_1 is None: return [1] + ([0] * len(token_ids_0)) + [1] return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1] def create_token_type_ids_from_sequences( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Create a mask from the two sequences passed to be used in a sequence-pair classification task. TAPEX does not make use of token type ids, therefore a list of zeros is returned. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros. 
""" sep = [self.sep_token_id] cls = [self.cls_token_id] if token_ids_1 is None: return len(cls + token_ids_0 + sep) * [0] return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0] def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs): add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space) if (is_split_into_words or add_prefix_space) and (len(text) > 0 and not text[0].isspace()): text = " " + text return (text, kwargs) @property def vocab_size(self): return len(self.encoder) def get_vocab(self): return dict(self.encoder, **self.added_tokens_encoder) def bpe(self, token): if token in self.cache: return self.cache[token] word = tuple(token) pairs = get_pairs(word) if not pairs: return token while True: bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) if bigram not in self.bpe_ranks: break first, second = bigram new_word = [] i = 0 while i < len(word): try: j = word.index(first, i) except ValueError: new_word.extend(word[i:]) break else: new_word.extend(word[i:j]) i = j if word[i] == first and i < len(word) - 1 and word[i + 1] == second: new_word.append(first + second) i += 2 else: new_word.append(word[i]) i += 1 new_word = tuple(new_word) word = new_word if len(word) == 1: break else: pairs = get_pairs(word) word = " ".join(word) self.cache[token] = word return word def _tokenize(self, text): """Tokenize a string.""" bpe_tokens = [] for token in re.findall(self.pat, text): token = "".join( self.byte_encoder[b] for b in token.encode("utf-8") ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case) bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" ")) return bpe_tokens def _convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" return self.encoder.get(token, self.encoder.get(self.unk_token)) def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" return self.decoder.get(index) def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" text = "".join(tokens) text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) return text def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] ) merge_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] ) with open(vocab_file, "w", encoding="utf-8") as f: f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n") index = 0 with open(merge_file, "w", encoding="utf-8") as writer: writer.write("#version: 0.2\n") for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): if index != token_index: logger.warning( f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive." " Please check that the tokenizer is not corrupted!" 
) index = token_index writer.write(" ".join(bpe_tokens) + "\n") index += 1 return vocab_file, merge_file @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, TAPEX_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) def __call__( self, table: Union["pd.DataFrame", List["pd.DataFrame"]] = None, query: Optional[Union[TextInput, List[TextInput]]] = None, answer: Union[str, List[str]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ Main method to tokenize and prepare for the model one or several table-sequence pair(s). Args: table (`pd.DataFrame`, `List[pd.DataFrame]`): Table(s) containing tabular data. query (`str` or `List[str]`, *optional*): Sentence or batch of sentences related to one or more table(s) to be encoded. Note that the number of sentences must match the number of tables. answer (`str` or `List[str]`, *optional*): Optionally, the corresponding answer to the questions as supervision. """ if table is not None: return self.source_call_func( table=table, query=query, answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) elif answer is not None: return self.target_call_func( answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) else: raise ValueError("You need to provide either a `table` or an `answer`.") def source_call_func( self, table: Union["pd.DataFrame", List["pd.DataFrame"]], query: Optional[Union[TextInput, List[TextInput]]] = None, answer: Union[str, List[str]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: # Input type checking for clearer error valid_table = False valid_query = False # Check that table have a valid type if 
isinstance(table, pd.DataFrame): valid_table = True elif isinstance(table, (list, tuple)) and isinstance(table[0], pd.DataFrame): valid_table = True # Check that the query has a valid type if query is None or isinstance(query, str): valid_query = True elif isinstance(query, (list, tuple)): if len(query) == 0 or isinstance(query[0], str): valid_query = True if not valid_table: raise ValueError( "table input must be of type `pd.DataFrame` (single example) or `List[pd.DataFrame]` (batch of examples)." ) if not valid_query: raise ValueError("query input must be of type `str` (single example) or `List[str]` (batch of examples).") is_batched = isinstance(table, (list, tuple)) or isinstance(query, (list, tuple)) if is_batched: return self.batch_encode_plus( table=table, query=query, answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) else: return self.encode_plus( table=table, query=query, answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, TAPEX_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) def batch_encode_plus( self, table: Union["pd.DataFrame", List["pd.DataFrame"]], query: Optional[List[TextInput]] = None, answer: List[str] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str] = None, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ <Tip warning={true}> This method is deprecated, `__call__` should be used instead. 
</Tip> """ # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs, ) return self._batch_encode_plus( table=table, query=query, answer=answer, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def _batch_encode_plus( self, table: Union["pd.DataFrame", List["pd.DataFrame"]], query: Optional[List[TextInput]] = None, answer: Optional[List[str]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: if return_offsets_mapping: raise NotImplementedError( "return_offset_mapping is not available when using Python tokenizers. " "To use this feature, change your tokenizer to one deriving from " "transformers.PreTrainedTokenizerFast." 
) if isinstance(table, pd.DataFrame) and isinstance(query, (list, tuple)): # single table, many queries case # duplicate table for every query table = [table] * len(query) if isinstance(table, (list, tuple)) and isinstance(query, str): # many tables, single query case # duplicate query for every table query = [query] * len(table) batch_outputs = self._batch_prepare_for_model( table=table, query=query, answer=answer, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=return_tensors, verbose=verbose, ) return BatchEncoding(batch_outputs) @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, TAPEX_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) def _batch_prepare_for_model( self, table: Union["pd.DataFrame", List["pd.DataFrame"]], query: Optional[Union[TextInput, List[TextInput]]] = None, answer: Optional[Union[str, List[str]]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[str] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_length: bool = False, verbose: bool = True, ) -> BatchEncoding: """ This method adds special tokens, truncates sequences if overflowing while taking into account the special tokens and manages a moving window (with user defined stride) for overflowing tokens. 
""" batch_outputs = {} if answer is None: answer = [None] * len(table) for _table, _query, _answer in zip(table, query, answer): text = self.prepare_table_query( _table, _query, _answer, truncation_strategy=truncation_strategy, max_length=max_length ) if self.do_lower_case: text = text.lower() tokens = self.tokenize(text) outputs = self.prepare_for_model( ids=self.convert_tokens_to_ids(tokens), add_special_tokens=add_special_tokens, padding=PaddingStrategy.DO_NOT_PAD.value, # we pad in batch afterwards truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=None, # we pad in batch afterwards return_attention_mask=False, # we pad in batch afterwards return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=None, # We convert the whole batch to tensors at the end prepend_batch_axis=False, verbose=verbose, ) for key, value in outputs.items(): if key not in batch_outputs: batch_outputs[key] = [] batch_outputs[key].append(value) batch_outputs = self.pad( batch_outputs, padding=padding_strategy.value, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, ) batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors) return batch_outputs @add_end_docstrings(ENCODE_KWARGS_DOCSTRING) def encode( self, table: "pd.DataFrame", query: Optional[TextInput] = None, answer: Optional[str] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy, TapexTruncationStrategy] = None, max_length: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, **kwargs, ) -> List[int]: """ Prepare a table, a string and possible answer for the model. This method does not return token type IDs, attention masks, etc. which are necessary for the model to work correctly. Use this method if you want to build your processing on your own, otherwise refer to `__call__`. 
""" encoded_inputs = self.encode_plus( table, query=query, answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, return_tensors=return_tensors, **kwargs, ) return encoded_inputs["input_ids"] @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, TAPEX_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) def encode_plus( self, table: "pd.DataFrame", query: Optional[TextInput] = None, answer: Optional[str] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str] = None, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs, ) return self._encode_plus( table=table, query=query, answer=answer, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def _encode_plus( self, table: "pd.DataFrame", query: Optional[TextInput] = None, answer: Optional[str] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: if return_offsets_mapping: raise NotImplementedError( "return_offset_mapping is not available when using Python tokenizers. " "To use this feature, change your tokenizer to one deriving from " "transformers.PreTrainedTokenizerFast. 
" "More information on available tokenizers at " "https://github.com/huggingface/transformers/pull/2674" ) text = self.prepare_table_query( table, query, answer, truncation_strategy=truncation_strategy, max_length=max_length ) # if necessary, perform lower case if self.do_lower_case: text = text.lower() tokens = self.tokenize(text) return self.prepare_for_model( ids=self.convert_tokens_to_ids(tokens), add_special_tokens=add_special_tokens, padding=padding_strategy.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, prepend_batch_axis=True, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, verbose=verbose, ) def target_call_func( self, answer: Union[str, List[str]], add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ The method tokenizes and prepares the answer label for the model. Args: answer (`str` or `List[str]`): Corresponding answer supervision to the queries for training the model. """ is_batched = isinstance(answer, (list, tuple)) if is_batched: return self.target_batch_encode_plus( answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) else: return self.target_encode_plus( answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def target_batch_encode_plus( self, answer: List[str], add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str] = None, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ Prepare answer strings for the model. 
Args: answer `List[str]`: Corresponding answer supervision to the queries for training the model. """ # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs, ) return self._target_batch_encode_plus( answer=answer, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def _target_batch_encode_plus( self, answer: List[str], add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: batch_outputs = {} for text in answer: if self.do_lower_case: text = text.lower() tokens = self.tokenize(text) outputs = self.prepare_for_model( ids=self.convert_tokens_to_ids(tokens), add_special_tokens=add_special_tokens, padding=PaddingStrategy.DO_NOT_PAD.value, # we pad in batch afterwards truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=None, # we pad in batch afterwards return_attention_mask=False, # we pad in batch afterwards return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=None, # We convert the whole batch to tensors at the end prepend_batch_axis=False, verbose=verbose, ) for key, value in outputs.items(): if key not in batch_outputs: batch_outputs[key] = [] batch_outputs[key].append(value) batch_outputs = self.pad( batch_outputs, padding=padding_strategy.value, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, ) batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors) return BatchEncoding(batch_outputs) def target_encode( self, answer: str, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy, TapexTruncationStrategy] = None, max_length: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, **kwargs, ) -> List[int]: """ Prepare the answer string for the model. This method does not return token type IDs, attention masks, etc. which are necessary for the model to work correctly. Use this method if you want to build your processing on your own, otherwise refer to `__call__`. 
Args: answer `str`: Corresponding answer supervision to the queries for training the model """ encoded_outputs = self.target_encode_plus( answer=answer, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, return_tensors=return_tensors, **kwargs, ) return encoded_outputs["input_ids"] def target_encode_plus( self, answer: str, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str] = None, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ Prepare a answer string for the model. Args: answer `str`: Corresponding answer supervision to the queries for training the model. """ # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs, ) return self._target_encode_plus( answer=answer, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def _target_encode_plus( self, answer: str, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: if return_offsets_mapping: raise NotImplementedError( "return_offset_mapping is not available when using Python tokenizers. " "To use this feature, change your tokenizer to one deriving from " "transformers.PreTrainedTokenizerFast. 
" "More information on available tokenizers at " "https://github.com/huggingface/transformers/pull/2674" ) text = answer # if necessary, perform lower case if self.do_lower_case: text = text.lower() tokens = self.tokenize(text) return self.prepare_for_model( ids=self.convert_tokens_to_ids(tokens), add_special_tokens=add_special_tokens, padding=padding_strategy.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, prepend_batch_axis=True, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, verbose=verbose, ) def prepare_table_query( self, table, query, answer=None, truncation_strategy=Union[str, TruncationStrategy, TapexTruncationStrategy], max_length=None, ): """ This method can be used to linearize a table and add a corresponding query. Optionally, it also handles truncation of the table (cells). An answer can be provided for more precise truncation. """ if not table.empty: # step 1: create table dictionary table_content = {"header": list(table.columns), "rows": [list(row.values) for i, row in table.iterrows()]} # step 2: modify table internally # always truncate table cells based on self.max_cell_length # optionally truncate rows if truncation_strategy is set to it self.truncate_table_cells(table_content, query, answer) if truncation_strategy == TapexTruncationStrategy.DROP_ROWS_TO_FIT: self.truncate_table_rows(table_content, query, answer, max_length=max_length) # step 3: linearize table linear_table = self.table_linearize.process_table(table_content) else: linear_table = "" if linear_table == "": logger.warning( "You provide an empty table, or all cells contain much tokens (e.g., >= 1024 tokens). " + f"Please carefully check the corresponding table with the query : {query}." ) if query == "": logger.warning("You provide nothing to query with respect to the table.") # step 4: concatenate query with linear_table separator = " " if query and linear_table else "" joint_input = (query + separator + linear_table) if query else linear_table return joint_input def truncate_table_cells(self, table_content: Dict, question: str, answer: List): # TODO (Qian): is it possible to revert the original cell if it is in the final answer? 
cell_mapping = {} for row in table_content["rows"]: for i, cell in enumerate(row): truncate_cell = self.truncate_cell(cell) if truncate_cell is not None: cell_mapping[cell] = truncate_cell row[i] = truncate_cell # modify the answer list if answer is not None: for i, case in enumerate(answer): if case in cell_mapping: answer[i] = cell_mapping[case] def truncate_cell(self, cell_value): # numeric cells are returned unchanged if isinstance(cell_value, int) or isinstance(cell_value, float): return cell_value if cell_value.strip() != "": try_tokens = self.tokenize(cell_value) if len(try_tokens) >= self.max_cell_length: retain_tokens = try_tokens[: self.max_cell_length] retain_cell_value = self.convert_tokens_to_string(retain_tokens) return retain_cell_value else: return None else: return cell_value def truncate_table_rows( self, table_content: Dict, question: str, answer: Optional[Union[str, List[str]]] = None, max_length=None ): """ Args: table_content: {"header": xxx, "rows": xxx, "id" (Optionally): xxx} question: natural language sentence answer: the supervision used during training; otherwise empty """ delete_ratio, remain_token_len = self.estimate_delete_ratio(table_content, question, max_length) # randomly delete unrelated rows self.delete_unrelated_rows(table_content, question, answer, delete_ratio) # guarantee the result < max_length maximum_keep_rows = 0 for ind, row_example in enumerate(table_content["rows"]): value_string = self.table_linearize.process_row(row_example, ind + 1) value_token_len = len(self.tokenize(value_string)) # over the size limit, stop keeping further rows if value_token_len > remain_token_len: break remain_token_len -= value_token_len maximum_keep_rows += 1 del table_content["rows"][maximum_keep_rows:] def estimate_delete_ratio(self, table_content: Dict, question: str, max_length=None): if "header" not in table_content or "rows" not in table_content: raise ValueError("The table content should contain both 'header' and 'rows' keys.") # calculate the tokens of the question; special tokens are only prepended to the question question_tokens = self.tokenize(question, add_special_tokens=True) # calculate the tokens of header header_string = self.table_linearize.process_header(table_content["header"]) header_tokens = self.tokenize(header_string, add_special_tokens=False) # split all cell values into tokens and see how many can be accommodated used_token_len = len(question_tokens) + len(header_tokens) # remaining token space for rows remain_token_len = max_length - used_token_len value_string = "" for _, row_example in enumerate(table_content["rows"]): # use a general index to roughly estimate the overall token len value_string += self.table_linearize.process_row(row_example, 100) + " " value_token_len = len(self.tokenize(value_string)) if value_token_len < remain_token_len: # no row will be deleted return 0.0, remain_token_len else: # compute a rough delete ratio return 1.0 - remain_token_len / value_token_len, remain_token_len def delete_unrelated_rows(self, table_content: Dict, question: str, answer: List, delete_ratio: float): """ The argument answer is used only during training. 
""" truncated_unrelated_indices = [] related_indices = [] if answer is None or len(answer) == 0: answer_set = set() else: answer_set = {ans_ex.lower() for ans_ex in answer} # add question keywords to the answer set if question is not None: answer_set.update(question.split()) question_set = set(question.strip("?!.,").split(" ")) row_max_len = len(table_content["rows"]) for _row_idx, row in enumerate(table_content["rows"]): lower_row = {str(cell).lower() for cell in row} if len(lower_row & answer_set) == 0 and len(lower_row & question_set) == 0: truncated_unrelated_indices.append(_row_idx) else: # add neighbours to preserve information aggressively related_indices.extend([_row_idx - 2, _row_idx - 1, _row_idx, _row_idx + 1, _row_idx + 2]) # exclude related rows and their neighbours from the deletion candidates truncated_unrelated_indices = [ _row_idx for _row_idx in truncated_unrelated_indices if _row_idx not in related_indices ] # select how many unrelated rows to drop drop_items = min(len(truncated_unrelated_indices), int(len(table_content["rows"]) * delete_ratio)) drop_row_indices = random.choices(truncated_unrelated_indices, k=drop_items) for _row_idx in reversed(range(row_max_len)): if _row_idx in drop_row_indices: del table_content["rows"][_row_idx] # log a warning whenever rows were actually dropped from an identified table if "id" in table_content and len(drop_row_indices) > 0: logger.warning("Deleted {} rows in table {}".format(len(drop_row_indices), table_content["id"]))
transformers/src/transformers/models/deprecated/tapex/tokenization_tapex.py/0
{ "file_path": "transformers/src/transformers/models/deprecated/tapex/tokenization_tapex.py", "repo_id": "transformers", "token_count": 29313 }
342
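# Illustrative usage sketch for the TapexTokenizer defined in the file above (not part of the
# original source). It shows how a small pandas table and a question could be flattened into the
# "query col : ... row 1 : ..." format documented in the class docstring. The checkpoint name
# "microsoft/tapex-base" is an assumption; substitute any TAPEX checkpoint or local vocab/merges
# files you actually have.
import pandas as pd

from transformers import TapexTokenizer

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")  # assumed checkpoint name
table = pd.DataFrame({"year": [2008, 2012], "city": ["Beijing", "London"]})
query = "In which year did Beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
# decoding shows the lower-cased, linearized table that the model actually receives
print(tokenizer.decode(encoding["input_ids"][0]))
# answers used as training targets can be encoded separately via the target path of `__call__`
labels = tokenizer(answer="2008", return_tensors="pt")["input_ids"]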
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import TYPE_CHECKING from ...utils import ( OptionalDependencyNotAvailable, _LazyModule, is_flax_available, is_torch_available, ) _import_structure = {"configuration_dinov2": ["Dinov2Config", "Dinov2OnnxConfig"]} try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_dinov2"] = [ "Dinov2ForImageClassification", "Dinov2Model", "Dinov2PreTrainedModel", "Dinov2Backbone", ] try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_flax_dinov2"] = [ "FlaxDinov2ForImageClassification", "FlaxDinov2Model", "FlaxDinov2PreTrainedModel", ] if TYPE_CHECKING: from .configuration_dinov2 import Dinov2Config, Dinov2OnnxConfig try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_dinov2 import ( Dinov2Backbone, Dinov2ForImageClassification, Dinov2Model, Dinov2PreTrainedModel, ) try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_flax_dinov2 import ( FlaxDinov2ForImageClassification, FlaxDinov2Model, FlaxDinov2PreTrainedModel, ) else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
transformers/src/transformers/models/dinov2/__init__.py/0
{ "file_path": "transformers/src/transformers/models/dinov2/__init__.py", "repo_id": "transformers", "token_count": 950 }
343
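# Illustrative usage sketch for the lazy-import structure above (not part of the original
# __init__.py). Because of the _LazyModule registration, the classes listed in _import_structure
# are resolved only when first accessed through the top-level transformers namespace. The snippet
# builds a randomly initialised model from the default config, so no checkpoint is assumed.
from transformers import Dinov2Config, Dinov2Model  # attribute access resolves the lazy import

config = Dinov2Config()      # default configuration, no download required
model = Dinov2Model(config)  # randomly initialised weights
print(type(model).__name__, config.hidden_size)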
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert Donut checkpoints using the original `donut-python` library. URL: https://github.com/clovaai/donut""" import argparse import torch from datasets import load_dataset from donut import DonutModel from transformers import ( DonutImageProcessor, DonutProcessor, DonutSwinConfig, DonutSwinModel, MBartConfig, MBartForCausalLM, VisionEncoderDecoderModel, XLMRobertaTokenizerFast, ) def get_configs(model): original_config = model.config encoder_config = DonutSwinConfig( image_size=original_config.input_size, patch_size=4, depths=original_config.encoder_layer, num_heads=[4, 8, 16, 32], window_size=original_config.window_size, embed_dim=128, ) decoder_config = MBartConfig( is_decoder=True, is_encoder_decoder=False, add_cross_attention=True, decoder_layers=original_config.decoder_layer, max_position_embeddings=original_config.max_position_embeddings, vocab_size=len( model.decoder.tokenizer ), # several special tokens are added to the vocab of XLMRobertaTokenizer, see repo on the hub (added_tokens.json) scale_embedding=True, add_final_layer_norm=True, ) return encoder_config, decoder_config def rename_key(name): if "encoder.model" in name: name = name.replace("encoder.model", "encoder") if "decoder.model" in name: name = name.replace("decoder.model", "decoder") if "patch_embed.proj" in name: name = name.replace("patch_embed.proj", "embeddings.patch_embeddings.projection") if "patch_embed.norm" in name: name = name.replace("patch_embed.norm", "embeddings.norm") if name.startswith("encoder"): if "layers" in name: name = "encoder." 
+ name if "attn.proj" in name: name = name.replace("attn.proj", "attention.output.dense") if "attn" in name and "mask" not in name: name = name.replace("attn", "attention.self") if "norm1" in name: name = name.replace("norm1", "layernorm_before") if "norm2" in name: name = name.replace("norm2", "layernorm_after") if "mlp.fc1" in name: name = name.replace("mlp.fc1", "intermediate.dense") if "mlp.fc2" in name: name = name.replace("mlp.fc2", "output.dense") if name == "encoder.norm.weight": name = "encoder.layernorm.weight" if name == "encoder.norm.bias": name = "encoder.layernorm.bias" return name def convert_state_dict(orig_state_dict, model): for key in orig_state_dict.copy().keys(): val = orig_state_dict.pop(key) if "qkv" in key: key_split = key.split(".") layer_num = int(key_split[3]) block_num = int(key_split[5]) dim = model.encoder.encoder.layers[layer_num].blocks[block_num].attention.self.all_head_size if "weight" in key: orig_state_dict[ f"encoder.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.query.weight" ] = val[:dim, :] orig_state_dict[f"encoder.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.key.weight"] = ( val[dim : dim * 2, :] ) orig_state_dict[ f"encoder.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.value.weight" ] = val[-dim:, :] else: orig_state_dict[f"encoder.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.query.bias"] = ( val[:dim] ) orig_state_dict[f"encoder.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.key.bias"] = ( val[dim : dim * 2] ) orig_state_dict[f"encoder.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.value.bias"] = ( val[-dim:] ) elif "attn_mask" in key or key in ["encoder.model.norm.weight", "encoder.model.norm.bias"]: # HuggingFace implementation doesn't use attn_mask buffer # and model doesn't use final LayerNorms for the encoder pass else: orig_state_dict[rename_key(key)] = val return orig_state_dict def convert_donut_checkpoint(model_name, pytorch_dump_folder_path=None, push_to_hub=False): # load original model original_model = DonutModel.from_pretrained(model_name).eval() # load HuggingFace model encoder_config, decoder_config = get_configs(original_model) encoder = DonutSwinModel(encoder_config) decoder = MBartForCausalLM(decoder_config) model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder) model.eval() state_dict = original_model.state_dict() new_state_dict = convert_state_dict(state_dict, model) model.load_state_dict(new_state_dict) # verify results on scanned document dataset = load_dataset("hf-internal-testing/example-documents") # no-script image = dataset["test"][0]["image"].convert("RGB") tokenizer = XLMRobertaTokenizerFast.from_pretrained(model_name, from_slow=True) image_processor = DonutImageProcessor( do_align_long_axis=original_model.config.align_long_axis, size=original_model.config.input_size[::-1] ) processor = DonutProcessor(image_processor, tokenizer) pixel_values = processor(image, return_tensors="pt").pixel_values if model_name == "naver-clova-ix/donut-base-finetuned-docvqa": task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>" question = "When is the coffee break?" 
task_prompt = task_prompt.replace("{user_input}", question) elif model_name == "naver-clova-ix/donut-base-finetuned-rvlcdip": task_prompt = "<s_rvlcdip>" elif model_name in [ "naver-clova-ix/donut-base-finetuned-cord-v1", "naver-clova-ix/donut-base-finetuned-cord-v1-2560", ]: task_prompt = "<s_cord>" elif model_name == "naver-clova-ix/donut-base-finetuned-cord-v2": task_prompt = "s_cord-v2>" elif model_name == "naver-clova-ix/donut-base-finetuned-zhtrainticket": task_prompt = "<s_zhtrainticket>" elif model_name in ["naver-clova-ix/donut-proto", "naver-clova-ix/donut-base"]: # use a random prompt task_prompt = "hello world" else: raise ValueError("Model name not supported") prompt_tensors = original_model.decoder.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt")[ "input_ids" ] original_patch_embed = original_model.encoder.model.patch_embed(pixel_values) patch_embeddings, _ = model.encoder.embeddings(pixel_values) assert torch.allclose(original_patch_embed, patch_embeddings, atol=1e-3) # verify encoder hidden states original_last_hidden_state = original_model.encoder(pixel_values) last_hidden_state = model.encoder(pixel_values).last_hidden_state assert torch.allclose(original_last_hidden_state, last_hidden_state, atol=1e-2) # verify decoder hidden states original_logits = original_model(pixel_values, prompt_tensors, None).logits logits = model(pixel_values, decoder_input_ids=prompt_tensors).logits assert torch.allclose(original_logits, logits, atol=1e-3) print("Looks ok!") if pytorch_dump_folder_path is not None: print(f"Saving model and processor to {pytorch_dump_folder_path}") model.save_pretrained(pytorch_dump_folder_path) processor.save_pretrained(pytorch_dump_folder_path) if push_to_hub: model.push_to_hub("nielsr/" + model_name.split("/")[-1], commit_message="Update model") processor.push_to_hub("nielsr/" + model_name.split("/")[-1], commit_message="Update model") if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--model_name", default="naver-clova-ix/donut-base-finetuned-docvqa", required=False, type=str, help="Name of the original model you'd like to convert.", ) parser.add_argument( "--pytorch_dump_folder_path", default=None, required=False, type=str, help="Path to the output PyTorch model directory.", ) parser.add_argument( "--push_to_hub", action="store_true", help="Whether or not to push the converted model and processor to the 🀗 hub.", ) args = parser.parse_args() convert_donut_checkpoint(args.model_name, args.pytorch_dump_folder_path, args.push_to_hub)
transformers/src/transformers/models/donut/convert_donut_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/donut/convert_donut_to_pytorch.py", "repo_id": "transformers", "token_count": 4051 }
344
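# Illustrative invocation sketch for the conversion script above (not part of the original file).
# The command line mirrors the argparse options defined in the script; the output folder name is
# an assumption, and reloading afterwards uses the generic classes the script saves with.
#
#   python convert_donut_to_pytorch.py \
#       --model_name naver-clova-ix/donut-base-finetuned-docvqa \
#       --pytorch_dump_folder_path ./donut-base-finetuned-docvqa
#
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("./donut-base-finetuned-docvqa")  # assumed output path
model = VisionEncoderDecoderModel.from_pretrained("./donut-base-finetuned-docvqa")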
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert DPT checkpoints from the original repository. URL: https://github.com/isl-org/DPT""" import argparse import json from pathlib import Path import requests import torch from huggingface_hub import hf_hub_download from PIL import Image from transformers import DPTConfig, DPTForDepthEstimation, DPTForSemanticSegmentation, DPTImageProcessor from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger(__name__) def get_dpt_config(checkpoint_url): config = DPTConfig(embedding_type="hybrid") if "large" in checkpoint_url: config.hidden_size = 1024 config.intermediate_size = 4096 config.num_hidden_layers = 24 config.num_attention_heads = 16 config.backbone_out_indices = [5, 11, 17, 23] config.neck_hidden_sizes = [256, 512, 1024, 1024] expected_shape = (1, 384, 384) if "nyu" in checkpoint_url or "midas" in checkpoint_url: config.hidden_size = 768 config.reassemble_factors = [1, 1, 1, 0.5] config.neck_hidden_sizes = [256, 512, 768, 768] config.num_labels = 150 config.patch_size = 16 expected_shape = (1, 384, 384) config.use_batch_norm_in_fusion_residual = False config.readout_type = "project" if "ade" in checkpoint_url: config.use_batch_norm_in_fusion_residual = True config.hidden_size = 768 config.reassemble_stage = [1, 1, 1, 0.5] config.num_labels = 150 config.patch_size = 16 repo_id = "huggingface/label-files" filename = "ade20k-id2label.json" id2label = json.loads(Path(hf_hub_download(repo_id, filename, repo_type="dataset")).read_text()) id2label = {int(k): v for k, v in id2label.items()} config.id2label = id2label config.label2id = {v: k for k, v in id2label.items()} expected_shape = [1, 150, 480, 480] return config, expected_shape def remove_ignore_keys_(state_dict): ignore_keys = ["pretrained.model.head.weight", "pretrained.model.head.bias"] for k in ignore_keys: state_dict.pop(k, None) def rename_key(name): if ( "pretrained.model" in name and "cls_token" not in name and "pos_embed" not in name and "patch_embed" not in name ): name = name.replace("pretrained.model", "dpt.encoder") if "pretrained.model" in name: name = name.replace("pretrained.model", "dpt.embeddings") if "patch_embed" in name: name = name.replace("patch_embed", "") if "pos_embed" in name: name = name.replace("pos_embed", "position_embeddings") if "attn.proj" in name: name = name.replace("attn.proj", "attention.output.dense") if "proj" in name and "project" not in name: name = name.replace("proj", "projection") if "blocks" in name: name = name.replace("blocks", "layer") if "mlp.fc1" in name: name = name.replace("mlp.fc1", "intermediate.dense") if "mlp.fc2" in name: name = name.replace("mlp.fc2", "output.dense") if "norm1" in name and "backbone" not in name: name = name.replace("norm1", "layernorm_before") if "norm2" in name and "backbone" not in name: name = name.replace("norm2", "layernorm_after") if "scratch.output_conv" in name: name = name.replace("scratch.output_conv", "head") if "scratch" 
in name: name = name.replace("scratch", "neck") if "layer1_rn" in name: name = name.replace("layer1_rn", "convs.0") if "layer2_rn" in name: name = name.replace("layer2_rn", "convs.1") if "layer3_rn" in name: name = name.replace("layer3_rn", "convs.2") if "layer4_rn" in name: name = name.replace("layer4_rn", "convs.3") if "refinenet" in name: layer_idx = int(name[len("neck.refinenet") : len("neck.refinenet") + 1]) # tricky here: we need to map 4 to 0, 3 to 1, 2 to 2 and 1 to 3 name = name.replace(f"refinenet{layer_idx}", f"fusion_stage.layers.{abs(layer_idx-4)}") if "out_conv" in name: name = name.replace("out_conv", "projection") if "resConfUnit1" in name: name = name.replace("resConfUnit1", "residual_layer1") if "resConfUnit2" in name: name = name.replace("resConfUnit2", "residual_layer2") if "conv1" in name: name = name.replace("conv1", "convolution1") if "conv2" in name: name = name.replace("conv2", "convolution2") # readout blocks if "pretrained.act_postprocess1.0.project.0" in name: name = name.replace("pretrained.act_postprocess1.0.project.0", "neck.reassemble_stage.readout_projects.0.0") if "pretrained.act_postprocess2.0.project.0" in name: name = name.replace("pretrained.act_postprocess2.0.project.0", "neck.reassemble_stage.readout_projects.1.0") if "pretrained.act_postprocess3.0.project.0" in name: name = name.replace("pretrained.act_postprocess3.0.project.0", "neck.reassemble_stage.readout_projects.2.0") if "pretrained.act_postprocess4.0.project.0" in name: name = name.replace("pretrained.act_postprocess4.0.project.0", "neck.reassemble_stage.readout_projects.3.0") # resize blocks if "pretrained.act_postprocess1.3" in name: name = name.replace("pretrained.act_postprocess1.3", "neck.reassemble_stage.layers.0.projection") if "pretrained.act_postprocess1.4" in name: name = name.replace("pretrained.act_postprocess1.4", "neck.reassemble_stage.layers.0.resize") if "pretrained.act_postprocess2.3" in name: name = name.replace("pretrained.act_postprocess2.3", "neck.reassemble_stage.layers.1.projection") if "pretrained.act_postprocess2.4" in name: name = name.replace("pretrained.act_postprocess2.4", "neck.reassemble_stage.layers.1.resize") if "pretrained.act_postprocess3.3" in name: name = name.replace("pretrained.act_postprocess3.3", "neck.reassemble_stage.layers.2.projection") if "pretrained.act_postprocess4.3" in name: name = name.replace("pretrained.act_postprocess4.3", "neck.reassemble_stage.layers.3.projection") if "pretrained.act_postprocess4.4" in name: name = name.replace("pretrained.act_postprocess4.4", "neck.reassemble_stage.layers.3.resize") if "pretrained" in name: name = name.replace("pretrained", "dpt") if "bn" in name: name = name.replace("bn", "batch_norm") if "head" in name: name = name.replace("head", "head.head") if "encoder.norm" in name: name = name.replace("encoder.norm", "layernorm") if "auxlayer" in name: name = name.replace("auxlayer", "auxiliary_head.head") if "backbone" in name: name = name.replace("backbone", "backbone.bit.encoder") if ".." 
in name: name = name.replace("..", ".") if "stem.conv" in name: name = name.replace("stem.conv", "bit.embedder.convolution") if "blocks" in name: name = name.replace("blocks", "layers") if "convolution" in name and "backbone" in name: name = name.replace("convolution", "conv") if "layer" in name and "backbone" in name: name = name.replace("layer", "layers") if "backbone.bit.encoder.bit" in name: name = name.replace("backbone.bit.encoder.bit", "backbone.bit") if "embedder.conv" in name: name = name.replace("embedder.conv", "embedder.convolution") if "backbone.bit.encoder.stem.norm" in name: name = name.replace("backbone.bit.encoder.stem.norm", "backbone.bit.embedder.norm") return name # we split up the matrix of each encoder layer into queries, keys and values def read_in_q_k_v(state_dict, config): for i in range(config.num_hidden_layers): # read in weights + bias of input projection layer (in timm, this is a single matrix + bias) in_proj_weight = state_dict.pop(f"dpt.encoder.layer.{i}.attn.qkv.weight") in_proj_bias = state_dict.pop(f"dpt.encoder.layer.{i}.attn.qkv.bias") # next, add query, keys and values (in that order) to the state dict state_dict[f"dpt.encoder.layer.{i}.attention.attention.query.weight"] = in_proj_weight[: config.hidden_size, :] state_dict[f"dpt.encoder.layer.{i}.attention.attention.query.bias"] = in_proj_bias[: config.hidden_size] state_dict[f"dpt.encoder.layer.{i}.attention.attention.key.weight"] = in_proj_weight[ config.hidden_size : config.hidden_size * 2, : ] state_dict[f"dpt.encoder.layer.{i}.attention.attention.key.bias"] = in_proj_bias[ config.hidden_size : config.hidden_size * 2 ] state_dict[f"dpt.encoder.layer.{i}.attention.attention.value.weight"] = in_proj_weight[ -config.hidden_size :, : ] state_dict[f"dpt.encoder.layer.{i}.attention.attention.value.bias"] = in_proj_bias[-config.hidden_size :] # We will verify our results on an image of cute cats def prepare_img(): url = "http://images.cocodataset.org/val2017/000000039769.jpg" im = Image.open(requests.get(url, stream=True).raw) return im @torch.no_grad() def convert_dpt_checkpoint(checkpoint_url, pytorch_dump_folder_path, push_to_hub, model_name, show_prediction): """ Copy/paste/tweak model's weights to our DPT structure. 
""" # define DPT configuration based on URL config, expected_shape = get_dpt_config(checkpoint_url) # load original state_dict from URL # state_dict = torch.hub.load_state_dict_from_url(checkpoint_url, map_location="cpu") state_dict = torch.load(checkpoint_url, map_location="cpu") # remove certain keys remove_ignore_keys_(state_dict) # rename keys for key in state_dict.copy().keys(): val = state_dict.pop(key) state_dict[rename_key(key)] = val # read in qkv matrices read_in_q_k_v(state_dict, config) # load HuggingFace model model = DPTForSemanticSegmentation(config) if "ade" in checkpoint_url else DPTForDepthEstimation(config) model.load_state_dict(state_dict) model.eval() # Check outputs on an image size = 480 if "ade" in checkpoint_url else 384 image_processor = DPTImageProcessor(size=size) image = prepare_img() encoding = image_processor(image, return_tensors="pt") # forward pass outputs = model(**encoding).logits if "ade" in checkpoint_url else model(**encoding).predicted_depth if show_prediction: prediction = ( torch.nn.functional.interpolate( outputs.unsqueeze(1), size=(image.size[1], image.size[0]), mode="bicubic", align_corners=False, ) .squeeze() .cpu() .numpy() ) Image.fromarray((prediction / prediction.max()) * 255).show() if pytorch_dump_folder_path is not None: Path(pytorch_dump_folder_path).mkdir(exist_ok=True) print(f"Saving model to {pytorch_dump_folder_path}") model.save_pretrained(pytorch_dump_folder_path) print(f"Saving image processor to {pytorch_dump_folder_path}") image_processor.save_pretrained(pytorch_dump_folder_path) if push_to_hub: model.push_to_hub("ybelkada/dpt-hybrid-midas") image_processor.push_to_hub("ybelkada/dpt-hybrid-midas") if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--checkpoint_url", default="https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", type=str, help="URL of the original DPT checkpoint you'd like to convert.", ) parser.add_argument( "--pytorch_dump_folder_path", default=None, type=str, required=False, help="Path to the output PyTorch model directory.", ) parser.add_argument( "--push_to_hub", action="store_true", ) parser.add_argument( "--model_name", default="dpt-large", type=str, help="Name of the model, in case you're pushing to the hub.", ) parser.add_argument( "--show_prediction", action="store_true", ) args = parser.parse_args() convert_dpt_checkpoint( args.checkpoint_url, args.pytorch_dump_folder_path, args.push_to_hub, args.model_name, args.show_prediction )
transformers/src/transformers/models/dpt/convert_dpt_hybrid_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/dpt/convert_dpt_hybrid_to_pytorch.py", "repo_id": "transformers", "token_count": 5463 }
345
# coding=utf-8 # Copyright 2019 The Google AI Language Team Authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """TF Electra model.""" from __future__ import annotations import math import warnings from dataclasses import dataclass from typing import Optional, Tuple, Union import numpy as np import tensorflow as tf from ...activations_tf import get_tf_activation from ...modeling_tf_outputs import ( TFBaseModelOutputWithPastAndCrossAttentions, TFMaskedLMOutput, TFMultipleChoiceModelOutput, TFQuestionAnsweringModelOutput, TFSequenceClassifierOutput, TFTokenClassifierOutput, ) from ...modeling_tf_utils import ( TFMaskedLanguageModelingLoss, TFModelInputType, TFMultipleChoiceLoss, TFPreTrainedModel, TFQuestionAnsweringLoss, TFSequenceClassificationLoss, TFSequenceSummary, TFTokenClassificationLoss, get_initializer, keras, keras_serializable, unpack_inputs, ) from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax from ...utils import ( ModelOutput, add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings, ) from .configuration_electra import ElectraConfig logger = logging.get_logger(__name__) _CHECKPOINT_FOR_DOC = "google/electra-small-discriminator" _CONFIG_FOR_DOC = "ElectraConfig" # Copied from transformers.models.bert.modeling_tf_bert.TFBertSelfAttention with Bert->Electra class TFElectraSelfAttention(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) if config.hidden_size % config.num_attention_heads != 0: raise ValueError( f"The hidden size ({config.hidden_size}) is not a multiple of the number " f"of attention heads ({config.num_attention_heads})" ) self.num_attention_heads = config.num_attention_heads self.attention_head_size = int(config.hidden_size / config.num_attention_heads) self.all_head_size = self.num_attention_heads * self.attention_head_size self.sqrt_att_head_size = math.sqrt(self.attention_head_size) self.query = keras.layers.Dense( units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query" ) self.key = keras.layers.Dense( units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key" ) self.value = keras.layers.Dense( units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value" ) self.dropout = keras.layers.Dropout(rate=config.attention_probs_dropout_prob) self.is_decoder = config.is_decoder self.config = config def transpose_for_scores(self, tensor: tf.Tensor, batch_size: int) -> tf.Tensor: # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size] tensor = tf.reshape(tensor=tensor, shape=(batch_size, -1, self.num_attention_heads, self.attention_head_size)) # Transpose the tensor from [batch_size, seq_length, num_attention_heads, attention_head_size] to [batch_size, num_attention_heads, seq_length, attention_head_size] return 
tf.transpose(tensor, perm=[0, 2, 1, 3]) def call( self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, encoder_hidden_states: tf.Tensor, encoder_attention_mask: tf.Tensor, past_key_value: Tuple[tf.Tensor], output_attentions: bool, training: bool = False, ) -> Tuple[tf.Tensor]: batch_size = shape_list(hidden_states)[0] mixed_query_layer = self.query(inputs=hidden_states) # If this is instantiated as a cross-attention module, the keys # and values come from an encoder; the attention mask needs to be # such that the encoder's padding tokens are not attended to. is_cross_attention = encoder_hidden_states is not None if is_cross_attention and past_key_value is not None: # reuse k,v, cross_attentions key_layer = past_key_value[0] value_layer = past_key_value[1] attention_mask = encoder_attention_mask elif is_cross_attention: key_layer = self.transpose_for_scores(self.key(inputs=encoder_hidden_states), batch_size) value_layer = self.transpose_for_scores(self.value(inputs=encoder_hidden_states), batch_size) attention_mask = encoder_attention_mask elif past_key_value is not None: key_layer = self.transpose_for_scores(self.key(inputs=hidden_states), batch_size) value_layer = self.transpose_for_scores(self.value(inputs=hidden_states), batch_size) key_layer = tf.concat([past_key_value[0], key_layer], axis=2) value_layer = tf.concat([past_key_value[1], value_layer], axis=2) else: key_layer = self.transpose_for_scores(self.key(inputs=hidden_states), batch_size) value_layer = self.transpose_for_scores(self.value(inputs=hidden_states), batch_size) query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) if self.is_decoder: # if cross_attention save Tuple(tf.Tensor, tf.Tensor) of all cross attention key/value_states. # Further calls to cross_attention layer can then reuse all cross-attention # key/value_states (first "if" case) # if uni-directional self-attention (decoder) save Tuple(tf.Tensor, tf.Tensor) of # all previous decoder key/value_states. Further calls to uni-directional self-attention # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) # if encoder bi-directional self-attention `past_key_value` is always `None` past_key_value = (key_layer, value_layer) # Take the dot product between "query" and "key" to get the raw attention scores. # (batch size, num_heads, seq_len_q, seq_len_k) attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) attention_scores = tf.divide(attention_scores, dk) if attention_mask is not None: # Apply the attention mask is (precomputed for all layers in TFElectraModel call() function) attention_scores = tf.add(attention_scores, attention_mask) # Normalize the attention scores to probabilities. attention_probs = stable_softmax(logits=attention_scores, axis=-1) # This is actually dropping out entire tokens to attend to, which might # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(inputs=attention_probs, training=training) # Mask heads if we want to if head_mask is not None: attention_probs = tf.multiply(attention_probs, head_mask) attention_output = tf.matmul(attention_probs, value_layer) attention_output = tf.transpose(attention_output, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, all_head_size) attention_output = tf.reshape(tensor=attention_output, shape=(batch_size, -1, self.all_head_size)) outputs = (attention_output, attention_probs) if output_attentions else (attention_output,) if self.is_decoder: outputs = outputs + (past_key_value,) return outputs def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "query", None) is not None: with tf.name_scope(self.query.name): self.query.build([None, None, self.config.hidden_size]) if getattr(self, "key", None) is not None: with tf.name_scope(self.key.name): self.key.build([None, None, self.config.hidden_size]) if getattr(self, "value", None) is not None: with tf.name_scope(self.value.name): self.value.build([None, None, self.config.hidden_size]) # Copied from transformers.models.bert.modeling_tf_bert.TFBertSelfOutput with Bert->Electra class TFElectraSelfOutput(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.dense = keras.layers.Dense( units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) self.LayerNorm = keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") self.dropout = keras.layers.Dropout(rate=config.hidden_dropout_prob) self.config = config def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: hidden_states = self.dense(inputs=hidden_states) hidden_states = self.dropout(inputs=hidden_states, training=training) hidden_states = self.LayerNorm(inputs=hidden_states + input_tensor) return hidden_states def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "dense", None) is not None: with tf.name_scope(self.dense.name): self.dense.build([None, None, self.config.hidden_size]) if getattr(self, "LayerNorm", None) is not None: with tf.name_scope(self.LayerNorm.name): self.LayerNorm.build([None, None, self.config.hidden_size]) # Copied from transformers.models.bert.modeling_tf_bert.TFBertAttention with Bert->Electra class TFElectraAttention(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.self_attention = TFElectraSelfAttention(config, name="self") self.dense_output = TFElectraSelfOutput(config, name="output") def prune_heads(self, heads): raise NotImplementedError def call( self, input_tensor: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, encoder_hidden_states: tf.Tensor, encoder_attention_mask: tf.Tensor, past_key_value: Tuple[tf.Tensor], output_attentions: bool, training: bool = False, ) -> Tuple[tf.Tensor]: self_outputs = self.self_attention( hidden_states=input_tensor, attention_mask=attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_value, output_attentions=output_attentions, training=training, ) attention_output = self.dense_output( hidden_states=self_outputs[0], input_tensor=input_tensor, training=training ) # add attentions (possibly with past_key_value) if we output them outputs = (attention_output,) + self_outputs[1:] return outputs def build(self, 
input_shape=None): if self.built: return self.built = True if getattr(self, "self_attention", None) is not None: with tf.name_scope(self.self_attention.name): self.self_attention.build(None) if getattr(self, "dense_output", None) is not None: with tf.name_scope(self.dense_output.name): self.dense_output.build(None) # Copied from transformers.models.bert.modeling_tf_bert.TFBertIntermediate with Bert->Electra class TFElectraIntermediate(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.dense = keras.layers.Dense( units=config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) if isinstance(config.hidden_act, str): self.intermediate_act_fn = get_tf_activation(config.hidden_act) else: self.intermediate_act_fn = config.hidden_act self.config = config def call(self, hidden_states: tf.Tensor) -> tf.Tensor: hidden_states = self.dense(inputs=hidden_states) hidden_states = self.intermediate_act_fn(hidden_states) return hidden_states def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "dense", None) is not None: with tf.name_scope(self.dense.name): self.dense.build([None, None, self.config.hidden_size]) # Copied from transformers.models.bert.modeling_tf_bert.TFBertOutput with Bert->Electra class TFElectraOutput(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.dense = keras.layers.Dense( units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) self.LayerNorm = keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") self.dropout = keras.layers.Dropout(rate=config.hidden_dropout_prob) self.config = config def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: hidden_states = self.dense(inputs=hidden_states) hidden_states = self.dropout(inputs=hidden_states, training=training) hidden_states = self.LayerNorm(inputs=hidden_states + input_tensor) return hidden_states def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "dense", None) is not None: with tf.name_scope(self.dense.name): self.dense.build([None, None, self.config.intermediate_size]) if getattr(self, "LayerNorm", None) is not None: with tf.name_scope(self.LayerNorm.name): self.LayerNorm.build([None, None, self.config.hidden_size]) # Copied from transformers.models.bert.modeling_tf_bert.TFBertLayer with Bert->Electra class TFElectraLayer(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.attention = TFElectraAttention(config, name="attention") self.is_decoder = config.is_decoder self.add_cross_attention = config.add_cross_attention if self.add_cross_attention: if not self.is_decoder: raise ValueError(f"{self} should be used as a decoder model if cross attention is added") self.crossattention = TFElectraAttention(config, name="crossattention") self.intermediate = TFElectraIntermediate(config, name="intermediate") self.bert_output = TFElectraOutput(config, name="output") def call( self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, encoder_hidden_states: tf.Tensor | None, encoder_attention_mask: tf.Tensor | None, past_key_value: Tuple[tf.Tensor] | None, output_attentions: bool, training: bool = False, ) -> Tuple[tf.Tensor]: # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 
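        # (any cross-attention cache occupies positions 3,4 and is sliced out via past_key_value[-2:] further below)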
self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None self_attention_outputs = self.attention( input_tensor=hidden_states, attention_mask=attention_mask, head_mask=head_mask, encoder_hidden_states=None, encoder_attention_mask=None, past_key_value=self_attn_past_key_value, output_attentions=output_attentions, training=training, ) attention_output = self_attention_outputs[0] # if decoder, the last output is tuple of self-attn cache if self.is_decoder: outputs = self_attention_outputs[1:-1] present_key_value = self_attention_outputs[-1] else: outputs = self_attention_outputs[1:] # add self attentions if we output attention weights cross_attn_present_key_value = None if self.is_decoder and encoder_hidden_states is not None: if not hasattr(self, "crossattention"): raise ValueError( f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers" " by setting `config.add_cross_attention=True`" ) # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None cross_attention_outputs = self.crossattention( input_tensor=attention_output, attention_mask=attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_value=cross_attn_past_key_value, output_attentions=output_attentions, training=training, ) attention_output = cross_attention_outputs[0] outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights # add cross-attn cache to positions 3,4 of present_key_value tuple cross_attn_present_key_value = cross_attention_outputs[-1] present_key_value = present_key_value + cross_attn_present_key_value intermediate_output = self.intermediate(hidden_states=attention_output) layer_output = self.bert_output( hidden_states=intermediate_output, input_tensor=attention_output, training=training ) outputs = (layer_output,) + outputs # add attentions if we output them # if decoder, return the attn key/values as the last output if self.is_decoder: outputs = outputs + (present_key_value,) return outputs def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "attention", None) is not None: with tf.name_scope(self.attention.name): self.attention.build(None) if getattr(self, "intermediate", None) is not None: with tf.name_scope(self.intermediate.name): self.intermediate.build(None) if getattr(self, "bert_output", None) is not None: with tf.name_scope(self.bert_output.name): self.bert_output.build(None) if getattr(self, "crossattention", None) is not None: with tf.name_scope(self.crossattention.name): self.crossattention.build(None) # Copied from transformers.models.bert.modeling_tf_bert.TFBertEncoder with Bert->Electra class TFElectraEncoder(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.config = config self.layer = [TFElectraLayer(config, name=f"layer_._{i}") for i in range(config.num_hidden_layers)] def call( self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, encoder_hidden_states: tf.Tensor | None, encoder_attention_mask: tf.Tensor | None, past_key_values: Tuple[Tuple[tf.Tensor]] | None, use_cache: Optional[bool], output_attentions: bool, output_hidden_states: bool, return_dict: bool, training: bool = False, ) -> Union[TFBaseModelOutputWithPastAndCrossAttentions, Tuple[tf.Tensor]]: all_hidden_states = () 
if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None next_decoder_cache = () if use_cache else None for i, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) past_key_value = past_key_values[i] if past_key_values is not None else None layer_outputs = layer_module( hidden_states=hidden_states, attention_mask=attention_mask, head_mask=head_mask[i], encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_value, output_attentions=output_attentions, training=training, ) hidden_states = layer_outputs[0] if use_cache: next_decoder_cache += (layer_outputs[-1],) if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) if self.config.add_cross_attention and encoder_hidden_states is not None: all_cross_attentions = all_cross_attentions + (layer_outputs[2],) # Add last layer if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [hidden_states, all_hidden_states, all_attentions, all_cross_attentions] if v is not None ) return TFBaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_decoder_cache, hidden_states=all_hidden_states, attentions=all_attentions, cross_attentions=all_cross_attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "layer", None) is not None: for layer in self.layer: with tf.name_scope(layer.name): layer.build(None) # Copied from transformers.models.bert.modeling_tf_bert.TFBertPooler with Bert->Electra class TFElectraPooler(keras.layers.Layer): def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.dense = keras.layers.Dense( units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), activation="tanh", name="dense", ) self.config = config def call(self, hidden_states: tf.Tensor) -> tf.Tensor: # We "pool" the model by simply taking the hidden state corresponding # to the first token. 
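        # hidden_states has shape (batch_size, seq_length, hidden_size); the slice below yields (batch_size, hidden_size)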
first_token_tensor = hidden_states[:, 0] pooled_output = self.dense(inputs=first_token_tensor) return pooled_output def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "dense", None) is not None: with tf.name_scope(self.dense.name): self.dense.build([None, None, self.config.hidden_size]) # Copied from transformers.models.albert.modeling_tf_albert.TFAlbertEmbeddings with Albert->Electra class TFElectraEmbeddings(keras.layers.Layer): """Construct the embeddings from word, position and token_type embeddings.""" def __init__(self, config: ElectraConfig, **kwargs): super().__init__(**kwargs) self.config = config self.embedding_size = config.embedding_size self.max_position_embeddings = config.max_position_embeddings self.initializer_range = config.initializer_range self.LayerNorm = keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") self.dropout = keras.layers.Dropout(rate=config.hidden_dropout_prob) def build(self, input_shape=None): with tf.name_scope("word_embeddings"): self.weight = self.add_weight( name="weight", shape=[self.config.vocab_size, self.embedding_size], initializer=get_initializer(self.initializer_range), ) with tf.name_scope("token_type_embeddings"): self.token_type_embeddings = self.add_weight( name="embeddings", shape=[self.config.type_vocab_size, self.embedding_size], initializer=get_initializer(self.initializer_range), ) with tf.name_scope("position_embeddings"): self.position_embeddings = self.add_weight( name="embeddings", shape=[self.max_position_embeddings, self.embedding_size], initializer=get_initializer(self.initializer_range), ) if self.built: return self.built = True if getattr(self, "LayerNorm", None) is not None: with tf.name_scope(self.LayerNorm.name): self.LayerNorm.build([None, None, self.config.embedding_size]) # Copied from transformers.models.bert.modeling_tf_bert.TFBertEmbeddings.call def call( self, input_ids: tf.Tensor = None, position_ids: tf.Tensor = None, token_type_ids: tf.Tensor = None, inputs_embeds: tf.Tensor = None, past_key_values_length=0, training: bool = False, ) -> tf.Tensor: """ Applies embedding based on inputs tensor. Returns: final_embeddings (`tf.Tensor`): output embedding tensor. 
""" if input_ids is None and inputs_embeds is None: raise ValueError("Need to provide either `input_ids` or `input_embeds`.") if input_ids is not None: check_embeddings_within_bounds(input_ids, self.config.vocab_size) inputs_embeds = tf.gather(params=self.weight, indices=input_ids) input_shape = shape_list(inputs_embeds)[:-1] if token_type_ids is None: token_type_ids = tf.fill(dims=input_shape, value=0) if position_ids is None: position_ids = tf.expand_dims( tf.range(start=past_key_values_length, limit=input_shape[1] + past_key_values_length), axis=0 ) position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids) token_type_embeds = tf.gather(params=self.token_type_embeddings, indices=token_type_ids) final_embeddings = inputs_embeds + position_embeds + token_type_embeds final_embeddings = self.LayerNorm(inputs=final_embeddings) final_embeddings = self.dropout(inputs=final_embeddings, training=training) return final_embeddings class TFElectraDiscriminatorPredictions(keras.layers.Layer): def __init__(self, config, **kwargs): super().__init__(**kwargs) self.dense = keras.layers.Dense(config.hidden_size, name="dense") self.dense_prediction = keras.layers.Dense(1, name="dense_prediction") self.config = config def call(self, discriminator_hidden_states, training=False): hidden_states = self.dense(discriminator_hidden_states) hidden_states = get_tf_activation(self.config.hidden_act)(hidden_states) logits = tf.squeeze(self.dense_prediction(hidden_states), -1) return logits def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "dense", None) is not None: with tf.name_scope(self.dense.name): self.dense.build([None, None, self.config.hidden_size]) if getattr(self, "dense_prediction", None) is not None: with tf.name_scope(self.dense_prediction.name): self.dense_prediction.build([None, None, self.config.hidden_size]) class TFElectraGeneratorPredictions(keras.layers.Layer): def __init__(self, config, **kwargs): super().__init__(**kwargs) self.LayerNorm = keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") self.dense = keras.layers.Dense(config.embedding_size, name="dense") self.config = config def call(self, generator_hidden_states, training=False): hidden_states = self.dense(generator_hidden_states) hidden_states = get_tf_activation("gelu")(hidden_states) hidden_states = self.LayerNorm(hidden_states) return hidden_states def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "LayerNorm", None) is not None: with tf.name_scope(self.LayerNorm.name): self.LayerNorm.build([None, None, self.config.embedding_size]) if getattr(self, "dense", None) is not None: with tf.name_scope(self.dense.name): self.dense.build([None, None, self.config.hidden_size]) class TFElectraPreTrainedModel(TFPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = ElectraConfig base_model_prefix = "electra" # When the model is loaded from a PT model _keys_to_ignore_on_load_unexpected = [r"generator_lm_head.weight"] _keys_to_ignore_on_load_missing = [r"dropout"] @keras_serializable class TFElectraMainLayer(keras.layers.Layer): config_class = ElectraConfig def __init__(self, config, **kwargs): super().__init__(**kwargs) self.config = config self.is_decoder = config.is_decoder self.embeddings = TFElectraEmbeddings(config, name="embeddings") if config.embedding_size != config.hidden_size: self.embeddings_project = keras.layers.Dense(config.hidden_size, name="embeddings_project") self.encoder = TFElectraEncoder(config, name="encoder") def get_input_embeddings(self): return self.embeddings def set_input_embeddings(self, value): self.embeddings.weight = value self.embeddings.vocab_size = shape_list(value)[0] def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ raise NotImplementedError def get_extended_attention_mask(self, attention_mask, input_shape, dtype, past_key_values_length=0): batch_size, seq_length = input_shape if attention_mask is None: attention_mask = tf.fill(dims=(batch_size, seq_length + past_key_values_length), value=1) # We create a 3D attention mask from a 2D tensor mask. # Sizes are [batch_size, 1, 1, to_seq_length] # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] # this attention mask is more simple than the triangular masking of causal attention # used in OpenAI GPT, we just need to prepare the broadcast dimension here. attention_mask_shape = shape_list(attention_mask) mask_seq_length = seq_length + past_key_values_length # Copied from `modeling_tf_t5.py` # Provided a padding mask of dimensions [batch_size, mask_seq_length] # - if the model is a decoder, apply a causal mask in addition to the padding mask # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, mask_seq_length, mask_seq_length] if self.is_decoder: seq_ids = tf.range(mask_seq_length) causal_mask = tf.less_equal( tf.tile(seq_ids[None, None, :], (batch_size, mask_seq_length, 1)), seq_ids[None, :, None], ) causal_mask = tf.cast(causal_mask, dtype=attention_mask.dtype) extended_attention_mask = causal_mask * attention_mask[:, None, :] attention_mask_shape = shape_list(extended_attention_mask) extended_attention_mask = tf.reshape( extended_attention_mask, (attention_mask_shape[0], 1, attention_mask_shape[1], attention_mask_shape[2]) ) if past_key_values_length > 0: extended_attention_mask = extended_attention_mask[:, :, -seq_length:, :] else: extended_attention_mask = tf.reshape( attention_mask, (attention_mask_shape[0], 1, 1, attention_mask_shape[1]) ) # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely. 
extended_attention_mask = tf.cast(extended_attention_mask, dtype=dtype) one_cst = tf.constant(1.0, dtype=dtype) ten_thousand_cst = tf.constant(-10000.0, dtype=dtype) extended_attention_mask = tf.multiply(tf.subtract(one_cst, extended_attention_mask), ten_thousand_cst) return extended_attention_mask def get_head_mask(self, head_mask): if head_mask is not None: raise NotImplementedError else: head_mask = [None] * self.config.num_hidden_layers return head_mask @unpack_inputs def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, encoder_hidden_states: np.ndarray | tf.Tensor | None = None, encoder_attention_mask: np.ndarray | tf.Tensor | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, ) -> Union[TFBaseModelOutputWithPastAndCrossAttentions, Tuple[tf.Tensor]]: if not self.config.is_decoder: use_cache = False if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = shape_list(input_ids) elif inputs_embeds is not None: input_shape = shape_list(inputs_embeds)[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") batch_size, seq_length = input_shape if past_key_values is None: past_key_values_length = 0 past_key_values = [None] * len(self.encoder.layer) else: past_key_values_length = shape_list(past_key_values[0][0])[-2] if attention_mask is None: attention_mask = tf.fill(dims=(batch_size, seq_length + past_key_values_length), value=1) if token_type_ids is None: token_type_ids = tf.fill(dims=input_shape, value=0) hidden_states = self.embeddings( input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, past_key_values_length=past_key_values_length, training=training, ) extended_attention_mask = self.get_extended_attention_mask( attention_mask, input_shape, hidden_states.dtype, past_key_values_length ) # Copied from `modeling_tf_t5.py` with -1e9 -> -10000 if self.is_decoder and encoder_attention_mask is not None: # If a 2D ou 3D attention mask is provided for the cross-attention # we need to make broadcastable to [batch_size, num_heads, mask_seq_length, mask_seq_length] # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] encoder_attention_mask = tf.cast(encoder_attention_mask, dtype=extended_attention_mask.dtype) num_dims_encoder_attention_mask = len(shape_list(encoder_attention_mask)) if num_dims_encoder_attention_mask == 3: encoder_extended_attention_mask = encoder_attention_mask[:, None, :, :] if num_dims_encoder_attention_mask == 2: encoder_extended_attention_mask = encoder_attention_mask[:, None, None, :] # T5 has a mask that can compare sequence ids, we can simulate this here with this transposition # Cf. 
https://github.com/tensorflow/mesh/blob/8d2465e9bc93129b913b5ccc6a59aa97abd96ec6/mesh_tensorflow/transformer/transformer_layers.py#L270 # encoder_extended_attention_mask = tf.math.equal(encoder_extended_attention_mask, # tf.transpose(encoder_extended_attention_mask, perm=(-1, -2))) encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -10000.0 else: encoder_extended_attention_mask = None head_mask = self.get_head_mask(head_mask) if hasattr(self, "embeddings_project"): hidden_states = self.embeddings_project(hidden_states, training=training) hidden_states = self.encoder( hidden_states=hidden_states, attention_mask=extended_attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) return hidden_states def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "embeddings", None) is not None: with tf.name_scope(self.embeddings.name): self.embeddings.build(None) if getattr(self, "encoder", None) is not None: with tf.name_scope(self.encoder.name): self.encoder.build(None) if getattr(self, "embeddings_project", None) is not None: with tf.name_scope(self.embeddings_project.name): self.embeddings_project.build([None, None, self.config.embedding_size]) @dataclass class TFElectraForPreTrainingOutput(ModelOutput): """ Output type of [`TFElectraForPreTraining`]. Args: loss (*optional*, returned when `labels` is provided, `tf.Tensor` of shape `(1,)`): Total loss of the ELECTRA objective. logits (`tf.Tensor` of shape `(batch_size, sequence_length)`): Prediction scores of the head (scores for each token before SoftMax). hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. """ logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ELECTRA_START_DOCSTRING = r""" This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. <Tip> TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. 
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function! </Tip> Parameters: config ([`ElectraConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. """ ELECTRA_INPUTS_DOCSTRING = r""" Args: input_ids (`Numpy array` or `tf.Tensor` of shape `({0})`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and [`PreTrainedTokenizer.encode`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) position_ids (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) head_mask (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. inputs_embeds (`tf.Tensor` of shape `({0}, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (`bool`, *optional*, defaults to `False`): Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). """ @add_start_docstrings( "The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to " "the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the " "hidden size and embedding size are different. " "" "Both the generator and discriminator checkpoints may be loaded into this model.", ELECTRA_START_DOCSTRING, ) class TFElectraModel(TFElectraPreTrainedModel): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.electra = TFElectraMainLayer(config, name="electra") @unpack_inputs @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFBaseModelOutputWithPastAndCrossAttentions, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, encoder_hidden_states: np.ndarray | tf.Tensor | None = None, encoder_attention_mask: np.ndarray | tf.Tensor | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, ) -> Union[TFBaseModelOutputWithPastAndCrossAttentions, Tuple[tf.Tensor]]: r""" encoder_hidden_states (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. past_key_values (`Tuple[Tuple[tf.Tensor]]` of length `config.n_layers`) contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. use_cache (`bool`, *optional*, defaults to `True`): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). 
Set to `False` during training, `True` during generation """ outputs = self.electra( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, use_cache=use_cache, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) return outputs def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "electra", None) is not None: with tf.name_scope(self.electra.name): self.electra.build(None) @add_start_docstrings( """ Electra model with a binary classification head on top as used during pretraining for identifying generated tokens. Even though both the discriminator and generator may be loaded into this model, the discriminator is the only model of the two to have the correct classification head to be used for this model. """, ELECTRA_START_DOCSTRING, ) class TFElectraForPreTraining(TFElectraPreTrainedModel): def __init__(self, config, **kwargs): super().__init__(config, **kwargs) self.electra = TFElectraMainLayer(config, name="electra") self.discriminator_predictions = TFElectraDiscriminatorPredictions(config, name="discriminator_predictions") @unpack_inputs @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @replace_return_docstrings(output_type=TFElectraForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, ) -> Union[TFElectraForPreTrainingOutput, Tuple[tf.Tensor]]: r""" Returns: Examples: ```python >>> import tensorflow as tf >>> from transformers import AutoTokenizer, TFElectraForPreTraining >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = TFElectraForPreTraining.from_pretrained("google/electra-small-discriminator") >>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1 >>> outputs = model(input_ids) >>> scores = outputs[0] ```""" discriminator_hidden_states = self.electra( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) discriminator_sequence_output = discriminator_hidden_states[0] logits = self.discriminator_predictions(discriminator_sequence_output) if not return_dict: return (logits,) + discriminator_hidden_states[1:] return TFElectraForPreTrainingOutput( logits=logits, hidden_states=discriminator_hidden_states.hidden_states, attentions=discriminator_hidden_states.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "electra", None) is not None: with tf.name_scope(self.electra.name): self.electra.build(None) if getattr(self, "discriminator_predictions", None) is not None: 
with tf.name_scope(self.discriminator_predictions.name): self.discriminator_predictions.build(None) class TFElectraMaskedLMHead(keras.layers.Layer): def __init__(self, config, input_embeddings, **kwargs): super().__init__(**kwargs) self.config = config self.embedding_size = config.embedding_size self.input_embeddings = input_embeddings def build(self, input_shape): self.bias = self.add_weight(shape=(self.config.vocab_size,), initializer="zeros", trainable=True, name="bias") super().build(input_shape) def get_output_embeddings(self): return self.input_embeddings def set_output_embeddings(self, value): self.input_embeddings.weight = value self.input_embeddings.vocab_size = shape_list(value)[0] def get_bias(self): return {"bias": self.bias} def set_bias(self, value): self.bias = value["bias"] self.config.vocab_size = shape_list(value["bias"])[0] def call(self, hidden_states): seq_length = shape_list(tensor=hidden_states)[1] hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, self.embedding_size]) hidden_states = tf.matmul(a=hidden_states, b=self.input_embeddings.weight, transpose_b=True) hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, seq_length, self.config.vocab_size]) hidden_states = tf.nn.bias_add(value=hidden_states, bias=self.bias) return hidden_states @add_start_docstrings( """ Electra model with a language modeling head on top. Even though both the discriminator and generator may be loaded into this model, the generator is the only model of the two to have been trained for the masked language modeling task. """, ELECTRA_START_DOCSTRING, ) class TFElectraForMaskedLM(TFElectraPreTrainedModel, TFMaskedLanguageModelingLoss): def __init__(self, config, **kwargs): super().__init__(config, **kwargs) self.config = config self.electra = TFElectraMainLayer(config, name="electra") self.generator_predictions = TFElectraGeneratorPredictions(config, name="generator_predictions") if isinstance(config.hidden_act, str): self.activation = get_tf_activation(config.hidden_act) else: self.activation = config.hidden_act self.generator_lm_head = TFElectraMaskedLMHead(config, self.electra.embeddings, name="generator_lm_head") def get_lm_head(self): return self.generator_lm_head def get_prefix_bias_name(self): warnings.warn("The method get_prefix_bias_name is deprecated. Please use `get_bias` instead.", FutureWarning) return self.name + "/" + self.generator_lm_head.name @unpack_inputs @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint="google/electra-small-generator", output_type=TFMaskedLMOutput, config_class=_CONFIG_FOR_DOC, mask="[MASK]", expected_output="'paris'", expected_loss=1.22, ) def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False, ) -> Union[TFMaskedLMOutput, Tuple[tf.Tensor]]: r""" labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. 
Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` """ generator_hidden_states = self.electra( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) generator_sequence_output = generator_hidden_states[0] prediction_scores = self.generator_predictions(generator_sequence_output, training=training) prediction_scores = self.generator_lm_head(prediction_scores, training=training) loss = None if labels is None else self.hf_compute_loss(labels, prediction_scores) if not return_dict: output = (prediction_scores,) + generator_hidden_states[1:] return ((loss,) + output) if loss is not None else output return TFMaskedLMOutput( loss=loss, logits=prediction_scores, hidden_states=generator_hidden_states.hidden_states, attentions=generator_hidden_states.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "electra", None) is not None: with tf.name_scope(self.electra.name): self.electra.build(None) if getattr(self, "generator_predictions", None) is not None: with tf.name_scope(self.generator_predictions.name): self.generator_predictions.build(None) if getattr(self, "generator_lm_head", None) is not None: with tf.name_scope(self.generator_lm_head.name): self.generator_lm_head.build(None) class TFElectraClassificationHead(keras.layers.Layer): """Head for sentence-level classification tasks.""" def __init__(self, config, **kwargs): super().__init__(**kwargs) self.dense = keras.layers.Dense( config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) classifier_dropout = ( config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob ) self.dropout = keras.layers.Dropout(classifier_dropout) self.out_proj = keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="out_proj" ) self.config = config def call(self, inputs, **kwargs): x = inputs[:, 0, :] # take <s> token (equiv. to [CLS]) x = self.dropout(x) x = self.dense(x) x = get_tf_activation("gelu")(x) # although BERT uses tanh here, it seems Electra authors used gelu here x = self.dropout(x) x = self.out_proj(x) return x def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "dense", None) is not None: with tf.name_scope(self.dense.name): self.dense.build([None, None, self.config.hidden_size]) if getattr(self, "out_proj", None) is not None: with tf.name_scope(self.out_proj.name): self.out_proj.build([None, None, self.config.hidden_size]) @add_start_docstrings( """ ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
""", ELECTRA_START_DOCSTRING, ) class TFElectraForSequenceClassification(TFElectraPreTrainedModel, TFSequenceClassificationLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.electra = TFElectraMainLayer(config, name="electra") self.classifier = TFElectraClassificationHead(config, name="classifier") @unpack_inputs @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint="bhadresh-savani/electra-base-emotion", output_type=TFSequenceClassifierOutput, config_class=_CONFIG_FOR_DOC, expected_output="'joy'", expected_loss=0.06, ) def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False, ) -> Union[TFSequenceClassifierOutput, Tuple[tf.Tensor]]: r""" labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ outputs = self.electra( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) logits = self.classifier(outputs[0]) loss = None if labels is None else self.hf_compute_loss(labels, logits) if not return_dict: output = (logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return TFSequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "electra", None) is not None: with tf.name_scope(self.electra.name): self.electra.build(None) if getattr(self, "classifier", None) is not None: with tf.name_scope(self.classifier.name): self.classifier.build(None) @add_start_docstrings( """ ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. 
""", ELECTRA_START_DOCSTRING, ) class TFElectraForMultipleChoice(TFElectraPreTrainedModel, TFMultipleChoiceLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.electra = TFElectraMainLayer(config, name="electra") self.sequence_summary = TFSequenceSummary( config, initializer_range=config.initializer_range, name="sequence_summary" ) self.classifier = keras.layers.Dense( 1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" ) self.config = config @unpack_inputs @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFMultipleChoiceModelOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False, ) -> Union[TFMultipleChoiceModelOutput, Tuple[tf.Tensor]]: r""" labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]` where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above) """ if input_ids is not None: num_choices = shape_list(input_ids)[1] seq_length = shape_list(input_ids)[2] else: num_choices = shape_list(inputs_embeds)[1] seq_length = shape_list(inputs_embeds)[2] flat_input_ids = tf.reshape(input_ids, (-1, seq_length)) if input_ids is not None else None flat_attention_mask = tf.reshape(attention_mask, (-1, seq_length)) if attention_mask is not None else None flat_token_type_ids = tf.reshape(token_type_ids, (-1, seq_length)) if token_type_ids is not None else None flat_position_ids = tf.reshape(position_ids, (-1, seq_length)) if position_ids is not None else None flat_inputs_embeds = ( tf.reshape(inputs_embeds, (-1, seq_length, shape_list(inputs_embeds)[3])) if inputs_embeds is not None else None ) outputs = self.electra( input_ids=flat_input_ids, attention_mask=flat_attention_mask, token_type_ids=flat_token_type_ids, position_ids=flat_position_ids, head_mask=head_mask, inputs_embeds=flat_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) logits = self.sequence_summary(outputs[0]) logits = self.classifier(logits) reshaped_logits = tf.reshape(logits, (-1, num_choices)) loss = None if labels is None else self.hf_compute_loss(labels, reshaped_logits) if not return_dict: output = (reshaped_logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return TFMultipleChoiceModelOutput( loss=loss, logits=reshaped_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "electra", None) is not None: with tf.name_scope(self.electra.name): self.electra.build(None) if getattr(self, "sequence_summary", None) is not None: with tf.name_scope(self.sequence_summary.name): self.sequence_summary.build(None) if getattr(self, 
"classifier", None) is not None: with tf.name_scope(self.classifier.name): self.classifier.build([None, None, self.config.hidden_size]) @add_start_docstrings( """ Electra model with a token classification head on top. Both the discriminator and generator may be loaded into this model. """, ELECTRA_START_DOCSTRING, ) class TFElectraForTokenClassification(TFElectraPreTrainedModel, TFTokenClassificationLoss): def __init__(self, config, **kwargs): super().__init__(config, **kwargs) self.electra = TFElectraMainLayer(config, name="electra") classifier_dropout = ( config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob ) self.dropout = keras.layers.Dropout(classifier_dropout) self.classifier = keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" ) self.config = config @unpack_inputs @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint="bhadresh-savani/electra-base-discriminator-finetuned-conll03-english", output_type=TFTokenClassifierOutput, config_class=_CONFIG_FOR_DOC, expected_output="['B-LOC', 'B-ORG', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'B-LOC', 'I-LOC']", expected_loss=0.11, ) def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False, ) -> Union[TFTokenClassifierOutput, Tuple[tf.Tensor]]: r""" labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. """ discriminator_hidden_states = self.electra( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) discriminator_sequence_output = discriminator_hidden_states[0] discriminator_sequence_output = self.dropout(discriminator_sequence_output) logits = self.classifier(discriminator_sequence_output) loss = None if labels is None else self.hf_compute_loss(labels, logits) if not return_dict: output = (logits,) + discriminator_hidden_states[1:] return ((loss,) + output) if loss is not None else output return TFTokenClassifierOutput( loss=loss, logits=logits, hidden_states=discriminator_hidden_states.hidden_states, attentions=discriminator_hidden_states.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "electra", None) is not None: with tf.name_scope(self.electra.name): self.electra.build(None) if getattr(self, "classifier", None) is not None: with tf.name_scope(self.classifier.name): self.classifier.build([None, None, self.config.hidden_size]) @add_start_docstrings( """ Electra Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). 
""", ELECTRA_START_DOCSTRING, ) class TFElectraForQuestionAnswering(TFElectraPreTrainedModel, TFQuestionAnsweringLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.electra = TFElectraMainLayer(config, name="electra") self.qa_outputs = keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" ) self.config = config @unpack_inputs @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint="bhadresh-savani/electra-base-squad2", output_type=TFQuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC, qa_target_start_index=11, qa_target_end_index=12, expected_output="'a nice puppet'", expected_loss=2.64, ) def call( self, input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, start_positions: np.ndarray | tf.Tensor | None = None, end_positions: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False, ) -> Union[TFQuestionAnsweringModelOutput, Tuple[tf.Tensor]]: r""" start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. end_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. 
""" discriminator_hidden_states = self.electra( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) discriminator_sequence_output = discriminator_hidden_states[0] logits = self.qa_outputs(discriminator_sequence_output) start_logits, end_logits = tf.split(logits, 2, axis=-1) start_logits = tf.squeeze(start_logits, axis=-1) end_logits = tf.squeeze(end_logits, axis=-1) loss = None if start_positions is not None and end_positions is not None: labels = {"start_position": start_positions} labels["end_position"] = end_positions loss = self.hf_compute_loss(labels, (start_logits, end_logits)) if not return_dict: output = ( start_logits, end_logits, ) + discriminator_hidden_states[1:] return ((loss,) + output) if loss is not None else output return TFQuestionAnsweringModelOutput( loss=loss, start_logits=start_logits, end_logits=end_logits, hidden_states=discriminator_hidden_states.hidden_states, attentions=discriminator_hidden_states.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "electra", None) is not None: with tf.name_scope(self.electra.name): self.electra.build(None) if getattr(self, "qa_outputs", None) is not None: with tf.name_scope(self.qa_outputs.name): self.qa_outputs.build([None, None, self.config.hidden_size])
transformers/src/transformers/models/electra/modeling_tf_electra.py/0
{ "file_path": "transformers/src/transformers/models/electra/modeling_tf_electra.py", "repo_id": "transformers", "token_count": 33408 }
346
# Copyright 2022 Facebook and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import TYPE_CHECKING from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_tf_available, is_torch_available _import_structure = { "configuration_esm": ["EsmConfig"], "tokenization_esm": ["EsmTokenizer"], } try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_esm"] = [ "EsmForMaskedLM", "EsmForSequenceClassification", "EsmForTokenClassification", "EsmModel", "EsmPreTrainedModel", ] _import_structure["modeling_esmfold"] = ["EsmForProteinFolding", "EsmFoldPreTrainedModel"] try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_tf_esm"] = [ "TFEsmForMaskedLM", "TFEsmForSequenceClassification", "TFEsmForTokenClassification", "TFEsmModel", "TFEsmPreTrainedModel", ] if TYPE_CHECKING: from .configuration_esm import EsmConfig from .tokenization_esm import EsmTokenizer try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_esm import ( EsmForMaskedLM, EsmForSequenceClassification, EsmForTokenClassification, EsmModel, EsmPreTrainedModel, ) from .modeling_esmfold import EsmFoldPreTrainedModel, EsmForProteinFolding try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_tf_esm import ( TFEsmForMaskedLM, TFEsmForSequenceClassification, TFEsmForTokenClassification, TFEsmModel, TFEsmPreTrainedModel, ) else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
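

# A short sketch of what the `_LazyModule` registration above provides (assumption: PyTorch is installed,
# so the torch-only entries of `_import_structure` are importable). `import transformers` stays cheap
# because `configuration_esm` and `modeling_esm` are only imported the first time one of their symbols
# is actually accessed. The tiny config values below are arbitrary illustration values.
import transformers

config = transformers.EsmConfig(  # attribute access triggers the lazy import of configuration_esm
    vocab_size=33,
    pad_token_id=1,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=128,
)
model = transformers.EsmModel(config)  # this access triggers the lazy import of modeling_esm
print(type(model).__name__)  # "EsmModel", randomly initialised; use from_pretrained(...) for real weights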
transformers/src/transformers/models/esm/__init__.py/0
{ "file_path": "transformers/src/transformers/models/esm/__init__.py", "repo_id": "transformers", "token_count": 1077 }
347
# 🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚 # This file was automatically generated from <path_to_diff_file.py>. # Do NOT edit this file manually as any edits will be overwritten by the generation of # the file from the diff. If any change should be done, please apply the change to the # diff.py file directly. # 🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚🚚 # coding=utf-8 # Copyright 2024 Google Inc. HuggingFace Inc. team. All rights reserved. # # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from transformers import PretrainedConfig class Gemma2Config(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`Gemma2Model`]. It is used to instantiate an Gemma2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Gemma2-7B. e.g. [google/gemma2-7b](https://huggingface.co/google/gemma2-7b) Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 256000): Vocabulary size of the Gemma2 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Gemma2Model`] hidden_size (`int`, *optional*, defaults to 3072): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 24576): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 28): Number of hidden layers in the Transformer decoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer decoder. num_key_value_heads (`int`, *optional*, defaults to 16): This is the number of key_value heads that should be used to implement Grouped Query Attention. If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details checkout [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `num_attention_heads`. head_dim (`int`, *optional*, defaults to 256): The attention head dimension. hidden_activation (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`): The non-linear activation function (function or string) in the decoder. max_position_embeddings (`int`, *optional*, defaults to 8192): The maximum sequence length that this model might ever be used with. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. rms_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the rms normalization layers. 
use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. pad_token_id (`int`, *optional*, defaults to 0): Padding token id. eos_token_id (`int`, *optional*, defaults to 1): End of stream token id. bos_token_id (`int`, *optional*, defaults to 2): Beginning of stream token id. tie_word_embeddings (`bool`, *optional*, defaults to `True`): Whether to tie weight embeddings rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. attention_bias (`bool`, defaults to `False`, *optional*, defaults to `False`): Whether to use a bias in the query, key, value and output projection layers during self-attention. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. final_logit_softcapping (`float`, *optional*, defaults to 30.0): scaling factor when applying tanh softcapping on the logits. attn_logit_softcapping (`float`, *optional*, defaults to 50.0): scaling factor when applying tanh softcapping on the attention scores. query_pre_attn_scalar (`float`, *optional*, defaults to 224): scaling factor used on the attention scores sliding_window (`int`, *optional*, defaults to 4096): in Gemma2, every other layer uses sliding window attention. This is the size of the sliding window. ```python >>> from transformers import Gemma2Model, Gemma2Config >>> # Initializing a Gemma2 gemma2-9b style configuration >>> configuration = Gemma2Config() >>> # Initializing a model from the gemma2-9b style configuration >>> model = Gemma2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "gemma2" keys_to_ignore_at_inference = ["past_key_values"] def __init__( self, vocab_size=256000, hidden_size=3072, intermediate_size=24576, num_hidden_layers=28, num_attention_heads=16, num_key_value_heads=16, head_dim=256, hidden_activation="gelu_pytorch_tanh", max_position_embeddings=8192, initializer_range=0.02, rms_norm_eps=1e-6, use_cache=True, pad_token_id=0, eos_token_id=1, bos_token_id=2, tie_word_embeddings=True, rope_theta=10000.0, attention_bias=False, attention_dropout=0.0, final_logit_softcapping=30.0, attn_logit_softcapping=50.0, query_pre_attn_scalar=224, sliding_window=4096, **kwargs, ): self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.hidden_size = hidden_size self.intermediate_size = intermediate_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.head_dim = head_dim self.num_key_value_heads = num_key_value_heads self.hidden_activation = hidden_activation self.initializer_range = initializer_range self.rms_norm_eps = rms_norm_eps self.use_cache = use_cache self.rope_theta = rope_theta self.attention_bias = attention_bias self.attention_dropout = attention_dropout self.attn_logit_softcapping = attn_logit_softcapping super().__init__( pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs, ) self.final_logit_softcapping = final_logit_softcapping self.query_pre_attn_scalar = query_pre_attn_scalar self.sliding_window = sliding_window self.cache_implementation = "hybrid"
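

# A hedged sketch mirroring the docstring example above: the keyword names are exactly the __init__
# arguments defined in this class, while the down-scaled sizes are arbitrary illustration values,
# not an official Gemma2 checkpoint configuration.
from transformers import Gemma2Config

tiny_config = Gemma2Config(
    vocab_size=32000,
    hidden_size=256,
    intermediate_size=1024,
    num_hidden_layers=4,
    num_attention_heads=8,
    num_key_value_heads=4,  # grouped-query attention: 8 query heads share 4 key/value heads
    head_dim=32,
    sliding_window=512,
)
print(tiny_config.cache_implementation)  # "hybrid", set unconditionally at the end of __init__ above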
transformers/src/transformers/models/gemma2/configuration_gemma2.py/0
{ "file_path": "transformers/src/transformers/models/gemma2/configuration_gemma2.py", "repo_id": "transformers", "token_count": 3344 }
348
# coding=utf-8
# Copyright 2021 The Eleuther AI and HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PyTorch GPT Neo model."""

import os
from typing import Optional, Tuple, Union

import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss

from ...activations import ACT2FN
from ...cache_utils import Cache, DynamicCache, StaticCache
from ...modeling_attn_mask_utils import AttentionMaskConverter, _prepare_4d_causal_attention_mask
from ...modeling_outputs import (
    BaseModelOutputWithPast,
    BaseModelOutputWithPastAndCrossAttentions,
    CausalLMOutputWithCrossAttentions,
    CausalLMOutputWithPast,
    QuestionAnsweringModelOutput,
    SequenceClassifierOutputWithPast,
    TokenClassifierOutput,
)
from ...modeling_utils import PreTrainedModel
from ...pytorch_utils import is_torch_greater_or_equal_than_1_13
from ...utils import (
    add_code_sample_docstrings,
    add_start_docstrings,
    add_start_docstrings_to_model_forward,
    is_flash_attn_2_available,
    is_flash_attn_greater_or_equal_2_10,
    is_torch_fx_available,
    logging,
)
from .configuration_gpt_neo import GPTNeoConfig


if is_flash_attn_2_available():
    from ...modeling_flash_attention_utils import _flash_attention_forward


# This makes `_prepare_4d_causal_attention_mask` a leaf function in the FX graph.
# It means that the function will not be traced through and simply appear as a node in the graph.
if is_torch_fx_available():
    if not is_torch_greater_or_equal_than_1_13:
        import torch.fx

    _prepare_4d_causal_attention_mask = torch.fx.wrap(_prepare_4d_causal_attention_mask)


logger = logging.get_logger(__name__)

_CONFIG_FOR_DOC = "GPTNeoConfig"
_CHECKPOINT_FOR_DOC = "EleutherAI/gpt-neo-1.3B"


# Copied from transformers.models.llama.modeling_llama._prepare_4d_causal_attention_mask_with_cache_position
def _prepare_4d_causal_attention_mask_with_cache_position(
    attention_mask: torch.Tensor,
    sequence_length: int,
    target_length: int,
    dtype: torch.dtype,
    device: torch.device,
    min_dtype: float,
    cache_position: torch.Tensor,
    batch_size: int,
):
    """
    Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
    `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.

    Args:
        attention_mask (`torch.Tensor`):
            A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape
            `(batch_size, 1, query_length, key_value_length)`.
        sequence_length (`int`):
            The sequence length being processed.
        target_length (`int`):
            The target length: when generating with static cache, the mask should be as long as the static cache, to
            account for the 0 padding, the part of the cache that is not filled yet.
        dtype (`torch.dtype`):
            The dtype to use for the 4D attention mask.
        device (`torch.device`):
            The device to place the 4D attention mask on.
        min_dtype (`float`):
            The minimum value representable with the dtype `dtype`.
cache_position (`torch.Tensor`): Indices depicting the position of the input sequence tokens in the sequence. batch_size (`torch.Tensor`): Batch size. """ if attention_mask is not None and attention_mask.dim() == 4: # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing. causal_mask = attention_mask else: causal_mask = torch.full((sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device) if sequence_length != 1: causal_mask = torch.triu(causal_mask, diagonal=1) causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1) causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1) if attention_mask is not None: causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit mask_length = attention_mask.shape[-1] padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :] padding_mask = padding_mask == 0 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill( padding_mask, min_dtype ) return causal_mask def load_tf_weights_in_gpt_neo(model, config, gpt_neo_checkpoint_path): """Load tf checkpoints in a pytorch model""" try: import re import tensorflow as tf except ImportError: logger.error( "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " "https://www.tensorflow.org/install/ for installation instructions." ) raise tf_path = os.path.abspath(gpt_neo_checkpoint_path) logger.info(f"Converting TensorFlow checkpoint from {tf_path}") # Load weights from TF model init_vars = tf.train.list_variables(tf_path) names = [] arrays = [] for name, shape in init_vars: if "global_step" not in name and "adam" not in name: array = tf.train.load_variable(tf_path, name) array = tf.dtypes.cast(array.squeeze(), tf.float32).numpy() name = name.replace("attn/q", "attn/attention/q_proj/w") name = name.replace("attn/k", "attn/attention/k_proj/w") name = name.replace("attn/v", "attn/attention/v_proj/w") name = name.replace("attn/o", "attn/attention/out_proj/w") name = name.replace("norm_1", "ln_1") name = name.replace("norm_2", "ln_2") name = name.replace("attn/compute_output_bias/o_b", "attn/attention/out_proj/b") name = name.replace("conv1d_main/c_fc/kernel", "c_fc/w") name = name.replace("conv1d_main/c_fc/bias", "c_fc/b") name = name.replace("conv1d_main/c_proj/kernel", "c_proj/w") name = name.replace("conv1d_main/c_proj/bias", "c_proj/b") names.append(name) arrays.append(array) for name, array in zip(names, arrays): name = name[5:] # skip "gpt2/" name = name.split("/") pointer = model.transformer for m_name in name: if re.fullmatch(r"[A-Za-z]+\d+", m_name): scope_names = re.split(r"(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] == "w" or scope_names[0] == "g": pointer = getattr(pointer, "weight") elif scope_names[0] == "b": pointer = getattr(pointer, "bias") elif scope_names[0] == "wpe" or scope_names[0] == "wte": pointer = getattr(pointer, scope_names[0]) pointer = getattr(pointer, "weight") else: pointer = getattr(pointer, scope_names[0]) if len(scope_names) >= 2: num = int(scope_names[1]) pointer = pointer[num] if name[-1] == "w" and name[-2] in ["out_proj", "k_proj", "q_proj", "v_proj", "c_proj", "c_fc"]: array = array.transpose() if name == ["wte"]: # if vocab is padded, then trim off the padding embeddings array = array[: config.vocab_size] if pointer.shape != array.shape: raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched 
{name}") print(f"Initialize PyTorch weight {name}") pointer.data = torch.from_numpy(array) # init the final linear layer using word embeddings embs = model.transformer.wte.weight lin = nn.Linear(embs.size()[1], embs.size()[0], bias=False) lin.weight = embs model.set_output_embeddings(lin) return model class GPTNeoSelfAttention(nn.Module): def __init__(self, config, attention_type, layer_id=None): super().__init__() self.config = config max_positions = config.max_position_embeddings bias = torch.tril(torch.ones((max_positions, max_positions), dtype=bool)).view( 1, 1, max_positions, max_positions ) # local causal self attention is a sliding window where each token can only attend to the previous # window_size tokens. This is implemented by updating the causal mask such that for each token # all other tokens are masked except the previous window_size tokens. if attention_type == "local": bias = torch.bitwise_xor(bias, torch.tril(bias, -config.window_size)) self.register_buffer("bias", bias, persistent=False) self.register_buffer("masked_bias", torch.tensor(-1e9), persistent=False) self.attn_dropout = nn.Dropout(float(config.attention_dropout)) self.resid_dropout = nn.Dropout(float(config.resid_dropout)) self.is_causal = True self.layer_id = layer_id self.embed_dim = config.hidden_size self.num_heads = config.num_heads self.head_dim = self.embed_dim // self.num_heads if self.head_dim * self.num_heads != self.embed_dim: raise ValueError( f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:" f" {self.num_heads})." ) self.k_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) self.v_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) self.q_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) self.out_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=True) def _split_heads(self, tensor, num_heads, attn_head_size): """ Splits hidden_size dim into attn_head_size and num_heads """ new_shape = tensor.size()[:-1] + (num_heads, attn_head_size) tensor = tensor.view(new_shape) return tensor.permute(0, 2, 1, 3) # (batch, head, seq_length, head_features) def _merge_heads(self, tensor, num_heads, attn_head_size): """ Merges attn_head_size dim and num_attn_heads dim into hidden_size """ tensor = tensor.permute(0, 2, 1, 3).contiguous() new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,) return tensor.view(new_shape) def _attn(self, query, key, value, attention_mask=None, head_mask=None): # Keep the attention weights computation in fp32 to avoid overflow issues query = query.to(torch.float32) key = key.to(torch.float32) attn_weights = torch.matmul(query, key.transpose(-1, -2)) # Apply sliding window masking for local attention layers query_length, key_length = query.size(-2), key.size(-2) causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length] mask_value = torch.finfo(attn_weights.dtype).min # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`. 
# Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device` mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(attn_weights.device) attn_weights = torch.where(causal_mask, attn_weights, mask_value) if attention_mask is not None: # no matter the length, we just slice it causal_mask = attention_mask[:, :, :, : key.shape[-2]] attn_weights = attn_weights + causal_mask attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = attn_weights.to(value.dtype) attn_weights = self.attn_dropout(attn_weights) # Mask heads if we want to if head_mask is not None: attn_weights = attn_weights * head_mask attn_output = torch.matmul(attn_weights, value) return attn_output, attn_weights def forward( self, hidden_states, attention_mask=None, layer_past=None, head_mask=None, use_cache=False, output_attentions=False, cache_position=None, ): query = self.q_proj(hidden_states) key = self.k_proj(hidden_states) value = self.v_proj(hidden_states) query = self._split_heads(query, self.num_heads, self.head_dim) key = self._split_heads(key, self.num_heads, self.head_dim) value = self._split_heads(value, self.num_heads, self.head_dim) if layer_past is not None: cache_kwargs = {"cache_position": cache_position} key, value = layer_past.update(key, value, self.layer_id, cache_kwargs) attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim) attn_output = self.out_proj(attn_output) attn_output = self.resid_dropout(attn_output) outputs = (attn_output, layer_past) if output_attentions: outputs += (attn_weights,) return outputs # a, past_kv, (attentions) class GPTNeoFlashAttention2(GPTNeoSelfAttention): """ GPTNeo flash attention module. This module inherits from `GPTNeoSelfAttention` as the weights of the module stays untouched. The only required change would be on the forward pass where it needs to correctly call the public API of flash attention and deal with padding tokens in case the input contains any of them. """ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2.__init__ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1. # flash_attn<2.1 generates top-left aligned causal mask, while what is needed here is bottom-right alignement, that was made default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0. # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left). 
self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10() def forward( self, hidden_states, attention_mask=None, layer_past=None, head_mask=None, use_cache=False, output_attentions=False, cache_position=None, ): bsz, _, _ = hidden_states.size() query = self.q_proj(hidden_states) key = self.k_proj(hidden_states) value = self.v_proj(hidden_states) query = self._split_heads(query, self.num_heads, self.head_dim) key = self._split_heads(key, self.num_heads, self.head_dim) value = self._split_heads(value, self.num_heads, self.head_dim) if layer_past is not None: cache_kwargs = {"cache_position": cache_position} key, value = layer_past.update(key, value, self.layer_id, cache_kwargs) query_length = query.shape[2] tgt_len = key.shape[2] # Flash attention requires the input to have the shape # batch_size x seq_length x head_dim x hidden_dim query = query.transpose(1, 2).view(bsz, query_length, self.num_heads, self.head_dim) key = key.transpose(1, 2).view(bsz, tgt_len, self.num_heads, self.head_dim) value = value.transpose(1, 2).view(bsz, tgt_len, self.num_heads, self.head_dim) attn_dropout = self.config.attention_dropout if self.training else 0.0 if attention_mask is not None: # no matter the length, we just slice it attention_mask = attention_mask[:, :, :, : key.shape[-2]] # In PEFT, usually we cast the layer norms in float32 for training stability reasons # therefore the input hidden states gets silently casted in float32. Hence, we need # cast them back in the correct dtype just to be sure everything works as expected. # This might slowdown training & inference so it is recommended to not cast the LayerNorms # in fp32. (LlamaRMSNorm handles it correctly) if query.dtype == torch.float32: if torch.is_autocast_enabled(): target_dtype = torch.get_autocast_gpu_dtype() # Handle the case where the model is quantized elif hasattr(self.config, "_pre_quantization_dtype"): target_dtype = self.config._pre_quantization_dtype else: target_dtype = self.q_proj.weight.dtype logger.warning_once( f"The input hidden states seems to be silently casted in float32, this might be related to" f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in" f" {target_dtype}." ) query = query.to(target_dtype) key = key.to(target_dtype) value = value.to(target_dtype) attn_output = _flash_attention_forward( query, key, value, attention_mask, query_length, dropout=attn_dropout, softmax_scale=1.0, is_causal=self.is_causal, use_top_left_mask=self._flash_attn_uses_top_left_mask, ) attn_weights_reshaped = attn_output.reshape(bsz, query_length, self.num_heads * self.head_dim) attn_output = self.out_proj(attn_weights_reshaped) attn_output = self.resid_dropout(attn_output) outputs = (attn_output, layer_past) if output_attentions: outputs += (attn_weights_reshaped,) return outputs GPT_NEO_ATTENTION_CLASSES = { "eager": GPTNeoSelfAttention, "flash_attention_2": GPTNeoFlashAttention2, } class GPTNeoAttention(nn.Module): def __init__(self, config, layer_id=0): super().__init__() self.layer_id = layer_id self.attention_layers = config.attention_layers self.attention_type = self.attention_layers[layer_id] if self.attention_type in ["global", "local"]: self.attention = GPT_NEO_ATTENTION_CLASSES[config._attn_implementation]( config, self.attention_type, layer_id ) else: raise NotImplementedError( "Only attn layer types 'global' and 'local' exist, but got `config.attention_layers`: " f"{config.attention_layers}. Select attn layer types from ['global', 'local'] only." 
) def forward( self, hidden_states, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, output_attentions=False, cache_position=None, ): return self.attention( hidden_states, attention_mask=attention_mask, layer_past=layer_past, head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, ) class GPTNeoMLP(nn.Module): def __init__(self, intermediate_size, config): # in MLP: intermediate_size= 4 * hidden_size super().__init__() embed_dim = config.hidden_size self.c_fc = nn.Linear(embed_dim, intermediate_size) self.c_proj = nn.Linear(intermediate_size, embed_dim) self.act = ACT2FN[config.activation_function] self.dropout = nn.Dropout(float(config.resid_dropout)) def forward(self, hidden_states): hidden_states = self.c_fc(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.c_proj(hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states class GPTNeoBlock(nn.Module): def __init__(self, config, layer_id=None): super().__init__() hidden_size = config.hidden_size inner_dim = config.intermediate_size if config.intermediate_size is not None else 4 * hidden_size self.ln_1 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) self.attn = GPTNeoAttention(config, layer_id) self.ln_2 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) self.mlp = GPTNeoMLP(inner_dim, config) def forward( self, hidden_states, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, output_attentions=False, cache_position=None, ): residual = hidden_states hidden_states = self.ln_1(hidden_states) attn_outputs = self.attn( hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, ) attn_output = attn_outputs[0] # output_attn: a, present, (attentions) outputs = attn_outputs[1:] # residual connection hidden_states = attn_output + residual residual = hidden_states hidden_states = self.ln_2(hidden_states) feed_forward_hidden_states = self.mlp(hidden_states) # residual connection hidden_states = residual + feed_forward_hidden_states if use_cache: outputs = (hidden_states,) + outputs else: outputs = (hidden_states,) + outputs[1:] return outputs # hidden_states, past_kv, attentions class GPTNeoPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = GPTNeoConfig load_tf_weights = load_tf_weights_in_gpt_neo base_model_prefix = "transformer" supports_gradient_checkpointing = True _no_split_modules = ["GPTNeoBlock"] _skip_keys_device_placement = "past_key_values" _supports_flash_attn_2 = True _supports_cache_class = True _supports_quantized_cache = True _supports_static_cache = False # TODO: needs a HybridCache def __init__(self, *inputs, **kwargs): super().__init__(*inputs, **kwargs) def _init_weights(self, module): """Initialize the weights.""" if isinstance(module, (nn.Linear,)): # Slightly different from the TF version which uses truncated_normal for initialization # cf https://github.com/pytorch/pytorch/pull/5617 module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) GPT_NEO_START_DOCSTRING = r""" This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`GPTNeoConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. """ GPT_NEO_INPUTS_DOCSTRING = r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`): `input_ids_length` = `sequence_length` if `past_key_values` is `None` else `past_key_values[0][0].shape[-2]` (`sequence_length` of input past key value states). Indices of input sequence tokens in the vocabulary. If `past_key_values` is used, only `input_ids` that do not have their past calculated should be passed as `input_ids`. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*): Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values` returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`. Two formats are allowed: - a [`~cache_utils.Cache`] instance; - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy cache format. The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the legacy cache format will be returned. 
If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids` of shape `(batch_size, sequence_length)`. attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) token_type_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a *sentence A* token, - 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids) position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. If `past_key_values` is used, optionally only the last `inputs_embeds` have to be input (see `past_key_values`). use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*): Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length. 
""" @add_start_docstrings( "The bare GPT Neo Model transformer outputting raw hidden-states without any specific head on top.", GPT_NEO_START_DOCSTRING, ) class GPTNeoModel(GPTNeoPreTrainedModel): def __init__(self, config): super().__init__(config) self.embed_dim = config.hidden_size self.wte = nn.Embedding(config.vocab_size, self.embed_dim) self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim) self.drop = nn.Dropout(float(config.embed_dropout)) self.h = nn.ModuleList([GPTNeoBlock(config, layer_id=i) for i in range(config.num_layers)]) self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self): return self.wte def set_input_embeddings(self, new_embeddings): self.wte = new_embeddings @add_start_docstrings_to_model_forward(GPT_NEO_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=BaseModelOutputWithPastAndCrossAttentions, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids: Optional[torch.Tensor] = None, past_key_values: Optional[Union[Cache, Tuple[torch.FloatTensor]]] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.LongTensor] = None, ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if (input_ids is None) ^ (inputs_embeds is not None): raise ValueError( "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one" ) if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." ) use_cache = False if inputs_embeds is None: inputs_embeds = self.wte(input_ids) use_legacy_cache = False if use_cache and not isinstance(past_key_values, Cache) and not self.training: use_legacy_cache = True past_key_values = DynamicCache.from_legacy_cache(past_key_values) if not self.training: logger.warning_once( "We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.45. 
" "Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/internal/generation_utils#transformers.Cache)" ) seq_length = inputs_embeds.shape[1] if cache_position is None: past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0 cache_position = torch.arange(past_seen_tokens, past_seen_tokens + seq_length, device=inputs_embeds.device) if position_ids is None: position_ids = cache_position.unsqueeze(0) causal_mask = self._update_causal_mask( attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions ) # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x num_heads x N x N # head_mask has shape n_layer x batch x num_heads x N x N head_mask = self.get_head_mask(head_mask, self.config.num_layers) position_embeds = self.wpe(position_ids) hidden_states = inputs_embeds + position_embeds if token_type_ids is not None: token_type_ids = token_type_ids.view(-1, seq_length) token_type_embeds = self.wte(token_type_ids) hidden_states = hidden_states + token_type_embeds hidden_states = self.drop(hidden_states) output_shape = (-1, seq_length, hidden_states.size(-1)) next_decoder_cache = None all_self_attentions = () if output_attentions else None all_hidden_states = () if output_hidden_states else None for i, block in enumerate(self.h): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if self.gradient_checkpointing and self.training: outputs = self._gradient_checkpointing_func( block.__call__, hidden_states, None, causal_mask, head_mask[i], use_cache, output_attentions, cache_position, ) else: outputs = block( hidden_states, layer_past=past_key_values, attention_mask=causal_mask, head_mask=head_mask[i], use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, ) hidden_states = outputs[0] if use_cache: next_decoder_cache = outputs[1] if output_attentions: all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],) hidden_states = self.ln_f(hidden_states) hidden_states = hidden_states.view(output_shape) # Add last hidden state if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) next_cache = None if use_cache: next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache if not return_dict: return tuple( v for v in [hidden_states, next_cache, all_hidden_states, all_self_attentions] if v is not None ) return BaseModelOutputWithPast( last_hidden_state=hidden_states, past_key_values=next_cache, hidden_states=all_hidden_states, attentions=all_self_attentions, ) # Copied from transformers.models.llama.modeling_llama.LlamaModel._update_causal_mask def _update_causal_mask( self, attention_mask: torch.Tensor, input_tensor: torch.Tensor, cache_position: torch.Tensor, past_key_values: Cache, output_attentions: bool, ): if self.config._attn_implementation == "flash_attention_2": if attention_mask is not None and 0.0 in attention_mask: return attention_mask return None # For SDPA, when possible, we will rely on its `is_causal` argument instead of its `attn_mask` argument, in # order to dispatch on Flash Attention 2. This feature is not compatible with static cache, as SDPA will fail # to infer the attention mask. 
past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0 using_static_cache = isinstance(past_key_values, StaticCache) # When output attentions is True, sdpa implementation's forward method calls the eager implementation's forward if self.config._attn_implementation == "sdpa" and not using_static_cache and not output_attentions: if AttentionMaskConverter._ignore_causal_mask_sdpa( attention_mask, inputs_embeds=input_tensor, past_key_values_length=past_seen_tokens, is_training=self.training, ): return None dtype, device = input_tensor.dtype, input_tensor.device min_dtype = torch.finfo(dtype).min sequence_length = input_tensor.shape[1] if using_static_cache: target_length = past_key_values.get_max_length() else: target_length = ( attention_mask.shape[-1] if isinstance(attention_mask, torch.Tensor) else past_seen_tokens + sequence_length + 1 ) # In case the provided `attention` mask is 2D, we generate a causal mask here (4D). causal_mask = _prepare_4d_causal_attention_mask_with_cache_position( attention_mask, sequence_length=sequence_length, target_length=target_length, dtype=dtype, device=device, min_dtype=min_dtype, cache_position=cache_position, batch_size=input_tensor.shape[0], ) if ( self.config._attn_implementation == "sdpa" and attention_mask is not None and attention_mask.device.type == "cuda" and not output_attentions ): # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path. # Details: https://github.com/pytorch/pytorch/issues/110213 causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype) return causal_mask @add_start_docstrings( """ The GPT Neo Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). 
""", GPT_NEO_START_DOCSTRING, ) class GPTNeoForCausalLM(GPTNeoPreTrainedModel): _tied_weights_keys = ["lm_head.weight"] def __init__(self, config): super().__init__(config) self.transformer = GPTNeoModel(config) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() def get_output_embeddings(self): return self.lm_head def set_output_embeddings(self, new_embeddings): self.lm_head = new_embeddings def prepare_inputs_for_generation( self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, past_key_values=None, inputs_embeds=None, cache_position=None, use_cache=True, **kwargs, ): # If we have cache: let's slice `input_ids` through `cache_position`, to keep only the unprocessed tokens # Exception 1: when passing input_embeds, input_ids may be missing entries # Exception 2: some generation methods do special slicing of input_ids, so we don't need to do it here if past_key_values is not None: if inputs_embeds is not None: # Exception 1 input_ids = input_ids[:, -cache_position.shape[0] :] elif input_ids.shape[1] != cache_position.shape[0]: # Default case (the "else", a no op, is Exception 2) input_ids = input_ids[:, cache_position] if token_type_ids is not None: token_type_ids = token_type_ids[:, -input_ids.shape[1] :] if attention_mask is not None and position_ids is None: # create position_ids on the fly for batch generation position_ids = attention_mask.long().cumsum(-1) - 1 position_ids.masked_fill_(attention_mask == 0, 1) if past_key_values: position_ids = position_ids[:, -input_ids.shape[1] :] # This `clone` call is needed to avoid recapturing cuda graphs with `torch.compile`'s `mode="reduce-overhead`, as otherwise the input `position_ids` would have various stride during the decoding. Here, simply using `.contiguous()` is not sufficient as in the batch size = 1 case, `position_ids` is already contiguous but with varying stride which retriggers a capture. position_ids = position_ids.clone(memory_format=torch.contiguous_format) # if `inputs_embeds` are passed, we only want to use them in the 1st generation step if inputs_embeds is not None and cache_position[0] == 0: model_inputs = {"inputs_embeds": inputs_embeds, "input_ids": None} else: # The clone here is for the same reason as for `position_ids`. 
model_inputs = {"input_ids": input_ids.clone(memory_format=torch.contiguous_format), "inputs_embeds": None} if isinstance(past_key_values, StaticCache) and attention_mask.ndim == 2: if model_inputs["inputs_embeds"] is not None: batch_size, sequence_length, _ = model_inputs["inputs_embeds"].shape device = model_inputs["inputs_embeds"].device else: batch_size, sequence_length = model_inputs["input_ids"].shape device = model_inputs["input_ids"].device dtype = self.lm_head.weight.dtype min_dtype = torch.finfo(dtype).min attention_mask = _prepare_4d_causal_attention_mask_with_cache_position( attention_mask, sequence_length=sequence_length, target_length=past_key_values.get_max_length(), dtype=dtype, device=device, min_dtype=min_dtype, cache_position=cache_position, batch_size=batch_size, ) model_inputs.update( { "position_ids": position_ids, "cache_position": cache_position, "past_key_values": past_key_values, "use_cache": use_cache, "token_type_ids": token_type_ids, "attention_mask": attention_mask, } ) return model_inputs @add_start_docstrings_to_model_forward(GPT_NEO_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids: Optional[torch.Tensor] = None, past_key_values: Optional[Union[Cache, Tuple[torch.FloatTensor]]] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.LongTensor] = None, ) -> Union[Tuple[torch.Tensor], CausalLMOutputWithCrossAttentions]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict transformer_outputs = self.transformer( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, cache_position=cache_position, ) hidden_states = transformer_outputs[0] lm_logits = self.lm_head(hidden_states) loss = None if labels is not None: # move labels to correct device to enable model parallelism labels = labels.to(lm_logits.device) # Compute loss in fp32 to match with mesh-tf version # https://github.com/EleutherAI/gpt-neo/blob/89ce74164da2fb16179106f54e2269b5da8db333/models/gpt2/gpt2.py#L179 lm_logits = lm_logits.to(torch.float32) # Shift so that tokens < n predict n shift_logits = lm_logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() # Flatten the tokens loss_fct = CrossEntropyLoss() loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) lm_logits = lm_logits.to(hidden_states.dtype) loss = loss.to(hidden_states.dtype) if not return_dict: output = (lm_logits,) + transformer_outputs[1:] return ((loss,) + output) if loss is not None else output return CausalLMOutputWithPast( loss=loss, logits=lm_logits, past_key_values=transformer_outputs.past_key_values, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) @staticmethod def _reorder_cache( past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor ) -> Tuple[Tuple[torch.Tensor]]: """ This function is used to re-order the `past_key_values` cache if [`~PretrainedModel.beam_search`] or [`~PretrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct beam_idx at every generation step. """ return tuple( tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) for layer_past in past_key_values ) @add_start_docstrings( """ The GPTNeo Model transformer with a sequence classification head on top (linear layer). [`GPTNeoForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it requires to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). 
""", GPT_NEO_START_DOCSTRING, ) class GPTNeoForSequenceClassification(GPTNeoPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.transformer = GPTNeoModel(config) self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False) # Initialize weights and apply final processing self.post_init() @add_start_docstrings_to_model_forward(GPT_NEO_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=SequenceClassifierOutputWithPast, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids: Optional[torch.Tensor] = None, past_key_values: Optional[Union[Cache, Tuple[torch.FloatTensor]]] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutputWithPast]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict transformer_outputs = self.transformer( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = transformer_outputs[0] logits = self.score(hidden_states) if input_ids is not None: batch_size, sequence_length = input_ids.shape[:2] else: batch_size, sequence_length = inputs_embeds.shape[:2] if self.config.pad_token_id is None and batch_size != 1: raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") if self.config.pad_token_id is None: sequence_lengths = -1 else: if input_ids is not None: # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1 sequence_lengths = sequence_lengths % input_ids.shape[-1] sequence_lengths = sequence_lengths.to(logits.device) else: sequence_lengths = -1 logger.warning_once( f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. 
Results may be " "unexpected if using padding tokens in conjunction with `inputs_embeds.`" ) pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths] loss = None if labels is not None: if self.config.problem_type is None: if self.num_labels == 1: self.config.problem_type = "regression" elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): self.config.problem_type = "single_label_classification" else: self.config.problem_type = "multi_label_classification" if self.config.problem_type == "regression": loss_fct = MSELoss() if self.num_labels == 1: loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) else: loss = loss_fct(pooled_logits, labels) elif self.config.problem_type == "single_label_classification": loss_fct = CrossEntropyLoss() loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) elif self.config.problem_type == "multi_label_classification": loss_fct = BCEWithLogitsLoss() loss = loss_fct(pooled_logits, labels) if not return_dict: output = (pooled_logits,) + transformer_outputs[1:] return ((loss,) + output) if loss is not None else output return SequenceClassifierOutputWithPast( loss=loss, logits=pooled_logits, past_key_values=transformer_outputs.past_key_values, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) @add_start_docstrings( """ GPT Neo model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """, GPT_NEO_START_DOCSTRING, ) class GPTNeoForTokenClassification(GPTNeoPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.transformer = GPTNeoModel(config) self.dropout = nn.Dropout(config.classifier_dropout) self.classifier = nn.Linear(config.hidden_size, config.num_labels) # Initialize weights and apply final processing self.post_init() @add_start_docstrings_to_model_forward(GPT_NEO_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint="EleutherAI/gpt-neo-125m", output_type=TokenClassifierOutput, config_class=_CONFIG_FOR_DOC, expected_loss=0.25, ) def forward( self, input_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Union[Cache, Tuple[Tuple[torch.Tensor]]]] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, TokenClassifierOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict transformer_outputs = self.transformer( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = transformer_outputs[0] hidden_states = self.dropout(hidden_states) logits = self.classifier(hidden_states) loss = None if labels is not None: labels = labels.to(logits.device) loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + transformer_outputs[2:] return ((loss,) + output) if loss is not None else output return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) @add_start_docstrings( """ The GPT-Neo Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). """, GPT_NEO_START_DOCSTRING, ) class GPTNeoForQuestionAnswering(GPTNeoPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.transformer = GPTNeoModel(config) self.qa_outputs = nn.Linear(config.hidden_size, 2) # Initialize weights and apply final processing self.post_init() @add_start_docstrings_to_model_forward(GPT_NEO_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=QuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC, real_checkpoint=_CHECKPOINT_FOR_DOC, ) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, QuestionAnsweringModelOutput]: r""" start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.transformer( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = outputs[0] logits = self.qa_outputs(sequence_output) start_logits, end_logits = logits.split(1, dim=-1) start_logits = start_logits.squeeze(-1).contiguous() end_logits = end_logits.squeeze(-1).contiguous() total_loss = None if start_positions is not None and end_positions is not None: # If we are on multi-GPU, split add a dimension if len(start_positions.size()) > 1: start_positions = start_positions.squeeze(-1) if len(end_positions.size()) > 1: end_positions = end_positions.squeeze(-1) # sometimes the start/end positions are outside our model inputs, we ignore these terms ignored_index = start_logits.size(1) start_positions = start_positions.clamp(0, ignored_index) end_positions = end_positions.clamp(0, ignored_index) loss_fct = CrossEntropyLoss(ignore_index=ignored_index) start_loss = loss_fct(start_logits, start_positions) end_loss = loss_fct(end_logits, end_positions) total_loss = (start_loss + end_loss) / 2 if not return_dict: output = (start_logits, end_logits) + outputs[2:] return ((total_loss,) + output) if total_loss is not None else output return QuestionAnsweringModelOutput( loss=total_loss, start_logits=start_logits, end_logits=end_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, )
transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py/0
{ "file_path": "transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py", "repo_id": "transformers", "token_count": 27121 }
349
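As a side note on the classification head in the file above: `GPTNeoForSequenceClassification` pools the logits at the last non-padding token of each row. Below is a standalone sketch of that indexing trick; the pad id and the toy batch are made up for illustration.

```python
import torch

pad_token_id = 0  # assumed pad id for this toy example
input_ids = torch.tensor(
    [
        [5, 8, 3, 0, 0],  # 3 real tokens followed by 2 pads
        [7, 2, 9, 4, 1],  # no padding at all
    ]
)
batch_size, seq_len = input_ids.shape
num_labels = 3
logits = torch.randn(batch_size, seq_len, num_labels)  # (batch, seq, num_labels)

# torch.eq(...).int().argmax(-1) finds the first pad position; subtracting 1 gives the last real
# token. The modulo keeps the index valid when a row has no padding (argmax is 0, so -1 wraps to
# the final position), matching the ONNX-friendly trick in GPTNeoForSequenceClassification.forward.
sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1
sequence_lengths = sequence_lengths % input_ids.shape[-1]

pooled_logits = logits[torch.arange(batch_size), sequence_lengths]
print(sequence_lengths)     # tensor([2, 4])
print(pooled_logits.shape)  # torch.Size([2, 3])
```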
# coding=utf-8 # Copyright 2022 The EleutherAI and HuggingFace Teams. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """TF 2.0 GPT-J model.""" from __future__ import annotations from typing import Optional, Tuple, Union import numpy as np import tensorflow as tf from ...activations_tf import get_tf_activation from ...file_utils import ( add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, ) from ...modeling_tf_outputs import ( TFBaseModelOutputWithPast, TFCausalLMOutputWithPast, TFQuestionAnsweringModelOutput, TFSequenceClassifierOutputWithPast, ) from ...modeling_tf_utils import ( TFCausalLanguageModelingLoss, TFModelInputType, TFPreTrainedModel, TFQuestionAnsweringLoss, TFSequenceClassificationLoss, TFSharedEmbeddings, get_initializer, keras, keras_serializable, unpack_inputs, ) from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax from ...utils import logging from .configuration_gptj import GPTJConfig logger = logging.get_logger(__name__) _CHECKPOINT_FOR_DOC = "EleutherAI/gpt-j-6B" _CONFIG_FOR_DOC = "GPTJConfig" def create_sinusoidal_positions(num_pos: int, dim: int) -> tf.Tensor: inv_freq = tf.cast(1.0 / (10000 ** (tf.range(0, dim, 2) / dim)), tf.float32) sinusoid_inp = tf.cast(tf.einsum("i , j -> i j", tf.range(num_pos, dtype=tf.float32), inv_freq), tf.float32) sin, cos = tf.sin(sinusoid_inp), tf.cos(sinusoid_inp) out = tf.concat((sin, cos), axis=1) return out def rotate_every_two(x: tf.Tensor) -> tf.Tensor: rotate_half_tensor = tf.stack((-x[:, :, :, 1::2], x[:, :, :, ::2]), axis=-1) new_shape = shape_list(rotate_half_tensor)[:-2] + [tf.math.reduce_prod(shape_list(rotate_half_tensor)[-2:])] rotate_half_tensor = tf.reshape(rotate_half_tensor, new_shape) return rotate_half_tensor def apply_rotary_pos_emb(tensor: tf.Tensor, sincos: tf.Tensor) -> tf.Tensor: sin_pos, cos_pos = sincos sin_pos = tf.repeat(sin_pos[:, :, None, :], 2, 3) cos_pos = tf.repeat(cos_pos[:, :, None, :], 2, 3) return (tensor * cos_pos) + (rotate_every_two(tensor) * sin_pos) class TFGPTJAttention(keras.layers.Layer): def __init__(self, config: GPTJConfig, **kwargs): super().__init__(**kwargs) self.embed_dim = config.hidden_size self.num_attention_heads = config.num_attention_heads self.head_dim = self.embed_dim // self.num_attention_heads if self.head_dim * self.num_attention_heads != self.embed_dim: raise ValueError( f"embed_dim must be divisible by num_attention_heads (got `embed_dim`: {self.embed_dim} and" f" `num_attention_heads`: {self.num_attention_heads})." 
) self.scale_attn = self.head_dim**0.5 self.rotary_dim = config.rotary_dim self.attn_dropout = keras.layers.Dropout(config.attn_pdrop) self.resid_dropout = keras.layers.Dropout(config.resid_pdrop) self.q_proj = keras.layers.Dense( self.embed_dim, use_bias=False, kernel_initializer=get_initializer(config.initializer_range), name="q_proj", ) self.k_proj = keras.layers.Dense( self.embed_dim, use_bias=False, kernel_initializer=get_initializer(config.initializer_range), name="k_proj", ) self.v_proj = keras.layers.Dense( self.embed_dim, use_bias=False, kernel_initializer=get_initializer(config.initializer_range), name="v_proj", ) self.out_proj = keras.layers.Dense( self.embed_dim, use_bias=False, kernel_initializer=get_initializer(config.initializer_range), name="out_proj", ) self.max_positions = config.max_position_embeddings self.lower_triangle_mask = tf.reshape( tf.cast(tf.experimental.numpy.tril(tf.ones((self.max_positions, self.max_positions))), tf.int8), (1, 1, self.max_positions, self.max_positions), ) pos_embd_dim = self.rotary_dim or self.embed_dim self.embed_positions = create_sinusoidal_positions(self.max_positions, pos_embd_dim) def get_causal_mask(self, key_length, query_length) -> tf.Tensor: return tf.cast(self.lower_triangle_mask[:, :, key_length - query_length : key_length, :key_length], tf.bool) @staticmethod def get_masked_bias(dtype: tf.DType) -> tf.Tensor: return tf.cast(tf.constant(-1e9), dtype) def _split_heads(self, hidden_states: tf.Tensor, rotary: bool) -> tf.Tensor: """ Splits hidden dim into attn_head_size and num_attention_heads """ new_shape = shape_list(hidden_states)[:-1] + [self.num_attention_heads, self.head_dim] hidden_states = tf.reshape(hidden_states, new_shape) if rotary: return hidden_states if len(shape_list(hidden_states)) == 4: return tf.transpose(hidden_states, (0, 2, 1, 3)) # (batch, head, seq_length, head_features) if len(shape_list(hidden_states)) == 5: return tf.transpose(hidden_states, (0, 1, 3, 2, 4)) # (batch, blocks, head, block_length, head_features) raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(shape_list(hidden_states))}") def _merge_heads(self, hidden_states: tf.Tensor) -> tf.Tensor: """ Merges attn_head_size dim and num_attn_heads dim into hidden dim """ if len(shape_list(hidden_states)) == 4: hidden_states = tf.transpose(hidden_states, (0, 2, 1, 3)) elif len(shape_list(hidden_states)) == 5: hidden_states = tf.transpose(hidden_states, (0, 1, 3, 2, 4)) else: raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(shape_list(hidden_states))}") new_shape = shape_list(hidden_states)[:-2] + [self.num_attention_heads * self.head_dim] return tf.reshape(hidden_states, new_shape) def _attn( self, query: tf.Tensor, key: tf.Tensor, value: tf.Tensor, attention_mask: tf.Tensor | None = None, head_mask: tf.Tensor | None = None, ) -> Tuple[tf.Tensor, tf.Tensor]: # compute causal mask from causal mask buffer query_length, key_length = shape_list(query)[-2], shape_list(key)[-2] causal_mask = self.get_causal_mask(key_length, query_length) # Keep the attention weights computation in fp32 to avoid overflow issues query = tf.cast(query, tf.float32) key = tf.cast(key, tf.float32) attn_weights = tf.matmul(query, key, transpose_b=True) attn_weights = tf.where(causal_mask, attn_weights, self.get_masked_bias(attn_weights.dtype)) attn_weights = attn_weights / self.scale_attn if attention_mask is not None: # Apply the attention mask attn_weights = attn_weights + attention_mask attn_weights = stable_softmax(attn_weights, 
axis=-1) attn_weights = tf.cast(attn_weights, value.dtype) attn_weights = self.attn_dropout(attn_weights) # Mask heads if we want to if head_mask is not None: attn_weights = attn_weights * head_mask attn_output = tf.matmul(attn_weights, value) return attn_output, attn_weights def call( self, hidden_states: tf.Tensor, layer_past: Optional[Tuple[tf.Tensor, tf.Tensor]] = None, attention_mask: tf.Tensor | None = None, position_ids: tf.Tensor | None = None, head_mask: tf.Tensor | None = None, use_cache: bool = False, output_attentions: bool = False, ): query = self.q_proj(hidden_states) key = self.k_proj(hidden_states) value = self.v_proj(hidden_states) query = self._split_heads(query, True) key = self._split_heads(key, True) value = self._split_heads(value, False) sincos = tf.cast(tf.gather(self.embed_positions, position_ids, axis=0), hidden_states.dtype) sincos = tf.split(sincos, 2, axis=-1) if self.rotary_dim is not None: k_rot = key[:, :, :, : self.rotary_dim] k_pass = key[:, :, :, self.rotary_dim :] q_rot = query[:, :, :, : self.rotary_dim] q_pass = query[:, :, :, self.rotary_dim :] k_rot = apply_rotary_pos_emb(k_rot, sincos) q_rot = apply_rotary_pos_emb(q_rot, sincos) key = tf.concat((k_rot, k_pass), axis=-1) query = tf.concat((q_rot, q_pass), axis=-1) else: key = apply_rotary_pos_emb(key, sincos) query = apply_rotary_pos_emb(query, sincos) key = tf.transpose(key, (0, 2, 1, 3)) query = tf.transpose(query, (0, 2, 1, 3)) if layer_past is not None: past_key = layer_past[0] past_value = layer_past[1] key = tf.concat((past_key, key), axis=-2) value = tf.concat((past_value, value), axis=-2) if use_cache is True: present = (key, value) else: present = None # compute self-attention: V x Softmax(QK^T) attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) attn_output = self._merge_heads(attn_output) attn_output = self.out_proj(attn_output) attn_output = self.resid_dropout(attn_output) outputs = (attn_output, present) if output_attentions: outputs += (attn_weights,) return outputs # a, present, (attentions) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "q_proj", None) is not None: with tf.name_scope(self.q_proj.name): self.q_proj.build([None, None, self.embed_dim]) if getattr(self, "k_proj", None) is not None: with tf.name_scope(self.k_proj.name): self.k_proj.build([None, None, self.embed_dim]) if getattr(self, "v_proj", None) is not None: with tf.name_scope(self.v_proj.name): self.v_proj.build([None, None, self.embed_dim]) if getattr(self, "out_proj", None) is not None: with tf.name_scope(self.out_proj.name): self.out_proj.build([None, None, self.embed_dim]) class TFGPTJMLP(keras.layers.Layer): def __init__(self, intermediate_size: int, config: GPTJConfig, **kwargs): super().__init__(**kwargs) embed_dim = config.n_embd self.fc_in = keras.layers.Dense( intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="fc_in" ) self.fc_out = keras.layers.Dense( embed_dim, kernel_initializer=get_initializer(config.initializer_range), name="fc_out" ) self.act = get_tf_activation(config.activation_function) self.dropout = keras.layers.Dropout(config.embd_pdrop) self.embed_dim = config.n_embd self.intermediate_size = intermediate_size def call(self, hidden_states: tf.Tensor) -> tf.Tensor: hidden_states = self.fc_in(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.fc_out(hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states def build(self, input_shape=None): 
if self.built: return self.built = True if getattr(self, "fc_in", None) is not None: with tf.name_scope(self.fc_in.name): self.fc_in.build([None, None, self.embed_dim]) if getattr(self, "fc_out", None) is not None: with tf.name_scope(self.fc_out.name): self.fc_out.build([None, None, self.intermediate_size]) class TFGPTJBlock(keras.layers.Layer): def __init__(self, config: GPTJConfig, **kwargs): super().__init__(**kwargs) inner_dim = config.n_inner if config.n_inner is not None else 4 * config.n_embd self.ln_1 = keras.layers.LayerNormalization(epsilon=config.layer_norm_epsilon, name="ln_1") self.attn = TFGPTJAttention(config, name="attn") self.mlp = TFGPTJMLP(inner_dim, config, name="mlp") self.config = config def call( self, hidden_states: tf.Tensor, layer_past: tf.Tensor | None = None, attention_mask: tf.Tensor | None = None, position_ids: tf.Tensor | None = None, head_mask: tf.Tensor | None = None, use_cache: bool = False, output_attentions: bool = False, ): residual = hidden_states hidden_states = self.ln_1(hidden_states) attn_outputs = self.attn( hidden_states=hidden_states, layer_past=layer_past, attention_mask=attention_mask, position_ids=position_ids, head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, ) # attn_outputs: attn_output, present, (attentions) attn_output = attn_outputs[0] outputs = attn_outputs[1:] feed_forward_hidden_states = self.mlp(hidden_states) hidden_states = attn_output + feed_forward_hidden_states + residual if use_cache: outputs = (hidden_states,) + outputs else: outputs = (hidden_states,) + outputs[1:] return outputs # hidden_states, present, (attentions) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "ln_1", None) is not None: with tf.name_scope(self.ln_1.name): self.ln_1.build([None, None, self.config.n_embd]) if getattr(self, "attn", None) is not None: with tf.name_scope(self.attn.name): self.attn.build(None) if getattr(self, "mlp", None) is not None: with tf.name_scope(self.mlp.name): self.mlp.build(None) @keras_serializable class TFGPTJMainLayer(keras.layers.Layer): config_class = GPTJConfig def __init__(self, config: GPTJConfig, *inputs, **kwargs): super().__init__(*inputs, **kwargs) self.config = config self.output_attentions = config.output_attentions self.output_hidden_states = config.output_hidden_states self.use_cache = config.use_cache self.return_dict = config.use_return_dict self.num_hidden_layers = config.n_layer self.n_embd = config.n_embd self.n_positions = config.n_positions self.initializer_range = config.initializer_range self.wte = TFSharedEmbeddings( config.vocab_size, config.hidden_size, initializer_range=config.initializer_range, name="wte" ) self.drop = keras.layers.Dropout(config.embd_pdrop) self.h = [TFGPTJBlock(config, name=f"h_._{i}") for i in range(config.n_layer)] self.ln_f = keras.layers.LayerNormalization(epsilon=config.layer_norm_epsilon, name="ln_f") self.embed_dim = config.n_embd def get_input_embeddings(self): return self.wte def set_input_embeddings(self, value: tf.Tensor): self.wte.weight = value self.wte.vocab_size = shape_list(value)[0] def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} """ raise NotImplementedError @unpack_inputs def call( self, input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, ) -> Union[TFBaseModelOutputWithPast, Tuple[tf.Tensor]]: if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = shape_list(input_ids) input_ids = tf.reshape(input_ids, [-1, input_shape[-1]]) elif inputs_embeds is not None: input_shape = shape_list(inputs_embeds)[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if past_key_values is None: past_length = 0 past_key_values = [None] * len(self.h) else: past_length = shape_list(past_key_values[0][0])[-2] if position_ids is None: position_ids = tf.expand_dims(tf.range(past_length, input_shape[-1] + past_length), axis=0) if attention_mask is not None: # We create a 3D attention mask from a 2D tensor mask. # Sizes are [batch_size, 1, 1, to_seq_length] # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] # this attention mask is more simple than the triangular masking of causal attention # used in OpenAI GPT, we just need to prepare the broadcast dimension here. attention_mask_shape = shape_list(attention_mask) attention_mask = tf.reshape(attention_mask, (attention_mask_shape[0], 1, 1, attention_mask_shape[1])) # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely. 
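            # Illustrative note (added): for a 2D mask such as [1., 1., 0.], the lines below compute
            # (1.0 - mask) * -10000.0 == [0., 0., -10000.], already reshaped to
            # (batch_size, 1, 1, to_seq_length); adding this to the raw attention scores before the
            # softmax pushes masked positions toward zero probability.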
one_cst = tf.constant(1.0) attention_mask = tf.cast(attention_mask, dtype=one_cst.dtype) attention_mask = tf.multiply(tf.subtract(one_cst, attention_mask), tf.constant(-10000.0)) # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] if head_mask is not None: raise NotImplementedError else: head_mask = [None] * self.num_hidden_layers # head_mask = tf.constant([0] * self.num_hidden_layers) position_ids = tf.reshape(position_ids, [-1, shape_list(position_ids)[-1]]) if inputs_embeds is None: check_embeddings_within_bounds(input_ids, self.wte.vocab_size) inputs_embeds = self.wte(input_ids, mode="embedding") if token_type_ids is not None: token_type_ids = tf.reshape(token_type_ids, [-1, shape_list(token_type_ids)[-1]]) token_type_embeds = self.wte(token_type_ids, mode="embedding") else: token_type_embeds = tf.constant(0.0) token_type_embeds = tf.cast(token_type_embeds, dtype=inputs_embeds.dtype) hidden_states = inputs_embeds + token_type_embeds hidden_states = self.drop(hidden_states, training=training) output_shape = input_shape + [shape_list(hidden_states)[-1]] presents = () if use_cache else None all_attentions = () if output_attentions else None all_hidden_states = () if output_hidden_states else None for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): if output_hidden_states: all_hidden_states = all_hidden_states + (tf.reshape(hidden_states, output_shape),) outputs = block( hidden_states=hidden_states, layer_past=layer_past, attention_mask=attention_mask, position_ids=position_ids, head_mask=head_mask[i], use_cache=use_cache, output_attentions=output_attentions, training=training, ) hidden_states = outputs[0] if use_cache: presents = presents + (outputs[1],) if output_attentions: all_attentions = all_attentions + (outputs[2 if use_cache else 1],) hidden_states = self.ln_f(hidden_states) hidden_states = tf.reshape(hidden_states, output_shape) # Add last hidden state if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if output_attentions: # let the number of heads free (-1) so we can extract attention even after head pruning attention_output_shape = input_shape[:-1] + [-1] + shape_list(all_attentions[0])[-2:] all_attentions = tuple(tf.reshape(t, attention_output_shape) for t in all_attentions) if not return_dict: return tuple(v for v in [hidden_states, presents, all_hidden_states, all_attentions] if v is not None) return TFBaseModelOutputWithPast( last_hidden_state=hidden_states, past_key_values=presents, hidden_states=all_hidden_states, attentions=all_attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "wte", None) is not None: with tf.name_scope(self.wte.name): self.wte.build(None) if getattr(self, "ln_f", None) is not None: with tf.name_scope(self.ln_f.name): self.ln_f.build([None, None, self.embed_dim]) if getattr(self, "h", None) is not None: for layer in self.h: with tf.name_scope(layer.name): layer.build(None) class TFGPTJPreTrainedModel(TFPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = GPTJConfig base_model_prefix = "transformer" # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model _keys_to_ignore_on_load_unexpected = [r"h.\d+.attn.bias"] GPTJ_START_DOCSTRING = r""" This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. <Tip> TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function! </Tip> Parameters: config ([`GPTJConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~TFPreTrainedModel.from_pretrained`] method to load the model weights. """ GPTJ_INPUTS_DOCSTRING = r""" Args: input_ids (`Numpy array` or `tf.Tensor` of shape `(batch_size, input_ids_length)`): `input_ids_length` = `sequence_length` if `past` is `None` else `past[0].shape[-2]` (`sequence_length` of input past key value states). Indices of input sequence tokens in the vocabulary. If `past` is used, only input IDs that do not have their past calculated should be passed as `input_ids`. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and [`PreTrainedTokenizer.encode`] for details. [What are input IDs?](../glossary#input-ids) past_key_values (`List[tf.Tensor]` of length `config.n_layers`): Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see `past` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. 
attention_mask (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) token_type_ids (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a *sentence A* token, - 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids) position_ids (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) head_mask (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. inputs_embeds (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (`bool`, *optional*): Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (`bool`, *optional*, defaults to `False`): Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
""" @add_start_docstrings( "The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top.", GPTJ_START_DOCSTRING, ) class TFGPTJModel(TFGPTJPreTrainedModel): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.transformer = TFGPTJMainLayer(config, name="transformer") @unpack_inputs @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFBaseModelOutputWithPast, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids: TFModelInputType | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, ) -> Union[TFBaseModelOutputWithPast, Tuple[tf.Tensor]]: r""" use_cache (`bool`, *optional*, defaults to `True`): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past`). Set to `False` during training, `True` during generation """ outputs = self.transformer( input_ids=input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) return outputs def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "transformer", None) is not None: with tf.name_scope(self.transformer.name): self.transformer.build(None) @add_start_docstrings( """ The GPT-J Model transformer with a language modeling head on top. 
""", GPTJ_START_DOCSTRING, ) class TFGPTJForCausalLM(TFGPTJPreTrainedModel, TFCausalLanguageModelingLoss): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.transformer = TFGPTJMainLayer(config, name="transformer") self.lm_head = keras.layers.Dense( config.vocab_size, kernel_initializer=get_initializer(config.initializer_range), name="lm_head" ) self.config = config def get_output_embeddings(self): return self.lm_head def set_output_embeddings(self, new_embeddings): self.lm_head = new_embeddings def prepare_inputs_for_generation(self, inputs, past_key_values=None, use_cache=None, **kwargs): token_type_ids = kwargs.get("token_type_ids", None) # only last token for inputs_ids if past is defined in kwargs if past_key_values: inputs = tf.expand_dims(inputs[:, -1], -1) if token_type_ids is not None: token_type_ids = tf.expand_dims(token_type_ids[:, -1], -1) position_ids = kwargs.get("position_ids", None) attention_mask = kwargs.get("attention_mask", None) if attention_mask is not None and position_ids is None: position_ids = tf.math.cumsum(attention_mask, axis=-1, exclusive=True) if past_key_values: position_ids = tf.expand_dims(position_ids[:, -1], -1) return { "input_ids": inputs, "attention_mask": attention_mask, "position_ids": position_ids, "past_key_values": past_key_values, "use_cache": use_cache, "token_type_ids": token_type_ids, } @unpack_inputs @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFCausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids: TFModelInputType | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, labels: np.ndarray | tf.Tensor | None = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, ) -> Union[TFCausalLMOutputWithPast, Tuple[tf.Tensor]]: r""" labels (`np.ndarray` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` """ transformer_outputs = self.transformer( input_ids=input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) hidden_states = transformer_outputs[0] lm_logits = self.lm_head(hidden_states) loss = None if labels is not None: # shift labels to the left and cut last logit token shifted_logits = lm_logits[:, :-1] labels = labels[:, 1:] loss = self.hf_compute_loss(labels, shifted_logits) if not return_dict: output = (lm_logits,) + transformer_outputs[1:] return ((loss,) + output) if loss is not None else output return TFCausalLMOutputWithPast( loss=loss, logits=lm_logits, past_key_values=transformer_outputs.past_key_values, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "transformer", None) is not None: with tf.name_scope(self.transformer.name): self.transformer.build(None) if getattr(self, "lm_head", None) is not None: with tf.name_scope(self.lm_head.name): self.lm_head.build([None, None, self.config.n_embd]) @add_start_docstrings( """ The GPT-J Model transformer with a sequence classification head on top (linear layer). [`GPTJForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT, GPT-2, GPT-Neo) do. Since it does classification on the last token, it requires to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). 
""", GPTJ_START_DOCSTRING, ) class TFGPTJForSequenceClassification(TFGPTJPreTrainedModel, TFSequenceClassificationLoss): _keys_to_ignore_on_load_missing = [r"h.\d+.attn.masked_bias", r"h.\d+.attn.bias", r"lm_head.weight"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.transformer = TFGPTJMainLayer(config, name="transformer") self.score = keras.layers.Dense( self.num_labels, use_bias=False, kernel_initializer=get_initializer(config.initializer_range), name="score", ) self.config = config @unpack_inputs @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFSequenceClassifierOutputWithPast, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids: TFModelInputType | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, labels: np.ndarray | tf.Tensor | None = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, ) -> Union[TFSequenceClassifierOutputWithPast, Tuple[tf.Tensor]]: r""" labels (`np.ndarray` or `tf.Tensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ if labels is not None and self.config.pad_token_id is None and input_ids.shape[0] != 1: raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") transformer_outputs = self.transformer( input_ids=input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) hidden_states = transformer_outputs[0] logits = self.score(hidden_states) logits_shape = shape_list(logits) in_logits = None if self.config.pad_token_id is None: sequence_lengths = -1 else: if input_ids is not None: sequence_lengths = ( tf.argmax(tf.cast(tf.math.equal(input_ids, self.config.pad_token_id), input_ids.dtype), axis=-1) - 1 ) sequence_lengths = tf.where( sequence_lengths >= 0, sequence_lengths, tf.cast(shape_list(input_ids[-1]), sequence_lengths.dtype) - 1, ) in_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1) else: sequence_lengths = -1 logger.warning_once( f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. 
Results may be " "unexpected if using padding tokens in conjunction with `inputs_embeds.`" ) loss = None if labels is not None: if not tf.is_tensor(sequence_lengths): in_logits = logits[0 : logits_shape[0], sequence_lengths] loss = self.hf_compute_loss(tf.reshape(labels, [-1]), tf.reshape(in_logits, [-1, self.num_labels])) pooled_logits = in_logits if in_logits is not None else logits if not return_dict: output = (pooled_logits,) + transformer_outputs[1:] return ((loss,) + output) if loss is not None else output return TFSequenceClassifierOutputWithPast( loss=loss, logits=pooled_logits, past_key_values=transformer_outputs.past_key_values, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "transformer", None) is not None: with tf.name_scope(self.transformer.name): self.transformer.build(None) if getattr(self, "score", None) is not None: with tf.name_scope(self.score.name): self.score.build([None, None, self.config.n_embd]) @add_start_docstrings( """ The GPT-J Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). """, GPTJ_START_DOCSTRING, ) class TFGPTJForQuestionAnswering(TFGPTJPreTrainedModel, TFQuestionAnsweringLoss): _keys_to_ignore_on_load_missing = [r"h.\d+.attn.masked_bias", r"h.\d+.attn.bias", r"lm_head.weight"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.transformer = TFGPTJMainLayer(config, name="transformer") self.qa_outputs = keras.layers.Dense( self.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" ) self.config = config @unpack_inputs @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFQuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids: TFModelInputType | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, start_positions: np.ndarray | tf.Tensor | None = None, end_positions: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, ) -> Union[TFQuestionAnsweringModelOutput, Tuple[tf.Tensor]]: r""" start_positions (`np.ndarray` or `tf.Tensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. end_positions (`np.ndarray` or `tf.Tensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). 
Position outside of the sequence are not taken into account for computing the loss. """ transformer_outputs = self.transformer( input_ids=input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) sequence_output = transformer_outputs[0] logits = self.qa_outputs(sequence_output) start_logits, end_logits = tf.split(logits, 2, axis=-1) start_logits = tf.squeeze(start_logits, axis=-1) end_logits = tf.squeeze(end_logits, axis=-1) loss = None if start_positions is not None and end_positions is not None: labels = {"start_position": start_positions} labels["end_position"] = end_positions loss = self.hf_compute_loss(labels, (start_logits, end_logits)) if not return_dict: output = (start_logits, end_logits) + transformer_outputs[2:] return ((loss,) + output) if loss is not None else output return TFQuestionAnsweringModelOutput( loss=loss, start_logits=start_logits, end_logits=end_logits, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, "transformer", None) is not None: with tf.name_scope(self.transformer.name): self.transformer.build(None) if getattr(self, "qa_outputs", None) is not None: with tf.name_scope(self.qa_outputs.name): self.qa_outputs.build([None, None, self.config.hidden_size])
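To make the rotary position embedding helpers at the top of this file easier to follow, here is a small shape-level sketch. Importing these module-level helpers directly is done purely for illustration, and the batch, head, and rotary dimensions are arbitrary assumptions.

```python
import tensorflow as tf
from transformers.models.gptj.modeling_tf_gptj import (
    apply_rotary_pos_emb,
    create_sinusoidal_positions,
)

batch, seq_len, num_heads, rotary_dim = 1, 8, 2, 16

# Sinusoidal table of shape (seq_len, rotary_dim): first half sin, second half cos.
embed_positions = create_sinusoidal_positions(num_pos=seq_len, dim=rotary_dim)

position_ids = tf.range(seq_len)[None, :]                  # (batch, seq_len)
sincos = tf.gather(embed_positions, position_ids, axis=0)  # (batch, seq_len, rotary_dim)
sin_pos, cos_pos = tf.split(sincos, 2, axis=-1)

# Query laid out as (batch, seq_len, num_heads, head_dim), i.e. before the transpose that
# TFGPTJAttention.call applies after the rotation.
query = tf.random.normal((batch, seq_len, num_heads, rotary_dim))
rotated = apply_rotary_pos_emb(query, (sin_pos, cos_pos))
print(rotated.shape)  # (1, 8, 2, 16)
```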
transformers/src/transformers/models/gptj/modeling_tf_gptj.py/0
{ "file_path": "transformers/src/transformers/models/gptj/modeling_tf_gptj.py", "repo_id": "transformers", "token_count": 21093 }
350
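A minimal generation sketch for the TF GPT-J causal LM defined in the file above. The checkpoint name is the `EleutherAI/gpt-j-6B` reference used by `_CHECKPOINT_FOR_DOC`; note that this checkpoint is very large (about 6B parameters), so the snippet is illustrative rather than something to run casually, and the prompt and generation settings are assumptions.

```python
from transformers import AutoTokenizer, TFGPTJForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = TFGPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer("GPT-J is an autoregressive language model that", return_tensors="tf")

# generate() drives prepare_inputs_for_generation() from TFGPTJForCausalLM, feeding the growing
# past_key_values cache back in so only the newest token is processed at each decoding step.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```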
# coding=utf-8 # Copyright 2020 The Google AI Language Team Authors, Allegro.pl, Facebook Inc. and the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os import re import unicodedata from typing import List, Optional, Tuple from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace from ...utils import logging logger = logging.get_logger(__name__) VOCAB_FILES_NAMES = { "vocab_file": "vocab.json", "merges_file": "merges.txt", } # Copied from transformers.models.xlm.tokenization_xlm.get_pairs def get_pairs(word): """ Return set of symbol pairs in a word. word is represented as tuple of symbols (symbols being variable-length strings) """ pairs = set() prev_char = word[0] for char in word[1:]: pairs.add((prev_char, char)) prev_char = char return pairs # Copied from transformers.models.xlm.tokenization_xlm.replace_unicode_punct def replace_unicode_punct(text): """ Port of https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/replace-unicode-punctuation.perl """ text = text.replace("", ",") text = re.sub(r"。\s*", ". ", text) text = text.replace("、", ",") text = text.replace("”", '"') text = text.replace("“", '"') text = text.replace("∶", ":") text = text.replace("", ":") text = text.replace("", "?") text = text.replace("《", '"') text = text.replace("》", '"') text = text.replace("", ")") text = text.replace("", "!") text = text.replace("", "(") text = text.replace("", ";") text = text.replace("", "1") text = text.replace("」", '"') text = text.replace("「", '"') text = text.replace("", "0") text = text.replace("", "3") text = text.replace("", "2") text = text.replace("", "5") text = text.replace("", "6") text = text.replace("", "9") text = text.replace("", "7") text = text.replace("", "8") text = text.replace("", "4") text = re.sub(r"\s*", ". ", text) text = text.replace("", "~") text = text.replace("’", "'") text = text.replace("
", "...") text = text.replace("━", "-") text = text.replace("〈", "<") text = text.replace("〉", ">") text = text.replace("【", "[") text = text.replace("】", "]") text = text.replace("", "%") return text # Copied from transformers.models.xlm.tokenization_xlm.remove_non_printing_char def remove_non_printing_char(text): """ Port of https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/remove-non-printing-char.perl """ output = [] for char in text: cat = unicodedata.category(char) if cat.startswith("C"): continue output.append(char) return "".join(output) # Copied from transformers.models.bert.tokenization_bert.whitespace_tokenize def whitespace_tokenize(text): """Runs basic whitespace cleaning and splitting on a piece of text.""" text = text.strip() if not text: return [] tokens = text.split() return tokens # Copied from transformers.models.bert.tokenization_bert.BasicTokenizer class BasicTokenizer: """ Constructs a BasicTokenizer that will run basic tokenization (punctuation splitting, lower casing, etc.). Args: do_lower_case (`bool`, *optional*, defaults to `True`): Whether or not to lowercase the input when tokenizing. never_split (`Iterable`, *optional*): Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True` tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)). strip_accents (`bool`, *optional*): Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT). do_split_on_punc (`bool`, *optional*, defaults to `True`): In some instances we want to skip the basic punctuation splitting so that later tokenization can capture the full context of the words, such as contractions. """ def __init__( self, do_lower_case=True, never_split=None, tokenize_chinese_chars=True, strip_accents=None, do_split_on_punc=True, ): if never_split is None: never_split = [] self.do_lower_case = do_lower_case self.never_split = set(never_split) self.tokenize_chinese_chars = tokenize_chinese_chars self.strip_accents = strip_accents self.do_split_on_punc = do_split_on_punc def tokenize(self, text, never_split=None): """ Basic Tokenization of a piece of text. For sub-word tokenization, see WordPieceTokenizer. Args: never_split (`List[str]`, *optional*) Kept for backward compatibility purposes. Now implemented directly at the base class level (see [`PreTrainedTokenizer.tokenize`]) List of token not to split. """ # union() returns a new set by concatenating the two sets. never_split = self.never_split.union(set(never_split)) if never_split else self.never_split text = self._clean_text(text) # This was added on November 1st, 2018 for the multilingual and Chinese # models. This is also applied to the English models now, but it doesn't # matter since the English models were not trained on any Chinese data # and generally don't have any Chinese data in them (there are Chinese # characters in the vocabulary because Wikipedia does have some Chinese # words in the English Wikipedia.). 
if self.tokenize_chinese_chars: text = self._tokenize_chinese_chars(text) # prevents treating the same character with different unicode codepoints as different characters unicode_normalized_text = unicodedata.normalize("NFC", text) orig_tokens = whitespace_tokenize(unicode_normalized_text) split_tokens = [] for token in orig_tokens: if token not in never_split: if self.do_lower_case: token = token.lower() if self.strip_accents is not False: token = self._run_strip_accents(token) elif self.strip_accents: token = self._run_strip_accents(token) split_tokens.extend(self._run_split_on_punc(token, never_split)) output_tokens = whitespace_tokenize(" ".join(split_tokens)) return output_tokens def _run_strip_accents(self, text): """Strips accents from a piece of text.""" text = unicodedata.normalize("NFD", text) output = [] for char in text: cat = unicodedata.category(char) if cat == "Mn": continue output.append(char) return "".join(output) def _run_split_on_punc(self, text, never_split=None): """Splits punctuation on a piece of text.""" if not self.do_split_on_punc or (never_split is not None and text in never_split): return [text] chars = list(text) i = 0 start_new_word = True output = [] while i < len(chars): char = chars[i] if _is_punctuation(char): output.append([char]) start_new_word = True else: if start_new_word: output.append([]) start_new_word = False output[-1].append(char) i += 1 return ["".join(x) for x in output] def _tokenize_chinese_chars(self, text): """Adds whitespace around any CJK character.""" output = [] for char in text: cp = ord(char) if self._is_chinese_char(cp): output.append(" ") output.append(char) output.append(" ") else: output.append(char) return "".join(output) def _is_chinese_char(self, cp): """Checks whether CP is the codepoint of a CJK character.""" # This defines a "chinese character" as anything in the CJK Unicode block: # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) # # Note that the CJK Unicode block is NOT all Japanese and Korean characters, # despite its name. The modern Korean Hangul alphabet is a different block, # as is Japanese Hiragana and Katakana. Those alphabets are used to write # space-separated words, so they are not treated specially and handled # like the all of the other languages. if ( (cp >= 0x4E00 and cp <= 0x9FFF) or (cp >= 0x3400 and cp <= 0x4DBF) # or (cp >= 0x20000 and cp <= 0x2A6DF) # or (cp >= 0x2A700 and cp <= 0x2B73F) # or (cp >= 0x2B740 and cp <= 0x2B81F) # or (cp >= 0x2B820 and cp <= 0x2CEAF) # or (cp >= 0xF900 and cp <= 0xFAFF) or (cp >= 0x2F800 and cp <= 0x2FA1F) # ): # return True return False def _clean_text(self, text): """Performs invalid character removal and whitespace cleanup on text.""" output = [] for char in text: cp = ord(char) if cp == 0 or cp == 0xFFFD or _is_control(char): continue if _is_whitespace(char): output.append(" ") else: output.append(char) return "".join(output) class HerbertTokenizer(PreTrainedTokenizer): """ Construct a BPE tokenizer for HerBERT. Peculiarities: - uses BERT's pre-tokenizer: BaseTokenizer splits tokens on spaces, and also on punctuation. Each occurrence of a punctuation character will be treated separately. - Such pretokenized input is BPE subtokenized This tokenizer inherits from [`XLMTokenizer`] which contains most of the methods. Users should refer to the superclass for more information regarding methods. 
""" vocab_files_names = VOCAB_FILES_NAMES def __init__( self, vocab_file, merges_file, tokenizer_file=None, cls_token="<s>", unk_token="<unk>", pad_token="<pad>", mask_token="<mask>", sep_token="</s>", bos_token="<s>", do_lowercase_and_remove_accent=False, additional_special_tokens=[ "<special0>", "<special1>", "<special2>", "<special3>", "<special4>", "<special5>", "<special6>", "<special7>", "<special8>", "<special9>", ], lang2id=None, id2lang=None, **kwargs, ): try: import sacremoses except ImportError: raise ImportError( "You need to install sacremoses to use HerbertTokenizer. " "See https://pypi.org/project/sacremoses/ for installation." ) self.sm = sacremoses # cache of sm.MosesPunctNormalizer instance self.cache_moses_punct_normalizer = {} # cache of sm.MosesTokenizer instance self.cache_moses_tokenizer = {} self.lang_with_custom_tokenizer = {"zh", "th", "ja"} # True for current supported model (v1.2.0), False for XLM-17 & 100 self.do_lowercase_and_remove_accent = do_lowercase_and_remove_accent self.lang2id = lang2id self.id2lang = id2lang if lang2id is not None and id2lang is not None: assert len(lang2id) == len(id2lang) self.ja_word_tokenizer = None self.zh_word_tokenizer = None with open(vocab_file, encoding="utf-8") as vocab_handle: self.encoder = json.load(vocab_handle) self.decoder = {v: k for k, v in self.encoder.items()} with open(merges_file, encoding="utf-8") as merges_handle: merges = merges_handle.read().split("\n")[:-1] merges = [tuple(merge.split()[:2]) for merge in merges] self.bpe_ranks = dict(zip(merges, range(len(merges)))) self.cache = {} super().__init__( unk_token=unk_token, bos_token=bos_token, sep_token=sep_token, pad_token=pad_token, cls_token=cls_token, mask_token=mask_token, additional_special_tokens=additional_special_tokens, lang2id=lang2id, id2lang=id2lang, do_lowercase_and_remove_accent=do_lowercase_and_remove_accent, tokenizer_file=None, **kwargs, ) self.bert_pre_tokenizer = BasicTokenizer( do_lower_case=False, never_split=self.all_special_tokens, tokenize_chinese_chars=False, strip_accents=False, ) @property # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.do_lower_case def do_lower_case(self): return self.do_lowercase_and_remove_accent # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.moses_punct_norm def moses_punct_norm(self, text, lang): if lang not in self.cache_moses_punct_normalizer: punct_normalizer = self.sm.MosesPunctNormalizer(lang=lang) self.cache_moses_punct_normalizer[lang] = punct_normalizer else: punct_normalizer = self.cache_moses_punct_normalizer[lang] return punct_normalizer.normalize(text) # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.moses_tokenize def moses_tokenize(self, text, lang): if lang not in self.cache_moses_tokenizer: moses_tokenizer = self.sm.MosesTokenizer(lang=lang) self.cache_moses_tokenizer[lang] = moses_tokenizer else: moses_tokenizer = self.cache_moses_tokenizer[lang] return moses_tokenizer.tokenize(text, return_str=False, escape=False) # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.moses_pipeline def moses_pipeline(self, text, lang): text = replace_unicode_punct(text) text = self.moses_punct_norm(text, lang) text = remove_non_printing_char(text) return text # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.ja_tokenize def ja_tokenize(self, text): if self.ja_word_tokenizer is None: try: import Mykytea self.ja_word_tokenizer = Mykytea.Mykytea( f"-model {os.path.expanduser('~')}/local/share/kytea/model.bin" ) except 
(AttributeError, ImportError): logger.error( "Make sure you install KyTea (https://github.com/neubig/kytea) and it's python wrapper" " (https://github.com/chezou/Mykytea-python) with the following steps" ) logger.error("1. git clone [email protected]:neubig/kytea.git && cd kytea") logger.error("2. autoreconf -i") logger.error("3. ./configure --prefix=$HOME/local") logger.error("4. make && make install") logger.error("5. pip install kytea") raise return list(self.ja_word_tokenizer.getWS(text)) @property # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.vocab_size def vocab_size(self): return len(self.encoder) # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.get_vocab def get_vocab(self): return dict(self.encoder, **self.added_tokens_encoder) # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.bpe def bpe(self, token): word = tuple(token[:-1]) + (token[-1] + "</w>",) if token in self.cache: return self.cache[token] pairs = get_pairs(word) if not pairs: return token + "</w>" while True: bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) if bigram not in self.bpe_ranks: break first, second = bigram new_word = [] i = 0 while i < len(word): try: j = word.index(first, i) except ValueError: new_word.extend(word[i:]) break else: new_word.extend(word[i:j]) i = j if word[i] == first and i < len(word) - 1 and word[i + 1] == second: new_word.append(first + second) i += 2 else: new_word.append(word[i]) i += 1 new_word = tuple(new_word) word = new_word if len(word) == 1: break else: pairs = get_pairs(word) word = " ".join(word) if word == "\n </w>": word = "\n</w>" self.cache[token] = word return word def _tokenize(self, text): pre_tokens = self.bert_pre_tokenizer.tokenize(text) split_tokens = [] for token in pre_tokens: if token: split_tokens.extend(list(self.bpe(token).split(" "))) return split_tokens # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer._convert_token_to_id def _convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" return self.encoder.get(token, self.encoder.get(self.unk_token)) # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer._convert_id_to_token def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" return self.decoder.get(index, self.unk_token) # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.convert_tokens_to_string def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" out_string = "".join(tokens).replace("</w>", " ").strip() return out_string # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.build_inputs_with_special_tokens def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLM sequence has the following format: - single sequence: `<s> X </s>` - pair of sequences: `<s> A </s> B </s>` Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. 
""" bos = [self.bos_token_id] sep = [self.sep_token_id] if token_ids_1 is None: return bos + token_ids_0 + sep return bos + token_ids_0 + sep + token_ids_1 + sep # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.get_special_tokens_mask def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) if token_ids_1 is not None: return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] return [1] + ([0] * len(token_ids_0)) + [1] # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.create_token_type_ids_from_sequences def create_token_type_ids_from_sequences( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence pair mask has the following format: ``` 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | ``` If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s). Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). """ sep = [self.sep_token_id] cls = [self.cls_token_id] if token_ids_1 is None: return len(cls + token_ids_0 + sep) * [0] return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1] # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.save_vocabulary def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] ) merge_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] ) with open(vocab_file, "w", encoding="utf-8") as f: f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n") index = 0 with open(merge_file, "w", encoding="utf-8") as writer: for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): if index != token_index: logger.warning( f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive." " Please check that the tokenizer is not corrupted!" 
) index = token_index writer.write(" ".join(bpe_tokens) + "\n") index += 1 return vocab_file, merge_file # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.__getstate__ def __getstate__(self): state = self.__dict__.copy() state["sm"] = None return state # Copied from transformers.models.xlm.tokenization_xlm.XLMTokenizer.__setstate__ def __setstate__(self, d): self.__dict__ = d try: import sacremoses except ImportError: raise ImportError( "You need to install sacremoses to use XLMTokenizer. " "See https://pypi.org/project/sacremoses/ for installation." ) self.sm = sacremoses
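# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of tokenization_herbert.py above): a
# minimal example of how the HerbertTokenizer defined above is typically
# exercised -- BERT-style pre-tokenization followed by BPE sub-tokenization.
# Assumptions: the "allegro/herbert-base-cased" checkpoint name and an
# installed `sacremoses` package (required by the tokenizer's __init__ above).
if __name__ == "__main__":
    from transformers import HerbertTokenizer

    tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-base-cased")

    # BasicTokenizer splits on whitespace and punctuation, then each pre-token
    # is BPE-encoded with `</w>` marking the end of a word (see _tokenize above).
    tokens = tokenizer.tokenize("Kto ma lepszą sztukę, ma lepszy rząd.")
    print(tokens)

    # __call__ adds the special tokens: `<s>` + BPE ids + `</s>`
    # (see build_inputs_with_special_tokens above).
    print(tokenizer("Kto ma lepszą sztukę, ma lepszy rząd.")["input_ids"])

    # convert_tokens_to_string strips the `</w>` end-of-word markers again.
    print(tokenizer.convert_tokens_to_string(tokens))
# ---------------------------------------------------------------------------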
transformers/src/transformers/models/herbert/tokenization_herbert.py/0
{ "file_path": "transformers/src/transformers/models/herbert/tokenization_herbert.py", "repo_id": "transformers", "token_count": 11460 }
351
# coding=utf-8 # Copyright 2021 The I-BERT Authors (Sehoon Kim, Amir Gholami, Zhewei Yao, # Michael Mahoney, Kurt Keutzer - UC Berkeley) and The HuggingFace Inc. team. # Copyright (c) 20121, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import decimal import numpy as np import torch from torch import nn from torch.autograd import Function from ...utils import logging logger = logging.get_logger(__name__) class QuantEmbedding(nn.Module): """ Quantized version of `torch.nn.Embedding`. Adds quantization-specific arguments on top of `torch.nn.Embedding`. Args: weight_bit (`int`, *optional*, defaults to `8`): Bitwidth for the quantized weight. momentum (`float`, *optional*, defaults to `0.95`): Momentum for updating the activation quantization range. quant_mode (`bool`, *optional*, defaults to `False`): Whether or not the layer is quantized. """ def __init__( self, num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, weight_bit=8, momentum=0.95, quant_mode=False, ): super().__init__() self.num_ = num_embeddings self.dim = embedding_dim self.padding_idx = padding_idx self.max_norm = max_norm self.norm_type = norm_type self.scale_grad_by_freq = scale_grad_by_freq self.sparse = sparse self.weight = nn.Parameter(torch.zeros([num_embeddings, embedding_dim])) self.register_buffer("weight_scaling_factor", torch.zeros(1)) self.register_buffer("weight_integer", torch.zeros_like(self.weight)) self.weight_bit = weight_bit self.momentum = momentum self.quant_mode = quant_mode self.percentile_mode = False self.weight_function = SymmetricQuantFunction.apply def forward(self, x, positions=None, incremental_state=None): if not self.quant_mode: return ( nn.functional.embedding( x, self.weight, self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq, self.sparse, ), None, ) w = self.weight w_transform = w.data.detach() w_min = w_transform.min().expand(1) w_max = w_transform.max().expand(1) self.weight_scaling_factor = symmetric_linear_quantization_params(self.weight_bit, w_min, w_max, False) self.weight_integer = self.weight_function( self.weight, self.weight_bit, self.percentile_mode, self.weight_scaling_factor ) emb_int = nn.functional.embedding( x, self.weight_integer, self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq, self.sparse, ) return emb_int * self.weight_scaling_factor, self.weight_scaling_factor class QuantAct(nn.Module): """ Quantizes the given activation. Args: activation_bit (`int`): Bitwidth for the quantized activation. act_range_momentum (`float`, *optional*, defaults to `0.95`): Momentum for updating the activation quantization range. per_channel (`bool`, *optional*, defaults to `False`): Whether to or not use channel-wise quantization. channel_len (`int`, *optional*): Specify the channel length when set the *per_channel* True. quant_mode (`bool`, *optional*, defaults to `False`): Whether or not the layer is quantized. 
""" def __init__(self, activation_bit, act_range_momentum=0.95, per_channel=False, channel_len=None, quant_mode=False): super().__init__() self.activation_bit = activation_bit self.act_range_momentum = act_range_momentum self.quant_mode = quant_mode self.per_channel = per_channel self.percentile = False self.act_function = SymmetricQuantFunction.apply if not self.per_channel: self.register_buffer("x_min", torch.zeros(1)) self.register_buffer("x_max", torch.zeros(1)) self.register_buffer("act_scaling_factor", torch.zeros(1)) self.x_min -= 1e-5 self.x_max += 1e-5 else: raise NotImplementedError("per-channel mode is not currently supported for activation.") def __repr__(self): return ( f"{self.__class__.__name__}(activation_bit={self.activation_bit}, " f"quant_mode: {self.quant_mode}, Act_min: {self.x_min.item():.2f}, " f"Act_max: {self.x_max.item():.2f})" ) def forward( self, x, pre_act_scaling_factor=None, identity=None, identity_scaling_factor=None, specified_min=None, specified_max=None, ): x_act = x if identity is None else identity + x # collect running stats if training if self.training: assert not self.percentile, "percentile mode is not currently supported for activation." assert not self.per_channel, "per-channel mode is not currently supported for activation." x_min = x_act.data.min() x_max = x_act.data.max() assert ( x_max.isnan().sum() == 0 and x_min.isnan().sum() == 0 ), "NaN detected when computing min/max of the activation" # Initialization if self.x_min.min() > -1.1e-5 and self.x_max.max() < 1.1e-5: self.x_min = self.x_min + x_min self.x_max = self.x_max + x_max # exponential moving average (EMA) # use momentum to prevent the quantized values change greatly every iteration elif self.act_range_momentum == -1: self.x_min = torch.min(self.x_min, x_min) self.x_max = torch.max(self.x_max, x_max) else: self.x_min = self.x_min * self.act_range_momentum + x_min * (1 - self.act_range_momentum) self.x_max = self.x_max * self.act_range_momentum + x_max * (1 - self.act_range_momentum) if not self.quant_mode: return x_act, None x_min = self.x_min if specified_min is None else specified_min x_max = self.x_max if specified_max is None else specified_max self.act_scaling_factor = symmetric_linear_quantization_params( self.activation_bit, x_min, x_max, per_channel=self.per_channel ) if pre_act_scaling_factor is None: # this is for the input quantization quant_act_int = self.act_function(x, self.activation_bit, self.percentile, self.act_scaling_factor) else: quant_act_int = FixedPointMul.apply( x, pre_act_scaling_factor, self.activation_bit, self.act_scaling_factor, identity, identity_scaling_factor, ) correct_output_scale = self.act_scaling_factor.view(-1) return quant_act_int * correct_output_scale, self.act_scaling_factor class QuantLinear(nn.Module): """ Quantized version of `torch.nn.Linear`. Adds quantization-specific arguments on top of `torch.nn.Linear`. Args: weight_bit (`int`, *optional*, defaults to `8`): Bitwidth for the quantized weight. bias_bit (`int`, *optional*, defaults to `32`): Bitwidth for the quantized bias. per_channel (`bool`, *optional*, defaults to `False`): Whether or not to use channel-wise quantization. quant_mode (`bool`, *optional*, defaults to `False`): Whether or not the layer is quantized. 
""" def __init__( self, in_features, out_features, bias=True, weight_bit=8, bias_bit=32, per_channel=False, quant_mode=False ): super().__init__() self.in_features = in_features self.out_features = out_features self.weight = nn.Parameter(torch.zeros([out_features, in_features])) self.register_buffer("weight_integer", torch.zeros_like(self.weight)) self.register_buffer("fc_scaling_factor", torch.zeros(self.out_features)) if bias: self.bias = nn.Parameter(torch.zeros(out_features)) self.register_buffer("bias_integer", torch.zeros_like(self.bias)) self.weight_bit = weight_bit self.quant_mode = quant_mode self.per_channel = per_channel self.bias_bit = bias_bit self.quant_mode = quant_mode self.percentile_mode = False self.weight_function = SymmetricQuantFunction.apply def __repr__(self): s = super().__repr__() s = f"({s} weight_bit={self.weight_bit}, quant_mode={self.quant_mode})" return s def forward(self, x, prev_act_scaling_factor=None): if not self.quant_mode: return nn.functional.linear(x, weight=self.weight, bias=self.bias), None # assert that prev_act_scaling_factor is a scalar tensor assert prev_act_scaling_factor is not None and prev_act_scaling_factor.shape == (1,), ( "Input activation to the QuantLinear layer should be globally (non-channel-wise) quantized. " "Please add a QuantAct layer with `per_channel = True` before this QuantAct layer" ) w = self.weight w_transform = w.data.detach() if self.per_channel: w_min, _ = torch.min(w_transform, dim=1, out=None) w_max, _ = torch.max(w_transform, dim=1, out=None) else: w_min = w_transform.min().expand(1) w_max = w_transform.max().expand(1) self.fc_scaling_factor = symmetric_linear_quantization_params(self.weight_bit, w_min, w_max, self.per_channel) self.weight_integer = self.weight_function( self.weight, self.weight_bit, self.percentile_mode, self.fc_scaling_factor ) bias_scaling_factor = self.fc_scaling_factor * prev_act_scaling_factor if self.bias is not None: self.bias_integer = self.weight_function(self.bias, self.bias_bit, False, bias_scaling_factor) prev_act_scaling_factor = prev_act_scaling_factor.view(1, -1) x_int = x / prev_act_scaling_factor return ( nn.functional.linear(x_int, weight=self.weight_integer, bias=self.bias_integer) * bias_scaling_factor, bias_scaling_factor, ) class IntGELU(nn.Module): """ Quantized version of `torch.nn.GELU`. Adds quantization-specific arguments on top of `torch.nn.GELU`. Args: quant_mode (`bool`, *optional*, defaults to `False`): Whether or not the layer is quantized. force_dequant (`str`, *optional*, defaults to `"none"`): Force dequantize the layer if either "gelu" or "nonlinear" is given. 
""" def __init__(self, quant_mode=True, force_dequant="none"): super().__init__() self.quant_mode = quant_mode if force_dequant in ["nonlinear", "gelu"]: logger.info("Force dequantize gelu") self.quant_mode = False if not self.quant_mode: self.activation_fn = nn.GELU() self.k = 1.4142 self.const = 14 # dummy integer constant self.coeff = [-0.2888, -1.769, 1] # a(x+b)**2 + c self.coeff[2] /= self.coeff[0] def int_erf(self, x_int, scaling_factor): b_int = torch.floor(self.coeff[1] / scaling_factor) c_int = torch.floor(self.coeff[2] / scaling_factor**2) sign = torch.sign(x_int) abs_int = torch.min(torch.abs(x_int), -b_int) y_int = sign * ((abs_int + b_int) ** 2 + c_int) scaling_factor = scaling_factor**2 * self.coeff[0] # avoid overflow y_int = floor_ste.apply(y_int / 2**self.const) scaling_factor = scaling_factor * 2**self.const return y_int, scaling_factor def forward(self, x, scaling_factor=None): if not self.quant_mode: return self.activation_fn(x), None x_int = x / scaling_factor sigmoid_int, sigmoid_scaling_factor = self.int_erf(x_int, scaling_factor / self.k) shift_int = 1.0 // sigmoid_scaling_factor x_int = x_int * (sigmoid_int + shift_int) scaling_factor = scaling_factor * sigmoid_scaling_factor / 2 return x_int * scaling_factor, scaling_factor class IntSoftmax(nn.Module): """ Quantized version of `torch.nn.Softmax`. Adds quantization-specific arguments on top of `torch.nn.Softmax`. Args: output_bit (`int`): Bitwidth for the layer output activation. quant_mode (`bool`, *optional*, defaults to `False`): Whether or not the layer is quantized. force_dequant (`str`, *optional*, defaults to `"none"`): Force dequantize the layer if either "softmax" or "nonlinear" is given. """ def __init__(self, output_bit, quant_mode=False, force_dequant="none"): super().__init__() self.output_bit = output_bit self.max_bit = 32 self.quant_mode = quant_mode if force_dequant in ["nonlinear", "softmax"]: logger.info("Force dequantize softmax") self.quant_mode = False self.act = QuantAct(16, quant_mode=self.quant_mode) self.x0 = -0.6931 # -ln2 self.const = 30 # dummy integer constant self.coef = [0.35815147, 0.96963238, 1.0] # ax**2 + bx + c self.coef[1] /= self.coef[0] self.coef[2] /= self.coef[0] def int_polynomial(self, x_int, scaling_factor): with torch.no_grad(): b_int = torch.floor(self.coef[1] / scaling_factor) c_int = torch.floor(self.coef[2] / scaling_factor**2) z = (x_int + b_int) * x_int + c_int scaling_factor = self.coef[0] * scaling_factor**2 return z, scaling_factor def int_exp(self, x_int, scaling_factor): with torch.no_grad(): x0_int = torch.floor(self.x0 / scaling_factor) x_int = torch.max(x_int, self.const * x0_int) q = floor_ste.apply(x_int / x0_int) r = x_int - x0_int * q exp_int, exp_scaling_factor = self.int_polynomial(r, scaling_factor) exp_int = torch.clamp(floor_ste.apply(exp_int * 2 ** (self.const - q)), min=0) scaling_factor = exp_scaling_factor / 2**self.const return exp_int, scaling_factor def forward(self, x, scaling_factor): if not self.quant_mode: return nn.functional.softmax(x, dim=-1), None x_int = x / scaling_factor x_int_max, _ = x_int.max(dim=-1, keepdim=True) x_int = x_int - x_int_max exp_int, exp_scaling_factor = self.int_exp(x_int, scaling_factor) # Avoid overflow exp, exp_scaling_factor = self.act(exp_int, exp_scaling_factor) exp_int = exp / exp_scaling_factor exp_int_sum = exp_int.sum(dim=-1, keepdim=True) factor = floor_ste.apply(2**self.max_bit / exp_int_sum) exp_int = floor_ste.apply(exp_int * factor / 2 ** (self.max_bit - self.output_bit)) scaling_factor = 1 / 
2**self.output_bit return exp_int * scaling_factor, scaling_factor class IntLayerNorm(nn.Module): """ Quantized version of `torch.nn.LayerNorm`. Adds quantization-specific arguments on top of `torch.nn.LayerNorm`. Args: output_bit (`int`, *optional*, defaults to `8`): Bitwidth for the layer output activation. quant_mode (`bool`, *optional*, defaults to `False`): Whether or not the layer is quantized. force_dequant (`str`, *optional*, defaults to `"none"`): Force dequantize the layer if either "layernorm" or "nonlinear" is given. """ def __init__(self, normalized_shape, eps, output_bit=8, quant_mode=False, force_dequant="none"): super().__init__() self.normalized_shape = normalized_shape self.eps = eps self.weight = nn.Parameter(torch.zeros(normalized_shape)) self.bias = nn.Parameter(torch.zeros(normalized_shape)) self.quant_mode = quant_mode if force_dequant in ["nonlinear", "layernorm"]: logger.info("Force dequantize layernorm") self.quant_mode = False self.register_buffer("shift", torch.zeros(1)) self.output_bit = output_bit self.max_bit = 32 self.dim_sqrt = None self.activation = QuantAct(self.output_bit, quant_mode=self.quant_mode) def set_shift(self, y_int): with torch.no_grad(): y_sq_int = y_int**2 var_int = torch.sum(y_sq_int, axis=2, keepdim=True) shift = (torch.log2(torch.sqrt(var_int / 2**self.max_bit)).ceil()).max() shift_old = self.shift self.shift = torch.max(self.shift, shift) logger.info(f"Dynamic shift adjustment: {int(shift_old)} -> {int(self.shift)}") def overflow_fallback(self, y_int): """ This fallback function is called when overflow is detected during training time, and adjusts the `self.shift` to avoid overflow in the subsequent runs. """ self.set_shift(y_int) # adjusts `self.shift` y_int_shifted = floor_ste.apply(y_int / 2**self.shift) y_sq_int = y_int_shifted**2 var_int = torch.sum(y_sq_int, axis=2, keepdim=True) return var_int def forward(self, x, scaling_factor=None): if not self.quant_mode: mean = x.mean(axis=2, keepdim=True) y = x - mean var = torch.mean(y**2, axis=2, keepdim=True) x = y / torch.sqrt(self.eps + var) x = x * self.weight + self.bias return x, None # compute sqrt of the feature dimension if it is the first run if self.dim_sqrt is None: n = torch.tensor(x.shape[2], dtype=torch.float) self.dim_sqrt = torch.sqrt(n).to(x.device) # Normalization: computes mean and variance(std) x_int = x / scaling_factor mean_int = round_ste.apply(x_int.mean(axis=2, keepdim=True)) y_int = x_int - mean_int y_int_shifted = floor_ste.apply(y_int / 2**self.shift) y_sq_int = y_int_shifted**2 var_int = torch.sum(y_sq_int, axis=2, keepdim=True) # overflow handling in training time if self.training: # if overflow is detected if var_int.max() >= 2**self.max_bit: var_int = self.overflow_fallback(y_int) assert var_int.max() < 2**self.max_bit + 0.1, ( "Error detected in overflow handling: " "`var_int` exceeds `self.max_bit` (the maximum possible bit width)" ) # To be replaced with integer-sqrt kernel that produces the same output std_int = floor_ste.apply(torch.sqrt(var_int)) * 2**self.shift factor = floor_ste.apply(2**31 / std_int) y_int = floor_ste.apply(y_int * factor / 2) scaling_factor = self.dim_sqrt / 2**30 # scaling and shifting bias = self.bias.data.detach() / (self.weight.data.detach()) bias_int = floor_ste.apply(bias / scaling_factor) y_int = y_int + bias_int scaling_factor = scaling_factor * self.weight x = y_int * scaling_factor return x, scaling_factor def get_percentile_min_max(input, lower_percentile, upper_percentile, output_tensor=False): """ Calculate the 
percentile max and min values in a given tensor Args: input (`torch.Tensor`): The target tensor to calculate percentile max and min. lower_percentile (`float`): If 0.1, means we return the value of the smallest 0.1% value in the tensor as percentile min. upper_percentile (`float`): If 99.9, means we return the value of the largest 0.1% value in the tensor as percentile max. output_tensor (`bool`, *optional*, defaults to `False`): If True, this function returns tensors, otherwise it returns values. Returns: `Tuple(torch.Tensor, torch.Tensor)`: Percentile min and max value of *input* """ input_length = input.shape[0] lower_index = round(input_length * (1 - lower_percentile * 0.01)) upper_index = round(input_length * upper_percentile * 0.01) upper_bound = torch.kthvalue(input, k=upper_index).values if lower_percentile == 0: lower_bound = upper_bound * 0 # lower_index += 1 else: lower_bound = -torch.kthvalue(-input, k=lower_index).values if not output_tensor: lower_bound = lower_bound.item() upper_bound = upper_bound.item() return lower_bound, upper_bound def linear_quantize(input, scale, zero_point, inplace=False): """ Quantize single-precision input tensor to integers with the given scaling factor and zeropoint. Args: input (`torch.Tensor`): Single-precision input tensor to be quantized. scale (`torch.Tensor`): Scaling factor for quantization. zero_pint (`torch.Tensor`): Shift for quantization. inplace (`bool`, *optional*, defaults to `False`): Whether to compute inplace or not. Returns: `torch.Tensor`: Linearly quantized value of *input* according to *scale* and *zero_point*. """ # reshape scale and zeropoint for convolutional weights and activation if len(input.shape) == 4: scale = scale.view(-1, 1, 1, 1) zero_point = zero_point.view(-1, 1, 1, 1) # reshape scale and zeropoint for linear weights elif len(input.shape) == 2: scale = scale.view(-1, 1) zero_point = zero_point.view(-1, 1) else: scale = scale.view(-1) zero_point = zero_point.view(-1) # quantized = float / scale + zero_point if inplace: input.mul_(1.0 / scale).add_(zero_point).round_() return input return torch.round(1.0 / scale * input + zero_point) def symmetric_linear_quantization_params(num_bits, saturation_min, saturation_max, per_channel=False): """ Compute the scaling factor with the given quantization range for symmetric quantization. Args: saturation_min (`torch.Tensor`): Lower bound for quantization range. saturation_max (`torch.Tensor`): Upper bound for quantization range. per_channel (`bool`, *optional*, defaults to `False`): Whether to or not use channel-wise quantization. Returns: `torch.Tensor`: Scaling factor that linearly quantizes the given range between *saturation_min* and *saturation_max*. """ # in this part, we do not need any gradient computation, # in order to enforce this, we put torch.no_grad() with torch.no_grad(): n = 2 ** (num_bits - 1) - 1 if per_channel: scale, _ = torch.max(torch.stack([saturation_min.abs(), saturation_max.abs()], dim=1), dim=1) scale = torch.clamp(scale, min=1e-8) / n else: scale = max(saturation_min.abs(), saturation_max.abs()) scale = torch.clamp(scale, min=1e-8) / n return scale class SymmetricQuantFunction(Function): """ Class to quantize the given floating-point values using symmetric quantization with given range and bitwidth. """ @staticmethod def forward(ctx, x, k, percentile_mode, scale): """ Args: x (`torch.Tensor`): Floating point tensor to be quantized. k (`int`): Quantization bitwidth. percentile_mode (`bool`): Whether or not to use percentile calibration. 
scale (`torch.Tensor`): Pre-calculated scaling factor for *x*. Note that the current implementation of SymmetricQuantFunction requires pre-calculated scaling factor. Returns: `torch.Tensor`: Symmetric-quantized value of *input*. """ zero_point = torch.tensor(0.0).to(scale.device) n = 2 ** (k - 1) - 1 new_quant_x = linear_quantize(x, scale, zero_point, inplace=False) new_quant_x = torch.clamp(new_quant_x, -n, n - 1) ctx.scale = scale return new_quant_x @staticmethod def backward(ctx, grad_output): scale = ctx.scale if len(grad_output.shape) == 4: scale = scale.view(-1, 1, 1, 1) # reshape scale and zeropoint for linear weights elif len(grad_output.shape) == 2: scale = scale.view(-1, 1) else: scale = scale.view(-1) return grad_output.clone() / scale, None, None, None, None class floor_ste(Function): """ Straight-through Estimator(STE) for torch.floor() """ @staticmethod def forward(ctx, x): return torch.floor(x) @staticmethod def backward(ctx, grad_output): return grad_output.clone() class round_ste(Function): """ Straight-through Estimator(STE) for torch.round() """ @staticmethod def forward(ctx, x): return torch.round(x) @staticmethod def backward(ctx, grad_output): return grad_output.clone() def batch_frexp(inputs, max_bit=31): """ Decompose the scaling factor into mantissa and twos exponent. Args: scaling_factor (`torch.Tensor`): Target scaling factor to decompose. Returns: ``Tuple(torch.Tensor, torch.Tensor)`: mantisa and exponent """ shape_of_input = inputs.size() # trans the input to be a 1-d tensor inputs = inputs.view(-1) output_m, output_e = np.frexp(inputs.cpu().numpy()) tmp_m = [] for m in output_m: int_m_shifted = int( decimal.Decimal(m * (2**max_bit)).quantize(decimal.Decimal("1"), rounding=decimal.ROUND_HALF_UP) ) tmp_m.append(int_m_shifted) output_m = np.array(tmp_m) output_e = float(max_bit) - output_e return ( torch.from_numpy(output_m).to(inputs.device).view(shape_of_input), torch.from_numpy(output_e).to(inputs.device).view(shape_of_input), ) class FixedPointMul(Function): """ Function to perform fixed-point arithmetic that can match integer arithmetic on hardware. Args: pre_act (`torch.Tensor`): Input tensor. pre_act_scaling_factor (`torch.Tensor`): Scaling factor of the input tensor *pre_act*. bit_num (`int`): Quantization bitwidth. z_scaling_factor (`torch.Tensor`): Scaling factor of the output tensor. identity (`torch.Tensor`, *optional*): Identity tensor, if exists. identity_scaling_factor (`torch.Tensor`, *optional*): Scaling factor of the identity tensor *identity*, if exists. Returns: `torch.Tensor`: Output tensor(*pre_act* if *identity* is not given, otherwise the addition of *pre_act* and *identity*), whose scale is rescaled to *z_scaling_factor*. 
""" @staticmethod def forward( ctx, pre_act, pre_act_scaling_factor, bit_num, z_scaling_factor, identity=None, identity_scaling_factor=None, ): if len(pre_act_scaling_factor.shape) == 3: reshape = lambda x: x # noqa: E731 else: reshape = lambda x: x.view(1, 1, -1) # noqa: E731 ctx.identity = identity n = 2 ** (bit_num - 1) - 1 with torch.no_grad(): pre_act_scaling_factor = reshape(pre_act_scaling_factor) if identity is not None: identity_scaling_factor = reshape(identity_scaling_factor) ctx.z_scaling_factor = z_scaling_factor z_int = torch.round(pre_act / pre_act_scaling_factor) _A = pre_act_scaling_factor.type(torch.double) _B = (z_scaling_factor.type(torch.float)).type(torch.double) new_scale = _A / _B new_scale = reshape(new_scale) m, e = batch_frexp(new_scale) output = z_int.type(torch.double) * m.type(torch.double) output = torch.round(output / (2.0**e)) if identity is not None: # needs addition of identity activation wx_int = torch.round(identity / identity_scaling_factor) _A = identity_scaling_factor.type(torch.double) _B = (z_scaling_factor.type(torch.float)).type(torch.double) new_scale = _A / _B new_scale = reshape(new_scale) m1, e1 = batch_frexp(new_scale) output1 = wx_int.type(torch.double) * m1.type(torch.double) output1 = torch.round(output1 / (2.0**e1)) output = output1 + output return torch.clamp(output.type(torch.float), -n - 1, n) @staticmethod def backward(ctx, grad_output): identity_grad = None if ctx.identity is not None: identity_grad = grad_output.clone() / ctx.z_scaling_factor return grad_output.clone() / ctx.z_scaling_factor, None, None, None, None, identity_grad, None
transformers/src/transformers/models/ibert/quant_modules.py/0
{ "file_path": "transformers/src/transformers/models/ibert/quant_modules.py", "repo_id": "transformers", "token_count": 13544 }
352
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Processor class for IDEFICS2. """ from typing import TYPE_CHECKING, List, Optional, Union from ...feature_extraction_utils import BatchFeature from ...image_utils import ImageInput, is_valid_image, load_image from ...processing_utils import ProcessorMixin from ...tokenization_utils_base import AddedToken, BatchEncoding, PaddingStrategy, TextInput, TruncationStrategy from ...utils import TensorType, logging if TYPE_CHECKING: from ...tokenization_utils_base import PreTokenizedInput logger = logging.get_logger(__name__) def is_url(val) -> bool: return isinstance(val, str) and val.startswith("http") def is_image_or_image_url(elem): return is_url(elem) or is_valid_image(elem) class Idefics2Processor(ProcessorMixin): r""" Constructs a IDEFICS2 processor which wraps a LLama tokenizer and IDEFICS2 image processor into a single processor. [`IdeficsProcessor`] offers all the functionalities of [`Idefics2ImageProcessor`] and [`LlamaTokenizerFast`]. See the docstring of [`~IdeficsProcessor.__call__`] and [`~IdeficsProcessor.decode`] for more information. Args: image_processor (`Idefics2ImageProcessor`): An instance of [`Idefics2ImageProcessor`]. The image processor is a required input. tokenizer (`PreTrainedTokenizerBase`, *optional*): An instance of [`PreTrainedTokenizerBase`]. This should correspond with the model's text model. The tokenizer is a required input. image_seq_len (`int`, *optional*, defaults to 64): The length of the image sequence i.e. the number of <image> tokens per image in the input. This parameter is used to build the string from the input prompt and image tokens and should match the config.perceiver_config.resampler_n_latents value for the model used. chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string. 
""" attributes = ["image_processor", "tokenizer"] valid_kwargs = ["image_seq_len", "chat_template"] image_processor_class = "Idefics2ImageProcessor" tokenizer_class = "AutoTokenizer" def __init__(self, image_processor, tokenizer=None, image_seq_len: int = 64, chat_template: str = None, **kwargs): if image_processor is None: raise ValueError("You need to specify an `image_processor`.") if tokenizer is None: raise ValueError("You need to specify a `tokenizer`.") self.fake_image_token = AddedToken("<fake_token_around_image>", normalized=False, special=True) self.image_token = AddedToken("<image>", normalized=False, special=True) self.end_of_utterance_token = AddedToken("<end_of_utterance>", normalized=False, special=True) self.image_seq_len = image_seq_len tokens_to_add = { "additional_special_tokens": [self.fake_image_token, self.image_token, self.end_of_utterance_token] } tokenizer.add_special_tokens(tokens_to_add) super().__init__(image_processor, tokenizer, chat_template=chat_template) def _extract_images_from_prompts(self, prompts): prompt_images = [] for prompt in prompts: images = [] for elem in prompt: if is_valid_image(elem): images.append(elem) elif is_url(elem): images.append(load_image(elem)) prompt_images.append(images) return prompt_images def __call__( self, text: Union[TextInput, "PreTokenizedInput", List[TextInput], List["PreTokenizedInput"]] = None, images: Union[ImageInput, List[ImageInput], List[List[ImageInput]]] = None, image_seq_len: Optional[int] = None, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, is_split_into_words: bool = False, add_special_tokens: bool = True, return_tensors: Optional[Union[str, TensorType]] = None, ) -> BatchEncoding: """ Processes the input prompts and returns a BatchEncoding. Example: ```python >>> import requests >>> from transformers import Idefics2Processor >>> from transformers.image_utils import load_image >>> processor = Idefics2Processor.from_pretrained("HuggingFaceM4/idefics2-8b", image_seq_len=2) >>> processor.image_processor.do_image_splitting = False # Force as False to simplify the example >>> url1 = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" >>> url2 = "https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg" >>> image1, image2 = load_image(url1), load_image(url2) >>> images = [[image1], [image2]] >>> text = [ ... "<image>In this image, we see", ... "bla bla bla<image>", ... ] >>> outputs = processor(text=text, images=images, return_tensors="pt", padding=True) >>> input_ids = outputs.input_ids >>> input_tokens = processor.tokenizer.batch_decode(input_ids) >>> print(input_tokens) ['<s><fake_token_around_image><image><image><fake_token_around_image> In this image, we see', '<s> bla bla bla<fake_token_around_image><image><image><fake_token_around_image>'] ``` Args: text (`Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]]`, *optional*): The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences). Wherever an image token, `<image>` is encountered it is expanded to `<fake_token_around_image>` + `<image>` * `image_seq_len` * <fake_token_around_image>`. 
images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`, *optional*): The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch tensor. If is of type `List[ImageInput]`, it's assumed that this is for a single prompt i.e. of batch size 1. image_seq_len (`int`, *optional*): The length of the image sequence. If not provided, the default value is used. padding (`Union[bool, str, PaddingStrategy]`, *optional*, defaults to `False`): Padding strategy applied to the input ids. See [`PreTrainedTokenizerFast.pad`] for more information. truncation (`Union[bool, str, TruncationStrategy]`, *optional*): Truncation strategy applied to the input ids. See [`PreTrainedTokenizerFast.truncate`] for more information. max_length (`int`, *optional*): Maximum length of the returned list and optionally padding/truncation length. See [`PreTrainedTokenizerFast.__call__`] for more information. is_split_into_words (`bool`, *optional*, defaults to `False`): Whether the input text is split into words or not. If set to `True`, the tokenizer will skip the tokenization process and assume the input is already tokenized. add_special_tokens (`bool`, *optional*, defaults to `True`): Whether to add special tokens or not. See [`PreTrainedTokenizerFast.__call__`] for more information. return_tensors (`Union[str, TensorType]`, *optional*): If set, will return tensors of a particular framework. See [`PreTrainedTokenizerFast.__call__`] for more information. """ image_seq_len = image_seq_len if image_seq_len is not None else self.image_seq_len n_images_in_text = [] inputs = BatchFeature() if text is not None: if isinstance(text, str): text = [text] elif not isinstance(text, list) and not isinstance(text[0], str): raise ValueError("Invalid input text. Please provide a string, or a list of strings") # Replace the image token with fake tokens around the expanded image token sequence of length `image_seq_len` fake_image_token = self.fake_image_token.content image_token = self.image_token.content image_str = f"{fake_image_token}{image_token * image_seq_len}{fake_image_token}" if self.image_processor.do_image_splitting: # A single image token is split into 4 patches + 1 original image image_str = image_str * 5 prompt_strings = [] for sample in text: n_images_in_text.append(sample.count(image_token)) sample = sample.replace(image_token, image_str) # Remove any double fake tokens if images are adjacent sample = sample.replace(f"{fake_image_token}{fake_image_token}", f"{fake_image_token}") prompt_strings.append(sample) text_inputs = self.tokenizer( text=prompt_strings, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, is_split_into_words=is_split_into_words, return_tensors=return_tensors, ) inputs.update(text_inputs) if images is not None: if is_image_or_image_url(images): images = [[images]] elif isinstance(images, list) and is_image_or_image_url(images[0]): images = [images] elif ( not isinstance(images, list) and not isinstance(images[0], list) and not is_image_or_image_url(images[0][0]) ): raise ValueError( "Invalid input images. Please provide a single image or a list of images or a list of list of images." ) n_images_in_images = [len(sample) for sample in images] if text is not None and not n_images_in_images == n_images_in_text: raise ValueError( f"The number of images in the text {n_images_in_text} and images {n_images_in_images} should be the same." 
) # Load images if they are URLs images = [[load_image(im) for im in sample] for sample in images] image_inputs = self.image_processor(images, return_tensors=return_tensors) inputs.update(image_inputs) return inputs def batch_decode(self, *args, **kwargs): """ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please refer to the docstring of this method for more information. """ return self.tokenizer.batch_decode(*args, **kwargs) def decode(self, *args, **kwargs): """ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to the docstring of this method for more information. """ return self.tokenizer.decode(*args, **kwargs) @property def model_input_names(self): tokenizer_input_names = self.tokenizer.model_input_names image_processor_input_names = self.image_processor.model_input_names return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
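# ---------------------------------------------------------------------------
# Illustrative sketch (not part of processing_idefics2.py above): the prompt
# expansion performed in Idefics2Processor.__call__ above, reproduced with
# plain strings so the token accounting is easy to follow. The values of
# `image_seq_len` and `do_image_splitting` below are arbitrary demo choices
# (the processor default for `image_seq_len` is 64).
if __name__ == "__main__":
    fake_image_token = "<fake_token_around_image>"
    image_token = "<image>"
    image_seq_len = 2
    do_image_splitting = False  # when True, the expanded block is repeated 5 times

    image_str = f"{fake_image_token}{image_token * image_seq_len}{fake_image_token}"
    if do_image_splitting:
        image_str = image_str * 5  # 4 crops + the original image

    sample = "<image>In this image, we see"
    sample = sample.replace(image_token, image_str)
    # Adjacent images would otherwise produce doubled fake tokens; collapse them.
    sample = sample.replace(f"{fake_image_token}{fake_image_token}", fake_image_token)
    print(sample)
    # -> <fake_token_around_image><image><image><fake_token_around_image>In this image, we see
# ---------------------------------------------------------------------------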
transformers/src/transformers/models/idefics2/processing_idefics2.py/0
{ "file_path": "transformers/src/transformers/models/idefics2/processing_idefics2.py", "repo_id": "transformers", "token_count": 5069 }
353
# coding=utf-8 # Copyright 2023 Microsoft Research and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Processor class for KOSMOS-2.""" import copy import math import re from typing import List, Optional, Tuple, Union from ...image_processing_utils import BatchFeature from ...image_utils import ImageInput, is_batched from ...processing_utils import ProcessorMixin from ...tokenization_utils import AddedToken from ...tokenization_utils_base import BatchEncoding, PaddingStrategy, TextInput, TruncationStrategy from ...utils import TensorType BboxInput = Union[ List[Tuple[int, int]], List[Tuple[float, float, float, float]], List[List[Tuple[int, int]]], List[List[Tuple[float, float, float]]], ] class Kosmos2Processor(ProcessorMixin): r""" Constructs an KOSMOS-2 processor which wraps a KOSMOS-2 image processor and a KOSMOS-2 tokenizer into a single processor. [`Kosmos2Processor`] offers all the functionalities of [`CLIPImageProcessor`] and some functionalities of [`XLMRobertaTokenizerFast`]. See the docstring of [`~Kosmos2Processor.__call__`] and [`~Kosmos2Processor.decode`] for more information. Args: image_processor (`CLIPImageProcessor`): An instance of [`CLIPImageProcessor`]. The image processor is a required input. tokenizer (`XLMRobertaTokenizerFast`): An instance of ['XLMRobertaTokenizerFast`]. The tokenizer is a required input. num_patch_index_tokens (`int`, *optional*, defaults to 1024): The number of tokens that represent patch indices. 
""" attributes = ["image_processor", "tokenizer"] valid_kwargs = ["num_patch_index_tokens"] image_processor_class = "CLIPImageProcessor" tokenizer_class = ("XLMRobertaTokenizer", "XLMRobertaTokenizerFast") def __init__(self, image_processor, tokenizer, num_patch_index_tokens=1024, *kwargs): tokenizer.return_token_type_ids = False self.eod_token = "</doc>" self.boi_token = "<image>" self.eoi_token = "</image>" self.eoc_token = "</chunk>" self.eol_token = "</line>" self.bop_token = "<phrase>" self.eop_token = "</phrase>" self.boo_token = "<object>" self.eoo_token = "</object>" self.dom_token = "</delimiter_of_multi_objects/>" self.grd_token = "<grounding>" self.tag_tokens = [ self.eod_token, self.boi_token, self.eoi_token, self.eoc_token, self.eol_token, self.bop_token, self.eop_token, self.boo_token, self.eoo_token, self.dom_token, self.grd_token, ] self.num_patch_index_tokens = num_patch_index_tokens patch_index_tokens = [f"<patch_index_{str(x).zfill(4)}>" for x in range(self.num_patch_index_tokens)] tokens_to_add = [] for token in self.tag_tokens + patch_index_tokens: tokens_to_add.append(AddedToken(token, lstrip=True, rstrip=False, normalized=False)) tokenizer.add_tokens(tokens_to_add) super().__init__(image_processor, tokenizer) def __call__( self, images: ImageInput = None, text: Union[TextInput, List[TextInput]] = None, bboxes: BboxInput = None, num_image_tokens: Optional[int] = 64, first_image_token_id: Optional[int] = None, add_special_tokens: bool = True, add_eos_token: bool = False, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_length: bool = False, verbose: bool = True, return_tensors: Optional[Union[str, TensorType]] = None, **kwargs, ) -> BatchFeature: """ This method uses [`CLIPImageProcessor.__call__`] method to prepare image(s) for the model, and [`XLMRobertaTokenizerFast.__call__`] to prepare text for the model. Please refer to the docstring of the above two methods for more information. The rest of this documentation shows the arguments specific to `Kosmos2Processor`. Args: bboxes (`Union[List[Tuple[int]], List[Tuple[float]], List[List[Tuple[int]]], List[List[Tuple[float]]]]`, *optional*): The bounding bboxes associated to `texts`. num_image_tokens (`int`, *optional* defaults to 64): The number of (consecutive) places that are used to mark the placeholders to store image information. This should be the same as `latent_query_num` in the instance of `Kosmos2Config` you are using. first_image_token_id (`int`, *optional*): The token id that will be used for the first place of the subsequence that is reserved to store image information. If unset, will default to `self.tokenizer.unk_token_id + 1`. add_eos_token (`bool`, defaults to `False`): Whether or not to include `EOS` token id in the encoding when `add_special_tokens=True`. 
""" if images is None and text is None: raise ValueError("You have to specify either images or text.") encoding = BatchFeature() if images is not None: image_encoding = self.image_processor(images, return_tensors=return_tensors) encoding.update(image_encoding) if text is not None: text = self.preprocess_examples(text, images, bboxes, num_image_tokens=num_image_tokens) if add_special_tokens and not add_eos_token: if isinstance(text, str): text = f"{self.tokenizer.bos_token}{text}" elif isinstance(text, list): text = [f"{self.tokenizer.bos_token}{s}" for s in text] text_encoding = self.tokenizer( text=text, add_special_tokens=(add_special_tokens and add_eos_token), padding=padding and images is None, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of if images is None else pad_to_multiple_of, return_attention_mask=return_attention_mask, verbose=verbose, return_tensors=return_tensors if images is None else None, **kwargs, ) encoding.update(text_encoding) if text is not None and images is not None: # Use the id of the first token after <unk> if first_image_token_id is None: first_image_token_id = self.tokenizer.unk_token_id + 1 # To see if we need one more `0` (for `<s>`) at the beginning of `image_embeds_position_mask`. with_bos = add_special_tokens # The first (actual) `<image>` token is always at the 1st or 2nd place (after `<s>` if any). Here we look # for the second `<image>` token (which indicate the first image token). start_index = int(with_bos) + 1 # Add `image_embeds_position_mask`: the leading and trailing `0` are for `boi` and `eoi` tokens. The `1` indicates # the places of image tokens. image_token_ids = list(range(first_image_token_id, first_image_token_id + num_image_tokens)) base_image_embeds_position_mask = [0] + [1] * num_image_tokens + [0] # loop over `encoding["input_ids"]` input_ids = [] image_embeds_position_mask = [] all_input_ids = encoding["input_ids"] # not batched -> (changed to) batch of size 1 if isinstance(text, str): all_input_ids = [all_input_ids] encoding["attention_mask"] = [encoding["attention_mask"]] for text_ids in all_input_ids: # change the ids for the fake `<image>` tokens in `input_ids` text_ids = text_ids[:start_index] + image_token_ids + text_ids[start_index + num_image_tokens :] input_ids.append(text_ids) mask = copy.copy(base_image_embeds_position_mask) if with_bos: # for `<s>` mask = [0] + mask # trailing part (which are not related to the image) mask += [0] * (len(text_ids) - len(mask)) image_embeds_position_mask.append(mask) if isinstance(text, list): sorted_length = sorted( [(idx, len(x)) for idx, x in enumerate(text_encoding.input_ids)], key=lambda x: x[-1] ) _, min_len_not_padded = sorted_length[0] idx, _ = sorted_length[-1] text_encoding = self.tokenizer( text=[text[idx]], add_special_tokens=(add_special_tokens and add_eos_token), padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, return_tensors=None, **kwargs, ) max_len_padded = len(text_encoding.input_ids[0]) if min_len_not_padded != max_len_padded: if self.tokenizer.padding_side == "right": input_ids = [x + [self.tokenizer.pad_token_id] * (max_len_padded - len(x)) for x in input_ids] image_embeds_position_mask = [ x + [0] * (max_len_padded - len(x)) for x in image_embeds_position_mask ] encoding["attention_mask"] = [ x + [0] * (max_len_padded - len(x)) for x in encoding["attention_mask"] ] elif self.tokenizer.padding_side == "left": input_ids = [[self.tokenizer.pad_token_id] * 
(max_len_padded - len(x)) + x for x in input_ids] image_embeds_position_mask = [ [0] * (max_len_padded - len(x)) + x for x in image_embeds_position_mask ] encoding["attention_mask"] = [ [0] * (max_len_padded - len(x)) + x for x in encoding["attention_mask"] ] # un-batch if necessary if isinstance(text, str) and return_tensors is None: input_ids = input_ids[0] encoding["attention_mask"] = encoding["attention_mask"][0] image_embeds_position_mask = image_embeds_position_mask[0] # update (with the target tensor type if specified) encoding.update( BatchEncoding( data={ "input_ids": input_ids, "attention_mask": encoding["attention_mask"], "image_embeds_position_mask": image_embeds_position_mask, }, tensor_type=return_tensors, ) ) return encoding def _check_bboxes_for_single_text(self, bboxes): """ Check `bboxes` for a single text example. It could be - `None`: no bounding box associated to a text. - A list with each element being the bounding boxes associated to one `<phrase> ... </phrase>` pair found in a text. This could be: - `None`: no bounding box associated to a `<phrase> ... </phrase>` pair. - A tuple of 2 integers: A single bounding box specified by patch indices. - A tuple of 4 float point number: A single bounding box specified by (normalized) coordinates. - A list containing the above 2 tuple types: Multiple bounding boxes for a `<phrase> ... </phrase>` pair. """ if bboxes is None: return elif not isinstance(bboxes, list): raise ValueError("`bboxes` (for a single text example) should be `None` or a list.") # `bbox` is the bounding boxes for a single <phrase> </phrase> pair for bbox in bboxes: if bbox is None: continue elif not isinstance(bbox, list): bbox = [bbox] for element in bbox: if not isinstance(element, tuple) or not ( (len(element) == 2 and all(isinstance(x, int) for x in element)) or (len(element) == 4 and all(isinstance(x, float) for x in element)) ): raise ValueError( "Each element in `bboxes` (for a single text example) should be either `None`, a tuple containing " "2 integers or 4 float point numbers, or a list containing such tuples. Also " "make sure the arguments `texts` and `bboxes` passed to `preprocess_text` are both in " "batches or both for a single example." ) def _preprocess_single_example(self, text, image, bboxes, img_info_tokens): text = text.strip() if image is not None: # Add `<image> ... (fake) image tokens ... </image>` text = f"{img_info_tokens} {text}" # Add `<object> <patch_idx_xxxx> <patch_idx_yyy> </object>` after `<phrase> phrase text </phrase>` text = self._insert_patch_index_tokens(text, bboxes) return text def preprocess_examples( self, texts: Union[TextInput, List[TextInput]], images: ImageInput = None, bboxes: BboxInput = None, num_image_tokens: Optional[int] = 64, ) -> Union[str, List[str]]: """Add image and bounding box information to `texts` as image and patch index tokens. Args: texts (`Union[TextInput, List[TextInput]]`): The texts to be processed. images (`ImageInput`, *optional*): The images associated to `texts`. bboxes (`Union[List[Tuple[int]], List[Tuple[float]], List[List[Tuple[int]]], List[List[Tuple[float]]]]`, *optional*): The bounding bboxes associated to `texts`. num_image_tokens (`int`, *optional*, defaults to 64): The number of image tokens (used as latent queries). This should corresponds to the `latent_query_num` attribute in `Kosmos2Config`. Returns: `Union[TextInput, List[TextInput]]`: The processed texts with image and patch index tokens. 
""" # These are fake `<image>` tokens enclosed between (the actual) `<image>` token and `</image>`. img_tokens = [self.boi_token] * num_image_tokens img_info_tokens = " ".join([self.boi_token] + img_tokens + [self.eoi_token]) # make batch to simplify processing logic batched = True if isinstance(texts, str): batched = False texts = [texts] if images is None: images = [None] * len(texts) elif not is_batched(images): images = [images] if len(texts) != len(images): raise ValueError( f"The number of examples in `texts` and `images` should be the same. Got {len(texts)} v.s. {len(images)} instead." ) if not batched: self._check_bboxes_for_single_text(bboxes) bboxes = [bboxes] elif bboxes is not None: if not isinstance(bboxes, list): raise ValueError("`bboxes` should be `None` or a list (as a batch) when `texts` is passed as a batch.") for x in bboxes: self._check_bboxes_for_single_text(x) else: bboxes = [None] * len(texts) if len(bboxes) != len(texts): raise ValueError( f"The number of examples in `texts` and `bboxes` should be the same. Got {len(texts)} v.s. {len(bboxes)} instead." ) result = [ self._preprocess_single_example(text, image, bbox, img_info_tokens) for text, image, bbox in zip(texts, images, bboxes) ] # un-batch if necessary if not batched: result = result[0] return result # Copied from transformers.models.blip.processing_blip.BlipProcessor.batch_decode with BertTokenizerFast->PreTrainedTokenizer def batch_decode(self, *args, **kwargs): """ This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please refer to the docstring of this method for more information. """ return self.tokenizer.batch_decode(*args, **kwargs) # Copied from transformers.models.blip.processing_blip.BlipProcessor.decode with BertTokenizerFast->PreTrainedTokenizer def decode(self, *args, **kwargs): """ This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to the docstring of this method for more information. """ return self.tokenizer.decode(*args, **kwargs) def post_process_generation(self, text, cleanup_and_extract=True): caption = text.split(self.eoi_token)[-1] if cleanup_and_extract: return clean_text_and_extract_entities_with_bboxes(caption) return caption @property # Copied from transformers.models.blip.processing_blip.BlipProcessor.model_input_names def model_input_names(self): tokenizer_input_names = self.tokenizer.model_input_names image_processor_input_names = self.image_processor.model_input_names return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names)) def _insert_patch_index_tokens(self, text: str, bboxes: Union[List[Tuple[int]], List[Tuple[float]]]) -> str: if bboxes is None or len(bboxes) == 0: return text matched_phrases = list(re.finditer(r"<phrase>.+?</phrase>", string=text)) if len(matched_phrases) != len(bboxes): raise ValueError( f"The number of elements in `bboxes` should be the same as the number of `<phrase> ... </phrase>` pairs in `text`. Got {len(matched_phrases)} v.s. {len(bboxes)} instead." ) # insert object's patch index tokens # the found `<phrase> ... </phrase>` pairs. 
curr_pos = 0 buffer = [] for matched, bbox in zip(matched_phrases, bboxes): _, end = matched.span() buffer.append(text[curr_pos:end]) curr_pos = end # A phrase without bbox if bbox is None: continue # A phrase with a single bbox if isinstance(bbox, tuple): bbox = [bbox] patch_index_strings = [] # A phrase could have multiple bboxes if not all(box is not None for box in bbox): raise ValueError( "The multiple bounding boxes for a single phrase should not contain any `None` value." ) for box in bbox: patch_index_1, patch_index_2 = self._convert_bbox_to_patch_index_tokens(box) patch_index_strings.append(f"{patch_index_1} {patch_index_2}") # `bbox` being an empty list if len(patch_index_strings) == 0: continue position_str = " </delimiter_of_multi_objects/> ".join(patch_index_strings) buffer.append(f"<object> {position_str} </object>") # remaining if curr_pos < len(text): buffer.append(text[curr_pos:]) text = "".join(buffer) return text def _convert_bbox_to_patch_index_tokens( self, bbox: Union[Tuple[int, int], Tuple[float, float, float, float]] ) -> Tuple[str, str]: # already computed patch indices if len(bbox) == 2: idx_1, idx_2 = bbox # bbox specified with (normalized) coordinates else: # use `self.tokenizer` to get `num_patches_per_side` num_patches_per_side = int(math.sqrt(self.num_patch_index_tokens)) idx_1, idx_2 = coordinate_to_patch_index(bbox, num_patches_per_side) token_1 = f"<patch_index_{str(idx_1).zfill(4)}>" token_2 = f"<patch_index_{str(idx_2).zfill(4)}>" return token_1, token_2 def coordinate_to_patch_index(bbox: Tuple[float, float, float, float], num_patches_per_side: int) -> Tuple[int, int]: """Convert a bounding box to a pair of patch indices. Args: bbox (`Tuple[float, float, float, float]`): The 4 coordinates of the bounding box, with the format being (x1, y1, x2, y2) specifying the upper-left and lower-right corners of the box. It should have x2 > x1 and y2 > y1. num_patches_per_side (`int`): the number of patches along each side. Returns: `Tuple[int, int]`: A pair of patch indices representing the upper-left patch and lower-right patch. """ (x1, y1, x2, y2) = bbox if not (x2 > x1 and y2 > y1): raise ValueError("The coordinates in `bbox` should be `(x1, y1, x2, y2)` with `x2 > x1` and `y2 > y1`.") ul_x = math.floor(x1 * num_patches_per_side) ul_y = math.floor(y1 * num_patches_per_side) lr_x = math.ceil(x2 * num_patches_per_side - 1) lr_y = math.ceil(y2 * num_patches_per_side - 1) ul_idx = ul_y * num_patches_per_side + ul_x lr_idx = lr_y * num_patches_per_side + lr_x return ul_idx, lr_idx # copied from https://github.com/microsoft/unilm/blob/97e4923e97d3ee10b57e97013556e3fd0d207a9b/kosmos-2/demo/decode_string.py#L35C1-L75C38 # (with format modifications) def patch_index_to_coordinate(ul_idx: int, lr_idx: int, num_patches_per_side: int): """ Given a grid of length `num_patches_per_side` and the indices of the upper-left and lower-right corners of a bounding box, returns the normalized coordinates of the bounding box, in the form (x1, y1, x2, y2). Args: ul_idx (`int`): the index of the grid cell that corresponds to the upper-left corner of the bounding box. lr_idx (`int`): the index of the grid cell that corresponds to the lower-right corner of the bounding box. num_patches_per_side (`int`): the number of patches along each side. Returns: `Tuple[float]`: the normalized coordinates of the bounding box, in the form (x1, y1, x2, y2). 
""" # Compute the size of each cell in the grid cell_size = 1.0 / num_patches_per_side # Compute the x and y indices of the upper-left and lower-right corners of the bounding box ul_x = ul_idx % num_patches_per_side ul_y = ul_idx // num_patches_per_side lr_x = lr_idx % num_patches_per_side lr_y = lr_idx // num_patches_per_side # Compute the normalized coordinates of the bounding box if ul_idx == lr_idx: x1 = ul_x * cell_size y1 = ul_y * cell_size x2 = lr_x * cell_size + cell_size y2 = lr_y * cell_size + cell_size elif ul_x == lr_x or ul_y == lr_y: x1 = ul_x * cell_size y1 = ul_y * cell_size x2 = lr_x * cell_size + cell_size y2 = lr_y * cell_size + cell_size else: x1 = ul_x * cell_size + cell_size / 2 y1 = ul_y * cell_size + cell_size / 2 x2 = lr_x * cell_size + cell_size / 2 y2 = lr_y * cell_size + cell_size / 2 return x1, y1, x2, y2 # copied from https://github.com/microsoft/unilm/blob/97e4923e97d3ee10b57e97013556e3fd0d207a9b/kosmos-2/demo/decode_string.py#L4-L33 # (with format modifications) def extract_entities_with_patch_indices(text): """Extract entities contained in `text`. The bounding bboxes is given in the form of patch indices. This functioin is only intended to be used within `clean_text_and_extract_entities_with_bboxes` where further processing happens, including converting to normalized coordinates and whitespace character cleaning up. Examples: ```python >>> text = "<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>." >>> entities = extract_entities_with_patch_indices(text) >>> entities [(' a snowman', (31, 41), [(44, 863)]), (' a fire', (130, 137), [(5, 911)])] ```""" # The regular expression pattern for matching the required formats pattern = r"(?:(<phrase>([^<]+)</phrase>))?<object>((?:<patch_index_\d+><patch_index_\d+></delimiter_of_multi_objects/>)*<patch_index_\d+><patch_index_\d+>)</object>" # Find all matches in the given string matches = re.finditer(pattern, text) # Initialize an empty list to store the valid patch_index combinations entities_with_patch_indices = [] for match in matches: # span of a `phrase` that is between <phrase> and </phrase> span = match.span(2) phrase_tag, phrase, match_content = match.groups() if not phrase_tag: phrase = None # We take the starting position of `<object>` span = (match.span(0)[0], match.span(0)[0]) # Split the match_content by the delimiter to get individual patch_index pairs patch_index_pairs = match_content.split("</delimiter_of_multi_objects/>") entity_bboxes = [] for pair in patch_index_pairs: # Extract the xxxx and yyyy values from the patch_index pair x = re.search(r"<patch_index_(\d+)>", pair) y = re.search(r"<patch_index_(\d+)>", pair[1:]) if x and y: if phrase: entity_bboxes.append((int(x.group(1)), int(y.group(1)))) else: entity_bboxes.append((int(x.group(1)), int(y.group(1)))) if phrase: entities_with_patch_indices.append((phrase, span, entity_bboxes)) else: for bbox in entity_bboxes: # fake entity name entity = f"<patch_index_{bbox[0]}><patch_index_{bbox[1]}>" entities_with_patch_indices.append((entity, span, [bbox])) return entities_with_patch_indices def adjust_entity_positions(entity, text): """Adjust the positions of the entities in `text` to be relative to the text with special fields removed.""" entity_name, (start, end) = entity # computed the length of strings with special fields (tag tokens, patch index tokens, etc.) 
removed adjusted_start = len(re.sub("<.*?>", "", text[:start])) adjusted_end = len(re.sub("<.*?>", "", text[:end])) adjusted_entity = (entity_name, (adjusted_start, adjusted_end)) return adjusted_entity def _cleanup_spaces(text, entities): """Remove the spaces around the text and the entities in it.""" new_text = text.strip() leading_spaces = len(text) - len(text.lstrip()) new_entities = [] for entity_name, (start, end), bboxes in entities: entity_name_leading_spaces = len(entity_name) - len(entity_name.lstrip()) entity_name_trailing_spaces = len(entity_name) - len(entity_name.rstrip()) start = start - leading_spaces + entity_name_leading_spaces end = end - leading_spaces - entity_name_trailing_spaces entity_name = entity_name.strip() new_entities.append((entity_name, (start, end), bboxes)) return new_text, new_entities # copied from https://github.com/microsoft/unilm/blob/97e4923e97d3ee10b57e97013556e3fd0d207a9b/kosmos-2/demo/decode_string.py#L77-L87 # (with format modifications) def clean_text_and_extract_entities_with_bboxes(text, num_patches_per_side=32): """Remove the tag tokens from `text`, extract entities in it with some cleaning up of white characters. Examples: ```python >>> text = "<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>." >>> clean_text, entities = clean_text_and_extract_entities_with_bboxes(text) >>> clean_text 'An image of a snowman warming himself by a fire.' >>> entities [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])] ```""" # remove special fields (tag tokens, patch index tokens, etc.) processed_text = re.sub("<.*?>", "", text) entities_with_patch_indices = extract_entities_with_patch_indices(text) entities = [] for item in entities_with_patch_indices: entity, bboxes = item[0:2], item[2] adjusted_entity = adjust_entity_positions(entity, text) bboxes_in_coords = [patch_index_to_coordinate(bbox[0], bbox[1], num_patches_per_side) for bbox in bboxes] entities.append(adjusted_entity + (bboxes_in_coords,)) return _cleanup_spaces(processed_text, entities)
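

# A minimal sketch of how the two grid helpers above could be exercised, assuming the
# default 32x32 patch grid (i.e. `num_patch_index_tokens=1024`, so 1024 = 32 * 32 patch
# index tokens). The bounding box values are made up purely for illustration; the reverse
# mapping is only approximate because it snaps to patch-cell centers.
if __name__ == "__main__":
    example_bbox = (0.25, 0.25, 0.75, 0.75)  # normalized (x1, y1, x2, y2)
    ul_idx, lr_idx = coordinate_to_patch_index(example_bbox, num_patches_per_side=32)
    print(ul_idx, lr_idx)  # 264 759 -> flat row-major indices of the upper-left / lower-right patches
    recovered = patch_index_to_coordinate(ul_idx, lr_idx, num_patches_per_side=32)
    print(recovered)  # ~(0.2656, 0.2656, 0.7344, 0.7344), close to the original box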
transformers/src/transformers/models/kosmos2/processing_kosmos2.py/0
{ "file_path": "transformers/src/transformers/models/kosmos2/processing_kosmos2.py", "repo_id": "transformers", "token_count": 13380 }
354
# coding=utf-8 # Copyright 2022 Microsoft Research and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """LayoutLMv3 model configuration""" from collections import OrderedDict from typing import TYPE_CHECKING, Any, Mapping, Optional from packaging import version from ...configuration_utils import PretrainedConfig from ...onnx import OnnxConfig from ...onnx.utils import compute_effective_axis_dimension from ...utils import logging if TYPE_CHECKING: from ...processing_utils import ProcessorMixin from ...utils import TensorType logger = logging.get_logger(__name__) class LayoutLMv3Config(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`LayoutLMv3Model`]. It is used to instantiate an LayoutLMv3 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the LayoutLMv3 [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 50265): Vocabulary size of the LayoutLMv3 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`LayoutLMv3Model`]. hidden_size (`int`, *optional*, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`LayoutLMv3Model`]. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. 
layer_norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon used by the layer normalization layers. max_2d_position_embeddings (`int`, *optional*, defaults to 1024): The maximum value that the 2D position embedding might ever be used with. Typically set this to something large just in case (e.g., 1024). coordinate_size (`int`, *optional*, defaults to `128`): Dimension of the coordinate embeddings. shape_size (`int`, *optional*, defaults to `128`): Dimension of the width and height embeddings. has_relative_attention_bias (`bool`, *optional*, defaults to `True`): Whether or not to use a relative attention bias in the self-attention mechanism. rel_pos_bins (`int`, *optional*, defaults to 32): The number of relative position bins to be used in the self-attention mechanism. max_rel_pos (`int`, *optional*, defaults to 128): The maximum number of relative positions to be used in the self-attention mechanism. max_rel_2d_pos (`int`, *optional*, defaults to 256): The maximum number of relative 2D positions in the self-attention mechanism. rel_2d_pos_bins (`int`, *optional*, defaults to 64): The number of 2D relative position bins in the self-attention mechanism. has_spatial_attention_bias (`bool`, *optional*, defaults to `True`): Whether or not to use a spatial attention bias in the self-attention mechanism. visual_embed (`bool`, *optional*, defaults to `True`): Whether or not to add patch embeddings. input_size (`int`, *optional*, defaults to `224`): The size (resolution) of the images. num_channels (`int`, *optional*, defaults to `3`): The number of channels of the images. patch_size (`int`, *optional*, defaults to `16`) The size (resolution) of the patches. classifier_dropout (`float`, *optional*): The dropout ratio for the classification head. Example: ```python >>> from transformers import LayoutLMv3Config, LayoutLMv3Model >>> # Initializing a LayoutLMv3 microsoft/layoutlmv3-base style configuration >>> configuration = LayoutLMv3Config() >>> # Initializing a model (with random weights) from the microsoft/layoutlmv3-base style configuration >>> model = LayoutLMv3Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "layoutlmv3" def __init__( self, vocab_size=50265, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-5, pad_token_id=1, bos_token_id=0, eos_token_id=2, max_2d_position_embeddings=1024, coordinate_size=128, shape_size=128, has_relative_attention_bias=True, rel_pos_bins=32, max_rel_pos=128, rel_2d_pos_bins=64, max_rel_2d_pos=256, has_spatial_attention_bias=True, text_embed=True, visual_embed=True, input_size=224, num_channels=3, patch_size=16, classifier_dropout=None, **kwargs, ): super().__init__( vocab_size=vocab_size, hidden_size=hidden_size, num_hidden_layers=num_hidden_layers, num_attention_heads=num_attention_heads, intermediate_size=intermediate_size, hidden_act=hidden_act, hidden_dropout_prob=hidden_dropout_prob, attention_probs_dropout_prob=attention_probs_dropout_prob, max_position_embeddings=max_position_embeddings, type_vocab_size=type_vocab_size, initializer_range=initializer_range, layer_norm_eps=layer_norm_eps, pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs, ) self.max_2d_position_embeddings = max_2d_position_embeddings self.coordinate_size = coordinate_size 
self.shape_size = shape_size self.has_relative_attention_bias = has_relative_attention_bias self.rel_pos_bins = rel_pos_bins self.max_rel_pos = max_rel_pos self.has_spatial_attention_bias = has_spatial_attention_bias self.rel_2d_pos_bins = rel_2d_pos_bins self.max_rel_2d_pos = max_rel_2d_pos self.text_embed = text_embed self.visual_embed = visual_embed self.input_size = input_size self.num_channels = num_channels self.patch_size = patch_size self.classifier_dropout = classifier_dropout class LayoutLMv3OnnxConfig(OnnxConfig): torch_onnx_minimum_version = version.parse("1.12") @property def inputs(self) -> Mapping[str, Mapping[int, str]]: # The order of inputs is different for question answering and sequence classification if self.task in ["question-answering", "sequence-classification"]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"}), ] ) else: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("pixel_values", {0: "batch", 1: "num_channels"}), ] ) @property def atol_for_validation(self) -> float: return 1e-5 @property def default_onnx_opset(self) -> int: return 12 def generate_dummy_inputs( self, processor: "ProcessorMixin", batch_size: int = -1, seq_length: int = -1, is_pair: bool = False, framework: Optional["TensorType"] = None, num_channels: int = 3, image_width: int = 40, image_height: int = 40, ) -> Mapping[str, Any]: """ Generate inputs to provide to the ONNX exporter for the specific framework Args: processor ([`ProcessorMixin`]): The processor associated with this model configuration. batch_size (`int`, *optional*, defaults to -1): The batch size to export the model for (-1 means dynamic axis). seq_length (`int`, *optional*, defaults to -1): The sequence length to export the model for (-1 means dynamic axis). is_pair (`bool`, *optional*, defaults to `False`): Indicate if the input is a pair (sentence 1, sentence 2). framework (`TensorType`, *optional*, defaults to `None`): The framework (PyTorch or TensorFlow) that the processor will generate tensors for. num_channels (`int`, *optional*, defaults to 3): The number of channels of the generated images. image_width (`int`, *optional*, defaults to 40): The width of the generated images. image_height (`int`, *optional*, defaults to 40): The height of the generated images. 
Returns: Mapping[str, Any]: holding the kwargs to provide to the model's forward function """ # A dummy image is used so OCR should not be applied setattr(processor.image_processor, "apply_ocr", False) # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX batch_size = compute_effective_axis_dimension( batch_size, fixed_dimension=OnnxConfig.default_fixed_batch, num_token_to_add=0 ) # If dynamic axis (-1) we forward with a fixed dimension of 8 tokens to avoid optimizations made by ONNX token_to_add = processor.tokenizer.num_special_tokens_to_add(is_pair) seq_length = compute_effective_axis_dimension( seq_length, fixed_dimension=OnnxConfig.default_fixed_sequence, num_token_to_add=token_to_add ) # Generate dummy inputs according to compute batch and sequence dummy_text = [[" ".join([processor.tokenizer.unk_token]) * seq_length]] * batch_size # Generate dummy bounding boxes dummy_bboxes = [[[48, 84, 73, 128]]] * batch_size # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX # batch_size = compute_effective_axis_dimension(batch_size, fixed_dimension=OnnxConfig.default_fixed_batch) dummy_image = self._generate_dummy_images(batch_size, num_channels, image_height, image_width) inputs = dict( processor( dummy_image, text=dummy_text, boxes=dummy_bboxes, return_tensors=framework, ) ) return inputs
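

# A minimal sketch of how the ONNX export config above could be inspected, assuming the
# generic `OnnxConfig` task names (e.g. "question-answering") are accepted by the base class
# in your version of the library; everything else uses the documented defaults of
# `LayoutLMv3Config`.
if __name__ == "__main__":
    config = LayoutLMv3Config()
    onnx_config = LayoutLMv3OnnxConfig(config, task="question-answering")
    # for QA / sequence classification, `attention_mask` comes before `bbox` in the input order
    print(list(onnx_config.inputs.keys()))  # ['input_ids', 'attention_mask', 'bbox', 'pixel_values']
    print(onnx_config.default_onnx_opset)  # 12
    print(onnx_config.atol_for_validation)  # 1e-05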
transformers/src/transformers/models/layoutlmv3/configuration_layoutlmv3.py/0
{ "file_path": "transformers/src/transformers/models/layoutlmv3/configuration_layoutlmv3.py", "repo_id": "transformers", "token_count": 5416 }
355
# coding=utf-8 # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. # # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX # and OPT implementations in this library. It has been modified from its # original forms to accommodate minor architectural differences compared # to GPT-NeoX and OPT used by the Meta AI team that trained the model. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tokenization classes for LLaMA.""" import os from shutil import copyfile from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple import sentencepiece as spm from ...convert_slow_tokenizer import import_protobuf from ...tokenization_utils import AddedToken, PreTrainedTokenizer from ...utils import logging if TYPE_CHECKING: from ...tokenization_utils_base import TextInput logger = logging.get_logger(__name__) VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"} SPIECE_UNDERLINE = "▁" B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" # fmt: off DEFAULT_SYSTEM_PROMPT = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your \ answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure\ that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not \ correct. If you don't know the answer to a question, please don't share false information.""" # fmt: on class LlamaTokenizer(PreTrainedTokenizer): """ Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is no padding token in the original model. Args: vocab_file (`str`): Path to the vocabulary file. unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"<s>"`): The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"</s>"`): The end of sequence token. pad_token (`str` or `tokenizers.AddedToken`, *optional*): A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. sp_model_kwargs (`Dict[str, Any]`, `Optional`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. 
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. add_bos_token (`bool`, *optional*, defaults to `True`): Whether or not to add an `bos_token` at the start of sequences. add_eos_token (`bool`, *optional*, defaults to `False`): Whether or not to add an `eos_token` at the end of sequences. clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`): Whether or not to cleanup spaces after decoding, cleanup consists in removing potential artifacts like extra spaces. use_default_system_prompt (`bool`, *optional*, defaults to `False`): Whether or not the default system prompt for Llama should be used. spaces_between_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not to add spaces between special tokens. legacy (`bool`, *optional*): Whether or not the `legacy` behavior of the tokenizer should be used. Legacy is before the merge of #24622 and #25224 which includes fixes to properly handle tokens that appear after special tokens. Make sure to also set `from_slow` to `True`. A simple example: - `legacy=True`: ```python >>> from transformers import LlamaTokenizerFast >>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=True, from_slow=True) >>> tokenizer.encode("Hello <s>.") # 869 is '▁.' [1, 15043, 29871, 1, 869] ``` - `legacy=False`: ```python >>> from transformers import LlamaTokenizerFast >>> tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b", legacy=False, from_slow=True) >>> tokenizer.encode("Hello <s>.") # 29889 is '.' [1, 15043, 29871, 1, 29889] ``` Checkout the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details. add_prefix_space (`bool`, *optional*, defaults to `True`): Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. Again, this should be set with `from_slow=True` to make sure it's taken into account. """ vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] def __init__( self, vocab_file, unk_token="<unk>", bos_token="<s>", eos_token="</s>", pad_token=None, sp_model_kwargs: Optional[Dict[str, Any]] = None, add_bos_token=True, add_eos_token=False, clean_up_tokenization_spaces=False, use_default_system_prompt=False, spaces_between_special_tokens=False, legacy=None, add_prefix_space=True, **kwargs, ): self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs bos_token = AddedToken(bos_token, normalized=False, special=True) if isinstance(bos_token, str) else bos_token eos_token = AddedToken(eos_token, normalized=False, special=True) if isinstance(eos_token, str) else eos_token unk_token = AddedToken(unk_token, normalized=False, special=True) if isinstance(unk_token, str) else unk_token pad_token = AddedToken(pad_token, normalized=False, special=True) if isinstance(pad_token, str) else pad_token if legacy is None: logger.warning_once( f"You are using the default legacy behaviour of the {self.__class__}. This is" " expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you." " If you want to use the new behaviour, set `legacy=False`. 
This should only be set if you understand what it" " means, and thoroughly read the reason why this was added as explained in" " https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file" " you can ignore this message" ) legacy = True self.legacy = legacy self.vocab_file = vocab_file self.add_bos_token = add_bos_token self.add_eos_token = add_eos_token self.use_default_system_prompt = use_default_system_prompt self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False)) self.add_prefix_space = add_prefix_space super().__init__( bos_token=bos_token, eos_token=eos_token, unk_token=unk_token, pad_token=pad_token, add_bos_token=add_bos_token, add_eos_token=add_eos_token, sp_model_kwargs=self.sp_model_kwargs, clean_up_tokenization_spaces=clean_up_tokenization_spaces, use_default_system_prompt=use_default_system_prompt, spaces_between_special_tokens=spaces_between_special_tokens, legacy=legacy, add_prefix_space=add_prefix_space, **kwargs, ) @property def unk_token_length(self): return len(self.sp_model.encode(str(self.unk_token))) # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.get_spm_processor def get_spm_processor(self, from_slow=False): tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs) if self.legacy or from_slow: # no dependency on protobuf tokenizer.Load(self.vocab_file) return tokenizer with open(self.vocab_file, "rb") as f: sp_model = f.read() model_pb2 = import_protobuf(f"The new behaviour of {self.__class__.__name__} (with `self.legacy = False`)") model = model_pb2.ModelProto.FromString(sp_model) normalizer_spec = model_pb2.NormalizerSpec() normalizer_spec.add_dummy_prefix = False model.normalizer_spec.MergeFrom(normalizer_spec) sp_model = model.SerializeToString() tokenizer.LoadFromSerializedProto(sp_model) return tokenizer def __getstate__(self): state = self.__dict__.copy() state["sp_model"] = None state["sp_model_proto"] = self.sp_model.serialized_model_proto() return state def __setstate__(self, d): self.__dict__ = d self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.LoadFromSerializedProto(self.sp_model_proto) @property def vocab_size(self): """Returns vocab size""" return self.sp_model.get_piece_size() def get_vocab(self): """Returns vocab as a dict""" vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} vocab.update(self.added_tokens_encoder) return vocab # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.tokenize def tokenize(self, text: "TextInput", **kwargs) -> List[str]: """ Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the first token is special. """ if self.legacy or len(text) == 0: return super().tokenize(text, **kwargs) text = text.replace(SPIECE_UNDERLINE, " ") if self.add_prefix_space: text = SPIECE_UNDERLINE + text tokens = super().tokenize(text, **kwargs) if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE and tokens[1] in self.all_special_tokens: tokens = tokens[1:] return tokens # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer._tokenize def _tokenize(self, text, **kwargs): """ Returns a tokenized string. We de-activated the `add_dummy_prefix` option, thus the sentencepiece internals will always strip any SPIECE_UNDERLINE. For example: `self.sp_model.encode(f"{SPIECE_UNDERLINE}Hey", out_type = str)` will give `['H', 'e', 'y']` instead of `['▁He', 'y']`. Thus we always encode `f"{unk_token}text"` and strip the `unk_token`. 
Here is an example with `unk_token = "<unk>"` and `unk_token_length = 4`. `self.tokenizer.sp_model.encode("<unk> Hey", out_type = str)[4:]`. """ if self.legacy or not text.startswith((SPIECE_UNDERLINE, " ")): return self.sp_model.encode(text, out_type=str) # 1. Encode string + prefix ex: "<unk> Hey" tokens = self.sp_model.encode(self.unk_token + text, out_type=str) # 2. Remove self.unk_token from ['<','unk','>', '▁Hey'] return tokens[self.unk_token_length :] if len(tokens) >= self.unk_token_length else tokens def _convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" return self.sp_model.piece_to_id(token) def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" token = self.sp_model.IdToPiece(index) return token def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" # since we manually add the prefix space, we have to remove it when decoding if tokens[0].startswith(SPIECE_UNDERLINE) and self.add_prefix_space: tokens[0] = tokens[0][1:] current_sub_tokens = [] out_string = "" prev_is_special = False for i, token in enumerate(tokens): # make sure that special tokens are not decoded using sentencepiece model if token in self.all_special_tokens: if not prev_is_special and i != 0 and self.legacy: out_string += " " out_string += self.sp_model.decode(current_sub_tokens) + token prev_is_special = True current_sub_tokens = [] else: if prev_is_special and i == 1 and self.add_prefix_space and not token.startswith(SPIECE_UNDERLINE): out_string += " " current_sub_tokens.append(token) prev_is_special = False out_string += self.sp_model.decode(current_sub_tokens) return out_string def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]: """ Save the vocabulary and special tokens file to a directory. Args: save_directory (`str`): The directory in which to save the vocabulary. Returns: `Tuple(str)`: Paths to the files saved. """ if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return out_vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] ) if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file): copyfile(self.vocab_file, out_vocab_file) elif not os.path.isfile(self.vocab_file): with open(out_vocab_file, "wb") as fi: content_spiece_model = self.sp_model.serialized_model_proto() fi.write(content_spiece_model) return (out_vocab_file,) def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): bos_token_id = [self.bos_token_id] if self.add_bos_token else [] eos_token_id = [self.eos_token_id] if self.add_eos_token else [] output = bos_token_id + token_ids_0 + eos_token_id if token_ids_1 is not None: output = output + bos_token_id + token_ids_1 + eos_token_id return output def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. 
already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) bos_token_id = [1] if self.add_bos_token else [] eos_token_id = [1] if self.add_eos_token else [] if token_ids_1 is None: return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id return ( bos_token_id + ([0] * len(token_ids_0)) + eos_token_id + bos_token_id + ([0] * len(token_ids_1)) + eos_token_id ) def create_token_type_ids_from_sequences( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT sequence pair mask has the following format: ``` 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | ``` if token_ids_1 is None, only returns the first portion of the mask (0s). Args: token_ids_0 (`List[int]`): List of ids. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). """ bos_token_id = [self.bos_token_id] if self.add_bos_token else [] eos_token_id = [self.eos_token_id] if self.add_eos_token else [] output = [0] * len(bos_token_id + token_ids_0 + eos_token_id) if token_ids_1 is not None: output += [1] * len(bos_token_id + token_ids_1 + eos_token_id) return output
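

# A minimal sketch of how the special-token helpers above compose for a sequence pair. It
# assumes the "huggyllama/llama-7b" checkpoint referenced in the class docstring can be
# downloaded, and that its stock special tokens apply (bos_token_id=1, eos_token_id=2)
# together with the defaults `add_bos_token=True` and `add_eos_token=False`.
if __name__ == "__main__":
    tok = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")
    ids_a, ids_b = [10, 11], [20, 21]  # arbitrary token ids, for illustration only
    print(tok.build_inputs_with_special_tokens(ids_a, ids_b))  # [1, 10, 11, 1, 20, 21]
    print(tok.get_special_tokens_mask(ids_a, ids_b))  # [1, 0, 0, 1, 0, 0]
    print(tok.create_token_type_ids_from_sequences(ids_a, ids_b))  # [0, 0, 0, 1, 1, 1]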
transformers/src/transformers/models/llama/tokenization_llama.py/0
{ "file_path": "transformers/src/transformers/models/llama/tokenization_llama.py", "repo_id": "transformers", "token_count": 7901 }
356
# coding=utf-8 # Copyright 2024 HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from dataclasses import dataclass from typing import List, Optional, Tuple, Union import torch import torch.utils.checkpoint from torch import nn from transformers import PretrainedConfig from transformers.models.llava_next.modeling_llava_next import ( LlavaNextCausalLMOutputWithPast, LlavaNextForConditionalGeneration, LlavaNextMultiModalProjector, image_size_to_num_patches, ) from ...cache_utils import Cache from ...utils import ( logging, replace_return_docstrings, ) from ..auto import CONFIG_MAPPING logger = logging.get_logger(__name__) class LlavaNextVideoConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`LlavaNextVideoForConditionalGeneration`]. It is used to instantiate an Llava-NeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [llava-hf/LLaVA-NeXT-Video-7B-hf](https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf) model. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vision_config (`Union[AutoConfig, dict]`, *optional*, defaults to `CLIPVisionConfig`): The config object or dictionary of the vision backbone. text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `LlamaConfig`): The config object or dictionary of the text backbone. ignore_index (`int`, *optional*, defaults to -100): The ignore index for the loss function. video_token_index (`int`, *optional*, defaults to 32000): The video token index to encode the image prompt. image_token_index (`int`, *optional*, defaults to 32001): The image token index to encode the image prompt. spatial_pool_mode (`str`, *optional*, defaults to `"average"`): Pooling mode to use for videos. Can be "average", "max" or "conv". spatial_pool_stride (`int`, *optional*, defaults to 2): Stride used in the pooling layer for videos. image_seq_length (`int`, *optional*, defaults to 576): Sequence length of one image embedding. video_seq_length (`int`, *optional*, defaults to 288): Sequence length of one video embedding. projector_hidden_act (`str`, *optional*, defaults to `"gelu"`): The activation function used by the multimodal projector. vision_feature_select_strategy (`str`, *optional*, defaults to `"default"`): The feature selection strategy used to select the vision feature from the vision backbone. Can be one of `"default"` or `"full"`. If `"default"`, the CLS token is removed from the vision features. If `"full"`, the full vision features are used. vision_feature_layer (`int`, *optional*, defaults to -2): The index of the layer to select the vision feature. 
image_grid_pinpoints (`List`, *optional*, defaults to `[[336, 672], [672, 336], [672, 672], [1008, 336], [336, 1008]]`): A list of possible resolutions to use for processing high resolution images. Each item in the list should be a tuple or list of the form `(height, width)`. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether the model's input and output word embeddings should be tied. Example: ```python >>> from transformers import LlavaNextVideoForConditionalGeneration, LlavaNextVideoConfig, CLIPVisionConfig, LlamaConfig >>> # Initializing a CLIP-vision config >>> vision_config = CLIPVisionConfig() >>> # Initializing a Llama config >>> text_config = LlamaConfig() >>> configuration = LlavaNextVideoConfig(vision_config, text_config) >>> model = LlavaNextVideoForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "llava_next_video" is_composition = True def __init__( self, vision_config=None, text_config=None, ignore_index=-100, image_token_index=32001, projector_hidden_act="gelu", vision_feature_select_strategy="default", vision_feature_layer=-2, image_grid_pinpoints=None, tie_word_embeddings=False, video_token_index=32000, spatial_pool_mode="average", spatial_pool_stride=2, image_seq_length=576, video_seq_length=288, **kwargs, ): self.video_token_index = video_token_index self.spatial_pool_mode = spatial_pool_mode self.spatial_pool_stride = spatial_pool_stride self.image_seq_length = image_seq_length self.video_seq_length = video_seq_length self.ignore_index = ignore_index self.image_token_index = image_token_index self.projector_hidden_act = projector_hidden_act if vision_feature_select_strategy not in ["default", "full"]: raise ValueError( "vision_feature_select_strategy should be one of 'default', 'full'." 
f"Got: {vision_feature_select_strategy}" ) self.vision_feature_select_strategy = vision_feature_select_strategy self.vision_feature_layer = vision_feature_layer image_grid_pinpoints = ( image_grid_pinpoints if image_grid_pinpoints is not None else [[336, 672], [672, 336], [672, 672], [1008, 336], [336, 1008]] ) self.image_grid_pinpoints = image_grid_pinpoints if isinstance(vision_config, dict): vision_config["model_type"] = ( vision_config["model_type"] if "model_type" in vision_config else "clip_vision_model" ) vision_config = CONFIG_MAPPING[vision_config["model_type"]](**vision_config) elif vision_config is None: vision_config = CONFIG_MAPPING["clip_vision_model"]( intermediate_size=4096, hidden_size=1024, patch_size=14, image_size=336, num_hidden_layers=24, num_attention_heads=16, vocab_size=32000, projection_dim=768, ) self.vision_config = vision_config if isinstance(text_config, dict): text_config["model_type"] = text_config["model_type"] if "model_type" in text_config else "llama" text_config = CONFIG_MAPPING[text_config["model_type"]](**text_config) elif text_config is None: text_config = CONFIG_MAPPING["llama"]() self.text_config = text_config super().__init__(tie_word_embeddings=tie_word_embeddings, **kwargs) @dataclass class LlavaNextVideoCausalLMOutputWithPast(LlavaNextCausalLMOutputWithPast): pass class LlavaNextVideoPooler(nn.Module): def __init__(self, config): super().__init__() mode = config.spatial_pool_mode stride = config.spatial_pool_stride out_channels = getattr(config, "spatial_pool_out_channels", config.vision_config.hidden_size) self.image_size = config.vision_config.image_size // config.vision_config.patch_size**2 if mode == "average": self.pool = nn.AvgPool2d(kernel_size=stride, stride=stride) elif mode == "max": self.pool = nn.MaxPool2d(kernel_size=stride, stride=stride) elif mode == "conv": self.pool = nn.Conv2d( in_channels=config.vision_config.hidden_size, out_channels=out_channels, kernel_size=stride, stride=stride, ) else: raise ValueError(f"Unknown pooling mode: {mode}. Has to be one of [`average`, `max`, `conv`]") def forward(self, image_features): ori_width = int(math.sqrt(image_features.shape[1] * self.image_size // self.image_size)) ori_height = int(ori_width * self.image_size // self.image_size) batch_size, _, dim = image_features.shape image_features_spatial = image_features.view(batch_size, ori_height, ori_height, dim).permute(0, 3, 1, 2) image_features_spatial_pool = self.pool(image_features_spatial) return image_features_spatial_pool.flatten(2).transpose(1, 2).contiguous() class LlavaNextVideoMultiModalProjector(LlavaNextMultiModalProjector): pass class LlavaNextVideoForConditionalGeneration(LlavaNextForConditionalGeneration): def __init__(self, config: LlavaNextVideoConfig, **super_kwargs): super().__init__(config, **super_kwargs) self.vision_resampler = LlavaNextVideoPooler(config) self.post_init() def _get_image_features(self, pixel_values, image_sizes): # ! 
infer image_num_patches from image_sizes image_num_patches = [ image_size_to_num_patches( image_size=imsize, grid_pinpoints=self.config.image_grid_pinpoints, patch_size=self.config.vision_config.image_size, ) for imsize in image_sizes ] if pixel_values.dim() == 5: # stacked if input is (batch_size, num_patches, num_channels, height, width) _pixel_values_list = [pix_val[:num_patch] for pix_val, num_patch in zip(pixel_values, image_num_patches)] pixel_values = torch.cat(_pixel_values_list, dim=0) elif pixel_values.dim() != 4: # otherwise has to be stacked from list of (num_patches, num_channels, height, width) raise ValueError(f"pixel_values of shape {pixel_values.shape}, expect to be of 4 or 5 dimensions") image_features = self.vision_tower(pixel_values, output_hidden_states=True) selected_image_feature = image_features.hidden_states[self.vision_feature_layer] if self.vision_feature_select_strategy == "default": selected_image_feature = selected_image_feature[:, 1:] elif self.vision_feature_select_strategy == "full": selected_image_feature = selected_image_feature image_features = self.multi_modal_projector(selected_image_feature) image_features = torch.split(image_features, image_num_patches, dim=0) return image_features def _get_video_features(self, pixel_values): batch_size, frames, channels, height, width = pixel_values.shape pixel_values = pixel_values.reshape(batch_size * frames, channels, height, width) image_features = self.vision_tower(pixel_values, output_hidden_states=True) selected_image_feature = image_features.hidden_states[self.vision_feature_layer] if self.vision_feature_select_strategy == "default": selected_image_feature = selected_image_feature[:, 1:] elif self.vision_feature_select_strategy == "full": selected_image_feature = selected_image_feature # Same as image features except that video has pooling layer image_features = self.vision_resampler(selected_image_feature) image_features = self.multi_modal_projector(image_features) image_features = torch.split(image_features, frames, dim=0) return image_features @replace_return_docstrings(output_type=LlavaNextVideoCausalLMOutputWithPast, config_class="LlavaNextVideoConfig") def forward( self, input_ids: torch.LongTensor = None, pixel_values: torch.FloatTensor = None, pixel_values_videos: torch.FloatTensor = None, image_sizes: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[List[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, vision_feature_layer: Optional[int] = None, vision_feature_select_strategy: Optional[str] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, LlavaNextVideoCausalLMOutputWithPast]: r""" Args: pixel_values_videos (`torch.FloatTensor` of shape `(batch_size, num_frames, num_channels, image_size, image_size)): The tensors corresponding to the input videos. Pixel values can be obtained using [`AutoImageProcessor`]. See [`LlavaNextVideoVideoProcessor.__call__`] for details. [`LlavaProcessor`] uses [`LlavaNextVideoVideoProcessor`] for processing videos. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
                Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        Returns:

        Example:

        ```python
        >>> from PIL import Image
        >>> import requests
        >>> import av
        >>> import numpy as np
        >>> from huggingface_hub import hf_hub_download
        >>> from transformers import AutoProcessor, LlavaNextVideoForConditionalGeneration

        >>> def read_video_pyav(container, indices):
        ...     '''
        ...     Decode the video with PyAV decoder.
        ...     Args:
        ...         container (`av.container.input.InputContainer`): PyAV container.
        ...         indices (`List[int]`): List of frame indices to decode.
        ...     Returns:
        ...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
        ...     '''
        ...     frames = []
        ...     container.seek(0)
        ...     start_index = indices[0]
        ...     end_index = indices[-1]
        ...     for i, frame in enumerate(container.decode(video=0)):
        ...         if i > end_index:
        ...             break
        ...         if i >= start_index and i in indices:
        ...             frames.append(frame)
        ...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])

        >>> model = LlavaNextVideoForConditionalGeneration.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf", device_map="auto")
        >>> processor = AutoProcessor.from_pretrained("llava-hf/LLaVA-NeXT-Video-7B-hf")

        >>> prompt = "USER: <video>\nWhy is this video funny? ASSISTANT:"
        >>> video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
        >>> container = av.open(video_path)

        >>> # sample uniformly 8 frames from the video (model was trained with 32 frames per video, but this video is short)
        >>> total_frames = container.streams.video[0].frames
        >>> indices = np.arange(0, total_frames, total_frames / 8).astype(int)
        >>> clip = read_video_pyav(container, indices)
        >>> inputs_video = processor(text=prompt, videos=clip, return_tensors="pt").to(model.device)

        >>> # load an image to generate from an image
        >>> prompt = "USER:<image>\nWhat is shown in this image? ASSISTANT:"
        >>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
        >>> image = Image.open(requests.get(url, stream=True).raw)
        >>> inputs_image = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

        >>> # Generate from video
        >>> generate_ids = model.generate(**inputs_video, max_length=50)
        >>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
        "USER:\nWhy is this video funny? ASSISTANT: The humor in this video comes from the unexpected and endearing sight of a baby wearing glasses and (...)"

        >>> # Generate from image
        >>> generate_ids = model.generate(**inputs_image, max_length=30)
        >>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
        "USER: \nWhat's the content of the image?
ASSISTANT: The image shows a red stop sign on a pole, with a traditional Chinese archway (...)" ```""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict self.vision_feature_layer = ( vision_feature_layer if vision_feature_layer is not None else self.config.vision_feature_layer ) self.vision_feature_select_strategy = ( vision_feature_select_strategy if vision_feature_select_strategy is not None else self.config.vision_feature_select_strategy ) if (input_ids is None) ^ (inputs_embeds is not None): raise ValueError( "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one" ) if (pixel_values is not None or pixel_values_videos is not None) and inputs_embeds is not None: raise ValueError( "You cannot specify both pixel_values and inputs_embeds at the same time, and must specify either one" ) legacy_processing = False if inputs_embeds is None: inputs_embeds = self.get_input_embeddings()(input_ids) # if the number of image/video tokens is more than image embeddings seq length, then prob we expanded it in processing # not very reliable, but we don't expect one to actually pass 500+ images for one prompt img_token_count = (input_ids == self.config.image_token_index).sum(1).max() video_token_count = (input_ids == self.config.video_token_index).sum(1).max() inputs_expanded = ( img_token_count < self.config.image_seq_length and video_token_count < self.config.video_seq_length ) pixels_present = input_ids.shape[-1] == 1 and pixel_values is not None and pixel_values_videos is not None legacy_processing = inputs_expanded or pixels_present image_features = feature_lens = None if pixel_values is not None and pixel_values.size(0) > 0: image_features = self._get_image_features(pixel_values, image_sizes) image_features, feature_lens = self.pack_image_features( image_features, image_sizes, image_newline=self.image_newline, ) video_features = video_feature_lens = None if pixel_values_videos is not None and pixel_values_videos.size(0) > 0: video_features = self._get_video_features(pixel_values_videos) video_features = [feature.flatten(0, 1) for feature in video_features] video_feature_lens = [feature.size(0) for feature in video_features] video_features = torch.cat(video_features, dim=0) video_feature_lens = torch.tensor(video_feature_lens, dtype=torch.long, device=video_features.device) if legacy_processing: logger.warning_once( "Expanding inputs for image.video tokens in LLaVa-NeXT-Video should be done in processing. " "Please add `patch_size` and `vision_feature_select_strategy` to the model's processing config or set directly " "with `processor.patch_size = {{patch_size}}` and processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. " "Using processors without these attributes in the config is deprecated and will throw an error in v4.47." 
)

            if input_ids.shape[1] != 1:
                iterator = (
                    (image_features, feature_lens, self.config.image_token_index),
                    (video_features, video_feature_lens, self.config.video_token_index),
                )
                for features, lens, special_token in iterator:
                    if features is not None:
                        (
                            inputs_embeds,
                            attention_mask,
                            position_ids,
                            labels,
                            input_ids,
                        ) = self._merge_input_ids_with_image_features(
                            features,
                            lens,
                            inputs_embeds,
                            input_ids,
                            attention_mask,
                            position_ids,
                            labels=labels,
                            image_token_index=special_token,
                        )
            else:
                # Retrieve the first layer to inspect the logits and mask out the hidden states that are set to 0
                first_layer_past_key_value = past_key_values[0][0][:, :, :, 0]

                # Sum all dimensions of head_dim (-2) to avoid random errors such as: https://github.com/huggingface/transformers/pull/28032#issuecomment-1863691941
                batch_index, non_attended_tokens = torch.where(first_layer_past_key_value.float().sum(-2) == 0)

                # Get the target length
                target_length = input_ids.shape[1]
                past_length = first_layer_past_key_value.shape[-1]

                extended_attention_mask = torch.ones(
                    (attention_mask.shape[0], past_length),
                    dtype=attention_mask.dtype,
                    device=attention_mask.device,
                )

                # Filter out only the tokens that can be un-attended, this can happen
                # if one uses Llava + Fused modules where the cache on the
                # first iteration is already big enough, or if one passes custom cache
                valid_indices = non_attended_tokens < extended_attention_mask.size(-1)
                new_batch_index = batch_index[valid_indices]
                new_non_attended_tokens = non_attended_tokens[valid_indices]

                # Zero-out the places where we don't need to attend
                extended_attention_mask[new_batch_index, new_non_attended_tokens] = 0

                attention_mask = torch.cat((extended_attention_mask, attention_mask[:, -target_length:]), dim=1)
                position_ids = torch.sum(attention_mask, dim=1).unsqueeze(-1) - 1

        # TODO: @raushan retain only the new behavior after v4.47
        else:
            if image_features is not None:
                special_image_mask = (
                    (input_ids == self.config.image_token_index).unsqueeze(-1).expand_as(inputs_embeds)
                )
                image_features = image_features.to(inputs_embeds.device, inputs_embeds.dtype)
                inputs_embeds = inputs_embeds.masked_scatter(special_image_mask, image_features)
            if video_features is not None:
                special_image_mask = (
                    (input_ids == self.config.video_token_index).unsqueeze(-1).expand_as(inputs_embeds)
                )
                video_features = video_features.to(inputs_embeds.device, inputs_embeds.dtype)
                inputs_embeds = inputs_embeds.masked_scatter(special_image_mask, video_features)

        outputs = self.language_model(
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        logits = outputs[0]

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            if attention_mask is not None:
                shift_attention_mask = attention_mask[..., 1:]
                shift_logits = logits[..., :-1, :][shift_attention_mask.to(logits.device) != 0].contiguous()
                shift_labels = labels[..., 1:][shift_attention_mask.to(labels.device) != 0].contiguous()
            else:
                shift_logits = logits[..., :-1, :].contiguous()
                shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(
                shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1).to(shift_logits.device)
            )

        if not return_dict:
            output = (logits,) + outputs[1:]
            return (loss,) + output if loss is not None else output

        return LlavaNextVideoCausalLMOutputWithPast(
            loss=loss,
            logits=logits,
past_key_values=outputs.past_key_values, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) def prepare_inputs_for_generation( self, input_ids, past_key_values=None, inputs_embeds=None, pixel_values=None, pixel_values_videos=None, image_sizes=None, attention_mask=None, **kwargs, ): if past_key_values is not None: if isinstance(past_key_values, Cache): cache_length = past_key_values.get_seq_length() past_length = past_key_values.seen_tokens else: cache_length = past_length = past_key_values[0][0].shape[2] # Keep only the unprocessed tokens: # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as # input) if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]: input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :] # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard # input_ids based on the past_length. elif past_length < input_ids.shape[1]: input_ids = input_ids[:, past_length:] # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens. elif self.config.image_token_index in input_ids or self.config.video_token_index in input_ids: input_ids = input_ids[:, input_ids.shape[1] - 1 :] # If the cache has seen more tokens than it can hold, then the cache has a size limit. Let's discard the # older attention values, as their corresponding values are not part of the input. if cache_length < past_length and attention_mask is not None: attention_mask = attention_mask[:, -(cache_length + input_ids.shape[1]) :] position_ids = kwargs.get("position_ids", None) if attention_mask is not None and position_ids is None: # create position_ids on the fly for batch generation position_ids = attention_mask.long().cumsum(-1) - 1 position_ids.masked_fill_(attention_mask == 0, 1) if past_key_values: position_ids = position_ids[:, -input_ids.shape[1] :] # if `inputs_embeds` are passed, we only want to use them in the 1st generation step if inputs_embeds is not None and past_key_values is None: model_inputs = {"inputs_embeds": inputs_embeds} else: model_inputs = {"input_ids": input_ids} model_inputs.update( { "position_ids": position_ids, "past_key_values": past_key_values, "use_cache": kwargs.get("use_cache"), "attention_mask": attention_mask, "pixel_values": pixel_values, "pixel_values_videos": pixel_values_videos, "image_sizes": image_sizes, } ) return model_inputs
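# Minimal illustrative sketch: the three cache-trimming cases handled in
# `prepare_inputs_for_generation` above, replayed on toy tensors so the
# branching is easier to follow. Shapes, lengths, and values here are made-up
# examples, not anything produced by the model itself.
def _sketch_cache_trimming():
    import torch

    past_length = 5                                       # tokens already held in the KV cache
    input_ids = torch.arange(8).unsqueeze(0)              # (batch=1, seq_len=8)
    attention_mask = torch.ones(1, 9, dtype=torch.long)   # longer than input_ids, e.g. some inputs live only in the cache

    if attention_mask.shape[1] > input_ids.shape[1]:
        # case 1: part of the input was passed only through the cache (e.g. inputs_embeds)
        input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
    elif past_length < input_ids.shape[1]:
        # case 2: input_ids still holds every token; drop the ones the cache has already seen
        input_ids = input_ids[:, past_length:]
    # case 3 (otherwise): input_ids is assumed to contain only unprocessed tokens
    return input_ids  # here: the last 4 positions, tensor([[4, 5, 6, 7]])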
transformers/src/transformers/models/llava_next_video/diff_llava_next_video.py/0
{ "file_path": "transformers/src/transformers/models/llava_next_video/diff_llava_next_video.py", "repo_id": "transformers", "token_count": 12784 }
357
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import TYPE_CHECKING from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available _import_structure = { "configuration_luke": ["LukeConfig"], "tokenization_luke": ["LukeTokenizer"], } try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_luke"] = [ "LukeForEntityClassification", "LukeForEntityPairClassification", "LukeForEntitySpanClassification", "LukeForMultipleChoice", "LukeForQuestionAnswering", "LukeForSequenceClassification", "LukeForTokenClassification", "LukeForMaskedLM", "LukeModel", "LukePreTrainedModel", ] if TYPE_CHECKING: from .configuration_luke import LukeConfig from .tokenization_luke import LukeTokenizer try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_luke import ( LukeForEntityClassification, LukeForEntityPairClassification, LukeForEntitySpanClassification, LukeForMaskedLM, LukeForMultipleChoice, LukeForQuestionAnswering, LukeForSequenceClassification, LukeForTokenClassification, LukeModel, LukePreTrainedModel, ) else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
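# Minimal illustrative sketch of what the `_LazyModule` registration above
# buys the user: importing the package is cheap, and the heavy `modeling_luke`
# module (which requires torch) is only resolved the first time one of its
# attributes is accessed. The module path below is the real package path; the
# access pattern is just an example.
def _sketch_lazy_import():
    import importlib

    luke = importlib.import_module("transformers.models.luke")
    config_cls = luke.LukeConfig  # resolves configuration_luke on first access
    model_cls = luke.LukeModel    # resolves modeling_luke (and imports torch) on first access
    return config_cls, model_cls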
transformers/src/transformers/models/luke/__init__.py/0
{ "file_path": "transformers/src/transformers/models/luke/__init__.py", "repo_id": "transformers", "token_count": 825 }
358
# Copyright 2021 The Fairseq Authors and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tokenization classes for M2M100.""" import json import os from pathlib import Path from shutil import copyfile from typing import Any, Dict, List, Optional, Tuple, Union import sentencepiece from ...tokenization_utils import BatchEncoding, PreTrainedTokenizer from ...utils import logging logger = logging.get_logger(__name__) SPIECE_UNDERLINE = "▁" VOCAB_FILES_NAMES = { "vocab_file": "vocab.json", "spm_file": "sentencepiece.bpe.model", "tokenizer_config_file": "tokenizer_config.json", } # fmt: off FAIRSEQ_LANGUAGE_CODES = { "m2m100": ["af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu"], "wmt21": ['en', 'ha', 'is', 'ja', 'cs', 'ru', 'zh', 'de'] } # fmt: on class M2M100Tokenizer(PreTrainedTokenizer): """ Construct an M2M100 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. spm_file (`str`): Path to [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that contains the vocabulary. src_lang (`str`, *optional*): A string representing the source language. tgt_lang (`str`, *optional*): A string representing the target language. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. language_codes (`str`, *optional*, defaults to `"m2m100"`): What language codes to use. Should be one of `"m2m100"` or `"wmt21"`. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. 
The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Examples: ```python >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="ro") >>> src_text = " UN Chief Says There Is No Military Solution in Syria" >>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria" >>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt") >>> outputs = model(**model_inputs) # should work ```""" vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] prefix_tokens: List[int] = [] suffix_tokens: List[int] = [] def __init__( self, vocab_file, spm_file, src_lang=None, tgt_lang=None, bos_token="<s>", eos_token="</s>", sep_token="</s>", pad_token="<pad>", unk_token="<unk>", language_codes="m2m100", sp_model_kwargs: Optional[Dict[str, Any]] = None, num_madeup_words=8, **kwargs, ) -> None: self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs self.language_codes = language_codes fairseq_language_code = FAIRSEQ_LANGUAGE_CODES[language_codes] self.lang_code_to_token = {lang_code: f"__{lang_code}__" for lang_code in fairseq_language_code} additional_special_tokens = kwargs.pop("additional_special_tokens", []) for lang_code in fairseq_language_code: token = self.get_lang_token(lang_code) if token not in additional_special_tokens and lang_code not in str(token) not in self.added_tokens_encoder: additional_special_tokens.append(token) self.vocab_file = vocab_file self.encoder = load_json(vocab_file) self.decoder = {v: k for k, v in self.encoder.items()} self.spm_file = spm_file self.sp_model = load_spm(spm_file, self.sp_model_kwargs) self.encoder_size = len(self.encoder) self.lang_token_to_id = { self.get_lang_token(lang_code): self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code) } self.lang_code_to_id = {lang_code: self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code)} self.id_to_lang_token = {v: k for k, v in self.lang_token_to_id.items()} self._src_lang = src_lang if src_lang is not None else "en" self.tgt_lang = tgt_lang self.cur_lang_id = self.get_lang_id(self._src_lang) self.num_madeup_words = num_madeup_words super().__init__( src_lang=src_lang, tgt_lang=tgt_lang, bos_token=bos_token, eos_token=eos_token, sep_token=sep_token, unk_token=unk_token, pad_token=pad_token, language_codes=language_codes, sp_model_kwargs=self.sp_model_kwargs, additional_special_tokens=additional_special_tokens, num_madeup_words=num_madeup_words, **kwargs, ) self.set_src_lang_special_tokens(self._src_lang) @property def vocab_size(self) -> int: return len(self.encoder) def get_vocab(self) -> Dict: vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} vocab.update(self.added_tokens_encoder) return vocab 
@property def src_lang(self) -> str: return self._src_lang @src_lang.setter def src_lang(self, new_src_lang: str) -> None: self._src_lang = new_src_lang self.set_src_lang_special_tokens(self._src_lang) def _tokenize(self, text: str) -> List[str]: return self.sp_model.encode(text, out_type=str) def _convert_token_to_id(self, token): if token in self.lang_token_to_id: return self.lang_token_to_id[token] return self.encoder.get(token, self.encoder[self.unk_token]) def _convert_id_to_token(self, index: int) -> str: """Converts an index (integer) in a token (str) using the decoder.""" if index in self.id_to_lang_token: return self.id_to_lang_token[index] return self.decoder.get(index, self.unk_token) def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" current_sub_tokens = [] out_string = "" for token in tokens: # make sure that special tokens are not decoded using sentencepiece model if token in self.all_special_tokens: out_string += self.sp_model.decode(current_sub_tokens) + token current_sub_tokens = [] else: current_sub_tokens.append(token) out_string += self.sp_model.decode(current_sub_tokens) return out_string.strip() def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) prefix_ones = [1] * len(self.prefix_tokens) suffix_ones = [1] * len(self.suffix_tokens) if token_ids_1 is None: return prefix_ones + ([0] * len(token_ids_0)) + suffix_ones return prefix_ones + ([0] * len(token_ids_0)) + ([0] * len(token_ids_1)) + suffix_ones def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An MBART sequence has the following format, where `X` represents the sequence: - `input_ids` (for encoder) `X [eos, src_lang_code]` - `decoder_input_ids`: (for decoder) `X [eos, tgt_lang_code]` BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator. Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. 
""" if token_ids_1 is None: return self.prefix_tokens + token_ids_0 + self.suffix_tokens # We don't expect to process pairs, but leave the pair logic for API consistency return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens def __getstate__(self) -> Dict: state = self.__dict__.copy() state["sp_model"] = None return state def __setstate__(self, d: Dict) -> None: self.__dict__ = d # for backward compatibility if not hasattr(self, "sp_model_kwargs"): self.sp_model_kwargs = {} self.sp_model = load_spm(self.spm_file, self.sp_model_kwargs) def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: save_dir = Path(save_directory) if not save_dir.is_dir(): raise OSError(f"{save_directory} should be a directory") vocab_save_path = save_dir / ( (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["vocab_file"] ) spm_save_path = save_dir / ( (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["spm_file"] ) save_json(self.encoder, vocab_save_path) if os.path.abspath(self.spm_file) != os.path.abspath(spm_save_path) and os.path.isfile(self.spm_file): copyfile(self.spm_file, spm_save_path) elif not os.path.isfile(self.spm_file): with open(spm_save_path, "wb") as fi: content_spiece_model = self.sp_model.serialized_model_proto() fi.write(content_spiece_model) return (str(vocab_save_path), str(spm_save_path)) def prepare_seq2seq_batch( self, src_texts: List[str], src_lang: str = "en", tgt_texts: Optional[List[str]] = None, tgt_lang: str = "ro", **kwargs, ) -> BatchEncoding: self.src_lang = src_lang self.tgt_lang = tgt_lang self.set_src_lang_special_tokens(self.src_lang) return super().prepare_seq2seq_batch(src_texts, tgt_texts, **kwargs) def _build_translation_inputs(self, raw_inputs, src_lang: Optional[str], tgt_lang: Optional[str], **extra_kwargs): """Used by translation pipeline, to prepare inputs for the generate function""" if src_lang is None or tgt_lang is None: raise ValueError("Translation requires a `src_lang` and a `tgt_lang` for this model") self.src_lang = src_lang inputs = self(raw_inputs, add_special_tokens=True, **extra_kwargs) tgt_lang_id = self.get_lang_id(tgt_lang) inputs["forced_bos_token_id"] = tgt_lang_id return inputs def _switch_to_input_mode(self): self.set_src_lang_special_tokens(self.src_lang) def _switch_to_target_mode(self): self.set_tgt_lang_special_tokens(self.tgt_lang) def set_src_lang_special_tokens(self, src_lang: str) -> None: """Reset the special tokens to the source lang setting. No prefix and suffix=[eos, src_lang_code].""" lang_token = self.get_lang_token(src_lang) self.cur_lang_id = self.lang_token_to_id[lang_token] self.prefix_tokens = [self.cur_lang_id] self.suffix_tokens = [self.eos_token_id] def set_tgt_lang_special_tokens(self, tgt_lang: str) -> None: """Reset the special tokens to the target language setting. 
No prefix and suffix=[eos, tgt_lang_code].""" lang_token = self.get_lang_token(tgt_lang) self.cur_lang_id = self.lang_token_to_id[lang_token] self.prefix_tokens = [self.cur_lang_id] self.suffix_tokens = [self.eos_token_id] def get_lang_token(self, lang: str) -> str: return self.lang_code_to_token[lang] def get_lang_id(self, lang: str) -> int: lang_token = self.get_lang_token(lang) return self.lang_token_to_id[lang_token] def load_spm(path: str, sp_model_kwargs: Dict[str, Any]) -> sentencepiece.SentencePieceProcessor: spm = sentencepiece.SentencePieceProcessor(**sp_model_kwargs) spm.Load(str(path)) return spm def load_json(path: str) -> Union[Dict, List]: with open(path, "r") as f: return json.load(f) def save_json(data, path: str) -> None: with open(path, "w") as f: json.dump(data, f, indent=2)
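# Minimal usage sketch for the language-token machinery above: `get_lang_id`
# maps a language code to the id of its `__xx__` token, which is passed to
# `generate` as `forced_bos_token_id` so decoding starts with the target
# language token (the same thing `_build_translation_inputs` does for the
# translation pipeline). The checkpoint name and sentence are only examples.
def _sketch_m2m100_translation():
    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

    inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
    generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
    return tokenizer.batch_decode(generated, skip_special_tokens=True)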
transformers/src/transformers/models/m2m_100/tokenization_m2m_100.py/0
{ "file_path": "transformers/src/transformers/models/m2m_100/tokenization_m2m_100.py", "repo_id": "transformers", "token_count": 7110 }
359
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os import re import warnings from pathlib import Path from shutil import copyfile from typing import Any, Dict, List, Optional, Tuple, Union import sentencepiece from ...tokenization_utils import PreTrainedTokenizer from ...utils import logging logger = logging.get_logger(__name__) VOCAB_FILES_NAMES = { "source_spm": "source.spm", "target_spm": "target.spm", "vocab": "vocab.json", "target_vocab_file": "target_vocab.json", "tokenizer_config_file": "tokenizer_config.json", } SPIECE_UNDERLINE = "▁" # Example URL https://huggingface.co/Helsinki-NLP/opus-mt-en-de/resolve/main/vocab.json class MarianTokenizer(PreTrainedTokenizer): r""" Construct a Marian tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: source_spm (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that contains the vocabulary for the source language. target_spm (`str`): [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that contains the vocabulary for the target language. source_lang (`str`, *optional*): A string representing the source language. target_lang (`str`, *optional*): A string representing the target language. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. model_max_length (`int`, *optional*, defaults to 512): The maximum sentence length the model accepts. additional_special_tokens (`List[str]`, *optional*, defaults to `["<eop>", "<eod>"]`): Additional special tokens used by the tokenizer. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. 
Examples: ```python >>> from transformers import MarianForCausalLM, MarianTokenizer >>> model = MarianForCausalLM.from_pretrained("Helsinki-NLP/opus-mt-en-de") >>> tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") >>> src_texts = ["I am a small frog.", "Tom asked his teacher for advice."] >>> tgt_texts = ["Ich bin ein kleiner Frosch.", "Tom bat seinen Lehrer um Rat."] # optional >>> inputs = tokenizer(src_texts, text_target=tgt_texts, return_tensors="pt", padding=True) >>> outputs = model(**inputs) # should work ```""" vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] language_code_re = re.compile(">>.+<<") # type: re.Pattern def __init__( self, source_spm, target_spm, vocab, target_vocab_file=None, source_lang=None, target_lang=None, unk_token="<unk>", eos_token="</s>", pad_token="<pad>", model_max_length=512, sp_model_kwargs: Optional[Dict[str, Any]] = None, separate_vocabs=False, **kwargs, ) -> None: self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs assert Path(source_spm).exists(), f"cannot find spm source {source_spm}" self.separate_vocabs = separate_vocabs self.encoder = load_json(vocab) if str(unk_token) not in self.encoder: raise KeyError("<unk> token must be in the vocab") assert str(pad_token) in self.encoder if separate_vocabs: self.target_encoder = load_json(target_vocab_file) self.decoder = {v: k for k, v in self.target_encoder.items()} self.supported_language_codes = [] else: self.decoder = {v: k for k, v in self.encoder.items()} self.supported_language_codes: list = [k for k in self.encoder if k.startswith(">>") and k.endswith("<<")] self.source_lang = source_lang self.target_lang = target_lang self.spm_files = [source_spm, target_spm] # load SentencePiece model for pre-processing self.spm_source = load_spm(source_spm, self.sp_model_kwargs) self.spm_target = load_spm(target_spm, self.sp_model_kwargs) self.current_spm = self.spm_source self.current_encoder = self.encoder # Multilingual target side: default to using first supported language code. self._setup_normalizer() super().__init__( # bos_token=bos_token, unused. Start decoding with config.decoder_start_token_id source_lang=source_lang, target_lang=target_lang, unk_token=unk_token, eos_token=eos_token, pad_token=pad_token, model_max_length=model_max_length, sp_model_kwargs=self.sp_model_kwargs, target_vocab_file=target_vocab_file, separate_vocabs=separate_vocabs, **kwargs, ) def _setup_normalizer(self): try: from sacremoses import MosesPunctNormalizer self.punc_normalizer = MosesPunctNormalizer(self.source_lang).normalize except (ImportError, FileNotFoundError): warnings.warn("Recommended: pip install sacremoses.") self.punc_normalizer = lambda x: x def normalize(self, x: str) -> str: """Cover moses empty string edge case. 
They return empty list for '' input!""" return self.punc_normalizer(x) if x else "" def _convert_token_to_id(self, token): return self.current_encoder.get(token, self.current_encoder[self.unk_token]) def remove_language_code(self, text: str): """Remove language codes like >>fr<< before sentencepiece""" match = self.language_code_re.match(text) code: list = [match.group(0)] if match else [] return code, self.language_code_re.sub("", text) def _tokenize(self, text: str) -> List[str]: code, text = self.remove_language_code(text) pieces = self.current_spm.encode(text, out_type=str) return code + pieces def _convert_id_to_token(self, index: int) -> str: """Converts an index (integer) in a token (str) using the decoder.""" return self.decoder.get(index, self.unk_token) def batch_decode(self, sequences, **kwargs): """ Convert a list of lists of token ids into a list of strings by calling decode. Args: sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`): List of tokenized input ids. Can be obtained using the `__call__` method. skip_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not to remove special tokens in the decoding. clean_up_tokenization_spaces (`bool`, *optional*): Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces` (available in the `tokenizer_config`). use_source_tokenizer (`bool`, *optional*, defaults to `False`): Whether or not to use the source tokenizer to decode sequences (only applicable in sequence-to-sequence problems). kwargs (additional keyword arguments, *optional*): Will be passed to the underlying model specific decode method. Returns: `List[str]`: The list of decoded sentences. """ return super().batch_decode(sequences, **kwargs) def decode(self, token_ids, **kwargs): """ Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special tokens and clean up tokenization spaces. Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`. Args: token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`): List of tokenized input ids. Can be obtained using the `__call__` method. skip_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not to remove special tokens in the decoding. clean_up_tokenization_spaces (`bool`, *optional*): Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces` (available in the `tokenizer_config`). use_source_tokenizer (`bool`, *optional*, defaults to `False`): Whether or not to use the source tokenizer to decode sequences (only applicable in sequence-to-sequence problems). kwargs (additional keyword arguments, *optional*): Will be passed to the underlying model specific decode method. Returns: `str`: The decoded sentence. 
""" return super().decode(token_ids, **kwargs) def convert_tokens_to_string(self, tokens: List[str]) -> str: """Uses source spm if _decode_use_source_tokenizer is True, and target spm otherwise""" sp_model = self.spm_source if self._decode_use_source_tokenizer else self.spm_target current_sub_tokens = [] out_string = "" for token in tokens: # make sure that special tokens are not decoded using sentencepiece model if token in self.all_special_tokens: out_string += sp_model.decode_pieces(current_sub_tokens) + token + " " current_sub_tokens = [] else: current_sub_tokens.append(token) out_string += sp_model.decode_pieces(current_sub_tokens) out_string = out_string.replace(SPIECE_UNDERLINE, " ") return out_string.strip() def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None) -> List[int]: """Build model inputs from a sequence by appending eos_token_id.""" if token_ids_1 is None: return token_ids_0 + [self.eos_token_id] # We don't expect to process pairs, but leave the pair logic for API consistency return token_ids_0 + token_ids_1 + [self.eos_token_id] def _switch_to_input_mode(self): self.current_spm = self.spm_source self.current_encoder = self.encoder def _switch_to_target_mode(self): self.current_spm = self.spm_target if self.separate_vocabs: self.current_encoder = self.target_encoder @property def vocab_size(self) -> int: return len(self.encoder) def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return saved_files = [] if self.separate_vocabs: out_src_vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab"], ) out_tgt_vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["target_vocab_file"], ) save_json(self.encoder, out_src_vocab_file) save_json(self.target_encoder, out_tgt_vocab_file) saved_files.append(out_src_vocab_file) saved_files.append(out_tgt_vocab_file) else: out_vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab"] ) save_json(self.encoder, out_vocab_file) saved_files.append(out_vocab_file) for spm_save_filename, spm_orig_path, spm_model in zip( [VOCAB_FILES_NAMES["source_spm"], VOCAB_FILES_NAMES["target_spm"]], self.spm_files, [self.spm_source, self.spm_target], ): spm_save_path = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + spm_save_filename ) if os.path.abspath(spm_orig_path) != os.path.abspath(spm_save_path) and os.path.isfile(spm_orig_path): copyfile(spm_orig_path, spm_save_path) saved_files.append(spm_save_path) elif not os.path.isfile(spm_orig_path): with open(spm_save_path, "wb") as fi: content_spiece_model = spm_model.serialized_model_proto() fi.write(content_spiece_model) saved_files.append(spm_save_path) return tuple(saved_files) def get_vocab(self) -> Dict: return self.get_src_vocab() def get_src_vocab(self): return dict(self.encoder, **self.added_tokens_encoder) def get_tgt_vocab(self): return dict(self.target_encoder, **self.added_tokens_decoder) def __getstate__(self) -> Dict: state = self.__dict__.copy() state.update( {k: None for k in ["spm_source", "spm_target", "current_spm", "punc_normalizer", "target_vocab_file"]} ) return state def __setstate__(self, d: Dict) -> None: self.__dict__ = d # for backward compatibility if not hasattr(self, 
"sp_model_kwargs"): self.sp_model_kwargs = {} self.spm_source, self.spm_target = (load_spm(f, self.sp_model_kwargs) for f in self.spm_files) self.current_spm = self.spm_source self._setup_normalizer() def num_special_tokens_to_add(self, *args, **kwargs): """Just EOS""" return 1 def _special_token_mask(self, seq): all_special_ids = set(self.all_special_ids) # call it once instead of inside list comp all_special_ids.remove(self.unk_token_id) # <unk> is only sometimes special return [1 if x in all_special_ids else 0 for x in seq] def get_special_tokens_mask( self, token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False ) -> List[int]: """Get list where entries are [1] if a token is [eos] or [pad] else 0.""" if already_has_special_tokens: return self._special_token_mask(token_ids_0) elif token_ids_1 is None: return self._special_token_mask(token_ids_0) + [1] else: return self._special_token_mask(token_ids_0 + token_ids_1) + [1] def load_spm(path: str, sp_model_kwargs: Dict[str, Any]) -> sentencepiece.SentencePieceProcessor: spm = sentencepiece.SentencePieceProcessor(**sp_model_kwargs) spm.Load(path) return spm def save_json(data, path: str) -> None: with open(path, "w") as f: json.dump(data, f, indent=2) def load_json(path: str) -> Union[Dict, List]: with open(path, "r") as f: return json.load(f)
transformers/src/transformers/models/marian/tokenization_marian.py/0
{ "file_path": "transformers/src/transformers/models/marian/tokenization_marian.py", "repo_id": "transformers", "token_count": 7216 }
360
# coding=utf-8 # Copyright 2022 Meta Platforms, Inc. and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys from argparse import ArgumentParser from dataclasses import dataclass from pathlib import Path from pprint import pformat from typing import Any, Dict, Iterator, List, Set, Tuple import requests import torch import torchvision.transforms as T from detectron2.checkpoint import DetectionCheckpointer from detectron2.config import get_cfg from detectron2.data import MetadataCatalog from detectron2.projects.deeplab import add_deeplab_config from PIL import Image from torch import Tensor, nn from transformers.models.maskformer.feature_extraction_maskformer import MaskFormerImageProcessor from transformers.models.maskformer.modeling_maskformer import ( MaskFormerConfig, MaskFormerForInstanceSegmentation, MaskFormerForInstanceSegmentationOutput, MaskFormerModel, MaskFormerModelOutput, ) from transformers.utils import logging StateDict = Dict[str, Tensor] logging.set_verbosity_info() logger = logging.get_logger() torch.manual_seed(0) class TrackedStateDict: def __init__(self, to_track: Dict): """This class "tracks" a python dictionary by keeping track of which item is accessed. Args: to_track (Dict): The dictionary we wish to track """ self.to_track = to_track self._seen: Set[str] = set() def __getitem__(self, key: str) -> Any: return self.to_track[key] def __setitem__(self, key: str, item: Any): self._seen.add(key) self.to_track[key] = item def diff(self) -> List[str]: """This method returns a set difference between the keys in the tracked state dict and the one we have access so far. 
This is an effective method to check if we have update all the keys Returns: List[str]: List of keys not yet updated """ return set(self.to_track.keys()) - self._seen def copy(self) -> Dict: # proxy the call to the internal dictionary return self.to_track.copy() # We will verify our results on an image of cute cats def prepare_img(): url = "http://images.cocodataset.org/val2017/000000039769.jpg" img_data = requests.get(url, stream=True).raw im = Image.open(img_data) return im @dataclass class Args: """Fake command line arguments needed by maskformer/detectron implementation""" config_file: str def setup_cfg(args: Args): # load config from file and command-line arguments cfg = get_cfg() add_deeplab_config(cfg) add_mask_former_config(cfg) cfg.merge_from_file(args.config_file) cfg.freeze() return cfg class OriginalMaskFormerConfigToOursConverter: def __call__(self, original_config: object) -> MaskFormerConfig: model = original_config.MODEL mask_former = model.MASK_FORMER swin = model.SWIN dataset_catalog = MetadataCatalog.get(original_config.DATASETS.TEST[0]) id2label = dict(enumerate(dataset_catalog.stuff_classes)) label2id = {label: idx for idx, label in id2label.items()} config: MaskFormerConfig = MaskFormerConfig( fpn_feature_size=model.SEM_SEG_HEAD.CONVS_DIM, mask_feature_size=model.SEM_SEG_HEAD.MASK_DIM, num_labels=model.SEM_SEG_HEAD.NUM_CLASSES, no_object_weight=mask_former.NO_OBJECT_WEIGHT, num_queries=mask_former.NUM_OBJECT_QUERIES, backbone_config={ "pretrain_img_size": swin.PRETRAIN_IMG_SIZE, "image_size": swin.PRETRAIN_IMG_SIZE, "in_channels": 3, "patch_size": swin.PATCH_SIZE, "embed_dim": swin.EMBED_DIM, "depths": swin.DEPTHS, "num_heads": swin.NUM_HEADS, "window_size": swin.WINDOW_SIZE, "drop_path_rate": swin.DROP_PATH_RATE, "model_type": "swin", }, dice_weight=mask_former.DICE_WEIGHT, ce_weight=1.0, mask_weight=mask_former.MASK_WEIGHT, decoder_config={ "model_type": "detr", "max_position_embeddings": 1024, "encoder_layers": 6, "encoder_ffn_dim": 2048, "encoder_attention_heads": 8, "decoder_layers": mask_former.DEC_LAYERS, "decoder_ffn_dim": mask_former.DIM_FEEDFORWARD, "decoder_attention_heads": mask_former.NHEADS, "encoder_layerdrop": 0.0, "decoder_layerdrop": 0.0, "d_model": mask_former.HIDDEN_DIM, "dropout": mask_former.DROPOUT, "attention_dropout": 0.0, "activation_dropout": 0.0, "init_std": 0.02, "init_xavier_std": 1.0, "scale_embedding": False, "auxiliary_loss": False, "dilation": False, # default pretrained config values }, id2label=id2label, label2id=label2id, ) return config class OriginalMaskFormerConfigToImageProcessorConverter: def __call__(self, original_config: object) -> MaskFormerImageProcessor: model = original_config.MODEL model_input = original_config.INPUT dataset_catalog = MetadataCatalog.get(original_config.DATASETS.TEST[0]) return MaskFormerImageProcessor( image_mean=(torch.tensor(model.PIXEL_MEAN) / 255).tolist(), image_std=(torch.tensor(model.PIXEL_STD) / 255).tolist(), size=model_input.MIN_SIZE_TEST, max_size=model_input.MAX_SIZE_TEST, num_labels=model.SEM_SEG_HEAD.NUM_CLASSES, ignore_index=dataset_catalog.ignore_label, size_divisibility=32, # 32 is required by swin ) class OriginalMaskFormerCheckpointToOursConverter: def __init__(self, original_model: nn.Module, config: MaskFormerConfig): self.original_model = original_model self.config = config def pop_all(self, renamed_keys: List[Tuple[str, str]], dst_state_dict: StateDict, src_state_dict: StateDict): for src_key, dst_key in renamed_keys: dst_state_dict[dst_key] = src_state_dict.pop(src_key) def 
replace_backbone(self, dst_state_dict: StateDict, src_state_dict: StateDict, config: MaskFormerConfig): dst_prefix: str = "pixel_level_module.encoder" src_prefix: str = "backbone" renamed_keys = [ ( f"{src_prefix}.patch_embed.proj.weight", f"{dst_prefix}.model.embeddings.patch_embeddings.projection.weight", ), (f"{src_prefix}.patch_embed.proj.bias", f"{dst_prefix}.model.embeddings.patch_embeddings.projection.bias"), (f"{src_prefix}.patch_embed.norm.weight", f"{dst_prefix}.model.embeddings.norm.weight"), (f"{src_prefix}.patch_embed.norm.bias", f"{dst_prefix}.model.embeddings.norm.bias"), ] num_layers = len(config.backbone_config.depths) for layer_idx in range(num_layers): for block_idx in range(config.backbone_config.depths[layer_idx]): renamed_keys.extend( [ # src, dst ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.norm1.weight", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.layernorm_before.weight", ), ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.norm1.bias", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.layernorm_before.bias", ), ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.relative_position_bias_table", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.relative_position_bias_table", ), ] ) # now we need to handle the attentions # read in weights + bias of input projection layer of cross-attention src_att_weight = src_state_dict[f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.qkv.weight"] src_att_bias = src_state_dict[f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.qkv.bias"] size = src_att_weight.shape[0] offset = size // 3 dst_state_dict[ f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.query.weight" ] = src_att_weight[:offset, :] dst_state_dict[ f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.query.bias" ] = src_att_bias[:offset] dst_state_dict[ f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.key.weight" ] = src_att_weight[offset : offset * 2, :] dst_state_dict[ f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.key.bias" ] = src_att_bias[offset : offset * 2] dst_state_dict[ f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.value.weight" ] = src_att_weight[-offset:, :] dst_state_dict[ f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.value.bias" ] = src_att_bias[-offset:] # let's pop them src_state_dict.pop(f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.qkv.weight") src_state_dict.pop(f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.qkv.bias") # proj renamed_keys.extend( [ ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.proj.weight", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.output.dense.weight", ), ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.proj.bias", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.output.dense.bias", ), ] ) # second norm renamed_keys.extend( [ ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.norm2.weight", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.layernorm_after.weight", ), ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.norm2.bias", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.layernorm_after.bias", ), ] ) # mlp renamed_keys.extend( [ ( 
f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.mlp.fc1.weight", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.intermediate.dense.weight", ), ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.mlp.fc1.bias", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.intermediate.dense.bias", ), ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.mlp.fc2.weight", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.output.dense.weight", ), ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.mlp.fc2.bias", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.output.dense.bias", ), ] ) renamed_keys.extend( [ ( f"{src_prefix}.layers.{layer_idx}.blocks.{block_idx}.attn.relative_position_index", f"{dst_prefix}.model.encoder.layers.{layer_idx}.blocks.{block_idx}.attention.self.relative_position_index", ) ] ) if layer_idx < num_layers - 1: # patch merging renamed_keys.extend( [ ( f"{src_prefix}.layers.{layer_idx}.downsample.reduction.weight", f"{dst_prefix}.model.encoder.layers.{layer_idx}.downsample.reduction.weight", ), ( f"{src_prefix}.layers.{layer_idx}.downsample.norm.weight", f"{dst_prefix}.model.encoder.layers.{layer_idx}.downsample.norm.weight", ), ( f"{src_prefix}.layers.{layer_idx}.downsample.norm.bias", f"{dst_prefix}.model.encoder.layers.{layer_idx}.downsample.norm.bias", ), ] ) # hidden states norms renamed_keys.extend( [ ( f"{src_prefix}.norm{layer_idx}.weight", f"{dst_prefix}.hidden_states_norms.{layer_idx}.weight", ), ( f"{src_prefix}.norm{layer_idx}.bias", f"{dst_prefix}.hidden_states_norms.{layer_idx}.bias", ), ] ) self.pop_all(renamed_keys, dst_state_dict, src_state_dict) def replace_pixel_module(self, dst_state_dict: StateDict, src_state_dict: StateDict): dst_prefix: str = "pixel_level_module.decoder" src_prefix: str = "sem_seg_head.pixel_decoder" self.replace_backbone(dst_state_dict, src_state_dict, self.config) def rename_keys_for_conv(detectron_conv: str, mine_conv: str): return [ (f"{detectron_conv}.weight", f"{mine_conv}.0.weight"), # 2 cuz the have act in the middle -> rename it (f"{detectron_conv}.norm.weight", f"{mine_conv}.1.weight"), (f"{detectron_conv}.norm.bias", f"{mine_conv}.1.bias"), ] renamed_keys = [ (f"{src_prefix}.mask_features.weight", f"{dst_prefix}.mask_projection.weight"), (f"{src_prefix}.mask_features.bias", f"{dst_prefix}.mask_projection.bias"), # the layers in the original one are in reverse order, stem is the last one! ] renamed_keys.extend(rename_keys_for_conv(f"{src_prefix}.layer_4", f"{dst_prefix}.fpn.stem")) # add all the fpn layers (here we need some config parameters to know the size in advance) for src_i, dst_i in zip(range(3, 0, -1), range(0, 3)): renamed_keys.extend( rename_keys_for_conv(f"{src_prefix}.adapter_{src_i}", f"{dst_prefix}.fpn.layers.{dst_i}.proj") ) renamed_keys.extend( rename_keys_for_conv(f"{src_prefix}.layer_{src_i}", f"{dst_prefix}.fpn.layers.{dst_i}.block") ) self.pop_all(renamed_keys, dst_state_dict, src_state_dict) def rename_keys_in_detr_decoder(self, dst_state_dict: StateDict, src_state_dict: StateDict): dst_prefix: str = "transformer_module.decoder" src_prefix: str = "sem_seg_head.predictor.transformer.decoder" # not sure why we are not popping direcetly here! 
# here we list all keys to be renamed (original name on the left, our name on the right) rename_keys = [] for i in range(self.config.decoder_config.decoder_layers): # decoder layers: 2 times output projection, 2 feedforward neural networks and 3 layernorms rename_keys.append( ( f"{src_prefix}.layers.{i}.self_attn.out_proj.weight", f"{dst_prefix}.layers.{i}.self_attn.out_proj.weight", ) ) rename_keys.append( ( f"{src_prefix}.layers.{i}.self_attn.out_proj.bias", f"{dst_prefix}.layers.{i}.self_attn.out_proj.bias", ) ) rename_keys.append( ( f"{src_prefix}.layers.{i}.multihead_attn.out_proj.weight", f"{dst_prefix}.layers.{i}.encoder_attn.out_proj.weight", ) ) rename_keys.append( ( f"{src_prefix}.layers.{i}.multihead_attn.out_proj.bias", f"{dst_prefix}.layers.{i}.encoder_attn.out_proj.bias", ) ) rename_keys.append((f"{src_prefix}.layers.{i}.linear1.weight", f"{dst_prefix}.layers.{i}.fc1.weight")) rename_keys.append((f"{src_prefix}.layers.{i}.linear1.bias", f"{dst_prefix}.layers.{i}.fc1.bias")) rename_keys.append((f"{src_prefix}.layers.{i}.linear2.weight", f"{dst_prefix}.layers.{i}.fc2.weight")) rename_keys.append((f"{src_prefix}.layers.{i}.linear2.bias", f"{dst_prefix}.layers.{i}.fc2.bias")) rename_keys.append( (f"{src_prefix}.layers.{i}.norm1.weight", f"{dst_prefix}.layers.{i}.self_attn_layer_norm.weight") ) rename_keys.append( (f"{src_prefix}.layers.{i}.norm1.bias", f"{dst_prefix}.layers.{i}.self_attn_layer_norm.bias") ) rename_keys.append( (f"{src_prefix}.layers.{i}.norm2.weight", f"{dst_prefix}.layers.{i}.encoder_attn_layer_norm.weight") ) rename_keys.append( (f"{src_prefix}.layers.{i}.norm2.bias", f"{dst_prefix}.layers.{i}.encoder_attn_layer_norm.bias") ) rename_keys.append( (f"{src_prefix}.layers.{i}.norm3.weight", f"{dst_prefix}.layers.{i}.final_layer_norm.weight") ) rename_keys.append( (f"{src_prefix}.layers.{i}.norm3.bias", f"{dst_prefix}.layers.{i}.final_layer_norm.bias") ) return rename_keys def replace_q_k_v_in_detr_decoder(self, dst_state_dict: StateDict, src_state_dict: StateDict): dst_prefix: str = "transformer_module.decoder" src_prefix: str = "sem_seg_head.predictor.transformer.decoder" for i in range(self.config.decoder_config.decoder_layers): # read in weights + bias of input projection layer of self-attention in_proj_weight = src_state_dict.pop(f"{src_prefix}.layers.{i}.self_attn.in_proj_weight") in_proj_bias = src_state_dict.pop(f"{src_prefix}.layers.{i}.self_attn.in_proj_bias") # next, add query, keys and values (in that order) to the state dict dst_state_dict[f"{dst_prefix}.layers.{i}.self_attn.q_proj.weight"] = in_proj_weight[:256, :] dst_state_dict[f"{dst_prefix}.layers.{i}.self_attn.q_proj.bias"] = in_proj_bias[:256] dst_state_dict[f"{dst_prefix}.layers.{i}.self_attn.k_proj.weight"] = in_proj_weight[256:512, :] dst_state_dict[f"{dst_prefix}.layers.{i}.self_attn.k_proj.bias"] = in_proj_bias[256:512] dst_state_dict[f"{dst_prefix}.layers.{i}.self_attn.v_proj.weight"] = in_proj_weight[-256:, :] dst_state_dict[f"{dst_prefix}.layers.{i}.self_attn.v_proj.bias"] = in_proj_bias[-256:] # read in weights + bias of input projection layer of cross-attention in_proj_weight_cross_attn = src_state_dict.pop(f"{src_prefix}.layers.{i}.multihead_attn.in_proj_weight") in_proj_bias_cross_attn = src_state_dict.pop(f"{src_prefix}.layers.{i}.multihead_attn.in_proj_bias") # next, add query, keys and values (in that order) of cross-attention to the state dict dst_state_dict[f"{dst_prefix}.layers.{i}.encoder_attn.q_proj.weight"] = in_proj_weight_cross_attn[:256, :] 
dst_state_dict[f"{dst_prefix}.layers.{i}.encoder_attn.q_proj.bias"] = in_proj_bias_cross_attn[:256] dst_state_dict[f"{dst_prefix}.layers.{i}.encoder_attn.k_proj.weight"] = in_proj_weight_cross_attn[ 256:512, : ] dst_state_dict[f"{dst_prefix}.layers.{i}.encoder_attn.k_proj.bias"] = in_proj_bias_cross_attn[256:512] dst_state_dict[f"{dst_prefix}.layers.{i}.encoder_attn.v_proj.weight"] = in_proj_weight_cross_attn[-256:, :] dst_state_dict[f"{dst_prefix}.layers.{i}.encoder_attn.v_proj.bias"] = in_proj_bias_cross_attn[-256:] def replace_detr_decoder(self, dst_state_dict: StateDict, src_state_dict: StateDict): dst_prefix: str = "transformer_module.decoder" src_prefix: str = "sem_seg_head.predictor.transformer.decoder" renamed_keys = self.rename_keys_in_detr_decoder(dst_state_dict, src_state_dict) # add more renamed_keys.extend( [ (f"{src_prefix}.norm.weight", f"{dst_prefix}.layernorm.weight"), (f"{src_prefix}.norm.bias", f"{dst_prefix}.layernorm.bias"), ] ) self.pop_all(renamed_keys, dst_state_dict, src_state_dict) self.replace_q_k_v_in_detr_decoder(dst_state_dict, src_state_dict) def replace_transformer_module(self, dst_state_dict: StateDict, src_state_dict: StateDict): dst_prefix: str = "transformer_module" src_prefix: str = "sem_seg_head.predictor" self.replace_detr_decoder(dst_state_dict, src_state_dict) renamed_keys = [ (f"{src_prefix}.query_embed.weight", f"{dst_prefix}.queries_embedder.weight"), (f"{src_prefix}.input_proj.weight", f"{dst_prefix}.input_projection.weight"), (f"{src_prefix}.input_proj.bias", f"{dst_prefix}.input_projection.bias"), ] self.pop_all(renamed_keys, dst_state_dict, src_state_dict) def replace_instance_segmentation_module(self, dst_state_dict: StateDict, src_state_dict: StateDict): # NOTE in our case we don't have a prefix, thus we removed the "." from the keys later on! 
dst_prefix: str = "" src_prefix: str = "sem_seg_head.predictor" renamed_keys = [ (f"{src_prefix}.class_embed.weight", f"{dst_prefix}class_predictor.weight"), (f"{src_prefix}.class_embed.bias", f"{dst_prefix}class_predictor.bias"), ] mlp_len = 3 for i in range(mlp_len): renamed_keys.extend( [ (f"{src_prefix}.mask_embed.layers.{i}.weight", f"{dst_prefix}mask_embedder.{i}.0.weight"), (f"{src_prefix}.mask_embed.layers.{i}.bias", f"{dst_prefix}mask_embedder.{i}.0.bias"), ] ) logger.info(f"Replacing keys {pformat(renamed_keys)}") self.pop_all(renamed_keys, dst_state_dict, src_state_dict) def convert(self, mask_former: MaskFormerModel) -> MaskFormerModel: dst_state_dict = TrackedStateDict(mask_former.state_dict()) src_state_dict = self.original_model.state_dict() self.replace_pixel_module(dst_state_dict, src_state_dict) self.replace_transformer_module(dst_state_dict, src_state_dict) logger.info(f"Missed keys are {pformat(dst_state_dict.diff())}") logger.info(f"Not copied keys are {pformat(src_state_dict.keys())}") logger.info("🙌 Done") mask_former.load_state_dict(dst_state_dict) return mask_former def convert_instance_segmentation( self, mask_former: MaskFormerForInstanceSegmentation ) -> MaskFormerForInstanceSegmentation: dst_state_dict = TrackedStateDict(mask_former.state_dict()) src_state_dict = self.original_model.state_dict() self.replace_instance_segmentation_module(dst_state_dict, src_state_dict) mask_former.load_state_dict(dst_state_dict) return mask_former @staticmethod def using_dirs(checkpoints_dir: Path, config_dir: Path) -> Iterator[Tuple[object, Path, Path]]: checkpoints: List[Path] = checkpoints_dir.glob("**/*.pkl") for checkpoint in checkpoints: logger.info(f"💪 Converting {checkpoint.stem}") # find associated config file config: Path = config_dir / checkpoint.parents[0].stem / "swin" / f"{checkpoint.stem}.yaml" yield config, checkpoint def test(original_model, our_model: MaskFormerForInstanceSegmentation, image_processor: MaskFormerImageProcessor): with torch.no_grad(): original_model = original_model.eval() our_model = our_model.eval() im = prepare_img() tr = T.Compose( [ T.Resize((384, 384)), T.ToTensor(), T.Normalize( mean=torch.tensor([123.675, 116.280, 103.530]) / 255.0, std=torch.tensor([58.395, 57.120, 57.375]) / 255.0, ), ], ) x = tr(im).unsqueeze(0) original_model_backbone_features = original_model.backbone(x.clone()) our_model_output: MaskFormerModelOutput = our_model.model(x.clone(), output_hidden_states=True) for original_model_feature, our_model_feature in zip( original_model_backbone_features.values(), our_model_output.encoder_hidden_states ): assert torch.allclose( original_model_feature, our_model_feature, atol=1e-3 ), "The backbone features are not the same." original_model_pixel_out = original_model.sem_seg_head.pixel_decoder.forward_features( original_model_backbone_features ) assert torch.allclose( original_model_pixel_out[0], our_model_output.pixel_decoder_last_hidden_state, atol=1e-4 ), "The pixel decoder feature are not the same" # let's test the full model original_model_out = original_model([{"image": x.squeeze(0)}]) original_segmentation = original_model_out[0]["sem_seg"] our_model_out: MaskFormerForInstanceSegmentationOutput = our_model(x) our_segmentation = image_processor.post_process_segmentation(our_model_out, target_size=(384, 384)) assert torch.allclose( original_segmentation, our_segmentation, atol=1e-3 ), "The segmentation image is not the same." 
logger.info("✅ Test passed!") def get_name(checkpoint_file: Path): model_name_raw: str = checkpoint_file.stem # model_name_raw is something like maskformer_panoptic_swin_base_IN21k_384_bs64_554k parent_name: str = checkpoint_file.parents[0].stem backbone = "swin" dataset = "" if "coco" in parent_name: dataset = "coco" elif "ade" in parent_name: dataset = "ade" else: raise ValueError(f"{parent_name} must be wrong since we didn't find 'coco' or 'ade' in it ") backbone_types = ["tiny", "small", "base", "large"] backbone_type = list(filter(lambda x: x in model_name_raw, backbone_types))[0] model_name = f"maskformer-{backbone}-{backbone_type}-{dataset}" return model_name if __name__ == "__main__": parser = ArgumentParser( description="Command line to convert the original maskformers (with swin backbone) to our implementations." ) parser.add_argument( "--checkpoints_dir", type=Path, help=( "A directory containing the model's checkpoints. The directory has to have the following structure:" " <DIR_NAME>/<DATASET_NAME>/<CONFIG_NAME>.pkl" ), ) parser.add_argument( "--configs_dir", type=Path, help=( "A directory containing the model's configs, see detectron2 doc. The directory has to have the following" " structure: <DIR_NAME>/<DATASET_NAME>/<CONFIG_NAME>.yaml" ), ) parser.add_argument( "--pytorch_dump_folder_path", required=True, type=Path, help="Path to the folder to output PyTorch models.", ) parser.add_argument( "--maskformer_dir", required=True, type=Path, help=( "A path to MaskFormer's original implementation directory. You can download from here:" " https://github.com/facebookresearch/MaskFormer" ), ) args = parser.parse_args() checkpoints_dir: Path = args.checkpoints_dir config_dir: Path = args.configs_dir save_directory: Path = args.pytorch_dump_folder_path maskformer_dir: Path = args.maskformer_dir # append the path to the parents to maskformer dir sys.path.append(str(maskformer_dir.parent)) # and import what's needed from MaskFormer.mask_former import add_mask_former_config from MaskFormer.mask_former.mask_former_model import MaskFormer as OriginalMaskFormer if not save_directory.exists(): save_directory.mkdir(parents=True) for config_file, checkpoint_file in OriginalMaskFormerCheckpointToOursConverter.using_dirs( checkpoints_dir, config_dir ): image_processor = OriginalMaskFormerConfigToImageProcessorConverter()(setup_cfg(Args(config_file=config_file))) original_config = setup_cfg(Args(config_file=config_file)) mask_former_kwargs = OriginalMaskFormer.from_config(original_config) original_model = OriginalMaskFormer(**mask_former_kwargs).eval() DetectionCheckpointer(original_model).load(str(checkpoint_file)) config: MaskFormerConfig = OriginalMaskFormerConfigToOursConverter()(original_config) mask_former = MaskFormerModel(config=config).eval() converter = OriginalMaskFormerCheckpointToOursConverter(original_model, config) maskformer = converter.convert(mask_former) mask_former_for_instance_segmentation = MaskFormerForInstanceSegmentation(config=config).eval() mask_former_for_instance_segmentation.model = mask_former mask_former_for_instance_segmentation = converter.convert_instance_segmentation( mask_former_for_instance_segmentation ) test(original_model, mask_former_for_instance_segmentation, image_processor) model_name = get_name(checkpoint_file) logger.info(f"🪄 Saving {model_name}") image_processor.save_pretrained(save_directory / model_name) mask_former_for_instance_segmentation.save_pretrained(save_directory / model_name) image_processor.push_to_hub( repo_path_or_name=save_directory / 
model_name, commit_message="Add model", use_temp_dir=True, ) mask_former_for_instance_segmentation.push_to_hub( repo_path_or_name=save_directory / model_name, commit_message="Add model", use_temp_dir=True, )
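# Illustrative usage sketch (not part of the original script). The command line and the output
# folder name below are assumptions based on the argparse help strings and `get_name` above;
# replace the placeholder paths with your own setup.
#
#   python convert_maskformer_original_pytorch_checkpoint_to_pytorch.py \
#       --checkpoints_dir /path/to/checkpoints \
#       --configs_dir /path/to/configs \
#       --pytorch_dump_folder_path /path/to/converted_models \
#       --maskformer_dir /path/to/MaskFormer
#
# A converted checkpoint can then be reloaded with the standard API (folder name hypothetical):
#
#   from transformers import MaskFormerForInstanceSegmentation, MaskFormerImageProcessor
#   model = MaskFormerForInstanceSegmentation.from_pretrained("/path/to/converted_models/maskformer-swin-base-coco")
#   image_processor = MaskFormerImageProcessor.from_pretrained("/path/to/converted_models/maskformer-swin-base-coco")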
transformers/src/transformers/models/maskformer/convert_maskformer_original_pytorch_checkpoint_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/maskformer/convert_maskformer_original_pytorch_checkpoint_to_pytorch.py", "repo_id": "transformers", "token_count": 16104 }
361
# coding=utf-8 # Copyright 2021 The Facebook AI Research Team Authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os from shutil import copyfile from typing import Any, Dict, List, Optional, Tuple import sentencepiece as spm from ...tokenization_utils import AddedToken, BatchEncoding, PreTrainedTokenizer from ...utils import logging logger = logging.get_logger(__name__) SPIECE_UNDERLINE = "▁" VOCAB_FILES_NAMES = {"vocab_file": "sentencepiece.bpe.model"} FAIRSEQ_LANGUAGE_CODES = ["ar_AR", "cs_CZ", "de_DE", "en_XX", "es_XX", "et_EE", "fi_FI", "fr_XX", "gu_IN", "hi_IN", "it_IT", "ja_XX", "kk_KZ", "ko_KR", "lt_LT", "lv_LV", "my_MM", "ne_NP", "nl_XX", "ro_RO", "ru_RU", "si_LK", "tr_TR", "vi_VN", "zh_CN", "af_ZA", "az_AZ", "bn_IN", "fa_IR", "he_IL", "hr_HR", "id_ID", "ka_GE", "km_KH", "mk_MK", "ml_IN", "mn_MN", "mr_IN", "pl_PL", "ps_AF", "pt_XX", "sv_SE", "sw_KE", "ta_IN", "te_IN", "th_TH", "tl_XX", "uk_UA", "ur_PK", "xh_ZA", "gl_ES", "sl_SI"] # fmt: skip class MBart50Tokenizer(PreTrainedTokenizer): """ Construct a MBart50 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. src_lang (`str`, *optional*): A string representing the source language. tgt_lang (`str`, *optional*): A string representing the target language. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (`str`, *optional*, defaults to `"<s>"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. mask_token (`str`, *optional*, defaults to `"<mask>"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. 
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Examples: ```python >>> from transformers import MBart50Tokenizer >>> tokenizer = MBart50Tokenizer.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO") >>> src_text = " UN Chief Says There Is No Military Solution in Syria" >>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria" >>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt") >>> # model(**model_inputs) should work ```""" vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] prefix_tokens: List[int] = [] suffix_tokens: List[int] = [] def __init__( self, vocab_file, src_lang=None, tgt_lang=None, eos_token="</s>", sep_token="</s>", cls_token="<s>", unk_token="<unk>", pad_token="<pad>", mask_token="<mask>", sp_model_kwargs: Optional[Dict[str, Any]] = None, **kwargs, ) -> None: # Mask token behave like a normal word, i.e. include the space before it mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs kwargs["additional_special_tokens"] = kwargs.get("additional_special_tokens", []) or [] kwargs["additional_special_tokens"] += [ code for code in FAIRSEQ_LANGUAGE_CODES if code not in kwargs["additional_special_tokens"] ] self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.Load(str(vocab_file)) self.vocab_file = vocab_file # Original fairseq vocab and spm vocab must be "aligned": # Vocab | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 # -------- | ------- | ------- | ------ | ------- | --- | --- | --- | ----- | ----- | ---- # fairseq | '<s>' | '<pad>' | '</s>' | '<unk>' | ',' | '.' | '▁' | 's' | '▁de' | '-' # spm | '<unk>' | '<s>' | '</s>' | ',' | '.' 
| '▁' | 's' | '▁de' | '-' | '▁a' # Mimic fairseq token-to-id alignment for the first 4 token self.fairseq_tokens_to_ids = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3} # The first "real" token "," has position 4 in the original fairseq vocab and position 3 in the spm vocab self.fairseq_offset = 1 self.sp_model_size = len(self.sp_model) self.lang_code_to_id = { code: self.sp_model_size + i + self.fairseq_offset for i, code in enumerate(FAIRSEQ_LANGUAGE_CODES) } self.id_to_lang_code = {v: k for k, v in self.lang_code_to_id.items()} self.fairseq_tokens_to_ids["<mask>"] = len(self.sp_model) + len(self.lang_code_to_id) + self.fairseq_offset self.fairseq_tokens_to_ids.update(self.lang_code_to_id) self.fairseq_ids_to_tokens = {v: k for k, v in self.fairseq_tokens_to_ids.items()} super().__init__( src_lang=src_lang, tgt_lang=tgt_lang, eos_token=eos_token, unk_token=unk_token, sep_token=sep_token, cls_token=cls_token, pad_token=pad_token, mask_token=mask_token, sp_model_kwargs=self.sp_model_kwargs, **kwargs, ) self._src_lang = src_lang if src_lang is not None else "en_XX" self.cur_lang_code_id = self.lang_code_to_id[self._src_lang] self.tgt_lang = tgt_lang self.set_src_lang_special_tokens(self._src_lang) @property def vocab_size(self) -> int: return len(self.sp_model) + len(self.lang_code_to_id) + self.fairseq_offset + 1 # Plus 1 for the mask token @property def src_lang(self) -> str: return self._src_lang @src_lang.setter def src_lang(self, new_src_lang: str) -> None: self._src_lang = new_src_lang self.set_src_lang_special_tokens(self._src_lang) def __getstate__(self) -> Dict: state = self.__dict__.copy() state["sp_model"] = None return state def __setstate__(self, d: Dict) -> None: self.__dict__ = d # for backward compatibility if not hasattr(self, "sp_model_kwargs"): self.sp_model_kwargs = {} self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.Load(self.vocab_file) def get_vocab(self) -> Dict: vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} vocab.update(self.added_tokens_encoder) return vocab def _tokenize(self, text: str) -> List[str]: return self.sp_model.encode(text, out_type=str) def _convert_token_to_id(self, token: str) -> int: """Converts a token (str) in an id using the vocab.""" if token in self.fairseq_tokens_to_ids: return self.fairseq_tokens_to_ids[token] spm_id = self.sp_model.PieceToId(token) # Need to return unknown token if the SP model returned 0 return spm_id + self.fairseq_offset if spm_id else self.unk_token_id def _convert_id_to_token(self, index: int) -> str: """Converts an index (integer) in a token (str) using the vocab.""" if index in self.fairseq_ids_to_tokens: return self.fairseq_ids_to_tokens[index] return self.sp_model.IdToPiece(index - self.fairseq_offset) # Copied from transformers.models.albert.tokenization_albert.AlbertTokenizer.convert_tokens_to_string def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" current_sub_tokens = [] out_string = "" prev_is_special = False for token in tokens: # make sure that special tokens are not decoded using sentencepiece model if token in self.all_special_tokens: if not prev_is_special: out_string += " " out_string += self.sp_model.decode(current_sub_tokens) + token prev_is_special = True current_sub_tokens = [] else: current_sub_tokens.append(token) prev_is_special = False out_string += self.sp_model.decode(current_sub_tokens) return out_string.strip() def save_vocabulary(self, save_directory: str, filename_prefix: 
Optional[str] = None) -> Tuple[str]: if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return out_vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] ) if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file): copyfile(self.vocab_file, out_vocab_file) elif not os.path.isfile(self.vocab_file): with open(out_vocab_file, "wb") as fi: content_spiece_model = self.sp_model.serialized_model_proto() fi.write(content_spiece_model) return (out_vocab_file,) def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) prefix_ones = [1] * len(self.prefix_tokens) suffix_ones = [1] * len(self.suffix_tokens) if token_ids_1 is None: return prefix_ones + ([0] * len(token_ids_0)) + suffix_ones return prefix_ones + ([0] * len(token_ids_0)) + ([0] * len(token_ids_1)) + suffix_ones def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An MBART-50 sequence has the following format, where `X` represents the sequence: - `input_ids` (for encoder) `[src_lang_code] X [eos]` - `labels`: (for decoder) `[tgt_lang_code] X [eos]` BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator. Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. 
""" if token_ids_1 is None: return self.prefix_tokens + token_ids_0 + self.suffix_tokens # We don't expect to process pairs, but leave the pair logic for API consistency return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens def _build_translation_inputs( self, raw_inputs, return_tensors: str, src_lang: Optional[str], tgt_lang: Optional[str], **extra_kwargs ): """Used by translation pipeline, to prepare inputs for the generate function""" if src_lang is None or tgt_lang is None: raise ValueError("Translation requires a `src_lang` and a `tgt_lang` for this model") self.src_lang = src_lang inputs = self(raw_inputs, add_special_tokens=True, return_tensors=return_tensors, **extra_kwargs) tgt_lang_id = self.convert_tokens_to_ids(tgt_lang) inputs["forced_bos_token_id"] = tgt_lang_id return inputs def prepare_seq2seq_batch( self, src_texts: List[str], src_lang: str = "en_XX", tgt_texts: Optional[List[str]] = None, tgt_lang: str = "ro_RO", **kwargs, ) -> BatchEncoding: self.src_lang = src_lang self.tgt_lang = tgt_lang return super().prepare_seq2seq_batch(src_texts, tgt_texts, **kwargs) def _switch_to_input_mode(self): return self.set_src_lang_special_tokens(self.src_lang) def _switch_to_target_mode(self): return self.set_tgt_lang_special_tokens(self.tgt_lang) def set_src_lang_special_tokens(self, src_lang: str) -> None: """Reset the special tokens to the source lang setting. prefix=[src_lang_code] and suffix=[eos].""" self.cur_lang_code_id = self.lang_code_to_id[src_lang] self.prefix_tokens = [self.cur_lang_code_id] self.suffix_tokens = [self.eos_token_id] def set_tgt_lang_special_tokens(self, tgt_lang: str) -> None: """Reset the special tokens to the target language setting. prefix=[tgt_lang_code] and suffix=[eos].""" self.cur_lang_code_id = self.lang_code_to_id[tgt_lang] self.prefix_tokens = [self.cur_lang_code_id] self.suffix_tokens = [self.eos_token_id]
transformers/src/transformers/models/mbart50/tokenization_mbart50.py/0
{ "file_path": "transformers/src/transformers/models/mbart50/tokenization_mbart50.py", "repo_id": "transformers", "token_count": 7106 }
362
# Copyright 2023 Mistral AI and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import gc import json import os import shutil import warnings import torch from safetensors.torch import load_file as safe_load_file from transformers import ( LlamaTokenizer, MistralConfig, MistralForCausalLM, ) try: from transformers import LlamaTokenizerFast tokenizer_class = LlamaTokenizerFast except ImportError as e: warnings.warn(e) warnings.warn( "The converted tokenizer will be the `slow` tokenizer. To use the fast, update your `tokenizers` library and re-run the tokenizer conversion" ) tokenizer_class = LlamaTokenizer """ Sample usage: ``` python src/transformers/models/mistral/convert_mistral_weights_to_hf.py \ --input_dir /path/to/downloaded/mistral/weights --model_size 7B --output_dir /output/path ``` Thereafter, models can be loaded via: ```py from transformers import MistralForCausalLM, LlamaTokenizer model = MistralForCausalLM.from_pretrained("/output/path") tokenizer = LlamaTokenizer.from_pretrained("/output/path") ``` Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). 
""" NUM_SHARDS = {"7B": 1} def compute_intermediate_size(n, ffn_dim_multiplier=1, multiple_of=256): return multiple_of * ((int(ffn_dim_multiplier * int(8 * n / 3)) + multiple_of - 1) // multiple_of) def read_json(path): with open(path, "r") as f: return json.load(f) def write_json(text, path): with open(path, "w") as f: json.dump(text, f) def write_model(model_path, input_base_path, model_size, tokenizer_path=None, safe_serialization=True, is_v3=False): # for backward compatibility, before you needed the repo to be called `my_repo/model_size` if not os.path.isfile(os.path.join(input_base_path, "params.json")): input_base_path = os.path.join(input_base_path, model_size) os.makedirs(model_path, exist_ok=True) tmp_model_path = os.path.join(model_path, "tmp") os.makedirs(tmp_model_path, exist_ok=True) params = read_json(os.path.join(input_base_path, "params.json")) num_shards = NUM_SHARDS[model_size] sliding_window = params.get("sliding_window", None) # For some reason this is a string in the params.json if sliding_window is not None: sliding_window = int(sliding_window) n_layers = params["n_layers"] n_heads = params["n_heads"] n_heads_per_shard = n_heads // num_shards dim = params["dim"] dims_per_head = dim // n_heads base = params.get("rope_theta", 10000.0) inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head)) max_position_embeddings = 4096 * 8 if tokenizer_path is not None: tokenizer = tokenizer_class(tokenizer_path + ".v3" if is_v3 else "") tokenizer.save_pretrained(model_path) vocab_size = tokenizer.vocab_size if tokenizer_path is not None else 32000 if "n_kv_heads" in params: num_key_value_heads = params["n_kv_heads"] # for GQA / MQA num_local_key_value_heads = num_key_value_heads // num_shards key_value_dim = dims_per_head * num_local_key_value_heads else: # compatibility with other checkpoints num_key_value_heads = n_heads num_local_key_value_heads = n_heads_per_shard key_value_dim = dim # permute for sliced rotary def permute(w, n_heads=n_heads, dim1=dim, dim2=dim): return w.view(n_heads, dim1 // n_heads // 2, 2, dim2).transpose(1, 2).reshape(dim1, dim2) print(f"Fetching all parameters from the checkpoint at {input_base_path}.") # Load weights - for v3 models the consolidated weights are in a single file format in safetensors if is_v3: loaded = [safe_load_file(os.path.join(input_base_path, "consolidated.safetensors"))] else: loaded = [ torch.load(os.path.join(input_base_path, f"consolidated.{i:02d}.pth"), map_location="cpu") for i in range(num_shards) ] param_count = 0 index_dict = {"weight_map": {}} for layer_i in range(n_layers): filename = f"pytorch_model-{layer_i + 1}-of-{n_layers + 1}.bin" # Sharded # Note that attention.w{q,k,v,o}, feed_fordward.w[1,2,3], attention_norm.weight and ffn_norm.weight share # the same storage object, saving attention_norm and ffn_norm will save other weights too, which is # redundant as other weights will be stitched from multiple shards. To avoid that, they are cloned. 
state_dict = { f"model.layers.{layer_i}.input_layernorm.weight": loaded[0][ f"layers.{layer_i}.attention_norm.weight" ].clone(), f"model.layers.{layer_i}.post_attention_layernorm.weight": loaded[0][ f"layers.{layer_i}.ffn_norm.weight" ].clone(), } state_dict[f"model.layers.{layer_i}.self_attn.q_proj.weight"] = permute( torch.cat( [ loaded[i][f"layers.{layer_i}.attention.wq.weight"].view(n_heads_per_shard, dims_per_head, dim) for i in range(num_shards) ], dim=0, ).reshape(dim, dim) ) state_dict[f"model.layers.{layer_i}.self_attn.k_proj.weight"] = permute( torch.cat( [ loaded[i][f"layers.{layer_i}.attention.wk.weight"].view( num_local_key_value_heads, dims_per_head, dim ) for i in range(num_shards) ], dim=0, ).reshape(key_value_dim, dim), num_key_value_heads, key_value_dim, dim, ) state_dict[f"model.layers.{layer_i}.self_attn.v_proj.weight"] = torch.cat( [ loaded[i][f"layers.{layer_i}.attention.wv.weight"].view(num_local_key_value_heads, dims_per_head, dim) for i in range(num_shards) ], dim=0, ).reshape(key_value_dim, dim) state_dict[f"model.layers.{layer_i}.self_attn.o_proj.weight"] = torch.cat( [loaded[i][f"layers.{layer_i}.attention.wo.weight"] for i in range(num_shards)], dim=1 ) state_dict[f"model.layers.{layer_i}.mlp.gate_proj.weight"] = torch.cat( [loaded[i][f"layers.{layer_i}.feed_forward.w1.weight"] for i in range(num_shards)], dim=0 ) state_dict[f"model.layers.{layer_i}.mlp.down_proj.weight"] = torch.cat( [loaded[i][f"layers.{layer_i}.feed_forward.w2.weight"] for i in range(num_shards)], dim=1 ) state_dict[f"model.layers.{layer_i}.mlp.up_proj.weight"] = torch.cat( [loaded[i][f"layers.{layer_i}.feed_forward.w3.weight"] for i in range(num_shards)], dim=0 ) state_dict[f"model.layers.{layer_i}.self_attn.rotary_emb.inv_freq"] = inv_freq for k, v in state_dict.items(): index_dict["weight_map"][k] = filename param_count += v.numel() torch.save(state_dict, os.path.join(tmp_model_path, filename)) filename = f"pytorch_model-{n_layers + 1}-of-{n_layers + 1}.bin" state_dict = { "model.norm.weight": loaded[0]["norm.weight"], "model.embed_tokens.weight": torch.cat([loaded[i]["tok_embeddings.weight"] for i in range(num_shards)], dim=1), "lm_head.weight": torch.cat([loaded[i]["output.weight"] for i in range(num_shards)], dim=0), } for k, v in state_dict.items(): index_dict["weight_map"][k] = filename param_count += v.numel() torch.save(state_dict, os.path.join(tmp_model_path, filename)) # Write configs index_dict["metadata"] = {"total_size": param_count * 2} write_json(index_dict, os.path.join(tmp_model_path, "pytorch_model.bin.index.json")) config = MistralConfig( hidden_size=dim, intermediate_size=params["hidden_dim"], num_attention_heads=params["n_heads"], num_hidden_layers=params["n_layers"], rms_norm_eps=params["norm_eps"], num_key_value_heads=num_key_value_heads, vocab_size=vocab_size, rope_theta=base, max_position_embeddings=max_position_embeddings, sliding_window=sliding_window, ) config.save_pretrained(tmp_model_path) # Make space so we can load the model properly now. del state_dict del loaded gc.collect() print("Loading the checkpoint in a Mistral model.") model = MistralForCausalLM.from_pretrained(tmp_model_path, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True) # Avoid saving this as part of the config. 
del model.config._name_or_path model.config.torch_dtype = torch.float16 print("Saving in the Transformers format.") model.save_pretrained(model_path, safe_serialization=safe_serialization) shutil.rmtree(tmp_model_path) def write_tokenizer(tokenizer_path, input_tokenizer_path): # Initialize the tokenizer based on the `spm` model print(f"Saving a {tokenizer_class.__name__} to {tokenizer_path}.") tokenizer = tokenizer_class(input_tokenizer_path) tokenizer.save_pretrained(tokenizer_path) def main(): parser = argparse.ArgumentParser() parser.add_argument( "--input_dir", help="Location of Mistral weights, which contains tokenizer.model and model folders", ) parser.add_argument( "--model_size", choices=["7B", "tokenizer_only"], help="'f' models correspond to the finetuned versions, and are specific to the Mistral2 official release. For more details on Mistral2, checkout the original repo: https://huggingface.co/meta-mistral", ) parser.add_argument( "--output_dir", help="Location to write HF model and tokenizer", ) parser.add_argument("--safe_serialization", type=bool, help="Whether or not to save using `safetensors`.") parser.add_argument( "--is_v3", action="store_true", help="Whether the checkpoints correspond to the 3rd version or not." ) args = parser.parse_args() spm_path = os.path.join(args.input_dir, "tokenizer.model") if args.model_size != "tokenizer_only": write_model( model_path=args.output_dir, input_base_path=args.input_dir, model_size=args.model_size, safe_serialization=args.safe_serialization, tokenizer_path=spm_path, is_v3=args.is_v3, ) else: write_tokenizer(args.output_dir, spm_path) if __name__ == "__main__": main()
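# Illustrative sketch (not part of the original script): the `--model_size tokenizer_only` branch
# of `main` above converts just the SentencePiece tokenizer. Paths are placeholders.
#
#   python src/transformers/models/mistral/convert_mistral_weights_to_hf.py \
#       --input_dir /path/to/downloaded/mistral/weights \
#       --model_size tokenizer_only \
#       --output_dir /output/tokenizer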
transformers/src/transformers/models/mistral/convert_mistral_weights_to_hf.py/0
{ "file_path": "transformers/src/transformers/models/mistral/convert_mistral_weights_to_hf.py", "repo_id": "transformers", "token_count": 4851 }
363
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert MobileViT checkpoints from the ml-cvnets library.""" import argparse import json from pathlib import Path import requests import torch from huggingface_hub import hf_hub_download from PIL import Image from transformers import ( MobileViTConfig, MobileViTForImageClassification, MobileViTForSemanticSegmentation, MobileViTImageProcessor, ) from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger(__name__) def get_mobilevit_config(mobilevit_name): config = MobileViTConfig() # size of the architecture if "mobilevit_s" in mobilevit_name: config.hidden_sizes = [144, 192, 240] config.neck_hidden_sizes = [16, 32, 64, 96, 128, 160, 640] elif "mobilevit_xs" in mobilevit_name: config.hidden_sizes = [96, 120, 144] config.neck_hidden_sizes = [16, 32, 48, 64, 80, 96, 384] elif "mobilevit_xxs" in mobilevit_name: config.hidden_sizes = [64, 80, 96] config.neck_hidden_sizes = [16, 16, 24, 48, 64, 80, 320] config.hidden_dropout_prob = 0.05 config.expand_ratio = 2.0 if mobilevit_name.startswith("deeplabv3_"): config.image_size = 512 config.output_stride = 16 config.num_labels = 21 filename = "pascal-voc-id2label.json" else: config.num_labels = 1000 filename = "imagenet-1k-id2label.json" repo_id = "huggingface/label-files" id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) id2label = {int(k): v for k, v in id2label.items()} config.id2label = id2label config.label2id = {v: k for k, v in id2label.items()} return config def rename_key(name, base_model=False): for i in range(1, 6): if f"layer_{i}." in name: name = name.replace(f"layer_{i}.", f"encoder.layer.{i - 1}.") if "conv_1." in name: name = name.replace("conv_1.", "conv_stem.") if ".block." in name: name = name.replace(".block.", ".") if "exp_1x1" in name: name = name.replace("exp_1x1", "expand_1x1") if "red_1x1" in name: name = name.replace("red_1x1", "reduce_1x1") if ".local_rep.conv_3x3." in name: name = name.replace(".local_rep.conv_3x3.", ".conv_kxk.") if ".local_rep.conv_1x1." in name: name = name.replace(".local_rep.conv_1x1.", ".conv_1x1.") if ".norm." in name: name = name.replace(".norm.", ".normalization.") if ".conv." in name: name = name.replace(".conv.", ".convolution.") if ".conv_proj." in name: name = name.replace(".conv_proj.", ".conv_projection.") for i in range(0, 2): for j in range(0, 4): if f".{i}.{j}." in name: name = name.replace(f".{i}.{j}.", f".{i}.layer.{j}.") for i in range(2, 6): for j in range(0, 4): if f".{i}.{j}." 
in name: name = name.replace(f".{i}.{j}.", f".{i}.") if "expand_1x1" in name: name = name.replace("expand_1x1", "downsampling_layer.expand_1x1") if "conv_3x3" in name: name = name.replace("conv_3x3", "downsampling_layer.conv_3x3") if "reduce_1x1" in name: name = name.replace("reduce_1x1", "downsampling_layer.reduce_1x1") for i in range(2, 5): if f".global_rep.{i}.weight" in name: name = name.replace(f".global_rep.{i}.weight", ".layernorm.weight") if f".global_rep.{i}.bias" in name: name = name.replace(f".global_rep.{i}.bias", ".layernorm.bias") if ".global_rep." in name: name = name.replace(".global_rep.", ".transformer.") if ".pre_norm_mha.0." in name: name = name.replace(".pre_norm_mha.0.", ".layernorm_before.") if ".pre_norm_mha.1.out_proj." in name: name = name.replace(".pre_norm_mha.1.out_proj.", ".attention.output.dense.") if ".pre_norm_ffn.0." in name: name = name.replace(".pre_norm_ffn.0.", ".layernorm_after.") if ".pre_norm_ffn.1." in name: name = name.replace(".pre_norm_ffn.1.", ".intermediate.dense.") if ".pre_norm_ffn.4." in name: name = name.replace(".pre_norm_ffn.4.", ".output.dense.") if ".transformer." in name: name = name.replace(".transformer.", ".transformer.layer.") if ".aspp_layer." in name: name = name.replace(".aspp_layer.", ".") if ".aspp_pool." in name: name = name.replace(".aspp_pool.", ".") if "seg_head." in name: name = name.replace("seg_head.", "segmentation_head.") if "segmentation_head.classifier.classifier." in name: name = name.replace("segmentation_head.classifier.classifier.", "segmentation_head.classifier.") if "classifier.fc." in name: name = name.replace("classifier.fc.", "classifier.") elif (not base_model) and ("segmentation_head." not in name): name = "mobilevit." + name return name def convert_state_dict(orig_state_dict, model, base_model=False): if base_model: model_prefix = "" else: model_prefix = "mobilevit." for key in orig_state_dict.copy().keys(): val = orig_state_dict.pop(key) if key[:8] == "encoder.": key = key[8:] if "qkv" in key: key_split = key.split(".") layer_num = int(key_split[0][6:]) - 1 transformer_num = int(key_split[3]) layer = model.get_submodule(f"{model_prefix}encoder.layer.{layer_num}") dim = layer.transformer.layer[transformer_num].attention.attention.all_head_size prefix = ( f"{model_prefix}encoder.layer.{layer_num}.transformer.layer.{transformer_num}.attention.attention." ) if "weight" in key: orig_state_dict[prefix + "query.weight"] = val[:dim, :] orig_state_dict[prefix + "key.weight"] = val[dim : dim * 2, :] orig_state_dict[prefix + "value.weight"] = val[-dim:, :] else: orig_state_dict[prefix + "query.bias"] = val[:dim] orig_state_dict[prefix + "key.bias"] = val[dim : dim * 2] orig_state_dict[prefix + "value.bias"] = val[-dim:] else: orig_state_dict[rename_key(key, base_model)] = val return orig_state_dict # We will verify our results on an image of cute cats def prepare_img(): url = "http://images.cocodataset.org/val2017/000000039769.jpg" im = Image.open(requests.get(url, stream=True).raw) return im @torch.no_grad() def convert_movilevit_checkpoint(mobilevit_name, checkpoint_path, pytorch_dump_folder_path, push_to_hub=False): """ Copy/paste/tweak model's weights to our MobileViT structure. 
""" config = get_mobilevit_config(mobilevit_name) # load original state_dict state_dict = torch.load(checkpoint_path, map_location="cpu") # load 🀗 model if mobilevit_name.startswith("deeplabv3_"): model = MobileViTForSemanticSegmentation(config).eval() else: model = MobileViTForImageClassification(config).eval() new_state_dict = convert_state_dict(state_dict, model) model.load_state_dict(new_state_dict) # Check outputs on an image, prepared by MobileViTImageProcessor image_processor = MobileViTImageProcessor(crop_size=config.image_size, size=config.image_size + 32) encoding = image_processor(images=prepare_img(), return_tensors="pt") outputs = model(**encoding) logits = outputs.logits if mobilevit_name.startswith("deeplabv3_"): assert logits.shape == (1, 21, 32, 32) if mobilevit_name == "deeplabv3_mobilevit_s": expected_logits = torch.tensor( [ [[6.2065, 6.1292, 6.2070], [6.1079, 6.1254, 6.1747], [6.0042, 6.1071, 6.1034]], [[-6.9253, -6.8653, -7.0398], [-7.3218, -7.3983, -7.3670], [-7.1961, -7.2482, -7.1569]], [[-4.4723, -4.4348, -4.3769], [-5.3629, -5.4632, -5.4598], [-5.1587, -5.3402, -5.5059]], ] ) elif mobilevit_name == "deeplabv3_mobilevit_xs": expected_logits = torch.tensor( [ [[5.4449, 5.5733, 5.6314], [5.1815, 5.3930, 5.5963], [5.1656, 5.4333, 5.4853]], [[-9.4423, -9.7766, -9.6714], [-9.1581, -9.5720, -9.5519], [-9.1006, -9.6458, -9.5703]], [[-7.7721, -7.3716, -7.1583], [-8.4599, -8.0624, -7.7944], [-8.4172, -7.8366, -7.5025]], ] ) elif mobilevit_name == "deeplabv3_mobilevit_xxs": expected_logits = torch.tensor( [ [[6.9811, 6.9743, 7.3123], [7.1777, 7.1931, 7.3938], [7.5633, 7.8050, 7.8901]], [[-10.5536, -10.2332, -10.2924], [-10.2336, -9.8624, -9.5964], [-10.8840, -10.8158, -10.6659]], [[-3.4938, -3.0631, -2.8620], [-3.4205, -2.8135, -2.6875], [-3.4179, -2.7945, -2.8750]], ] ) else: raise ValueError(f"Unknown mobilevit_name: {mobilevit_name}") assert torch.allclose(logits[0, :3, :3, :3], expected_logits, atol=1e-4) else: assert logits.shape == (1, 1000) if mobilevit_name == "mobilevit_s": expected_logits = torch.tensor([-0.9866, 0.2392, -1.1241]) elif mobilevit_name == "mobilevit_xs": expected_logits = torch.tensor([-2.4761, -0.9399, -1.9587]) elif mobilevit_name == "mobilevit_xxs": expected_logits = torch.tensor([-1.9364, -1.2327, -0.4653]) else: raise ValueError(f"Unknown mobilevit_name: {mobilevit_name}") assert torch.allclose(logits[0, :3], expected_logits, atol=1e-4) Path(pytorch_dump_folder_path).mkdir(exist_ok=True) print(f"Saving model {mobilevit_name} to {pytorch_dump_folder_path}") model.save_pretrained(pytorch_dump_folder_path) print(f"Saving image processor to {pytorch_dump_folder_path}") image_processor.save_pretrained(pytorch_dump_folder_path) if push_to_hub: model_mapping = { "mobilevit_s": "mobilevit-small", "mobilevit_xs": "mobilevit-x-small", "mobilevit_xxs": "mobilevit-xx-small", "deeplabv3_mobilevit_s": "deeplabv3-mobilevit-small", "deeplabv3_mobilevit_xs": "deeplabv3-mobilevit-x-small", "deeplabv3_mobilevit_xxs": "deeplabv3-mobilevit-xx-small", } print("Pushing to the hub...") model_name = model_mapping[mobilevit_name] image_processor.push_to_hub(model_name, organization="apple") model.push_to_hub(model_name, organization="apple") if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--mobilevit_name", default="mobilevit_s", type=str, help=( "Name of the MobileViT model you'd like to convert. 
Should be one of 'mobilevit_s', 'mobilevit_xs'," " 'mobilevit_xxs', 'deeplabv3_mobilevit_s', 'deeplabv3_mobilevit_xs', 'deeplabv3_mobilevit_xxs'." ), ) parser.add_argument( "--checkpoint_path", required=True, type=str, help="Path to the original state dict (.pt file)." ) parser.add_argument( "--pytorch_dump_folder_path", required=True, type=str, help="Path to the output PyTorch model directory." ) parser.add_argument( "--push_to_hub", action="store_true", help="Whether or not to push the converted model to the 🀗 hub." ) args = parser.parse_args() convert_movilevit_checkpoint( args.mobilevit_name, args.checkpoint_path, args.pytorch_dump_folder_path, args.push_to_hub )
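# Illustrative usage sketch (not part of the original script); paths are placeholders and
# `--push_to_hub` is optional, per the argparse definitions above.
#
#   python convert_mlcvnets_to_pytorch.py \
#       --mobilevit_name mobilevit_s \
#       --checkpoint_path /path/to/mobilevit_s.pt \
#       --pytorch_dump_folder_path /path/to/dump
#
# The dump folder can afterwards be reloaded with `MobileViTForImageClassification.from_pretrained`
# and `MobileViTImageProcessor.from_pretrained` (both imported at the top of this file).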
transformers/src/transformers/models/mobilevit/convert_mlcvnets_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/mobilevit/convert_mlcvnets_to_pytorch.py", "repo_id": "transformers", "token_count": 5868 }
364
# coding=utf-8 # Copyright 2023 HuggingFace Inc. team and MosaicML NLP team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Mpt configuration""" from typing import TYPE_CHECKING, Optional, Union if TYPE_CHECKING: pass from ...configuration_utils import PretrainedConfig from ...utils import logging logger = logging.get_logger(__name__) class MptAttentionConfig(PretrainedConfig): """ This is the configuration class to store the configuration of a [`MptAttention`] class. It is used to instantiate attention layers according to the specified arguments, defining the layers architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MPT [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) architecture. Most of the arguments are kept for backward compatibility with previous MPT models that are hosted on the Hub (previously with `trust_remote_code=True`). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: attn_type (`str`, *optional*, defaults to `"multihead_attention"`): type of attention to use. Options: `"multihead_attention"`, `"multiquery_attention"`. attn_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability for the attention layers. attn_impl (`str`, *optional*, defaults to `"torch"`): The attention implementation to use. One of `"torch"`, `"flash"`, or `"triton"`. clip_qkv (`float`, *optional*): If not `None`, clip the queries, keys, and values in the attention layer to this value. softmax_scale (`float`, *optional*, defaults to `None`): If not `None`, scale the softmax in the attention layer by this value. If `None`, will default to `1/sqrt(hidden_size)`. prefix_lm (`bool`, *optional*, defaults to `False`)): Whether the model should operate as a Prefix LM. This requires passing an extra `prefix_mask` argument which indicates which tokens belong to the prefix. Tokens in the prefix can attend to one another bi-directionally. Tokens outside the prefix use causal attention. qk_ln (`bool`, *optional*, defaults to `False`): Whether to apply layer normalization to the queries and keys in the attention layer. attn_uses_sequence_id (`bool`, *optional*, defaults to `False`)): Whether to restrict attention to tokens that have the same token_type_ids. When the model is in `train` mode, this requires passing an extra *token_type_ids* argument which indicates which sub-sequence each token belongs to. Defaults to `False` meaning any provided *token_type_ids* will be ignored. alibi (`bool`, *optional*, defaults to `True`): Whether or not to use the alibi bias instead of positional embedding. alibi_bias_max (`int`, *optional*, defaults to 8): The maximum value of the alibi bias. 
""" def __init__( self, attn_type="multihead_attention", attn_pdrop=0, attn_impl="torch", clip_qkv=None, softmax_scale=None, prefix_lm=False, qk_ln=False, attn_uses_sequence_id=False, alibi=True, alibi_bias_max=8, **kwargs, ): super().__init__() self.attn_type = attn_type self.attn_pdrop = attn_pdrop self.attn_impl = attn_impl self.clip_qkv = clip_qkv self.softmax_scale = softmax_scale self.prefix_lm = prefix_lm self.attn_uses_sequence_id = attn_uses_sequence_id self.alibi = alibi self.qk_ln = qk_ln self.alibi_bias_max = alibi_bias_max if attn_type not in ["multihead_attention", "multiquery_attention"]: raise ValueError( f"`attn_type` has to be either `multihead_attention` or `multiquery_attention`. Received: {attn_type}" ) @classmethod def from_pretrained(cls, pretrained_model_name_or_path, **kwargs) -> "PretrainedConfig": cls._set_token_in_kwargs(kwargs) config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) if config_dict.get("model_type") == "mpt": config_dict = config_dict["attn_config"] if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: logger.warning( f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." ) return cls.from_dict(config_dict, **kwargs) class MptConfig(PretrainedConfig): """ This is the configuration class to store the configuration of a [`MptModel`]. It is used to instantiate a Mpt model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to the Mpt-7b architecture [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: d_model (`int`, *optional*, defaults to 2048): Dimensionality of the embeddings and hidden states. n_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. n_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. expansion_ratio (`int`, *optional*, defaults to 4): The ratio of the up/down scale in the MLP. max_seq_len (`int`, *optional*, defaults to 2048): The maximum sequence length of the model. vocab_size (`int`, *optional*, defaults to 50368): Vocabulary size of the Mpt model. Defines the maximum number of different tokens that can be represented by the `inputs_ids` passed when calling [`MptModel`]. Check [this discussion](https://huggingface.co/bigscience/mpt/discussions/120#633d28389addb8530b406c2a) on how the `vocab_size` has been defined. resid_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability applied to the attention output before combining with residual. layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): The epsilon to use in the layer normalization layers. emb_pdrop (`float`, *optional*, defaults to 0.0): The dropout probability for the embedding layer. learned_pos_emb (`bool`, *optional*, defaults to `True`): Whether to use learned positional embeddings. attn_config (`dict`, *optional*): A dictionary used to configure the model's attention module. init_device (`str`, *optional*, defaults to `"cpu"`): The device to use for parameter initialization. 
Defined for backward compatibility logit_scale (`float`, *optional*): If not None, scale the logits by this value. no_bias (`bool`, *optional*, defaults to `True`): Whether to use bias in all linear layers. verbose (`int`, *optional*, defaults to 0): The verbosity level to use for logging. Used in the previous versions of MPT models for logging. This argument is deprecated. embedding_fraction (`float`, *optional*, defaults to 1.0): The fraction to scale the gradients of the embedding layer by. norm_type (`str`, *optional*, defaults to `"low_precision_layernorm"`): Type of layer norm to use. All MPT models uses the same layer norm implementation. Defined for backward compatibility. use_cache (`bool`, *optional*, defaults to `False`): Whether or not the model should return the last key/values attentions (not used by all models). initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers import MptConfig, MptModel >>> # Initializing a Mpt configuration >>> configuration = MptConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = MptModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` """ model_type = "mpt" attribute_map = { "num_attention_heads": "n_heads", "hidden_size": "d_model", "num_hidden_layers": "n_layers", } def __init__( self, d_model: int = 2048, n_heads: int = 16, n_layers: int = 24, expansion_ratio: int = 4, max_seq_len: int = 2048, vocab_size: int = 50368, resid_pdrop: float = 0.0, layer_norm_epsilon: float = 1e-5, emb_pdrop: float = 0.0, learned_pos_emb: bool = True, attn_config: MptAttentionConfig = None, init_device: str = "cpu", logit_scale: Optional[Union[float, str]] = None, no_bias: bool = True, verbose: int = 0, embedding_fraction: float = 1.0, norm_type: str = "low_precision_layernorm", use_cache: bool = False, initializer_range=0.02, **kwargs, ): if attn_config is None: self.attn_config = MptAttentionConfig() elif isinstance(attn_config, dict): self.attn_config = MptAttentionConfig(**attn_config) else: self.attn_config = attn_config self.d_model = d_model self.n_heads = n_heads self.n_layers = n_layers self.expansion_ratio = expansion_ratio self.max_seq_len = max_seq_len self.vocab_size = vocab_size self.resid_pdrop = resid_pdrop self.emb_pdrop = emb_pdrop self.learned_pos_emb = learned_pos_emb self.init_device = init_device self.logit_scale = logit_scale self.no_bias = no_bias self.verbose = verbose self.embedding_fraction = embedding_fraction self.norm_type = norm_type self.layer_norm_epsilon = layer_norm_epsilon self.use_cache = use_cache self.initializer_range = initializer_range super().__init__(**kwargs)
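# Illustrative sketch (not part of the original module): per `MptConfig.__init__` above, the
# attention sub-configuration can be passed either as an `MptAttentionConfig` instance or as a
# plain dict, which is coerced into an `MptAttentionConfig`. The sizes below are arbitrary.
#
#   from transformers import MptConfig
#
#   config = MptConfig(d_model=128, n_heads=4, n_layers=2,
#                      attn_config={"attn_type": "multiquery_attention", "alibi": True})
#   assert config.attn_config.attn_type == "multiquery_attention"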
transformers/src/transformers/models/mpt/configuration_mpt.py/0
{ "file_path": "transformers/src/transformers/models/mpt/configuration_mpt.py", "repo_id": "transformers", "token_count": 4400 }
365
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Image processor class for OwlViT""" import warnings from typing import Dict, List, Optional, Tuple, Union import numpy as np from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict from ...image_transforms import ( center_crop, center_to_corners_format, rescale, resize, to_channel_dimension_format, ) from ...image_utils import ( OPENAI_CLIP_MEAN, OPENAI_CLIP_STD, ChannelDimension, ImageInput, PILImageResampling, infer_channel_dimension_format, is_scaled_image, make_list_of_images, to_numpy_array, valid_images, validate_preprocess_arguments, ) from ...utils import TensorType, filter_out_non_signature_kwargs, is_torch_available, logging if is_torch_available(): import torch logger = logging.get_logger(__name__) def _upcast(t): # Protects from numerical overflows in multiplications by upcasting to the equivalent higher type if t.is_floating_point(): return t if t.dtype in (torch.float32, torch.float64) else t.float() else: return t if t.dtype in (torch.int32, torch.int64) else t.int() def box_area(boxes): """ Computes the area of a set of bounding boxes, which are specified by its (x1, y1, x2, y2) coordinates. Args: boxes (`torch.FloatTensor` of shape `(number_of_boxes, 4)`): Boxes for which the area will be computed. They are expected to be in (x1, y1, x2, y2) format with `0 <= x1 < x2` and `0 <= y1 < y2`. Returns: `torch.FloatTensor`: a tensor containing the area for each box. """ boxes = _upcast(boxes) return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1]) def box_iou(boxes1, boxes2): area1 = box_area(boxes1) area2 = box_area(boxes2) left_top = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2] right_bottom = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2] width_height = (right_bottom - left_top).clamp(min=0) # [N,M,2] inter = width_height[:, :, 0] * width_height[:, :, 1] # [N,M] union = area1[:, None] + area2 - inter iou = inter / union return iou, union class OwlViTImageProcessor(BaseImageProcessor): r""" Constructs an OWL-ViT image processor. This image processor inherits from [`ImageProcessingMixin`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the shorter edge of the input to a certain `size`. size (`Dict[str, int]`, *optional*, defaults to {"height": 768, "width": 768}): The size to use for resizing the image. Only has an effect if `do_resize` is set to `True`. If `size` is a sequence like (h, w), output size will be matched to this. If `size` is an int, then image will be resized to (size, size). resample (`int`, *optional*, defaults to `Resampling.BICUBIC`): An optional resampling filter. 
This can be one of `PIL.Image.Resampling.NEAREST`, `PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`, `PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set to `True`. do_center_crop (`bool`, *optional*, defaults to `False`): Whether to crop the input at the center. If the input size is smaller than `crop_size` along any edge, the image is padded with 0's and then center cropped. crop_size (`int`, *optional*, defaults to {"height": 768, "width": 768}): The size to use for center cropping the image. Only has an effect if `do_center_crop` is set to `True`. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the input by a certain factor. rescale_factor (`float`, *optional*, defaults to `1/255`): The factor to use for rescaling the image. Only has an effect if `do_rescale` is set to `True`. do_normalize (`bool`, *optional*, defaults to `True`): Whether or not to normalize the input with `image_mean` and `image_std`. Desired output size when applying center-cropping. Only has an effect if `do_center_crop` is set to `True`. image_mean (`List[int]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`): The sequence of means for each channel, to be used when normalizing images. image_std (`List[int]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`): The sequence of standard deviations for each channel, to be used when normalizing images. """ model_input_names = ["pixel_values"] def __init__( self, do_resize=True, size=None, resample=PILImageResampling.BICUBIC, do_center_crop=False, crop_size=None, do_rescale=True, rescale_factor=1 / 255, do_normalize=True, image_mean=None, image_std=None, **kwargs, ): size = size if size is not None else {"height": 768, "width": 768} size = get_size_dict(size, default_to_square=True) crop_size = crop_size if crop_size is not None else {"height": 768, "width": 768} crop_size = get_size_dict(crop_size, default_to_square=True) # Early versions of the OWL-ViT config on the hub had "rescale" as a flag. This clashes with the # vision image processor method `rescale` as it would be set as an attribute during the super().__init__ # call. This is for backwards compatibility. if "rescale" in kwargs: rescale_val = kwargs.pop("rescale") kwargs["do_rescale"] = rescale_val super().__init__(**kwargs) self.do_resize = do_resize self.size = size self.resample = resample self.do_center_crop = do_center_crop self.crop_size = crop_size self.do_rescale = do_rescale self.rescale_factor = rescale_factor self.do_normalize = do_normalize self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD def resize( self, image: np.ndarray, size: Dict[str, int], resample: PILImageResampling.BICUBIC, data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> np.ndarray: """ Resize an image to a certain size. Args: image (`np.ndarray`): Image to resize. size (`Dict[str, int]`): The size to resize the image to. Must contain height and width keys. resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): The resampling filter to use when resizing the input. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. 
input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred. """ size = get_size_dict(size, default_to_square=True) if "height" not in size or "width" not in size: raise ValueError("size dictionary must contain height and width keys") return resize( image, (size["height"], size["width"]), resample=resample, data_format=data_format, input_data_format=input_data_format, **kwargs, ) def center_crop( self, image: np.ndarray, crop_size: Dict[str, int], data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> np.ndarray: """ Center crop an image to a certain size. Args: image (`np.ndarray`): Image to center crop. crop_size (`Dict[str, int]`): The size to center crop the image to. Must contain height and width keys. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred. """ crop_size = get_size_dict(crop_size, default_to_square=True) if "height" not in crop_size or "width" not in crop_size: raise ValueError("crop_size dictionary must contain height and width keys") return center_crop( image, (crop_size["height"], crop_size["width"]), data_format=data_format, input_data_format=input_data_format, **kwargs, ) # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.rescale def rescale( self, image: np.ndarray, rescale_factor: float, data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, ) -> np.ndarray: """ Rescale the image by the given factor. image = image * rescale_factor. Args: image (`np.ndarray`): Image to rescale. rescale_factor (`float`): The value to use for rescaling. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. If unset, is inferred from the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. 
""" return rescale(image, rescale_factor, data_format=data_format, input_data_format=input_data_format) @filter_out_non_signature_kwargs() def preprocess( self, images: ImageInput, do_resize: Optional[bool] = None, size: Optional[Dict[str, int]] = None, resample: PILImageResampling = None, do_center_crop: Optional[bool] = None, crop_size: Optional[Dict[str, int]] = None, do_rescale: Optional[bool] = None, rescale_factor: Optional[float] = None, do_normalize: Optional[bool] = None, image_mean: Optional[Union[float, List[float]]] = None, image_std: Optional[Union[float, List[float]]] = None, return_tensors: Optional[Union[TensorType, str]] = None, data_format: Union[str, ChannelDimension] = ChannelDimension.FIRST, input_data_format: Optional[Union[str, ChannelDimension]] = None, ) -> BatchFeature: """ Prepares an image or batch of images for the model. Args: images (`ImageInput`): The image or batch of images to be prepared. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`. do_resize (`bool`, *optional*, defaults to `self.do_resize`): Whether or not to resize the input. If `True`, will resize the input to the size specified by `size`. size (`Dict[str, int]`, *optional*, defaults to `self.size`): The size to resize the input to. Only has an effect if `do_resize` is set to `True`. resample (`PILImageResampling`, *optional*, defaults to `self.resample`): The resampling filter to use when resizing the input. Only has an effect if `do_resize` is set to `True`. do_center_crop (`bool`, *optional*, defaults to `self.do_center_crop`): Whether or not to center crop the input. If `True`, will center crop the input to the size specified by `crop_size`. crop_size (`Dict[str, int]`, *optional*, defaults to `self.crop_size`): The size to center crop the input to. Only has an effect if `do_center_crop` is set to `True`. do_rescale (`bool`, *optional*, defaults to `self.do_rescale`): Whether or not to rescale the input. If `True`, will rescale the input by dividing it by `rescale_factor`. rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`): The factor to rescale the input by. Only has an effect if `do_rescale` is set to `True`. do_normalize (`bool`, *optional*, defaults to `self.do_normalize`): Whether or not to normalize the input. If `True`, will normalize the input by subtracting `image_mean` and dividing by `image_std`. image_mean (`Union[float, List[float]]`, *optional*, defaults to `self.image_mean`): The mean to subtract from the input when normalizing. Only has an effect if `do_normalize` is set to `True`. image_std (`Union[float, List[float]]`, *optional*, defaults to `self.image_std`): The standard deviation to divide the input by when normalizing. Only has an effect if `do_normalize` is set to `True`. return_tensors (`str` or `TensorType`, *optional*): The type of tensors to return. Can be one of: - Unset: Return a list of `np.ndarray`. - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`. - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`. - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`. - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`. data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`): The channel dimension format for the output image. Can be one of: - `ChannelDimension.FIRST`: image in (num_channels, height, width) format. 
- `ChannelDimension.LAST`: image in (height, width, num_channels) format. - Unset: defaults to the channel dimension format of the input image. input_data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - `"none"` or `ChannelDimension.NONE`: image in (height, width) format. """ do_resize = do_resize if do_resize is not None else self.do_resize size = size if size is not None else self.size resample = resample if resample is not None else self.resample do_center_crop = do_center_crop if do_center_crop is not None else self.do_center_crop crop_size = crop_size if crop_size is not None else self.crop_size do_rescale = do_rescale if do_rescale is not None else self.do_rescale rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor do_normalize = do_normalize if do_normalize is not None else self.do_normalize image_mean = image_mean if image_mean is not None else self.image_mean image_std = image_std if image_std is not None else self.image_std images = make_list_of_images(images) if not valid_images(images): raise ValueError( "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, " "torch.Tensor, tf.Tensor or jax.ndarray." ) validate_preprocess_arguments( do_rescale=do_rescale, rescale_factor=rescale_factor, do_normalize=do_normalize, image_mean=image_mean, image_std=image_std, do_center_crop=do_center_crop, crop_size=crop_size, do_resize=do_resize, size=size, resample=resample, ) # All transformations expect numpy arrays images = [to_numpy_array(image) for image in images] if is_scaled_image(images[0]) and do_rescale: logger.warning_once( "It looks like you are trying to rescale already rescaled images. If the input" " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again." ) if input_data_format is None: # We assume that all images have the same channel dimension format. input_data_format = infer_channel_dimension_format(images[0]) if do_resize: images = [ self.resize(image, size=size, resample=resample, input_data_format=input_data_format) for image in images ] if do_center_crop: images = [ self.center_crop(image, crop_size=crop_size, input_data_format=input_data_format) for image in images ] if do_rescale: images = [ self.rescale(image, rescale_factor=rescale_factor, input_data_format=input_data_format) for image in images ] if do_normalize: images = [ self.normalize(image, mean=image_mean, std=image_std, input_data_format=input_data_format) for image in images ] images = [ to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images ] encoded_inputs = BatchFeature(data={"pixel_values": images}, tensor_type=return_tensors) return encoded_inputs def post_process(self, outputs, target_sizes): """ Converts the raw output of [`OwlViTForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Args: outputs ([`OwlViTObjectDetectionOutput`]): Raw outputs of the model. target_sizes (`torch.Tensor` of shape `(batch_size, 2)`): Tensor containing the size (h, w) of each image of the batch. For evaluation, this must be the original image size (before any data augmentation). 
For visualization, this should be the image size after data augment, but before padding. Returns: `List[Dict]`: A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. """ # TODO: (amy) add support for other frameworks warnings.warn( "`post_process` is deprecated and will be removed in v5 of Transformers, please use" " `post_process_object_detection` instead, with `threshold=0.` for equivalent results.", FutureWarning, ) logits, boxes = outputs.logits, outputs.pred_boxes if len(logits) != len(target_sizes): raise ValueError("Make sure that you pass in as many target sizes as the batch dimension of the logits") if target_sizes.shape[1] != 2: raise ValueError("Each element of target_sizes must contain the size (h, w) of each image of the batch") probs = torch.max(logits, dim=-1) scores = torch.sigmoid(probs.values) labels = probs.indices # Convert to [x0, y0, x1, y1] format boxes = center_to_corners_format(boxes) # Convert from relative [0, 1] to absolute [0, height] coordinates img_h, img_w = target_sizes.unbind(1) scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1).to(boxes.device) boxes = boxes * scale_fct[:, None, :] results = [{"scores": s, "labels": l, "boxes": b} for s, l, b in zip(scores, labels, boxes)] return results def post_process_object_detection( self, outputs, threshold: float = 0.1, target_sizes: Union[TensorType, List[Tuple]] = None ): """ Converts the raw output of [`OwlViTForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Args: outputs ([`OwlViTObjectDetectionOutput`]): Raw outputs of the model. threshold (`float`, *optional*): Score threshold to keep object detection predictions. target_sizes (`torch.Tensor` or `List[Tuple[int, int]]`, *optional*): Tensor of shape `(batch_size, 2)` or list of tuples (`Tuple[int, int]`) containing the target size `(height, width)` of each image in the batch. If unset, predictions will not be resized. Returns: `List[Dict]`: A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. """ # TODO: (amy) add support for other frameworks logits, boxes = outputs.logits, outputs.pred_boxes if target_sizes is not None: if len(logits) != len(target_sizes): raise ValueError( "Make sure that you pass in as many target sizes as the batch dimension of the logits" ) probs = torch.max(logits, dim=-1) scores = torch.sigmoid(probs.values) labels = probs.indices # Convert to [x0, y0, x1, y1] format boxes = center_to_corners_format(boxes) # Convert from relative [0, 1] to absolute [0, height] coordinates if target_sizes is not None: if isinstance(target_sizes, List): img_h = torch.Tensor([i[0] for i in target_sizes]) img_w = torch.Tensor([i[1] for i in target_sizes]) else: img_h, img_w = target_sizes.unbind(1) scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1).to(boxes.device) boxes = boxes * scale_fct[:, None, :] results = [] for s, l, b in zip(scores, labels, boxes): score = s[s > threshold] label = l[s > threshold] box = b[s > threshold] results.append({"scores": score, "labels": label, "boxes": box}) return results # TODO: (Amy) Make compatible with other frameworks def post_process_image_guided_detection(self, outputs, threshold=0.0, nms_threshold=0.3, target_sizes=None): """ Converts the output of [`OwlViTForObjectDetection.image_guided_detection`] into the format expected by the COCO api. 
Args: outputs ([`OwlViTImageGuidedObjectDetectionOutput`]): Raw outputs of the model. threshold (`float`, *optional*, defaults to 0.0): Minimum confidence threshold to use to filter out predicted boxes. nms_threshold (`float`, *optional*, defaults to 0.3): IoU threshold for non-maximum suppression of overlapping boxes. target_sizes (`torch.Tensor`, *optional*): Tensor of shape (batch_size, 2) where each entry is the (height, width) of the corresponding image in the batch. If set, predicted normalized bounding boxes are rescaled to the target sizes. If left to None, predictions will not be unnormalized. Returns: `List[Dict]`: A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. All labels are set to None as `OwlViTForObjectDetection.image_guided_detection` perform one-shot object detection. """ logits, target_boxes = outputs.logits, outputs.target_pred_boxes if target_sizes is not None and len(logits) != len(target_sizes): raise ValueError("Make sure that you pass in as many target sizes as the batch dimension of the logits") if target_sizes is not None and target_sizes.shape[1] != 2: raise ValueError("Each element of target_sizes must contain the size (h, w) of each image of the batch") probs = torch.max(logits, dim=-1) scores = torch.sigmoid(probs.values) # Convert to [x0, y0, x1, y1] format target_boxes = center_to_corners_format(target_boxes) # Apply non-maximum suppression (NMS) if nms_threshold < 1.0: for idx in range(target_boxes.shape[0]): for i in torch.argsort(-scores[idx]): if not scores[idx][i]: continue ious = box_iou(target_boxes[idx][i, :].unsqueeze(0), target_boxes[idx])[0][0] ious[i] = -1.0 # Mask self-IoU. scores[idx][ious > nms_threshold] = 0.0 # Convert from relative [0, 1] to absolute [0, height] coordinates if target_sizes is not None: if isinstance(target_sizes, List): img_h = torch.tensor([i[0] for i in target_sizes]) img_w = torch.tensor([i[1] for i in target_sizes]) else: img_h, img_w = target_sizes.unbind(1) scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1).to(target_boxes.device) target_boxes = target_boxes * scale_fct[:, None, :] # Compute box display alphas based on prediction scores results = [] alphas = torch.zeros_like(scores) for idx in range(target_boxes.shape[0]): # Select scores for boxes matching the current query: query_scores = scores[idx] if not query_scores.nonzero().numel(): continue # Apply threshold on scores before scaling query_scores[query_scores < threshold] = 0.0 # Scale box alpha such that the best box for each query has alpha 1.0 and the worst box has alpha 0.1. # All other boxes will either belong to a different query, or will not be shown. max_score = torch.max(query_scores) + 1e-6 query_alphas = (query_scores - (max_score * 0.1)) / (max_score * 0.9) query_alphas = torch.clip(query_alphas, 0.0, 1.0) alphas[idx] = query_alphas mask = alphas[idx] > 0 box_scores = alphas[idx][mask] boxes = target_boxes[idx][mask] results.append({"scores": box_scores, "labels": None, "boxes": boxes}) return results
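# ---------------------------------------------------------------------------
# Illustrative usage sketch, not part of the library file above. It runs the
# OwlViT processor (which wraps the image processor defined above) together
# with the `post_process_object_detection` helper. The checkpoint name, image
# URL, text queries and threshold are assumptions borrowed from common examples.
import requests
import torch
from PIL import Image

from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
inputs = processor(text=[["a photo of a cat", "a photo of a remote"]], images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# target_sizes holds the original (height, width) of each image so the
# normalized boxes are rescaled back to absolute pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
for score, label, box in zip(results[0]["scores"], results[0]["labels"], results[0]["boxes"]):
    print(f"label={label.item()} score={score.item():.2f} box={[round(v, 1) for v in box.tolist()]}")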
transformers/src/transformers/models/owlvit/image_processing_owlvit.py
# coding=utf-8 # Copyright 2020 Google and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import os from pathlib import Path from typing import Dict import tensorflow as tf import torch from tqdm import tqdm from transformers import PegasusConfig, PegasusForConditionalGeneration, PegasusTokenizer from transformers.models.pegasus.configuration_pegasus import DEFAULTS, task_specific_params PATTERNS = [ # replace left string with right string to get the relevant state_dict key (identical state dict to bart) ["memory_attention", "encoder_attn"], ["attention", "attn"], ["/", "."], [".LayerNorm.gamma", "_layer_norm.weight"], [".LayerNorm.beta", "_layer_norm.bias"], ["r.layer_", "r.layers."], ["output_proj", "out_proj"], ["ffn.dense_1.", "fc2."], ["ffn.dense.", "fc1."], ["ffn_layer_norm", "final_layer_norm"], ["kernel", "weight"], ["encoder_layer_norm.", "encoder.layer_norm."], ["decoder_layer_norm.", "decoder.layer_norm."], ["embeddings.weights", "shared.weight"], ] def rename_state_dict_key(k): for pegasus_name, hf_name in PATTERNS: k = k.replace(pegasus_name, hf_name) return k # See appendix C of paper for all hyperparams def convert_pegasus(tf_weights: dict, cfg_updates: dict) -> PegasusForConditionalGeneration: cfg_kwargs = DEFAULTS.copy() cfg_kwargs.update(cfg_updates) cfg = PegasusConfig(**cfg_kwargs) torch_model = PegasusForConditionalGeneration(cfg) sd = torch_model.model.state_dict() mapping = {} for k, v in tf_weights.items(): new_k = rename_state_dict_key(k) if new_k not in sd: raise ValueError(f"could not find new key {new_k} in state dict. 
(converted from {k})") if "dense" in k or "proj" in new_k: v = v.T mapping[new_k] = torch.tensor(v, dtype=sd[new_k].dtype) assert v.shape == sd[new_k].shape, f"{new_k}, {k}, {v.shape}, {sd[new_k].shape}" # make sure embedding.padding_idx is respected mapping["shared.weight"][cfg.pad_token_id] = torch.zeros_like(mapping["shared.weight"][cfg.pad_token_id + 1]) mapping["encoder.embed_tokens.weight"] = mapping["shared.weight"] mapping["decoder.embed_tokens.weight"] = mapping["shared.weight"] empty_biases = {k: torch.zeros_like(v) for k, v in sd.items() if k.endswith("bias") and k not in mapping} mapping.update(**empty_biases) missing, extra = torch_model.model.load_state_dict(mapping, strict=False) unexpected_missing = [ k for k in missing if k not in ["encoder.embed_positions.weight", "decoder.embed_positions.weight"] ] assert unexpected_missing == [], f"no matches found for the following torch keys {unexpected_missing}" assert extra == [], f"no matches found for the following tf keys {extra}" return torch_model def get_tf_weights_as_numpy(path="./ckpt/aeslc/model.ckpt-32000") -> Dict: init_vars = tf.train.list_variables(path) tf_weights = {} ignore_name = ["Adafactor", "global_step"] for name, shape in tqdm(init_vars, desc="converting tf checkpoint to dict"): skip_key = any(pat in name for pat in ignore_name) if skip_key: continue array = tf.train.load_variable(path, name) tf_weights[name] = array return tf_weights def convert_pegasus_ckpt_to_pytorch(ckpt_path: str, save_dir: str): # save tokenizer first dataset = Path(ckpt_path).parent.name desired_max_model_length = task_specific_params[f"summarization_{dataset}"]["max_position_embeddings"] tok = PegasusTokenizer.from_pretrained("sshleifer/pegasus", model_max_length=desired_max_model_length) assert tok.model_max_length == desired_max_model_length tok.save_pretrained(save_dir) # convert model tf_weights = get_tf_weights_as_numpy(ckpt_path) cfg_updates = task_specific_params[f"summarization_{dataset}"] if dataset == "large": cfg_updates["task_specific_params"] = task_specific_params torch_model = convert_pegasus(tf_weights, cfg_updates) torch_model.save_pretrained(save_dir) sd = torch_model.state_dict() sd.pop("model.decoder.embed_positions.weight") sd.pop("model.encoder.embed_positions.weight") torch.save(sd, Path(save_dir) / "pytorch_model.bin") if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument("tf_ckpt_path", type=str, help="passed to tf.train.list_variables") parser.add_argument("save_dir", default=None, type=str, help="Path to the output PyTorch model.") args = parser.parse_args() if args.save_dir is None: dataset = Path(args.tf_ckpt_path).parent.name args.save_dir = os.path.join("pegasus", dataset) convert_pegasus_ckpt_to_pytorch(args.tf_ckpt_path, args.save_dir)
transformers/src/transformers/models/pegasus/convert_pegasus_tf_to_pytorch.py
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Image processor class for Pix2Struct.""" import io import math from typing import Dict, Optional, Union import numpy as np from huggingface_hub import hf_hub_download from ...image_processing_utils import BaseImageProcessor, BatchFeature from ...image_transforms import convert_to_rgb, normalize, to_channel_dimension_format, to_pil_image from ...image_utils import ( ChannelDimension, ImageInput, get_image_size, infer_channel_dimension_format, make_list_of_images, to_numpy_array, valid_images, ) from ...utils import TensorType, is_torch_available, is_vision_available, logging from ...utils.import_utils import requires_backends if is_vision_available(): import textwrap from PIL import Image, ImageDraw, ImageFont if is_torch_available(): import torch logger = logging.get_logger(__name__) DEFAULT_FONT_PATH = "ybelkada/fonts" # adapted from: https://discuss.pytorch.org/t/tf-image-extract-patches-in-pytorch/171409/2 def torch_extract_patches(image_tensor, patch_height, patch_width): """ Utiliy function to extract patches from a given image tensor. Returns a tensor of shape (1, `patch_height`, `patch_width`, `num_channels`x `patch_height` x `patch_width`) Args: image_tensor (torch.Tensor): The image tensor to extract patches from. patch_height (int): The height of the patches to extract. patch_width (int): The width of the patches to extract. """ requires_backends(torch_extract_patches, ["torch"]) image_tensor = image_tensor.unsqueeze(0) patches = torch.nn.functional.unfold(image_tensor, (patch_height, patch_width), stride=(patch_height, patch_width)) patches = patches.reshape(image_tensor.size(0), image_tensor.size(1), patch_height, patch_width, -1) patches = patches.permute(0, 4, 2, 3, 1).reshape( image_tensor.size(2) // patch_height, image_tensor.size(3) // patch_width, image_tensor.size(1) * patch_height * patch_width, ) return patches.unsqueeze(0) # Adapted from https://github.com/google-research/pix2struct/blob/0e1779af0f4db4b652c1d92b3bbd2550a7399123/pix2struct/preprocessing/preprocessing_utils.py#L106 def render_text( text: str, text_size: int = 36, text_color: str = "black", background_color: str = "white", left_padding: int = 5, right_padding: int = 5, top_padding: int = 5, bottom_padding: int = 5, font_bytes: Optional[bytes] = None, font_path: Optional[str] = None, ) -> Image.Image: """ Render text. This script is entirely adapted from the original script that can be found here: https://github.com/google-research/pix2struct/blob/main/pix2struct/preprocessing/preprocessing_utils.py Args: text (`str`, *optional*, defaults to ): Text to render. text_size (`int`, *optional*, defaults to 36): Size of the text. text_color (`str`, *optional*, defaults to `"black"`): Color of the text. background_color (`str`, *optional*, defaults to `"white"`): Color of the background. left_padding (`int`, *optional*, defaults to 5): Padding on the left. 
right_padding (`int`, *optional*, defaults to 5): Padding on the right. top_padding (`int`, *optional*, defaults to 5): Padding on the top. bottom_padding (`int`, *optional*, defaults to 5): Padding on the bottom. font_bytes (`bytes`, *optional*): Bytes of the font to use. If `None`, the default font will be used. font_path (`str`, *optional*): Path to the font to use. If `None`, the default font will be used. """ requires_backends(render_text, "vision") # Add new lines so that each line is no more than 80 characters. wrapper = textwrap.TextWrapper(width=80) lines = wrapper.wrap(text=text) wrapped_text = "\n".join(lines) if font_bytes is not None and font_path is None: font = io.BytesIO(font_bytes) elif font_path is not None: font = font_path else: font = hf_hub_download(DEFAULT_FONT_PATH, "Arial.TTF") font = ImageFont.truetype(font, encoding="UTF-8", size=text_size) # Use a temporary canvas to determine the width and height in pixels when # rendering the text. temp_draw = ImageDraw.Draw(Image.new("RGB", (1, 1), background_color)) _, _, text_width, text_height = temp_draw.textbbox((0, 0), wrapped_text, font) # Create the actual image with a bit of padding around the text. image_width = text_width + left_padding + right_padding image_height = text_height + top_padding + bottom_padding image = Image.new("RGB", (image_width, image_height), background_color) draw = ImageDraw.Draw(image) draw.text(xy=(left_padding, top_padding), text=wrapped_text, fill=text_color, font=font) return image # Adapted from https://github.com/google-research/pix2struct/blob/0e1779af0f4db4b652c1d92b3bbd2550a7399123/pix2struct/preprocessing/preprocessing_utils.py#L87 def render_header( image: np.ndarray, header: str, input_data_format: Optional[Union[str, ChildProcessError]] = None, **kwargs ): """ Renders the input text as a header on the input image. Args: image (`np.ndarray`): The image to render the header on. header (`str`): The header text. data_format (`Union[ChannelDimension, str]`, *optional*): The data format of the image. Can be either "ChannelDimension.channels_first" or "ChannelDimension.channels_last". Returns: `np.ndarray`: The image with the header rendered. """ requires_backends(render_header, "vision") # Convert to PIL image if necessary image = to_pil_image(image, input_data_format=input_data_format) header_image = render_text(header, **kwargs) new_width = max(header_image.width, image.width) new_height = int(image.height * (new_width / image.width)) new_header_height = int(header_image.height * (new_width / header_image.width)) new_image = Image.new("RGB", (new_width, new_height + new_header_height), "white") new_image.paste(header_image.resize((new_width, new_header_height)), (0, 0)) new_image.paste(image.resize((new_width, new_height)), (0, new_header_height)) # Convert back to the original framework if necessary new_image = to_numpy_array(new_image) if infer_channel_dimension_format(new_image) == ChannelDimension.LAST: new_image = to_channel_dimension_format(new_image, ChannelDimension.LAST) return new_image class Pix2StructImageProcessor(BaseImageProcessor): r""" Constructs a Pix2Struct image processor. Args: do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. According to Pix2Struct paper and code, the image is normalized with its own mean and standard deviation. 
patch_size (`Dict[str, int]`, *optional*, defaults to `{"height": 16, "width": 16}`): The patch size to use for the image. According to Pix2Struct paper and code, the patch size is 16x16. max_patches (`int`, *optional*, defaults to 2048): The maximum number of patches to extract from the image as per the [Pix2Struct paper](https://arxiv.org/pdf/2210.03347.pdf). is_vqa (`bool`, *optional*, defaults to `False`): Whether or not the image processor is for the VQA task. If `True` and `header_text` is passed in, text is rendered onto the input images. """ model_input_names = ["flattened_patches"] def __init__( self, do_convert_rgb: bool = True, do_normalize: bool = True, patch_size: Dict[str, int] = None, max_patches: int = 2048, is_vqa: bool = False, **kwargs, ) -> None: super().__init__(**kwargs) self.patch_size = patch_size if patch_size is not None else {"height": 16, "width": 16} self.do_normalize = do_normalize self.do_convert_rgb = do_convert_rgb self.max_patches = max_patches self.is_vqa = is_vqa def extract_flattened_patches( self, image: np.ndarray, max_patches: int, patch_size: dict, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> np.ndarray: """ Extract flattened patches from an image. Args: image (`np.ndarray`): Image to extract flattened patches from. max_patches (`int`): Maximum number of patches to extract. patch_size (`dict`): Dictionary containing the patch height and width. Returns: result (`np.ndarray`): A sequence of `max_patches` flattened patches. """ requires_backends(self.extract_flattened_patches, "torch") # convert to torch image = to_channel_dimension_format(image, ChannelDimension.FIRST, input_data_format) image = torch.from_numpy(image) patch_height, patch_width = patch_size["height"], patch_size["width"] image_height, image_width = get_image_size(image, ChannelDimension.FIRST) # maximize scale s.t. scale = math.sqrt(max_patches * (patch_height / image_height) * (patch_width / image_width)) num_feasible_rows = max(min(math.floor(scale * image_height / patch_height), max_patches), 1) num_feasible_cols = max(min(math.floor(scale * image_width / patch_width), max_patches), 1) resized_height = max(num_feasible_rows * patch_height, 1) resized_width = max(num_feasible_cols * patch_width, 1) image = torch.nn.functional.interpolate( image.unsqueeze(0), size=(resized_height, resized_width), mode="bilinear", align_corners=False, antialias=True, ).squeeze(0) # [1, rows, columns, patch_height * patch_width * image_channels] patches = torch_extract_patches(image, patch_height, patch_width) patches_shape = patches.shape rows = patches_shape[1] columns = patches_shape[2] depth = patches_shape[3] # [rows * columns, patch_height * patch_width * image_channels] patches = patches.reshape([rows * columns, depth]) # [rows * columns, 1] row_ids = torch.arange(rows).reshape([rows, 1]).repeat(1, columns).reshape([rows * columns, 1]) col_ids = torch.arange(columns).reshape([1, columns]).repeat(rows, 1).reshape([rows * columns, 1]) # Offset by 1 so the ids do not contain zeros, which represent padding. row_ids += 1 col_ids += 1 # Prepare additional patch features. 
# [rows * columns, 1] row_ids = row_ids.to(torch.float32) col_ids = col_ids.to(torch.float32) # [rows * columns, 2 + patch_height * patch_width * image_channels] result = torch.cat([row_ids, col_ids, patches], -1) # [max_patches, 2 + patch_height * patch_width * image_channels] result = torch.nn.functional.pad(result, [0, 0, 0, max_patches - (rows * columns)]).float() result = to_numpy_array(result) return result def normalize( self, image: np.ndarray, data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> np.ndarray: """ Normalize an image. image = (image - image_mean) / image_std. The image std is to mimic the tensorflow implementation of the `per_image_standardization`: https://www.tensorflow.org/api_docs/python/tf/image/per_image_standardization Args: image (`np.ndarray`): Image to normalize. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred. """ if image.dtype == np.uint8: image = image.astype(np.float32) # take mean across the whole `image` mean = np.mean(image) std = np.std(image) adjusted_stddev = max(std, 1.0 / math.sqrt(np.prod(image.shape))) return normalize( image, mean=mean, std=adjusted_stddev, data_format=data_format, input_data_format=input_data_format, **kwargs, ) def preprocess( self, images: ImageInput, header_text: Optional[str] = None, do_convert_rgb: bool = None, do_normalize: Optional[bool] = None, max_patches: Optional[int] = None, patch_size: Optional[Dict[str, int]] = None, return_tensors: Optional[Union[str, TensorType]] = None, data_format: ChannelDimension = ChannelDimension.FIRST, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> ImageInput: """ Preprocess an image or batch of images. The processor first computes the maximum possible number of aspect-ratio preserving patches of size `patch_size` that can be extracted from the image. It then pads the image with zeros to make the image respect the constraint of `max_patches`. Before extracting the patches the images are standardized following the tensorflow implementation of `per_image_standardization` (https://www.tensorflow.org/api_docs/python/tf/image/per_image_standardization). Args: images (`ImageInput`): Image to preprocess. Expects a single or batch of images. header_text (`Union[List[str], str]`, *optional*): Text to render as a header. Only has an effect if `image_processor.is_vqa` is `True`. do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`): Whether to convert the image to RGB. do_normalize (`bool`, *optional*, defaults to `self.do_normalize`): Whether to normalize the image. max_patches (`int`, *optional*, defaults to `self.max_patches`): Maximum number of patches to extract. patch_size (`dict`, *optional*, defaults to `self.patch_size`): Dictionary containing the patch height and width. return_tensors (`str` or `TensorType`, *optional*): The type of tensors to return. Can be one of: - Unset: Return a list of `np.ndarray`. - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`. - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`. - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`. 
- `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`. data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`): The channel dimension format for the output image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - Unset: Use the channel dimension format of the input image. input_data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - `"none"` or `ChannelDimension.NONE`: image in (height, width) format. """ do_normalize = do_normalize if do_normalize is not None else self.do_normalize do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb patch_size = patch_size if patch_size is not None else self.patch_size max_patches = max_patches if max_patches is not None else self.max_patches is_vqa = self.is_vqa if kwargs.get("data_format", None) is not None: raise ValueError("data_format is not an accepted input as the outputs are ") images = make_list_of_images(images) if not valid_images(images): raise ValueError( "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, " "torch.Tensor, tf.Tensor or jax.ndarray." ) # PIL RGBA images are converted to RGB if do_convert_rgb: images = [convert_to_rgb(image) for image in images] # All transformations expect numpy arrays. images = [to_numpy_array(image) for image in images] if input_data_format is None: # We assume that all images have the same channel dimension format. input_data_format = infer_channel_dimension_format(images[0]) if is_vqa: if header_text is None: raise ValueError("A header text must be provided for VQA models.") font_bytes = kwargs.pop("font_bytes", None) font_path = kwargs.pop("font_path", None) if isinstance(header_text, str): header_text = [header_text] * len(images) images = [ render_header(image, header_text[i], font_bytes=font_bytes, font_path=font_path) for i, image in enumerate(images) ] if do_normalize: images = [self.normalize(image=image, input_data_format=input_data_format) for image in images] # convert to torch tensor and permute images = [ self.extract_flattened_patches( image=image, max_patches=max_patches, patch_size=patch_size, input_data_format=input_data_format ) for image in images ] # create attention mask in numpy attention_masks = [(image.sum(axis=-1) != 0).astype(np.float32) for image in images] encoded_outputs = BatchFeature( data={"flattened_patches": images, "attention_mask": attention_masks}, tensor_type=return_tensors ) return encoded_outputs
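# ---------------------------------------------------------------------------
# Illustrative usage sketch, not part of the file above: running an image
# through the Pix2Struct processor (which wraps the image processor defined
# above) and generating a caption. The checkpoint name and image URL are
# assumptions borrowed from public examples.
import requests
import torch
from PIL import Image

from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

image = Image.open(requests.get("https://www.ilankelman.org/stopsigns/australia.jpg", stream=True).raw)

# The processor output contains `flattened_patches` and `attention_mask`,
# matching what `Pix2StructImageProcessor.preprocess` returns above.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])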
transformers/src/transformers/models/pix2struct/image_processing_pix2struct.py
# Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """File for loading the Pop2Piano model weights from the official repository and to show how tokenizer vocab was constructed""" import json import torch from transformers import Pop2PianoConfig, Pop2PianoForConditionalGeneration ########################## MODEL WEIGHTS ########################## # This weights were downloaded from the official pop2piano repository # https://huggingface.co/sweetcocoa/pop2piano/blob/main/model-1999-val_0.67311615.ckpt official_weights = torch.load("./model-1999-val_0.67311615.ckpt") state_dict = {} # load the config and init the model cfg = Pop2PianoConfig.from_pretrained("sweetcocoa/pop2piano") model = Pop2PianoForConditionalGeneration(cfg) # load relative attention bias state_dict["encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight"] = official_weights["state_dict"][ "transformer.encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight" ] state_dict["decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight"] = official_weights["state_dict"][ "transformer.decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight" ] # load embed tokens and final layer norm for both encoder and decoder state_dict["encoder.embed_tokens.weight"] = official_weights["state_dict"]["transformer.encoder.embed_tokens.weight"] state_dict["decoder.embed_tokens.weight"] = official_weights["state_dict"]["transformer.decoder.embed_tokens.weight"] state_dict["encoder.final_layer_norm.weight"] = official_weights["state_dict"][ "transformer.encoder.final_layer_norm.weight" ] state_dict["decoder.final_layer_norm.weight"] = official_weights["state_dict"][ "transformer.decoder.final_layer_norm.weight" ] # load lm_head, mel_conditioner.emb and shared state_dict["lm_head.weight"] = official_weights["state_dict"]["transformer.lm_head.weight"] state_dict["mel_conditioner.embedding.weight"] = official_weights["state_dict"]["mel_conditioner.embedding.weight"] state_dict["shared.weight"] = official_weights["state_dict"]["transformer.shared.weight"] # load each encoder blocks for i in range(cfg.num_layers): # layer 0 state_dict[f"encoder.block.{i}.layer.0.SelfAttention.q.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.0.SelfAttention.q.weight" ] state_dict[f"encoder.block.{i}.layer.0.SelfAttention.k.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.0.SelfAttention.k.weight" ] state_dict[f"encoder.block.{i}.layer.0.SelfAttention.v.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.0.SelfAttention.v.weight" ] state_dict[f"encoder.block.{i}.layer.0.SelfAttention.o.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.0.SelfAttention.o.weight" ] state_dict[f"encoder.block.{i}.layer.0.layer_norm.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.0.layer_norm.weight" ] # layer 1 
state_dict[f"encoder.block.{i}.layer.1.DenseReluDense.wi_0.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.1.DenseReluDense.wi_0.weight" ] state_dict[f"encoder.block.{i}.layer.1.DenseReluDense.wi_1.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.1.DenseReluDense.wi_1.weight" ] state_dict[f"encoder.block.{i}.layer.1.DenseReluDense.wo.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.1.DenseReluDense.wo.weight" ] state_dict[f"encoder.block.{i}.layer.1.layer_norm.weight"] = official_weights["state_dict"][ f"transformer.encoder.block.{i}.layer.1.layer_norm.weight" ] # load each decoder blocks for i in range(6): # layer 0 state_dict[f"decoder.block.{i}.layer.0.SelfAttention.q.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.0.SelfAttention.q.weight" ] state_dict[f"decoder.block.{i}.layer.0.SelfAttention.k.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.0.SelfAttention.k.weight" ] state_dict[f"decoder.block.{i}.layer.0.SelfAttention.v.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.0.SelfAttention.v.weight" ] state_dict[f"decoder.block.{i}.layer.0.SelfAttention.o.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.0.SelfAttention.o.weight" ] state_dict[f"decoder.block.{i}.layer.0.layer_norm.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.0.layer_norm.weight" ] # layer 1 state_dict[f"decoder.block.{i}.layer.1.EncDecAttention.q.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.1.EncDecAttention.q.weight" ] state_dict[f"decoder.block.{i}.layer.1.EncDecAttention.k.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.1.EncDecAttention.k.weight" ] state_dict[f"decoder.block.{i}.layer.1.EncDecAttention.v.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.1.EncDecAttention.v.weight" ] state_dict[f"decoder.block.{i}.layer.1.EncDecAttention.o.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.1.EncDecAttention.o.weight" ] state_dict[f"decoder.block.{i}.layer.1.layer_norm.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.1.layer_norm.weight" ] # layer 2 state_dict[f"decoder.block.{i}.layer.2.DenseReluDense.wi_0.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.2.DenseReluDense.wi_0.weight" ] state_dict[f"decoder.block.{i}.layer.2.DenseReluDense.wi_1.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.2.DenseReluDense.wi_1.weight" ] state_dict[f"decoder.block.{i}.layer.2.DenseReluDense.wo.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.2.DenseReluDense.wo.weight" ] state_dict[f"decoder.block.{i}.layer.2.layer_norm.weight"] = official_weights["state_dict"][ f"transformer.decoder.block.{i}.layer.2.layer_norm.weight" ] model.load_state_dict(state_dict, strict=True) # save the weights torch.save(state_dict, "./pytorch_model.bin") ########################## TOKENIZER ########################## # the tokenize and detokenize methods are taken from the official implementation # link : https://github.com/sweetcocoa/pop2piano/blob/fac11e8dcfc73487513f4588e8d0c22a22f2fdc5/midi_tokenizer.py#L34 def tokenize(idx, token_type, n_special=4, n_note=128, n_velocity=2): if token_type == "TOKEN_TIME": return n_special + n_note + 
n_velocity + idx
    elif token_type == "TOKEN_VELOCITY":
        return n_special + n_note + idx
    elif token_type == "TOKEN_NOTE":
        return n_special + idx
    elif token_type == "TOKEN_SPECIAL":
        return idx
    else:
        return -1


# link : https://github.com/sweetcocoa/pop2piano/blob/fac11e8dcfc73487513f4588e8d0c22a22f2fdc5/midi_tokenizer.py#L48
def detokenize(idx, n_special=4, n_note=128, n_velocity=2, time_idx_offset=0):
    if idx >= n_special + n_note + n_velocity:
        return "TOKEN_TIME", (idx - (n_special + n_note + n_velocity)) + time_idx_offset
    elif idx >= n_special + n_note:
        return "TOKEN_VELOCITY", idx - (n_special + n_note)
    elif idx >= n_special:
        return "TOKEN_NOTE", idx - n_special
    else:
        return "TOKEN_SPECIAL", idx


# create the decoder and then the encoder of the tokenizer
decoder = {}
for i in range(cfg.vocab_size):
    decoder.update({i: f"{detokenize(i)[1]}_{detokenize(i)[0]}"})

encoder = {v: k for k, v in decoder.items()}

# save the vocab
with open("./vocab.json", "w") as file:
    file.write(json.dumps(encoder))
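# ---------------------------------------------------------------------------
# Hedged sanity check, not part of the original script: it reuses the
# tokenize/detokenize helpers defined above and shows that they are inverses
# for the default vocabulary layout (n_special=4, n_note=128, n_velocity=2).
# The note id 60 (middle C) and the other indices are illustrative values.
assert tokenize(60, "TOKEN_NOTE") == 64
assert detokenize(64) == ("TOKEN_NOTE", 60)
assert tokenize(1, "TOKEN_VELOCITY") == 4 + 128 + 1
assert detokenize(4 + 128 + 1) == ("TOKEN_VELOCITY", 1)
assert tokenize(10, "TOKEN_TIME") == 4 + 128 + 2 + 10
assert detokenize(4 + 128 + 2 + 10) == ("TOKEN_TIME", 10)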
transformers/src/transformers/models/pop2piano/convert_pop2piano_weights_to_hf.py
# coding=utf-8 # Copyright 2024 Authors: Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, # Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao and The HuggingFace Inc. team. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Pvt V2 model configuration""" from typing import Callable, List, Tuple, Union from ...configuration_utils import PretrainedConfig from ...utils import logging from ...utils.backbone_utils import BackboneConfigMixin, get_aligned_output_features_output_indices logger = logging.get_logger(__name__) class PvtV2Config(BackboneConfigMixin, PretrainedConfig): r""" This is the configuration class to store the configuration of a [`PvtV2Model`]. It is used to instantiate a Pvt V2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Pvt V2 B0 [OpenGVLab/pvt_v2_b0](https://huggingface.co/OpenGVLab/pvt_v2_b0) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`Union[int, Tuple[int, int]]`, *optional*, defaults to 224): The input image size. Pass int value for square image, or tuple of (height, width). num_channels (`int`, *optional*, defaults to 3): The number of input channels. num_encoder_blocks (`[int]`, *optional*, defaults to 4): The number of encoder blocks (i.e. stages in the Mix Transformer encoder). depths (`List[int]`, *optional*, defaults to `[2, 2, 2, 2]`): The number of layers in each encoder block. sr_ratios (`List[int]`, *optional*, defaults to `[8, 4, 2, 1]`): Spatial reduction ratios in each encoder block. hidden_sizes (`List[int]`, *optional*, defaults to `[32, 64, 160, 256]`): Dimension of each of the encoder blocks. patch_sizes (`List[int]`, *optional*, defaults to `[7, 3, 3, 3]`): Patch size for overlapping patch embedding before each encoder block. strides (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`): Stride for overlapping patch embedding before each encoder block. num_attention_heads (`List[int]`, *optional*, defaults to `[1, 2, 5, 8]`): Number of attention heads for each attention layer in each block of the Transformer encoder. mlp_ratios (`List[int]`, *optional*, defaults to `[8, 8, 4, 4]`): Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the encoder blocks. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. 
initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. drop_path_rate (`float`, *optional*, defaults to 0.0): The dropout probability for stochastic depth, used in the blocks of the Transformer encoder. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not a learnable bias should be added to the queries, keys and values. linear_attention (`bool`, *optional*, defaults to `False`): Use linear attention complexity. If set to True, `sr_ratio` is ignored and average pooling is used for dimensionality reduction in the attention layers rather than strided convolution. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Example: ```python >>> from transformers import PvtV2Model, PvtV2Config >>> # Initializing a pvt_v2_b0 style configuration >>> configuration = PvtV2Config() >>> # Initializing a model from the OpenGVLab/pvt_v2_b0 style configuration >>> model = PvtV2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "pvt_v2" def __init__( self, image_size: Union[int, Tuple[int, int]] = 224, num_channels: int = 3, num_encoder_blocks: int = 4, depths: List[int] = [2, 2, 2, 2], sr_ratios: List[int] = [8, 4, 2, 1], hidden_sizes: List[int] = [32, 64, 160, 256], patch_sizes: List[int] = [7, 3, 3, 3], strides: List[int] = [4, 2, 2, 2], num_attention_heads: List[int] = [1, 2, 5, 8], mlp_ratios: List[int] = [8, 8, 4, 4], hidden_act: Union[str, Callable] = "gelu", hidden_dropout_prob: float = 0.0, attention_probs_dropout_prob: float = 0.0, initializer_range: float = 0.02, drop_path_rate: float = 0.0, layer_norm_eps: float = 1e-6, qkv_bias: bool = True, linear_attention: bool = False, out_features=None, out_indices=None, **kwargs, ): super().__init__(**kwargs) image_size = (image_size, image_size) if isinstance(image_size, int) else image_size self.image_size = image_size self.num_channels = num_channels self.num_encoder_blocks = num_encoder_blocks self.depths = depths self.sr_ratios = sr_ratios self.hidden_sizes = hidden_sizes self.patch_sizes = patch_sizes self.strides = strides self.mlp_ratios = mlp_ratios self.num_attention_heads = num_attention_heads self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.initializer_range = initializer_range self.drop_path_rate = drop_path_rate self.layer_norm_eps = layer_norm_eps self.qkv_bias = qkv_bias self.linear_attention = linear_attention self.stage_names = [f"stage{idx}" for idx in range(1, len(depths) + 1)] self._out_features, self._out_indices = get_aligned_output_features_output_indices( out_features=out_features, out_indices=out_indices, stage_names=self.stage_names )
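# ---------------------------------------------------------------------------
# Hedged sketch, not part of the file above: using the backbone fields that
# the PvtV2Config docstring describes (`out_features` / `out_indices`). The
# backbone is randomly initialized, and the stage names simply follow the
# config's own `stage_names` convention.
import torch

from transformers import PvtV2Backbone, PvtV2Config

config = PvtV2Config(out_features=["stage2", "stage3", "stage4"])
backbone = PvtV2Backbone(config)

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
for name, feature_map in zip(backbone.out_features, outputs.feature_maps):
    print(name, tuple(feature_map.shape))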
transformers/src/transformers/models/pvt_v2/configuration_pvt_v2.py
# coding=utf-8 # Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Qwen2VL model configuration""" import os from typing import Union from ...configuration_utils import PretrainedConfig from ...utils import logging logger = logging.get_logger(__name__) class Qwen2VLVisionConfig(PretrainedConfig): model_type = "qwen2_vl" def __init__( self, depth=32, embed_dim=1280, hidden_size=3584, hidden_act="quick_gelu", mlp_ratio=4, num_heads=16, in_channels=3, patch_size=14, spatial_merge_size=2, temporal_patch_size=2, **kwargs, ): super().__init__(**kwargs) self.depth = depth self.embed_dim = embed_dim self.hidden_size = hidden_size self.hidden_act = hidden_act self.mlp_ratio = mlp_ratio self.num_heads = num_heads self.in_channels = in_channels self.patch_size = patch_size self.spatial_merge_size = spatial_merge_size self.temporal_patch_size = temporal_patch_size @classmethod def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig": cls._set_token_in_kwargs(kwargs) config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) if config_dict.get("model_type") == "qwen2_vl": config_dict = config_dict["vision_config"] if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: logger.warning( f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." ) return cls.from_dict(config_dict, **kwargs) class Qwen2VLConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`Qwen2VLModel`]. It is used to instantiate a Qwen2-VL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of Qwen2-VL-7B-Instruct [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 152064): Vocabulary size of the Qwen2VL model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Qwen2VLModel`] hidden_size (`int`, *optional*, defaults to 8192): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 29568): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 80): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 64): Number of attention heads for each attention layer in the Transformer encoder. 
        num_key_value_heads (`int`, *optional*, defaults to 8):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
            `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
            constructed by meanpooling all the original heads within that group. For more details, check out
            [this paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 32768):
            The maximum sequence length that this model might ever be used with.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-05):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether the model's input and output word embeddings should be tied.
        rope_theta (`float`, *optional*, defaults to 1000000.0):
            The base period of the RoPE embeddings.
        use_sliding_window (`bool`, *optional*, defaults to `False`):
            Whether to use sliding window attention.
        sliding_window (`int`, *optional*, defaults to 4096):
            Sliding window attention (SWA) window size. If not specified, will default to `4096`.
        max_window_layers (`int`, *optional*, defaults to 80):
            The number of layers that use SWA (Sliding Window Attention). The bottom layers use SWA while the top
            use full attention.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.
        vision_config (`Dict`, *optional*):
            The config for the visual encoder initialization.
        rope_scaling (`Dict`, *optional*):
            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
            is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
            `max_position_embeddings` to the expected new maximum. See the following thread for more information on
            how these scaling strategies behave:
            https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is
            an experimental feature, subject to breaking API changes in future versions.
```python >>> from transformers import Qwen2VLForConditionalGeneration, Qwen2VLConfig >>> # Initializing a Qwen2VL style configuration >>> configuration = Qwen2VLConfig() >>> # Initializing a model from the Qwen2-VL-7B style configuration >>> model = Qwen2VLForConditionalGeneration(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "qwen2_vl" keys_to_ignore_at_inference = ["past_key_values"] def __init__( self, vocab_size=152064, hidden_size=8192, intermediate_size=29568, num_hidden_layers=80, num_attention_heads=64, num_key_value_heads=8, hidden_act="silu", max_position_embeddings=32768, initializer_range=0.02, rms_norm_eps=1e-05, use_cache=True, tie_word_embeddings=False, rope_theta=1000000.0, use_sliding_window=False, sliding_window=4096, max_window_layers=80, attention_dropout=0.0, vision_config=None, rope_scaling=None, **kwargs, ): if isinstance(vision_config, dict): self.vision_config = Qwen2VLVisionConfig(**vision_config) elif vision_config is None: self.vision_config = Qwen2VLVisionConfig() self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.hidden_size = hidden_size self.intermediate_size = intermediate_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.use_sliding_window = use_sliding_window self.sliding_window = sliding_window self.max_window_layers = max_window_layers # for backward compatibility if num_key_value_heads is None: num_key_value_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.hidden_act = hidden_act self.initializer_range = initializer_range self.rms_norm_eps = rms_norm_eps self.use_cache = use_cache self.rope_theta = rope_theta self.attention_dropout = attention_dropout self.rope_scaling = rope_scaling super().__init__(tie_word_embeddings=tie_word_embeddings, **kwargs)
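# --- Illustrative usage sketch (not part of the upstream file) -------------------
# A minimal example of building a scaled-down Qwen2-VL configuration, e.g. for unit
# tests. The reduced sizes below are arbitrary assumptions chosen only to keep the
# example light; they do not correspond to any released checkpoint.
def _example_tiny_qwen2_vl_config() -> "Qwen2VLConfig":
    vision_config = {
        "depth": 2,
        "embed_dim": 32,
        "hidden_size": 64,
        "num_heads": 4,
        "patch_size": 14,
    }
    # Passing `vision_config` as a dict lets `Qwen2VLConfig.__init__` build the nested
    # `Qwen2VLVisionConfig` for us (see the `isinstance(vision_config, dict)` branch above).
    return Qwen2VLConfig(
        vocab_size=1024,
        hidden_size=64,
        intermediate_size=128,
        num_hidden_layers=2,
        num_attention_heads=4,
        num_key_value_heads=2,
        vision_config=vision_config,
    )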
transformers/src/transformers/models/qwen2_vl/configuration_qwen2_vl.py/0
{ "file_path": "transformers/src/transformers/models/qwen2_vl/configuration_qwen2_vl.py", "repo_id": "transformers", "token_count": 3670 }
371
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert Reformer checkpoint.""" import argparse import pickle import numpy as np import torch from torch import nn from transformers import ReformerConfig, ReformerModelWithLMHead from transformers.utils import logging logging.set_verbosity_info() def set_param(torch_layer, weight, bias=None): # set parameter of one layer assert torch_layer.weight.shape == weight.shape, f"{torch_layer} layer.weight does not match" torch_layer.weight = nn.Parameter(weight) if bias is not None: assert torch_layer.bias.shape == bias.shape, f"{torch_layer} layer.bias does not match" torch_layer.bias = nn.Parameter(bias) def set_layer_weights_in_torch_lsh(weights, torch_layer, hidden_size): # set torch weights for 1-to-1 comparison np_query_key = np.asarray(weights[0]) np_value = np.asarray(weights[1]) np_dense = np.asarray(weights[2]) set_param( torch_layer.self_attention.query_key, torch.tensor(np_query_key).transpose(1, 2).contiguous().view(-1, hidden_size), ) set_param( torch_layer.self_attention.value, torch.tensor(np_value).transpose(1, 2).contiguous().view(-1, hidden_size), ) set_param( torch_layer.output.dense, torch.tensor(np_dense).view(-1, hidden_size).contiguous().transpose(0, 1), ) def set_layer_weights_in_torch_local(weights, torch_layer, hidden_size): # set torch weights for 1-to-1 comparison np_query = np.asarray(weights[0]) np_key = np.asarray(weights[1]) np_value = np.asarray(weights[2]) np_dense = np.asarray(weights[3]) set_param( torch_layer.self_attention.query, torch.tensor(np_query).transpose(1, 2).contiguous().view(-1, hidden_size), ) set_param( torch_layer.self_attention.key, torch.tensor(np_key).transpose(1, 2).contiguous().view(-1, hidden_size), ) set_param( torch_layer.self_attention.value, torch.tensor(np_value).transpose(1, 2).contiguous().view(-1, hidden_size), ) set_param( torch_layer.output.dense, torch.tensor(np_dense).view(-1, hidden_size).contiguous().transpose(0, 1), ) def set_block_weights_in_torch(weights, torch_block, hidden_size): # layernorm 1 layer_norm_1 = weights[0][0][0] layer_norm_1_weight = np.asarray(layer_norm_1[0]) layer_norm_1_bias = np.asarray(layer_norm_1[1]) set_param( torch_block.attention.layer_norm, torch.tensor(layer_norm_1_weight), torch.tensor(layer_norm_1_bias), ) # lsh weights + output attn_weights = weights[0][1] if len(attn_weights) < 4: set_layer_weights_in_torch_lsh(attn_weights, torch_block.attention, hidden_size) else: set_layer_weights_in_torch_local(attn_weights, torch_block.attention, hidden_size) # intermediate weighs intermediate_weights = weights[2][0][1][2] # Chunked Feed Forward if len(intermediate_weights) == 4: intermediate_weights = intermediate_weights[2] # layernorm 2 layer_norm_2_weight = np.asarray(intermediate_weights[0][0]) layer_norm_2_bias = np.asarray(intermediate_weights[0][1]) set_param( torch_block.feed_forward.layer_norm, torch.tensor(layer_norm_2_weight), torch.tensor(layer_norm_2_bias), ) # intermediate dense 
inter_dense_weight = np.asarray(intermediate_weights[1][0]) inter_dense_bias = np.asarray(intermediate_weights[1][1]) set_param( torch_block.feed_forward.dense.dense, torch.tensor(inter_dense_weight).transpose(0, 1).contiguous(), torch.tensor(inter_dense_bias), ) # intermediate out out_dense_weight = np.asarray(intermediate_weights[4][0]) out_dense_bias = np.asarray(intermediate_weights[4][1]) set_param( torch_block.feed_forward.output.dense, torch.tensor(out_dense_weight).transpose(0, 1).contiguous(), torch.tensor(out_dense_bias), ) def set_model_weights_in_torch(weights, torch_model, hidden_size): # reformer model torch_model_reformer = torch_model.reformer # word embeds word_embeddings = np.asarray(weights[1]) set_param( torch_model_reformer.embeddings.word_embeddings, torch.tensor(word_embeddings), ) if isinstance(weights[3], tuple): position_embeddings = torch_model_reformer.embeddings.position_embeddings for emb_idx in range(len(position_embeddings.weights)): emb_weights = np.asarray(weights[3][emb_idx][0]) assert ( position_embeddings.weights[emb_idx].shape == emb_weights.shape ), f"{position_embeddings[emb_idx]} emb does not match" position_embeddings.weights[emb_idx] = nn.Parameter(torch.tensor(emb_weights)) trax_layer_weights = weights[5] assert len(torch_model_reformer.encoder.layers) * 4 == len( trax_layer_weights ), "HF and trax model do not have the same number of layers" for layer_idx, layer in enumerate(torch_model_reformer.encoder.layers): block_weights = trax_layer_weights[4 * layer_idx : 4 * (layer_idx + 1)] set_block_weights_in_torch(block_weights, layer, hidden_size) # output layer norm layer_norm_out_weight = np.asarray(weights[7][0]) layer_norm_out_bias = np.asarray(weights[7][1]) set_param( torch_model_reformer.encoder.layer_norm, torch.tensor(layer_norm_out_weight), torch.tensor(layer_norm_out_bias), ) # output embeddings output_embed_weights = np.asarray(weights[9][0]) output_embed_bias = np.asarray(weights[9][1]) set_param( torch_model.lm_head.decoder, torch.tensor(output_embed_weights).transpose(0, 1).contiguous(), torch.tensor(output_embed_bias), ) def convert_trax_checkpoint_to_pytorch(trax_model_pkl_path, config_file, pytorch_dump_path): # Initialise PyTorch model config = ReformerConfig.from_json_file(config_file) print(f"Building PyTorch model from configuration: {config}") model = ReformerModelWithLMHead(config) with open(trax_model_pkl_path, "rb") as f: model_weights = pickle.load(f)["weights"] set_model_weights_in_torch(model_weights, model, config.hidden_size) # Save pytorch-model print(f"Save PyTorch model to {pytorch_dump_path}") torch.save(model.state_dict(), pytorch_dump_path) if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--trax_model_pkl_path", default=None, type=str, required=True, help="Path to the TensorFlow checkpoint path." ) parser.add_argument( "--config_file", default=None, type=str, required=True, help=( "The config json file corresponding to the pre-trained Reformer model. \n" "This specifies the model architecture." ), ) parser.add_argument( "--pytorch_dump_path", default=None, type=str, required=True, help="Path to the output PyTorch model." ) args = parser.parse_args() convert_trax_checkpoint_to_pytorch(args.trax_model_pkl_path, args.config_file, args.pytorch_dump_path)
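# --- Illustrative invocation (not part of the upstream file) ---------------------
# A sketch of how this script is typically run; the paths below are placeholders:
#
#   python convert_reformer_trax_checkpoint_to_pytorch.py \
#       --trax_model_pkl_path /path/to/trax_weights.pkl \
#       --config_file /path/to/reformer_config.json \
#       --pytorch_dump_path /path/to/output/pytorch_model.bin
#
# The pickle is expected to contain a dict with a "weights" entry (see
# `convert_trax_checkpoint_to_pytorch` above), and the config JSON must describe a
# `ReformerConfig` that matches the checkpoint's architecture.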
transformers/src/transformers/models/reformer/convert_reformer_trax_checkpoint_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/reformer/convert_reformer_trax_checkpoint_to_pytorch.py", "repo_id": "transformers", "token_count": 3213 }
372
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import (
    OptionalDependencyNotAvailable,
    _LazyModule,
    is_flax_available,
    is_tf_available,
    is_torch_available,
)


_import_structure = {
    "configuration_roberta_prelayernorm": [
        "RobertaPreLayerNormConfig",
        "RobertaPreLayerNormOnnxConfig",
    ],
}

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_roberta_prelayernorm"] = [
        "RobertaPreLayerNormForCausalLM",
        "RobertaPreLayerNormForMaskedLM",
        "RobertaPreLayerNormForMultipleChoice",
        "RobertaPreLayerNormForQuestionAnswering",
        "RobertaPreLayerNormForSequenceClassification",
        "RobertaPreLayerNormForTokenClassification",
        "RobertaPreLayerNormModel",
        "RobertaPreLayerNormPreTrainedModel",
    ]

try:
    if not is_tf_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_tf_roberta_prelayernorm"] = [
        "TFRobertaPreLayerNormForCausalLM",
        "TFRobertaPreLayerNormForMaskedLM",
        "TFRobertaPreLayerNormForMultipleChoice",
        "TFRobertaPreLayerNormForQuestionAnswering",
        "TFRobertaPreLayerNormForSequenceClassification",
        "TFRobertaPreLayerNormForTokenClassification",
        "TFRobertaPreLayerNormMainLayer",
        "TFRobertaPreLayerNormModel",
        "TFRobertaPreLayerNormPreTrainedModel",
    ]

try:
    if not is_flax_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_flax_roberta_prelayernorm"] = [
        "FlaxRobertaPreLayerNormForCausalLM",
        "FlaxRobertaPreLayerNormForMaskedLM",
        "FlaxRobertaPreLayerNormForMultipleChoice",
        "FlaxRobertaPreLayerNormForQuestionAnswering",
        "FlaxRobertaPreLayerNormForSequenceClassification",
        "FlaxRobertaPreLayerNormForTokenClassification",
        "FlaxRobertaPreLayerNormModel",
        "FlaxRobertaPreLayerNormPreTrainedModel",
    ]


if TYPE_CHECKING:
    from .configuration_roberta_prelayernorm import (
        RobertaPreLayerNormConfig,
        RobertaPreLayerNormOnnxConfig,
    )

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_roberta_prelayernorm import (
            RobertaPreLayerNormForCausalLM,
            RobertaPreLayerNormForMaskedLM,
            RobertaPreLayerNormForMultipleChoice,
            RobertaPreLayerNormForQuestionAnswering,
            RobertaPreLayerNormForSequenceClassification,
            RobertaPreLayerNormForTokenClassification,
            RobertaPreLayerNormModel,
            RobertaPreLayerNormPreTrainedModel,
        )

    try:
        if not is_tf_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_tf_roberta_prelayernorm import (
            TFRobertaPreLayerNormForCausalLM,
            TFRobertaPreLayerNormForMaskedLM,
            TFRobertaPreLayerNormForMultipleChoice,
            TFRobertaPreLayerNormForQuestionAnswering,
            TFRobertaPreLayerNormForSequenceClassification,
            TFRobertaPreLayerNormForTokenClassification,
            TFRobertaPreLayerNormMainLayer,
            TFRobertaPreLayerNormModel,
            TFRobertaPreLayerNormPreTrainedModel,
        )

    try:
        if not is_flax_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_flax_roberta_prelayernorm import (
            FlaxRobertaPreLayerNormForCausalLM,
            FlaxRobertaPreLayerNormForMaskedLM,
            FlaxRobertaPreLayerNormForMultipleChoice,
            FlaxRobertaPreLayerNormForQuestionAnswering,
            FlaxRobertaPreLayerNormForSequenceClassification,
            FlaxRobertaPreLayerNormForTokenClassification,
            FlaxRobertaPreLayerNormModel,
            FlaxRobertaPreLayerNormPreTrainedModel,
        )

else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
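# --- Illustrative note (not part of the upstream file) ---------------------------
# A sketch of what the `_LazyModule` indirection above buys us: the framework-specific
# modules are only imported when one of the exported names is actually accessed, e.g.
#
#   from transformers import RobertaPreLayerNormConfig   # no torch/tf/flax import yet
#   from transformers import RobertaPreLayerNormModel    # triggers the torch-backed module
#
# The `TYPE_CHECKING` branch mirrors the same structure so that static type checkers
# and IDEs still see the real symbols.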
transformers/src/transformers/models/roberta_prelayernorm/__init__.py/0
{ "file_path": "transformers/src/transformers/models/roberta_prelayernorm/__init__.py", "repo_id": "transformers", "token_count": 2041 }
373
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Convert SAM checkpoints from the original repository. URL: https://github.com/facebookresearch/segment-anything. Also supports converting the SlimSAM checkpoints from https://github.com/czg1225/SlimSAM/tree/master. """ import argparse import re import numpy as np import requests import torch from huggingface_hub import hf_hub_download from PIL import Image from transformers import ( SamConfig, SamImageProcessor, SamModel, SamProcessor, SamVisionConfig, ) def get_config(model_name): if "slimsam-50" in model_name: vision_config = SamVisionConfig( hidden_size=384, mlp_dim=1536, num_hidden_layers=12, num_attention_heads=12, global_attn_indexes=[2, 5, 8, 11], ) elif "slimsam-77" in model_name: vision_config = SamVisionConfig( hidden_size=168, mlp_dim=696, num_hidden_layers=12, num_attention_heads=12, global_attn_indexes=[2, 5, 8, 11], ) elif "sam_vit_b" in model_name: vision_config = SamVisionConfig() elif "sam_vit_l" in model_name: vision_config = SamVisionConfig( hidden_size=1024, num_hidden_layers=24, num_attention_heads=16, global_attn_indexes=[5, 11, 17, 23], ) elif "sam_vit_h" in model_name: vision_config = SamVisionConfig( hidden_size=1280, num_hidden_layers=32, num_attention_heads=16, global_attn_indexes=[7, 15, 23, 31], ) config = SamConfig( vision_config=vision_config, ) return config KEYS_TO_MODIFY_MAPPING = { "iou_prediction_head.layers.0": "iou_prediction_head.proj_in", "iou_prediction_head.layers.1": "iou_prediction_head.layers.0", "iou_prediction_head.layers.2": "iou_prediction_head.proj_out", "mask_decoder.output_upscaling.0": "mask_decoder.upscale_conv1", "mask_decoder.output_upscaling.1": "mask_decoder.upscale_layer_norm", "mask_decoder.output_upscaling.3": "mask_decoder.upscale_conv2", "mask_downscaling.0": "mask_embed.conv1", "mask_downscaling.1": "mask_embed.layer_norm1", "mask_downscaling.3": "mask_embed.conv2", "mask_downscaling.4": "mask_embed.layer_norm2", "mask_downscaling.6": "mask_embed.conv3", "point_embeddings": "point_embed", "pe_layer.positional_encoding_gaussian_matrix": "shared_embedding.positional_embedding", "image_encoder": "vision_encoder", "neck.0": "neck.conv1", "neck.1": "neck.layer_norm1", "neck.2": "neck.conv2", "neck.3": "neck.layer_norm2", "patch_embed.proj": "patch_embed.projection", ".norm": ".layer_norm", "blocks": "layers", } def replace_keys(state_dict): model_state_dict = {} state_dict.pop("pixel_mean", None) state_dict.pop("pixel_std", None) output_hypernetworks_mlps_pattern = r".*.output_hypernetworks_mlps.(\d+).layers.(\d+).*" for key, value in state_dict.items(): for key_to_modify, new_key in KEYS_TO_MODIFY_MAPPING.items(): if key_to_modify in key: key = key.replace(key_to_modify, new_key) if re.match(output_hypernetworks_mlps_pattern, key): layer_nb = int(re.match(output_hypernetworks_mlps_pattern, key).group(2)) if layer_nb == 0: key = key.replace("layers.0", "proj_in") elif layer_nb == 1: key = key.replace("layers.1", 
"layers.0") elif layer_nb == 2: key = key.replace("layers.2", "proj_out") model_state_dict[key] = value model_state_dict["shared_image_embedding.positional_embedding"] = model_state_dict[ "prompt_encoder.shared_embedding.positional_embedding" ] return model_state_dict def convert_sam_checkpoint(model_name, checkpoint_path, pytorch_dump_folder, push_to_hub): config = get_config(model_name) state_dict = torch.load(checkpoint_path, map_location="cpu") state_dict = replace_keys(state_dict) image_processor = SamImageProcessor() processor = SamProcessor(image_processor=image_processor) hf_model = SamModel(config) hf_model.eval() device = "cuda" if torch.cuda.is_available() else "cpu" hf_model.load_state_dict(state_dict) hf_model = hf_model.to(device) img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_points = [[[500, 375]]] input_labels = [[1]] inputs = processor(images=np.array(raw_image), return_tensors="pt").to(device) with torch.no_grad(): output = hf_model(**inputs) scores = output.iou_scores.squeeze() if model_name == "sam_vit_b_01ec64": inputs = processor( images=np.array(raw_image), input_points=input_points, input_labels=input_labels, return_tensors="pt" ).to(device) with torch.no_grad(): output = hf_model(**inputs) scores = output.iou_scores.squeeze() elif model_name == "sam_vit_h_4b8939": inputs = processor( images=np.array(raw_image), input_points=input_points, input_labels=input_labels, return_tensors="pt" ).to(device) with torch.no_grad(): output = hf_model(**inputs) scores = output.iou_scores.squeeze() assert scores[-1].item() == 0.9712603092193604 input_boxes = ((75, 275, 1725, 850),) inputs = processor(images=np.array(raw_image), input_boxes=input_boxes, return_tensors="pt").to(device) with torch.no_grad(): output = hf_model(**inputs) scores = output.iou_scores.squeeze() assert scores[-1].item() == 0.8686015605926514 # Test with 2 points and 1 image. 
input_points = [[[400, 650], [800, 650]]] input_labels = [[1, 1]] inputs = processor( images=np.array(raw_image), input_points=input_points, input_labels=input_labels, return_tensors="pt" ).to(device) with torch.no_grad(): output = hf_model(**inputs) scores = output.iou_scores.squeeze() assert scores[-1].item() == 0.9936047792434692 if pytorch_dump_folder is not None: processor.save_pretrained(pytorch_dump_folder) hf_model.save_pretrained(pytorch_dump_folder) if push_to_hub: repo_id = f"nielsr/{model_name}" if "slimsam" in model_name else f"meta/{model_name}" processor.push_to_hub(repo_id) hf_model.push_to_hub(repo_id) if __name__ == "__main__": parser = argparse.ArgumentParser() choices = ["sam_vit_b_01ec64", "sam_vit_h_4b8939", "sam_vit_l_0b3195", "slimsam-50-uniform", "slimsam-77-uniform"] parser.add_argument( "--model_name", default="sam_vit_h_4b8939", choices=choices, type=str, help="Name of the original model to convert", ) parser.add_argument( "--checkpoint_path", type=str, required=False, help="Path to the original checkpoint", ) parser.add_argument("--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model.") parser.add_argument( "--push_to_hub", action="store_true", help="Whether to push the model and processor to the hub after converting", ) args = parser.parse_args() if "slimsam" in args.model_name: checkpoint_path = args.checkpoint_path if checkpoint_path is None: raise ValueError("You need to provide a checkpoint path for SlimSAM models.") else: checkpoint_path = hf_hub_download("ybelkada/segment-anything", f"checkpoints/{args.model_name}.pth") convert_sam_checkpoint(args.model_name, checkpoint_path, args.pytorch_dump_folder_path, args.push_to_hub)
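# --- Illustrative invocations (not part of the upstream file) --------------------
# Sketches of typical runs; the output paths are placeholders:
#
#   # Official SAM checkpoint (downloaded from the hub via `hf_hub_download` above):
#   python convert_sam_to_hf.py --model_name sam_vit_b_01ec64 \
#       --pytorch_dump_folder_path /path/to/sam-vit-base
#
#   # SlimSAM checkpoint (a local --checkpoint_path is required, see the check above):
#   python convert_sam_to_hf.py --model_name slimsam-50-uniform \
#       --checkpoint_path /path/to/slimsam_checkpoint.pth \
#       --pytorch_dump_folder_path /path/to/slimsam-50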
transformers/src/transformers/models/sam/convert_sam_to_hf.py/0
{ "file_path": "transformers/src/transformers/models/sam/convert_sam_to_hf.py", "repo_id": "transformers", "token_count": 3753 }
374
# coding=utf-8 # Copyright 2023 The Fairseq Authors, Microsoft Research, and the HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Number Normalizer class for SpeechT5.""" import re class EnglishNumberNormalizer: def __init__(self): self.ones = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"] self.teens = [ "", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen", ] self.tens = ["", "ten", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"] self.thousands = [ "", "thousand", "million", "billion", "trillion", "quadrillion", "quintillion", "sextillion", "septillion", "octillion", "nonillion", "decillion", ] # Define a dictionary to map currency symbols to their names # Top most traded currencies according to # https://en.wikipedia.org/wiki/Template:Most_traded_currencies self.currency_symbols = { "$": " dollars", "€": " euros", "£": " pounds", "¢": " cents", "Â¥": " japanese yen", "ï·Œ": " saudi riyal", "₹": " indian rupees", "â‚œ": " russian rubles", "àž¿": " thai baht", "₺": " turkish liras", "₮": " ukrainian hryvnia", "₣": " swiss francs", "₡": " costa rican colon", "₱": " philippine peso", "₪": " israeli shekels", "₮": " mongolian tögrög", "₩": " south korean won", "₩": " nigerian naira", "₫": " vietnamese Đồng", } def spell_number(self, num): if num == 0: return "zero" parts = [] for i in range(0, len(self.thousands)): if num % 1000 != 0: part = "" hundreds = num % 1000 // 100 tens_units = num % 100 if hundreds > 0: part += self.ones[hundreds] + " hundred" if tens_units > 0: part += " and " if tens_units > 10 and tens_units < 20: part += self.teens[tens_units - 10] else: tens_digit = self.tens[tens_units // 10] ones_digit = self.ones[tens_units % 10] if tens_digit: part += tens_digit if ones_digit: if tens_digit: part += " " part += ones_digit parts.append(part) num //= 1000 return " ".join(reversed(parts)) def convert(self, number): """ Converts an individual number passed in string form to spelt-out form """ if "." 
in number: integer_part, decimal_part = number.split(".") else: integer_part, decimal_part = number, "00" # Extract currency symbol if present currency_symbol = "" for symbol, name in self.currency_symbols.items(): if integer_part.startswith(symbol): currency_symbol = name integer_part = integer_part[len(symbol) :] break if integer_part.startswith("-"): if integer_part[1:].startswith(symbol): currency_symbol = name integer_part = "-" + integer_part[len(symbol) + 1 :] break # Extract 'minus' prefix for negative numbers minus_prefix = "" if integer_part.startswith("-"): minus_prefix = "minus " integer_part = integer_part[1:] elif integer_part.startswith("minus"): minus_prefix = "minus " integer_part = integer_part[len("minus") :] percent_suffix = "" if "%" in integer_part or "%" in decimal_part: percent_suffix = " percent" integer_part = integer_part.replace("%", "") decimal_part = decimal_part.replace("%", "") integer_part = integer_part.zfill(3 * ((len(integer_part) - 1) // 3 + 1)) parts = [] for i in range(0, len(integer_part), 3): chunk = int(integer_part[i : i + 3]) if chunk > 0: part = self.spell_number(chunk) unit = self.thousands[len(integer_part[i:]) // 3 - 1] if unit: part += " " + unit parts.append(part) spelled_integer = " ".join(parts) # Format the spelt-out number based on conditions, such as: # If it has decimal parts, currency symbol, minus prefix, etc if decimal_part == "00": return ( f"{minus_prefix}{spelled_integer}{percent_suffix}{currency_symbol}" if minus_prefix or currency_symbol else f"{spelled_integer}{percent_suffix}" ) else: spelled_decimal = " ".join([self.spell_number(int(digit)) for digit in decimal_part]) return ( f"{minus_prefix}{spelled_integer} point {spelled_decimal}{percent_suffix}{currency_symbol}" if minus_prefix or currency_symbol else f"{minus_prefix}{spelled_integer} point {spelled_decimal}{percent_suffix}" ) def __call__(self, text): """ Convert numbers / number-like quantities in a string to their spelt-out counterparts """ # Form part of the pattern for all currency symbols pattern = r"(?<!\w)(-?\$?\€?\£?\¢?\Â¥?\₹?\â‚œ?\àž¿?\₺?\₮?\₣?\₡?\₱?\₪?\₮?\₩?\₩?\₫?\ï·Œ?\d+(?:\.\d{1,2})?%?)(?!\w)" # Find and replace commas in numbers (15,000 -> 15000, etc) text = re.sub(r"(\d+,\d+)", lambda match: match.group(1).replace(",", ""), text) # Use regex to find and replace numbers in the text converted_text = re.sub(pattern, lambda match: self.convert(match.group(1)), text) converted_text = re.sub(" +", " ", converted_text) return converted_text
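# --- Illustrative usage sketch (not part of the upstream file) -------------------
# A minimal example of the normalizer used in isolation; the sentences are made up.
def _example_number_normalizer() -> None:
    normalizer = EnglishNumberNormalizer()
    # Plain integers, currency symbols and percentages are spelt out in place,
    # e.g. "$15" becomes "fifteen dollars" and "7%" becomes "seven percent".
    print(normalizer("The ticket costs $15, a 7% increase."))
    # Thousands separators are stripped before conversion ("15,000" -> "15000").
    print(normalizer("About 15,000 people attended."))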
transformers/src/transformers/models/speecht5/number_normalizer.py/0
{ "file_path": "transformers/src/transformers/models/speecht5/number_normalizer.py", "repo_id": "transformers", "token_count": 3534 }
375
import argparse import json import requests import timm import torch from huggingface_hub import hf_hub_download from PIL import Image from transformers import AutoImageProcessor, SwinConfig, SwinForImageClassification def get_swin_config(swin_name): config = SwinConfig() name_split = swin_name.split("_") model_size = name_split[1] img_size = int(name_split[4]) window_size = int(name_split[3][-1]) if model_size == "tiny": embed_dim = 96 depths = (2, 2, 6, 2) num_heads = (3, 6, 12, 24) elif model_size == "small": embed_dim = 96 depths = (2, 2, 18, 2) num_heads = (3, 6, 12, 24) elif model_size == "base": embed_dim = 128 depths = (2, 2, 18, 2) num_heads = (4, 8, 16, 32) else: embed_dim = 192 depths = (2, 2, 18, 2) num_heads = (6, 12, 24, 48) if "in22k" in swin_name: num_classes = 21841 else: num_classes = 1000 repo_id = "huggingface/label-files" filename = "imagenet-1k-id2label.json" id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) id2label = {int(k): v for k, v in id2label.items()} config.id2label = id2label config.label2id = {v: k for k, v in id2label.items()} config.image_size = img_size config.num_labels = num_classes config.embed_dim = embed_dim config.depths = depths config.num_heads = num_heads config.window_size = window_size return config def rename_key(name): if "patch_embed.proj" in name: name = name.replace("patch_embed.proj", "embeddings.patch_embeddings.projection") if "patch_embed.norm" in name: name = name.replace("patch_embed.norm", "embeddings.norm") if "layers" in name: name = "encoder." + name if "attn.proj" in name: name = name.replace("attn.proj", "attention.output.dense") if "attn" in name: name = name.replace("attn", "attention.self") if "norm1" in name: name = name.replace("norm1", "layernorm_before") if "norm2" in name: name = name.replace("norm2", "layernorm_after") if "mlp.fc1" in name: name = name.replace("mlp.fc1", "intermediate.dense") if "mlp.fc2" in name: name = name.replace("mlp.fc2", "output.dense") if name == "norm.weight": name = "layernorm.weight" if name == "norm.bias": name = "layernorm.bias" if "head" in name: name = name.replace("head", "classifier") else: name = "swin." 
+ name return name def convert_state_dict(orig_state_dict, model): for key in orig_state_dict.copy().keys(): val = orig_state_dict.pop(key) if "mask" in key: continue elif "qkv" in key: key_split = key.split(".") layer_num = int(key_split[1]) block_num = int(key_split[3]) dim = model.swin.encoder.layers[layer_num].blocks[block_num].attention.self.all_head_size if "weight" in key: orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.query.weight"] = ( val[:dim, :] ) orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.key.weight"] = val[ dim : dim * 2, : ] orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.value.weight"] = ( val[-dim:, :] ) else: orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.query.bias"] = val[ :dim ] orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.key.bias"] = val[ dim : dim * 2 ] orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.value.bias"] = val[ -dim: ] else: orig_state_dict[rename_key(key)] = val return orig_state_dict def convert_swin_checkpoint(swin_name, pytorch_dump_folder_path): timm_model = timm.create_model(swin_name, pretrained=True) timm_model.eval() config = get_swin_config(swin_name) model = SwinForImageClassification(config) model.eval() new_state_dict = convert_state_dict(timm_model.state_dict(), model) model.load_state_dict(new_state_dict) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image_processor = AutoImageProcessor.from_pretrained("microsoft/{}".format(swin_name.replace("_", "-"))) image = Image.open(requests.get(url, stream=True).raw) inputs = image_processor(images=image, return_tensors="pt") timm_outs = timm_model(inputs["pixel_values"]) hf_outs = model(**inputs).logits assert torch.allclose(timm_outs, hf_outs, atol=1e-3) print(f"Saving model {swin_name} to {pytorch_dump_folder_path}") model.save_pretrained(pytorch_dump_folder_path) print(f"Saving image processor to {pytorch_dump_folder_path}") image_processor.save_pretrained(pytorch_dump_folder_path) if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--swin_name", default="swin_tiny_patch4_window7_224", type=str, help="Name of the Swin timm model you'd like to convert.", ) parser.add_argument( "--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model directory." ) args = parser.parse_args() convert_swin_checkpoint(args.swin_name, args.pytorch_dump_folder_path)
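# --- Illustrative invocation (not part of the upstream file) ---------------------
# A sketch of a typical run; the output directory is a placeholder:
#
#   python convert_swin_timm_to_pytorch.py \
#       --swin_name swin_tiny_patch4_window7_224 \
#       --pytorch_dump_folder_path /path/to/swin-tiny-patch4-window7-224
#
# `--swin_name` must follow the timm naming scheme parsed in `get_swin_config`
# (e.g. `swin_small_patch4_window7_224` or `swin_base_patch4_window7_224_in22k`),
# since the model size, window size and image size are read from the name itself.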
transformers/src/transformers/models/swin/convert_swin_timm_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/swin/convert_swin_timm_to_pytorch.py", "repo_id": "transformers", "token_count": 2720 }
376
# coding=utf-8 # Copyright 2022 SwitchTransformers Authors and HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch SwitchTransformers model.""" import copy import math import warnings from typing import Optional, Tuple, Union import torch import torch.nn as nn from torch.nn import CrossEntropyLoss from ...activations import ACT2FN from ...modeling_outputs import ( MoEModelOutput, MoEModelOutputWithPastAndCrossAttentions, Seq2SeqMoEModelOutput, Seq2SeqMoEOutput, ) from ...modeling_utils import PreTrainedModel from ...pytorch_utils import ALL_LAYERNORM_LAYERS, find_pruneable_heads_and_indices, prune_linear_layer from ...utils import ( DUMMY_INPUTS, DUMMY_MASK, add_start_docstrings, add_start_docstrings_to_model_forward, is_torch_fx_proxy, logging, replace_return_docstrings, ) from .configuration_switch_transformers import SwitchTransformersConfig logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "SwitchTransformersConfig" _CHECKPOINT_FOR_DOC = "google/switch-base-8" #################################################### # This dict contains ids and associated url # for the pretrained weights provided with the models #################################################### def router_z_loss_func(router_logits: torch.Tensor) -> float: r""" Compute the router z-loss implemented in PyTorch. The router z-loss was introduced in [Designing Effective Sparse Expert Models](https://arxiv.org/abs/2202.08906). It encourages router logits to remain small in an effort to improve stability. Args: router_logits (`float`): Input logits of shape [batch_size, sequence_length, num_experts] Returns: Scalar router z-loss. """ num_groups, tokens_per_group, _ = router_logits.shape log_z = torch.logsumexp(router_logits, dim=-1) z_loss = log_z**2 return torch.sum(z_loss) / (num_groups * tokens_per_group) def load_balancing_loss_func(router_probs: torch.Tensor, expert_indices: torch.Tensor) -> float: r""" Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch. See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing between experts is too unbalanced. Args: router_probs (`torch.Tensor`): Probability assigned to each expert per token. Shape: [batch_size, seqeunce_length, num_experts]. expert_indices (`torch.Tensor`): Indices tensor of shape [batch_size, seqeunce_length] identifying the selected expert for a given token. Returns: The auxiliary loss. """ num_experts = router_probs.shape[-1] # cast the expert indices to int64, otherwise one-hot encoding will fail if expert_indices.dtype != torch.int64: expert_indices = expert_indices.to(torch.int64) if len(expert_indices.shape) == 2: expert_indices = expert_indices.unsqueeze(2) expert_mask = torch.nn.functional.one_hot(expert_indices, num_experts) # For a given token, determine if it was routed to a given expert. 
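    # At this point `expert_mask` has shape [batch_size, sequence_length, 1, num_experts];
    # taking the max over the singleton routing axis below collapses it to a 0/1 mask of
    # shape [batch_size, sequence_length, num_experts]. The loss then compares, per expert,
    # the fraction of tokens actually routed to it with the mean router probability it was
    # assigned, which encourages a uniform distribution of tokens over experts.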
expert_mask = torch.max(expert_mask, axis=-2).values # cast to float32 otherwise mean will fail expert_mask = expert_mask.to(torch.float32) tokens_per_group_and_expert = torch.mean(expert_mask, axis=-2) router_prob_per_group_and_expert = torch.mean(router_probs, axis=-2) return torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert) * (num_experts**2) class SwitchTransformersTop1Router(nn.Module): """ Router using tokens choose top-1 experts assignment. This router uses the same mechanism as in Switch Transformer (https://arxiv.org/abs/2101.03961) and V-MoE (https://arxiv.org/abs/2106.05974): tokens choose their top experts. Items are sorted by router_probs and then routed to their choice of expert until the expert's expert_capacity is reached. **There is no guarantee that each token is processed by an expert**, or that each expert receives at least one token. """ def __init__(self, config: SwitchTransformersConfig): super().__init__() self.num_experts = config.num_experts self.expert_capacity = config.expert_capacity self.classifier = nn.Linear(config.hidden_size, self.num_experts, bias=config.router_bias) self.jitter_noise = config.router_jitter_noise self.ignore_padding_tokens = config.router_ignore_padding_tokens self.dtype = getattr(torch, config.router_dtype) def _compute_router_probabilities(self, hidden_states: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: r""" Computes router probabilities from input hidden states. Args: hidden_states (`torch.Tensor`): (batch_size, sequence_length, hidden_dim) from which router probabilities are computed. Returns: router_probabilities (`torch.Tensor`): Tensor of shape (batch_size, sequence_length, num_experts) corresponding to the probabilities for each token and expert. Used for routing tokens to experts. router_logits (`torch.Tensor`): Logits tensor of shape (batch_size, sequence_length, num_experts) corresponding to raw router logits. This is used later for computing router z-loss. """ # float32 is used to ensure stability. See the discussion of "selective precision" in # https://arxiv.org/abs/2101.03961. # We also store the previous dtype to cast back the output to the previous dtype self.input_dtype = hidden_states.dtype hidden_states = hidden_states.to(self.dtype) if self.training and self.jitter_noise > 0: # Multiply the token inputs by the uniform distribution - adding some noise hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.jitter_noise, 1.0 + self.jitter_noise) # Shape: [num_groups, tokens_per_group, num_experts] self._cast_classifier() router_logits = self.classifier(hidden_states) # Apply Softmax and cast back to the original `dtype` router_probabilities = nn.functional.softmax(router_logits, dim=-1, dtype=self.dtype).to(self.input_dtype) return router_probabilities, router_logits def _cast_classifier(self): r""" `bitsandbytes` `Linear8bitLt` layers does not support manual casting Therefore we need to check if they are an instance of the `Linear8bitLt` class by checking special attributes. """ if not (hasattr(self.classifier, "SCB") or hasattr(self.classifier, "CB")): self.classifier = self.classifier.to(self.dtype) def forward(self, hidden_states: torch.Tensor) -> Tuple: r""" Generic forward function for every Router class. 
Each Router expects to have the same input hidden states (`hidden_states`) corresponding to the hidden states for each token, the `expert_capacity` corresponding to the number of tokens the Router will send to each expert, some Routers can send up to few tokens to each expert. Each Router works as the following: it expects the hidden states for each token, gets the `router_probs` and `router_logits` from the `router_weights`. This will assign for each token, the raw probability to be assigned to an expert. Then each Router class will have to define its own `_compute_routing_instructions`. Args: hidden_states (`torch.Tensor`) : [num_groups, tokens_per_group, hidden_dim] inputs to send to experts. Returns: Tuple[`torch.Tensor`, `torch.Tensor`, `torch.Tensor`] Tuple containing the expert index, the router probs and the router logits. The router probabilities and logits are required to compute the loss. """ router_probs, router_logits = self._compute_router_probabilities(hidden_states) expert_index = torch.argmax(router_probs, dim=-1) expert_index = torch.nn.functional.one_hot(expert_index, num_classes=self.num_experts) # Mask tokens outside expert capacity. Sum over each sequence token_priority = torch.cumsum(expert_index, dim=-2) # mask if the token routed to to the expert will overflow expert_capacity_mask = token_priority <= self.expert_capacity expert_index = expert_index * expert_capacity_mask router_probs = torch.max(router_probs, dim=-1).values.unsqueeze(-1) return expert_index, router_probs, router_logits # Copied from transformers.models.t5.modeling_t5.T5LayerNorm with T5->SwitchTransformers class SwitchTransformersLayerNorm(nn.Module): def __init__(self, hidden_size, eps=1e-6): """ Construct a layernorm module in the SwitchTransformers style. No bias and no subtraction of mean. """ super().__init__() self.weight = nn.Parameter(torch.ones(hidden_size)) self.variance_epsilon = eps def forward(self, hidden_states): # SwitchTransformers uses a layer_norm which only scales and doesn't shift, which is also known as Root Mean # Square Layer Normalization https://arxiv.org/abs/1910.07467 thus varience is calculated # w/o mean and there is no bias. 
Additionally we want to make sure that the accumulation for # half-precision inputs is done in fp32 variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) # convert into half-precision if necessary if self.weight.dtype in [torch.float16, torch.bfloat16]: hidden_states = hidden_states.to(self.weight.dtype) return self.weight * hidden_states ALL_LAYERNORM_LAYERS.append(SwitchTransformersLayerNorm) # Copied from transformers.models.t5.modeling_t5.T5DenseActDense with T5->SwitchTransformers class SwitchTransformersDenseActDense(nn.Module): def __init__(self, config: SwitchTransformersConfig): super().__init__() self.wi = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.act = ACT2FN[config.dense_act_fn] def forward(self, hidden_states): hidden_states = self.wi(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.dropout(hidden_states) if ( isinstance(self.wo.weight, torch.Tensor) and hidden_states.dtype != self.wo.weight.dtype and self.wo.weight.dtype != torch.int8 ): hidden_states = hidden_states.to(self.wo.weight.dtype) hidden_states = self.wo(hidden_states) return hidden_states class SwitchTransformersSparseMLP(nn.Module): r""" Implementation of the Switch Transformers Sparse MLP module. """ def __init__(self, config: SwitchTransformersConfig, expert_class: nn.Module = SwitchTransformersDenseActDense): super().__init__() # Step 1: Get the correct router according to its class self.router = SwitchTransformersTop1Router(config) # Step 2: Get the experts self.experts = nn.ModuleDict() for idx in range(config.num_experts): self.experts[f"expert_{idx}"] = expert_class(config) def forward(self, hidden_states): r""" Hold on, this will be slightly tricky to understand In the correct order, a MoE layer does the following: 1- Gets the `router_mask` from the router. The shape of the mask is `(batch_size, sequence_length, num_expert)` and corresponds to the argmax of the `router_probs`. The probabilities are needed in the computation of the hidden states : they are broadcasted to the hidden states values (can be interpreted as a scaling factor). 2- Dispatch the tokens to its associated experts. We do a classic for loop over the experts and assign for each expert the corresponding hidden states. """ # Step 1: Get the router_mask from the router as wel as the probabilities router_mask, router_probs, router_logits = self.router(hidden_states) expert_index = torch.argmax(router_mask, dim=-1) # The routers introduced might not always map all the tokens, to a router, which means that some hidden states # can be unchanged from one layer to another. That is why the hidden states are cloned before updating only the seleced ones. 
next_states = hidden_states.clone() router_mask = router_mask.bool() batch_size, seq_len, num_experts = router_mask.shape idx_mask = router_mask.transpose(1, 2).reshape(batch_size * seq_len, num_experts).sum(dim=0) idx_mask = torch.nonzero(idx_mask, as_tuple=True)[ 0 ].tolist() # length: number of "activated" expert / value: index for idx in idx_mask: next_states[router_mask[:, :, idx]] = getattr(self.experts, "expert_{}".format(idx))( hidden_states[router_mask[:, :, idx]] ) hidden_states = router_probs * next_states return hidden_states, (router_logits, expert_index) class SwitchTransformersLayerFF(nn.Module): r""" Switch Transformers Feed Forward layer module. This is a wrapper around the Mixture of Experts module. Parameters: config : ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. is_sparse (`bool`): Whether the MLP layer is a `Sparse` layer (contains a Mixture of Experts) or not """ def __init__(self, config: SwitchTransformersConfig, is_sparse=False): super().__init__() self.is_sparse = is_sparse # Check if it is a sparse layer, if not then it is a dense layer if not self.is_sparse: self.mlp = SwitchTransformersDenseActDense(config) else: self.mlp = SwitchTransformersSparseMLP(config) self.layer_norm = SwitchTransformersLayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) def forward(self, hidden_states, output_router_logits): forwarded_states = self.layer_norm(hidden_states) forwarded_states = self.mlp(forwarded_states) if isinstance(forwarded_states, tuple): forwarded_states, router_tuple = forwarded_states else: router_tuple = None output = hidden_states + self.dropout(forwarded_states) if output_router_logits and router_tuple is not None: output = (output, router_tuple) return output # Copied from transformers.models.t5.modeling_t5.T5Attention with T5->SwitchTransformers class SwitchTransformersAttention(nn.Module): def __init__(self, config: SwitchTransformersConfig, has_relative_attention_bias=False): super().__init__() self.is_decoder = config.is_decoder self.has_relative_attention_bias = has_relative_attention_bias self.relative_attention_num_buckets = config.relative_attention_num_buckets self.relative_attention_max_distance = config.relative_attention_max_distance self.d_model = config.d_model self.key_value_proj_dim = config.d_kv self.n_heads = config.num_heads self.dropout = config.dropout_rate self.inner_dim = self.n_heads * self.key_value_proj_dim # Mesh TensorFlow initialization to avoid scaling before softmax self.q = nn.Linear(self.d_model, self.inner_dim, bias=False) self.k = nn.Linear(self.d_model, self.inner_dim, bias=False) self.v = nn.Linear(self.d_model, self.inner_dim, bias=False) self.o = nn.Linear(self.inner_dim, self.d_model, bias=False) if self.has_relative_attention_bias: self.relative_attention_bias = nn.Embedding(self.relative_attention_num_buckets, self.n_heads) self.pruned_heads = set() self.gradient_checkpointing = False def prune_heads(self, heads): if len(heads) == 0: return heads, index = find_pruneable_heads_and_indices( heads, self.n_heads, self.key_value_proj_dim, self.pruned_heads ) # Prune linear layers self.q = prune_linear_layer(self.q, index) self.k = prune_linear_layer(self.k, index) self.v = prune_linear_layer(self.v, index) self.o = 
prune_linear_layer(self.o, index, dim=1) # Update hyper params self.n_heads = self.n_heads - len(heads) self.inner_dim = self.key_value_proj_dim * self.n_heads self.pruned_heads = self.pruned_heads.union(heads) @staticmethod def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128): """ Adapted from Mesh Tensorflow: https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593 Translate relative position to a bucket number for relative attention. The relative position is defined as memory_position - query_position, i.e. the distance in tokens from the attending position to the attended-to position. If bidirectional=False, then positive relative positions are invalid. We use smaller buckets for small absolute relative_position and larger buckets for larger absolute relative_positions. All relative positions >=max_distance map to the same bucket. All relative positions <=-max_distance map to the same bucket. This should allow for more graceful generalization to longer sequences than the model has been trained on Args: relative_position: an int32 Tensor bidirectional: a boolean - whether the attention is bidirectional num_buckets: an integer max_distance: an integer Returns: a Tensor with the same shape as relative_position, containing int32 values in the range [0, num_buckets) """ relative_buckets = 0 if bidirectional: num_buckets //= 2 relative_buckets += (relative_position > 0).to(torch.long) * num_buckets relative_position = torch.abs(relative_position) else: relative_position = -torch.min(relative_position, torch.zeros_like(relative_position)) # now relative_position is in the range [0, inf) # half of the buckets are for exact increments in positions max_exact = num_buckets // 2 is_small = relative_position < max_exact # The other half of the buckets are for logarithmically bigger bins in positions up to max_distance relative_position_if_large = max_exact + ( torch.log(relative_position.float() / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact) ).to(torch.long) relative_position_if_large = torch.min( relative_position_if_large, torch.full_like(relative_position_if_large, num_buckets - 1) ) relative_buckets += torch.where(is_small, relative_position, relative_position_if_large) return relative_buckets def compute_bias(self, query_length, key_length, device=None): """Compute binned relative position bias""" if device is None: device = self.relative_attention_bias.weight.device context_position = torch.arange(query_length, dtype=torch.long, device=device)[:, None] memory_position = torch.arange(key_length, dtype=torch.long, device=device)[None, :] relative_position = memory_position - context_position # shape (query_length, key_length) relative_position_bucket = self._relative_position_bucket( relative_position, # shape (query_length, key_length) bidirectional=(not self.is_decoder), num_buckets=self.relative_attention_num_buckets, max_distance=self.relative_attention_max_distance, ) values = self.relative_attention_bias(relative_position_bucket) # shape (query_length, key_length, num_heads) values = values.permute([2, 0, 1]).unsqueeze(0) # shape (1, num_heads, query_length, key_length) return values def forward( self, hidden_states, mask=None, key_value_states=None, position_bias=None, past_key_value=None, layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, ): """ Self-attention (if key_value_states is None) or 
attention over source sentence (provided by key_value_states). """ # Input is (batch_size, seq_length, dim) # Mask is (batch_size, key_length) (non-causal) or (batch_size, key_length, key_length) # past_key_value[0] is (batch_size, n_heads, q_len - 1, dim_per_head) batch_size, seq_length = hidden_states.shape[:2] real_seq_length = seq_length if past_key_value is not None: if len(past_key_value) != 2: raise ValueError( f"past_key_value should have 2 past states: keys and values. Got { len(past_key_value)} past states" ) real_seq_length += past_key_value[0].shape[2] if query_length is None else query_length key_length = real_seq_length if key_value_states is None else key_value_states.shape[1] def shape(states): """projection""" return states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2) def unshape(states): """reshape""" return states.transpose(1, 2).contiguous().view(batch_size, -1, self.inner_dim) def project(hidden_states, proj_layer, key_value_states, past_key_value): """projects hidden states correctly to key/query states""" if key_value_states is None: # self-attn # (batch_size, n_heads, seq_length, dim_per_head) hidden_states = shape(proj_layer(hidden_states)) elif past_key_value is None: # cross-attn # (batch_size, n_heads, seq_length, dim_per_head) hidden_states = shape(proj_layer(key_value_states)) if past_key_value is not None: if key_value_states is None: # self-attn # (batch_size, n_heads, key_length, dim_per_head) hidden_states = torch.cat([past_key_value, hidden_states], dim=2) elif past_key_value.shape[2] != key_value_states.shape[1]: # checking that the `sequence_length` of the `past_key_value` is the same as # the provided `key_value_states` to support prefix tuning # cross-attn # (batch_size, n_heads, seq_length, dim_per_head) hidden_states = shape(proj_layer(key_value_states)) else: # cross-attn hidden_states = past_key_value return hidden_states # get query states query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head) # get key/value states key_states = project( hidden_states, self.k, key_value_states, past_key_value[0] if past_key_value is not None else None ) value_states = project( hidden_states, self.v, key_value_states, past_key_value[1] if past_key_value is not None else None ) # compute scores scores = torch.matmul( query_states, key_states.transpose(3, 2) ) # equivalent of torch.einsum("bnqd,bnkd->bnqk", query_states, key_states), compatible with onnx op>9 if position_bias is None: if not self.has_relative_attention_bias: position_bias = torch.zeros( (1, self.n_heads, real_seq_length, key_length), device=scores.device, dtype=scores.dtype ) if self.gradient_checkpointing and self.training: position_bias.requires_grad = True else: position_bias = self.compute_bias(real_seq_length, key_length, device=scores.device) # if key and values are already calculated # we want only the last query position bias if past_key_value is not None: position_bias = position_bias[:, :, -hidden_states.size(1) :, :] if mask is not None: position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length) if self.pruned_heads: mask = torch.ones(position_bias.shape[1]) mask[list(self.pruned_heads)] = 0 position_bias_masked = position_bias[:, mask.bool()] else: position_bias_masked = position_bias scores += position_bias_masked attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as( scores ) # (batch_size, n_heads, seq_length, key_length) attn_weights = nn.functional.dropout( attn_weights, 
p=self.dropout, training=self.training ) # (batch_size, n_heads, seq_length, key_length) # Mask heads if we want to if layer_head_mask is not None: attn_weights = attn_weights * layer_head_mask attn_output = unshape(torch.matmul(attn_weights, value_states)) # (batch_size, seq_length, dim) attn_output = self.o(attn_output) present_key_value_state = (key_states, value_states) if (self.is_decoder and use_cache) else None outputs = (attn_output,) + (present_key_value_state,) + (position_bias,) if output_attentions: outputs = outputs + (attn_weights,) return outputs # Copied from transformers.models.t5.modeling_t5.T5LayerSelfAttention with T5->SwitchTransformers class SwitchTransformersLayerSelfAttention(nn.Module): def __init__(self, config, has_relative_attention_bias=False): super().__init__() self.SelfAttention = SwitchTransformersAttention( config, has_relative_attention_bias=has_relative_attention_bias ) self.layer_norm = SwitchTransformersLayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) def forward( self, hidden_states, attention_mask=None, position_bias=None, layer_head_mask=None, past_key_value=None, use_cache=False, output_attentions=False, ): normed_hidden_states = self.layer_norm(hidden_states) attention_output = self.SelfAttention( normed_hidden_states, mask=attention_mask, position_bias=position_bias, layer_head_mask=layer_head_mask, past_key_value=past_key_value, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states = hidden_states + self.dropout(attention_output[0]) outputs = (hidden_states,) + attention_output[1:] # add attentions if we output them return outputs # Copied from transformers.models.t5.modeling_t5.T5LayerCrossAttention with T5->SwitchTransformers class SwitchTransformersLayerCrossAttention(nn.Module): def __init__(self, config): super().__init__() self.EncDecAttention = SwitchTransformersAttention(config, has_relative_attention_bias=False) self.layer_norm = SwitchTransformersLayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) def forward( self, hidden_states, key_value_states, attention_mask=None, position_bias=None, layer_head_mask=None, past_key_value=None, use_cache=False, query_length=None, output_attentions=False, ): normed_hidden_states = self.layer_norm(hidden_states) attention_output = self.EncDecAttention( normed_hidden_states, mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, layer_head_mask=layer_head_mask, past_key_value=past_key_value, use_cache=use_cache, query_length=query_length, output_attentions=output_attentions, ) layer_output = hidden_states + self.dropout(attention_output[0]) outputs = (layer_output,) + attention_output[1:] # add attentions if we output them return outputs class SwitchTransformersBlock(nn.Module): def __init__(self, config, has_relative_attention_bias=False, is_sparse=False): super().__init__() self.is_decoder = config.is_decoder self.is_sparse = is_sparse self.layer = nn.ModuleList() self.layer.append( SwitchTransformersLayerSelfAttention(config, has_relative_attention_bias=has_relative_attention_bias) ) if self.is_decoder: self.layer.append(SwitchTransformersLayerCrossAttention(config)) self.layer.append(SwitchTransformersLayerFF(config, is_sparse=self.is_sparse)) def forward( self, hidden_states, attention_mask=None, position_bias=None, encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, layer_head_mask=None, 
cross_attn_layer_head_mask=None, past_key_value=None, use_cache=False, output_attentions=False, output_router_logits=True, return_dict=True, ): if past_key_value is not None: if not self.is_decoder: logger.warning("`past_key_values` is passed to the encoder. Please make sure this is intended.") expected_num_past_key_values = 2 if encoder_hidden_states is None else 4 if len(past_key_value) != expected_num_past_key_values: raise ValueError( f"There should be {expected_num_past_key_values} past states. " f"{'2 (past / key) for cross attention. ' if expected_num_past_key_values == 4 else ''}" f"Got {len(past_key_value)} past key / value states" ) self_attn_past_key_value = past_key_value[:2] cross_attn_past_key_value = past_key_value[2:] else: self_attn_past_key_value, cross_attn_past_key_value = None, None self_attention_outputs = self.layer[0]( hidden_states, attention_mask=attention_mask, position_bias=position_bias, layer_head_mask=layer_head_mask, past_key_value=self_attn_past_key_value, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states, present_key_value_state = self_attention_outputs[:2] attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights # clamp inf values to enable fp16 training if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) do_cross_attention = self.is_decoder and encoder_hidden_states is not None if do_cross_attention: # the actual query length is unknown for cross attention # if using past key value states. Need to inject it here if present_key_value_state is not None: query_length = present_key_value_state[0].shape[2] else: query_length = None cross_attention_outputs = self.layer[1]( hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, layer_head_mask=cross_attn_layer_head_mask, past_key_value=cross_attn_past_key_value, query_length=query_length, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states = cross_attention_outputs[0] # clamp inf values to enable fp16 training if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) # Combine self attn and cross attn key value states if present_key_value_state is not None: present_key_value_state = present_key_value_state + cross_attention_outputs[1] # Keep cross-attention outputs and relative position weights attention_outputs = attention_outputs + cross_attention_outputs[2:] # Apply Feed Forward layer hidden_states = self.layer[-1](hidden_states, output_router_logits) if isinstance(hidden_states, tuple): hidden_states, router_tuple = hidden_states else: router_tuple = (torch.zeros((1,), device=hidden_states.device, dtype=torch.int64),) # clamp inf values to enable fp16 training if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) outputs = (hidden_states,) if use_cache: outputs = outputs + (present_key_value_state,) + attention_outputs + (router_tuple,) else: outputs = outputs + attention_outputs + (router_tuple,) return outputs # hidden-states, present_key_value_states, 
(self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights), (router_tuple) class SwitchTransformersPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = SwitchTransformersConfig base_model_prefix = "switch_transformers" supports_gradient_checkpointing = True _no_split_modules = ["SwitchTransformersBlock"] @property def dummy_inputs(self): input_ids = torch.tensor(DUMMY_INPUTS) input_mask = torch.tensor(DUMMY_MASK) dummy_inputs = { "decoder_input_ids": input_ids, "input_ids": input_ids, "decoder_attention_mask": input_mask, } return dummy_inputs def _init_weights(self, module): """Initialize the weights""" factor = self.config.initializer_factor # Used for testing weights initialization if isinstance(module, SwitchTransformersLayerNorm): module.weight.data.fill_(factor * 1.0) elif isinstance( module, (SwitchTransformersModel, SwitchTransformersForConditionalGeneration, SwitchTransformersEncoderModel), ): # Mesh TensorFlow embeddings initialization # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L1624 module.shared.weight.data.normal_(mean=0.0, std=factor * 1.0) if hasattr(module, "lm_head") and not self.config.tie_word_embeddings: module.lm_head.weight.data.normal_(mean=0.0, std=factor * 1.0) elif isinstance(module, SwitchTransformersDenseActDense): # Mesh TensorFlow FF initialization # See https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer_layers.py#L56 # and https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L89 module.wi.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5)) if hasattr(module.wi, "bias") and module.wi.bias is not None: module.wi.bias.data.zero_() module.wo.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_ff) ** -0.5)) if hasattr(module.wo, "bias") and module.wo.bias is not None: module.wo.bias.data.zero_() elif isinstance(module, SwitchTransformersAttention): # Mesh TensorFlow attention initialization to avoid scaling before softmax # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/attention.py#L136 d_model = self.config.d_model key_value_proj_dim = self.config.d_kv n_heads = self.config.num_heads module.q.weight.data.normal_(mean=0.0, std=factor * ((d_model * key_value_proj_dim) ** -0.5)) module.k.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5)) module.v.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5)) module.o.weight.data.normal_(mean=0.0, std=factor * ((n_heads * key_value_proj_dim) ** -0.5)) if module.has_relative_attention_bias: module.relative_attention_bias.weight.data.normal_(mean=0.0, std=factor * ((d_model) ** -0.5)) elif isinstance(module, SwitchTransformersSparseMLP): # Mesh TensorFlow attention initialization to avoid scaling before softmax # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/attention.py#L136 d_model = self.config.d_model key_value_proj_dim = self.config.d_kv n_heads = self.config.num_heads module.router.classifier.weight.data.normal_(mean=0.0, std=factor * 1) for idx in range(self.config.num_experts): module.experts[f"expert_{idx}"].wi.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5)) 
module.experts[f"expert_{idx}"].wo.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5)) def _shift_right(self, input_ids): decoder_start_token_id = self.config.decoder_start_token_id pad_token_id = self.config.pad_token_id if decoder_start_token_id is None: raise ValueError( "self.model.config.decoder_start_token_id has to be defined. In SwitchTransformers it is usually set" " to the pad_token_id. See SwitchTransformers docs for more information" ) # shift inputs to the right if is_torch_fx_proxy(input_ids): # Item assignment is not supported natively for proxies. shifted_input_ids = torch.full(input_ids.shape[:-1] + (1,), decoder_start_token_id) shifted_input_ids = torch.cat([shifted_input_ids, input_ids[..., :-1]], dim=-1) else: shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[..., 1:] = input_ids[..., :-1].clone() shifted_input_ids[..., 0] = decoder_start_token_id if pad_token_id is None: raise ValueError("self.model.config.pad_token_id has to be defined.") # replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids class SwitchTransformersStack(SwitchTransformersPreTrainedModel): def __init__(self, config, embed_tokens=None): super().__init__(config) self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model) if embed_tokens is not None: self.embed_tokens.weight = embed_tokens.weight self.is_decoder = config.is_decoder sparse_step = config.decoder_sparse_step if self.is_decoder else config.encoder_sparse_step config.num_layers = config.num_decoder_layers if self.is_decoder else config.num_layers self.block = nn.ModuleList() for i in range(config.num_layers): is_sparse = (i % sparse_step == 1 or sparse_step == 1) if sparse_step > 0 else False self.block.append( SwitchTransformersBlock(config, has_relative_attention_bias=bool(i == 0), is_sparse=is_sparse) ) self.final_layer_norm = SwitchTransformersLayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) # Initialize weights and apply final processing self.post_init() self.device_map = None self.gradient_checkpointing = False def get_input_embeddings(self): return self.embed_tokens def set_input_embeddings(self, new_embeddings): self.embed_tokens = new_embeddings def forward( self, input_ids=None, attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, head_mask=None, cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, output_router_logits=True, return_dict=None, ): use_cache = use_cache if use_cache is not None else self.config.use_cache output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None and inputs_embeds is not None: err_msg_prefix = "decoder_" if self.is_decoder else "" raise ValueError( f"You cannot specify both {err_msg_prefix}input_ids and {err_msg_prefix}inputs_embeds at the same time" ) elif input_ids is not None: input_shape = input_ids.size() input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: err_msg_prefix = "decoder_" if self.is_decoder else "" raise ValueError(f"You have to specify 
either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds") if inputs_embeds is None: if self.embed_tokens is None: raise ValueError("You have to initialize the model with valid token embeddings") inputs_embeds = self.embed_tokens(input_ids) batch_size, seq_length = input_shape # required mask seq length can be calculated via length of past mask_seq_length = past_key_values[0][0].shape[2] + seq_length if past_key_values is not None else seq_length if use_cache is True: if not self.is_decoder: raise ValueError(f"`use_cache` can only be set to `True` if {self} is used as a decoder") if attention_mask is None: attention_mask = torch.ones(batch_size, mask_seq_length, device=inputs_embeds.device) if self.is_decoder and encoder_attention_mask is None and encoder_hidden_states is not None: encoder_seq_length = encoder_hidden_states.shape[1] encoder_attention_mask = torch.ones( batch_size, encoder_seq_length, device=inputs_embeds.device, dtype=torch.long ) # initialize past_key_values with `None` if past does not exist if past_key_values is None: past_key_values = [None] * len(self.block) # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape) # If a 2D or 3D attention mask is provided for the cross-attention # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] if self.is_decoder and encoder_hidden_states is not None: encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) if encoder_attention_mask is None: encoder_attention_mask = torch.ones(encoder_hidden_shape, device=inputs_embeds.device) encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = None if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
) use_cache = False # Prepare head mask if needed head_mask = self.get_head_mask(head_mask, self.config.num_layers) cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) present_key_value_states = () if use_cache else None all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_router_probs = () if output_router_logits else None all_cross_attentions = () if (output_attentions and self.is_decoder) else None position_bias = None encoder_decoder_position_bias = None hidden_states = self.dropout(inputs_embeds) for i, (layer_module, past_key_value) in enumerate(zip(self.block, past_key_values)): layer_head_mask = head_mask[i] cross_attn_layer_head_mask = cross_attn_head_mask[i] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( layer_module.forward, hidden_states, extended_attention_mask, position_bias, encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, layer_head_mask, cross_attn_layer_head_mask, None, # past_key_value is always None with gradient checkpointing use_cache, output_attentions, ) else: layer_outputs = layer_module( hidden_states, attention_mask=extended_attention_mask, position_bias=position_bias, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, encoder_decoder_position_bias=encoder_decoder_position_bias, layer_head_mask=layer_head_mask, cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_value=past_key_value, use_cache=use_cache, output_attentions=output_attentions, output_router_logits=output_router_logits, ) router_probs = layer_outputs[-1] layer_outputs = layer_outputs[:-1] # layer_outputs is a tuple with: # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights) if use_cache is False: layer_outputs = layer_outputs[:1] + (None,) + layer_outputs[1:] hidden_states, present_key_value_state = layer_outputs[:2] # We share the position biases between the layers - the first layer store them # layer_outputs = hidden-states, key-value-states (self-attention position bias), (self-attention weights), # (cross-attention position bias), (cross-attention weights) position_bias = layer_outputs[2] if self.is_decoder and encoder_hidden_states is not None: encoder_decoder_position_bias = layer_outputs[4 if output_attentions else 3] # append next layer key value states if use_cache: present_key_value_states = present_key_value_states + (present_key_value_state,) if output_attentions: all_attentions = all_attentions + (layer_outputs[3],) if self.is_decoder: all_cross_attentions = all_cross_attentions + (layer_outputs[5],) if output_router_logits: all_router_probs = all_router_probs + (router_probs,) hidden_states = self.final_layer_norm(hidden_states) hidden_states = self.dropout(hidden_states) # Add last layer if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [ hidden_states, present_key_value_states, all_hidden_states, all_attentions, all_cross_attentions, all_router_probs, ] if v is not None ) return MoEModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=present_key_value_states, hidden_states=all_hidden_states, attentions=all_attentions, 
cross_attentions=all_cross_attentions, router_probs=all_router_probs, ) SWITCH_TRANSFORMERS_START_DOCSTRING = r""" The SWITCH_TRANSFORMERS model was proposed in [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by [William Fedus](https://arxiv.org/search/cs?searchtype=author&query=Fedus%2C+W), [Barret Zoph](https://arxiv.org/search/cs?searchtype=author&query=Zoph%2C+B), and [Noam Shazeer](https://arxiv.org/search/cs?searchtype=author&query=Shazeer%2C+N). It's an encoder-decoder T5-like model with sparse Feed Forward that stands for Mixture of Experts (MoE) architecture. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. """ SWITCH_TRANSFORMERS_INPUTS_DOCSTRING = r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for detail. [What are input IDs?](../glossary#input-ids) To know more on how to prepare `input_ids` for pretraining take a look a [SWITCH_TRANSFORMERS Training](./switch_transformers#training). attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are decoder input IDs?](../glossary#decoder-input-ids) SWITCH_TRANSFORMERS uses the `pad_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). To know more on how to prepare `decoder_input_ids` for pretraining take a look at [SWITCH_TRANSFORMERS Training](./switch_transformers#training). decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules in the encoder. 
Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of (`last_hidden_state`, `optional`: *hidden_states*, `optional`: *attentions*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`. use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. output_router_logits (`bool`, *optional*): Whether or not to return the logits of all the routers. They are useful for computing the router loss, and should not be returned during inference. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
""" SWITCH_TRANSFORMERS_ENCODER_INPUTS_DOCSTRING = r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for detail. To know more on how to prepare `input_ids` for pretraining take a look a [SWITCH_TRANSFORMERS Training](./switch_transformers#training). attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. output_router_logits (`bool`, *optional*): Whether or not to return the logits of all the routers. They are useful for computing the router loss, and should not be returned during inference. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ # Warning message for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask __HEAD_MASK_WARNING_MSG = """ The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. Currently, `decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions. If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers, num_heads)`. 
""" @add_start_docstrings( "The bare SWITCH_TRANSFORMERS Model transformer outputting raw hidden-states without any specific head on top.", SWITCH_TRANSFORMERS_START_DOCSTRING, ) class SwitchTransformersModel(SwitchTransformersPreTrainedModel): _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"] def __init__(self, config: SwitchTransformersConfig): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.d_model) encoder_config = copy.deepcopy(config) encoder_config.is_decoder = False encoder_config.use_cache = False encoder_config.is_encoder_decoder = False self.encoder = SwitchTransformersStack(encoder_config, self.shared) decoder_config = copy.deepcopy(config) decoder_config.is_decoder = True decoder_config.is_encoder_decoder = False self.decoder = SwitchTransformersStack(decoder_config, self.shared) # Initialize weights and apply final processing self.post_init() # Model parallel self.device_map = None def get_input_embeddings(self): return self.shared def set_input_embeddings(self, new_embeddings): self.shared = new_embeddings self.encoder.set_input_embeddings(new_embeddings) self.decoder.set_input_embeddings(new_embeddings) def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.decoder.embed_tokens, self.shared) def get_encoder(self): return self.encoder def get_decoder(self): return self.decoder def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) @add_start_docstrings_to_model_forward(SWITCH_TRANSFORMERS_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqMoEModelOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, head_mask: Optional[torch.FloatTensor] = None, decoder_head_mask: Optional[torch.FloatTensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.Tensor] = None, decoder_inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_router_logits: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.FloatTensor], Seq2SeqMoEModelOutput]: r""" Returns: Example: ```python >>> from transformers import AutoTokenizer, SwitchTransformersModel >>> tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8") >>> model = SwitchTransformersModel.from_pretrained("google/switch-base-8") >>> input_ids = tokenizer( ... "Studies have been shown that owning a dog is good for you", return_tensors="pt" ... ).input_ids # Batch size 1 >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1 >>> # preprocess: Prepend decoder_input_ids with start token which is pad token for SwitchTransformersModel. >>> # This is not needed for torch's SwitchTransformersForConditionalGeneration as it does this internally using labels arg. 
>>> decoder_input_ids = model._shift_right(decoder_input_ids)

        >>> # forward pass
        >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
        >>> last_hidden_states = outputs.last_hidden_state
        ```"""
        use_cache = use_cache if use_cache is not None else self.config.use_cache
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask
        if head_mask is not None and decoder_head_mask is None:
            if self.config.num_layers == self.config.num_decoder_layers:
                warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning)
                decoder_head_mask = head_mask

        if (
            output_router_logits
            and self.config.num_sparse_encoder_layers == 0
            and self.config.num_sparse_decoder_layers == 0
        ):
            raise ValueError(
                "You asked to return `output_router_logits` but the transformer is dense, and does"
                " not contain any sparse MLP Layers. Set `output_router_logits = False` and restart"
            )
        # Encode if needed (training, first prediction pass)
        if encoder_outputs is None:
            encoder_outputs = self.encoder(
                input_ids=input_ids,
                attention_mask=attention_mask,
                inputs_embeds=inputs_embeds,
                head_mask=head_mask,
                output_attentions=output_attentions,
                output_hidden_states=output_hidden_states,
                output_router_logits=output_router_logits,
                return_dict=return_dict,
            )
        elif return_dict and not isinstance(encoder_outputs, MoEModelOutput):
            encoder_outputs = MoEModelOutput(
                last_hidden_state=encoder_outputs[0],
                hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
                attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
                router_probs=encoder_outputs[3] if len(encoder_outputs) > 3 else None,
            )

        hidden_states = encoder_outputs[0]

        # Decode
        decoder_outputs = self.decoder(
            input_ids=decoder_input_ids,
            attention_mask=decoder_attention_mask,
            inputs_embeds=decoder_inputs_embeds,
            past_key_values=past_key_values,
            encoder_hidden_states=hidden_states,
            encoder_attention_mask=attention_mask,
            head_mask=decoder_head_mask,
            cross_attn_head_mask=cross_attn_head_mask,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_router_logits=output_router_logits,
            return_dict=return_dict,
        )

        if not return_dict:
            return decoder_outputs + encoder_outputs

        return Seq2SeqMoEModelOutput(
            last_hidden_state=decoder_outputs.last_hidden_state,
            past_key_values=decoder_outputs.past_key_values,
            decoder_hidden_states=decoder_outputs.hidden_states,
            decoder_attentions=decoder_outputs.attentions,
            cross_attentions=decoder_outputs.cross_attentions,
            decoder_router_logits=decoder_outputs.router_probs,
            encoder_last_hidden_state=encoder_outputs.last_hidden_state,
            encoder_hidden_states=encoder_outputs.hidden_states,
            encoder_attentions=encoder_outputs.attentions,
            encoder_router_logits=encoder_outputs.router_probs,
        )


@add_start_docstrings(
    """SWITCH_TRANSFORMERS Model with a `language modeling` head on top.""", SWITCH_TRANSFORMERS_START_DOCSTRING
)
class SwitchTransformersForConditionalGeneration(SwitchTransformersPreTrainedModel):
    _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight", "lm_head.weight"]

    def __init__(self, config: SwitchTransformersConfig):
        super().__init__(config)
        self.model_dim = config.d_model

        self.shared = nn.Embedding(config.vocab_size, config.d_model)

        encoder_config = copy.deepcopy(config)
        encoder_config.is_decoder = False
        encoder_config.use_cache = False
        encoder_config.is_encoder_decoder = False
        self.encoder = SwitchTransformersStack(encoder_config, 
self.shared) decoder_config = copy.deepcopy(config) decoder_config.is_decoder = True decoder_config.is_encoder_decoder = False decoder_config.num_layers = config.num_decoder_layers self.decoder = SwitchTransformersStack(decoder_config, self.shared) self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False) self.router_z_loss_coef = config.router_z_loss_coef self.router_aux_loss_coef = config.router_aux_loss_coef # Initialize weights and apply final processing self.post_init() # Model parallel self.device_map = None def get_input_embeddings(self): return self.shared def set_input_embeddings(self, new_embeddings): self.shared = new_embeddings self.encoder.set_input_embeddings(new_embeddings) self.decoder.set_input_embeddings(new_embeddings) def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.decoder.embed_tokens, self.shared) def set_output_embeddings(self, new_embeddings): self.lm_head = new_embeddings def get_output_embeddings(self): return self.lm_head def get_encoder(self): return self.encoder def get_decoder(self): return self.decoder @add_start_docstrings_to_model_forward(SWITCH_TRANSFORMERS_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqMoEOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, head_mask: Optional[torch.FloatTensor] = None, decoder_head_mask: Optional[torch.FloatTensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None, past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_router_logits: Optional[bool] = True, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.FloatTensor], Seq2SeqMoEOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` Returns: Examples: ```python >>> from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration >>> tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8") >>> model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8") >>> # training >>> input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids >>> labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids >>> outputs = model(input_ids=input_ids, labels=labels) >>> loss = outputs.loss >>> logits = outputs.logits >>> # inference >>> input_ids = tokenizer( ... "summarize: studies have shown that owning a dog is good for you", return_tensors="pt" ... ).input_ids # Batch size 1 >>> outputs = model.generate(input_ids) >>> # . To, let’s say you have a dog. 
To summarize: >>> # Since the model has been trained on MLM, this will output gibberish ```""" use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask if head_mask is not None and decoder_head_mask is None: if self.config.num_layers == self.config.num_decoder_layers: warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) decoder_head_mask = head_mask # Encode if needed (training, first prediction pass) if encoder_outputs is None: # Convert encoder inputs in embeddings if needed encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, return_dict=return_dict, ) elif return_dict and not isinstance(encoder_outputs, MoEModelOutput): encoder_outputs = MoEModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, router_probs=encoder_outputs[3] if len(encoder_outputs) > 3 else None, ) hidden_states = encoder_outputs[0] if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None: # get decoder inputs from shifting lm labels to the right decoder_input_ids = self._shift_right(labels) # Decode decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, inputs_embeds=decoder_inputs_embeds, past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, head_mask=decoder_head_mask, cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, return_dict=return_dict, ) sequence_output = decoder_outputs[0] if self.config.tie_word_embeddings: # Rescale output before projecting on vocab # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/transformer.py#L586 sequence_output = sequence_output * (self.model_dim**-0.5) lm_logits = self.lm_head(sequence_output) loss = None encoder_z_loss = None encoder_aux_loss = None decoder_z_loss = None decoder_aux_loss = None if output_router_logits: # Compute the router loss (z_loss + auxiliary loss) for each router in the encoder and decoder if self.encoder.config.encoder_sparse_step > 1: encoder_router_logits, encoder_expert_indexes = self._unpack_router_logits(encoder_outputs[-1]) encoder_z_loss = router_z_loss_func(encoder_router_logits) encoder_router_probs = nn.Softmax(dim=-1)(encoder_router_logits) encoder_aux_loss = load_balancing_loss_func(encoder_router_probs, encoder_expert_indexes) else: encoder_z_loss = 0 encoder_aux_loss = 0 if self.decoder.config.decoder_sparse_step > 1: decoder_router_logits, decoder_expert_indexes = self._unpack_router_logits(decoder_outputs[-1]) decoder_z_loss = router_z_loss_func(decoder_router_logits) decoder_router_probs = nn.Softmax(dim=-1)(decoder_router_logits) decoder_aux_loss = load_balancing_loss_func(decoder_router_probs, decoder_expert_indexes) else: decoder_z_loss = 0 decoder_aux_loss = 0 if labels is not None: loss_fct = CrossEntropyLoss(ignore_index=-100) # move labels to correct device to enable PP labels = 
labels.to(lm_logits.device) loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) if output_router_logits: z_loss = self.router_z_loss_coef * (encoder_z_loss + decoder_z_loss) aux_loss = self.router_aux_loss_coef * (encoder_aux_loss + decoder_aux_loss) loss = loss + z_loss + aux_loss if not return_dict: output = (lm_logits,) if output_router_logits: output += (encoder_z_loss, encoder_aux_loss, decoder_z_loss, decoder_aux_loss) output += (*decoder_outputs[1:], *encoder_outputs) return ((loss,) + output) if loss is not None else output return Seq2SeqMoEOutput( loss=loss, logits=lm_logits, encoder_z_loss=encoder_z_loss, encoder_aux_loss=encoder_aux_loss, decoder_z_loss=decoder_z_loss, decoder_aux_loss=decoder_aux_loss, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, decoder_router_logits=decoder_outputs.router_probs, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, encoder_router_logits=encoder_outputs.router_probs, ) def _unpack_router_logits(self, router_outputs): total_router_logits = [] total_expert_indexes = [] for router_output in router_outputs: if len(router_output[0].shape) > 1: router_logits, expert_indexes = router_output total_router_logits.append(router_logits) total_expert_indexes.append(expert_indexes) return torch.cat(total_router_logits, dim=1), torch.cat(total_expert_indexes, dim=1) def prepare_inputs_for_generation( self, input_ids, past_key_values=None, attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, use_cache=None, encoder_outputs=None, **kwargs, ): # cut decoder_input_ids if past_key_values is used if past_key_values is not None: past_length = past_key_values[0][0].shape[2] # Some generation methods already pass only the last input ID if input_ids.shape[1] > past_length: remove_prefix_length = past_length else: # Default to old behavior: keep only final ID remove_prefix_length = input_ids.shape[1] - 1 input_ids = input_ids[:, remove_prefix_length:] output_router_logits = kwargs.get("output_router_logits", True) return { "decoder_input_ids": input_ids, "past_key_values": past_key_values, "encoder_outputs": encoder_outputs, "attention_mask": attention_mask, "head_mask": head_mask, "decoder_head_mask": decoder_head_mask, "cross_attn_head_mask": cross_attn_head_mask, "use_cache": use_cache, "output_router_logits": output_router_logits, } def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor): return self._shift_right(labels) def _reorder_cache(self, past_key_values, beam_idx): # if decoder past is not included in output # speedy decoding is disabled and no need to reorder if past_key_values is None: logger.warning("You might want to consider setting `use_cache=True` to speed up decoding") return past_key_values reordered_decoder_past = () for layer_past_states in past_key_values: # get the correct batch idx from layer past batch dim # batch dim of `past` is at 2nd position reordered_layer_past_states = () for layer_past_state in layer_past_states: # need to set correct `past` for each of the four key / value states reordered_layer_past_states = reordered_layer_past_states + ( layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)), ) if reordered_layer_past_states[0].shape != layer_past_states[0].shape: raise ValueError( 
"expected reordered_layer_past_states to have the same shape than layer_past_states, " f"but got {reordered_layer_past_states[0].shape} and {layer_past_states[0].shape}" ) if len(reordered_layer_past_states) != len(layer_past_states): raise ValueError( "expected layer_past_states to have the same length as reordered_layer_past_states, " f"but got {len(layer_past_states)} and {len(reordered_layer_past_states)}" ) reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,) return reordered_decoder_past @add_start_docstrings( "The bare SWITCH_TRANSFORMERS Model transformer outputting encoder's raw hidden-states without any specific head" " on top.", SWITCH_TRANSFORMERS_START_DOCSTRING, ) class SwitchTransformersEncoderModel(SwitchTransformersPreTrainedModel): _tied_weights_keys = ["encoder.embed_tokens.weight"] def __init__(self, config: SwitchTransformersConfig): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.d_model) encoder_config = copy.deepcopy(config) encoder_config.use_cache = False encoder_config.is_encoder_decoder = False self.encoder = SwitchTransformersStack(encoder_config, self.shared) # Initialize weights and apply final processing self.post_init() # Model parallel self.device_map = None def get_input_embeddings(self): return self.shared def set_input_embeddings(self, new_embeddings): self.shared = new_embeddings self.encoder.set_input_embeddings(new_embeddings) def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.encoder.embed_tokens, self.shared) def get_encoder(self): return self.encoder def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.block[layer].layer[0].SelfAttention.prune_heads(heads) @add_start_docstrings_to_model_forward(SWITCH_TRANSFORMERS_ENCODER_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=MoEModelOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_router_logits: Optional[bool] = True, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.FloatTensor], MoEModelOutput]: r""" Returns: Example: ```python >>> from transformers import AutoTokenizer, SwitchTransformersEncoderModel >>> tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8") >>> model = SwitchTransformersEncoderModel.from_pretrained("google/switch-base-8") >>> input_ids = tokenizer( ... "Studies have been shown that owning a dog is good for you", return_tensors="pt" ... ).input_ids # Batch size 1 >>> outputs = model(input_ids=input_ids) >>> last_hidden_states = outputs.last_hidden_state ```""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, return_dict=return_dict, ) return encoder_outputs
transformers/src/transformers/models/switch_transformers/modeling_switch_transformers.py/0
{ "file_path": "transformers/src/transformers/models/switch_transformers/modeling_switch_transformers.py", "repo_id": "transformers", "token_count": 37641 }
377
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Optional, Tuple, Union import torch from ...modeling_outputs import BackboneOutput from ...modeling_utils import PreTrainedModel from ...utils import is_timm_available, is_torch_available, requires_backends from ...utils.backbone_utils import BackboneMixin from .configuration_timm_backbone import TimmBackboneConfig if is_timm_available(): import timm if is_torch_available(): from torch import Tensor class TimmBackbone(PreTrainedModel, BackboneMixin): """ Wrapper class for timm models to be used as backbones. This enables using the timm models interchangeably with the other models in the library keeping the same API. """ main_input_name = "pixel_values" supports_gradient_checkpointing = False config_class = TimmBackboneConfig def __init__(self, config, **kwargs): requires_backends(self, "timm") super().__init__(config) self.config = config if config.backbone is None: raise ValueError("backbone is not set in the config. Please set it to a timm model name.") # Certain timm models have the structure `model_name.version` e.g. vit_large_patch14_dinov2.lvd142m base_backbone_model = config.backbone.split(".")[0] if base_backbone_model not in timm.list_models(): raise ValueError(f"backbone {base_backbone_model} is not supported by timm.") if hasattr(config, "out_features") and config.out_features is not None: raise ValueError("out_features is not supported by TimmBackbone. Please use out_indices instead.") pretrained = getattr(config, "use_pretrained_backbone", None) if pretrained is None: raise ValueError("use_pretrained_backbone is not set in the config. Please set it to True or False.") # We just take the final layer by default. This matches the default for the transformers models. out_indices = config.out_indices if getattr(config, "out_indices", None) is not None else (-1,) in_chans = kwargs.pop("in_chans", config.num_channels) self._backbone = timm.create_model( config.backbone, pretrained=pretrained, # This is currently not possible for transformer architectures. features_only=config.features_only, in_chans=in_chans, out_indices=out_indices, **kwargs, ) # Converts all `BatchNorm2d` and `SyncBatchNorm` or `BatchNormAct2d` and `SyncBatchNormAct2d` layers of provided module into `FrozenBatchNorm2d` or `FrozenBatchNormAct2d` respectively if getattr(config, "freeze_batch_norm_2d", False): self.freeze_batch_norm_2d() # These are used to control the output of the model when called. If output_hidden_states is True, then # return_layers is modified to include all layers. 
self._return_layers = { layer["module"]: str(layer["index"]) for layer in self._backbone.feature_info.get_dicts() } self._all_layers = {layer["module"]: str(i) for i, layer in enumerate(self._backbone.feature_info.info)} super()._init_backbone(config) @classmethod def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs): requires_backends(cls, ["vision", "timm"]) from ...models.timm_backbone import TimmBackboneConfig config = kwargs.pop("config", TimmBackboneConfig()) use_timm = kwargs.pop("use_timm_backbone", True) if not use_timm: raise ValueError("use_timm_backbone must be True for timm backbones") num_channels = kwargs.pop("num_channels", config.num_channels) features_only = kwargs.pop("features_only", config.features_only) use_pretrained_backbone = kwargs.pop("use_pretrained_backbone", config.use_pretrained_backbone) out_indices = kwargs.pop("out_indices", config.out_indices) config = TimmBackboneConfig( backbone=pretrained_model_name_or_path, num_channels=num_channels, features_only=features_only, use_pretrained_backbone=use_pretrained_backbone, out_indices=out_indices, ) return super()._from_config(config, **kwargs) def freeze_batch_norm_2d(self): timm.utils.model.freeze_batch_norm_2d(self._backbone) def unfreeze_batch_norm_2d(self): timm.utils.model.unfreeze_batch_norm_2d(self._backbone) def _init_weights(self, module): """ Empty init weights function to ensure compatibility of the class in the library. """ pass def forward( self, pixel_values: torch.FloatTensor, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[BackboneOutput, Tuple[Tensor, ...]]: return_dict = return_dict if return_dict is not None else self.config.use_return_dict output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions if output_attentions: raise ValueError("Cannot output attentions for timm backbones at the moment") if output_hidden_states: # We modify the return layers to include all the stages of the backbone self._backbone.return_layers = self._all_layers hidden_states = self._backbone(pixel_values, **kwargs) self._backbone.return_layers = self._return_layers feature_maps = tuple(hidden_states[i] for i in self.out_indices) else: feature_maps = self._backbone(pixel_values, **kwargs) hidden_states = None feature_maps = tuple(feature_maps) hidden_states = tuple(hidden_states) if hidden_states is not None else None if not return_dict: output = (feature_maps,) if output_hidden_states: output = output + (hidden_states,) return output return BackboneOutput(feature_maps=feature_maps, hidden_states=hidden_states, attentions=None)
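

# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original file): loading a timm
# architecture through `TimmBackbone.from_pretrained`, assuming `timm` is
# installed. The architecture name "resnet18" and the input size are
# illustrative choices only; any name accepted by `timm.list_models()` works.
if __name__ == "__main__":
    backbone = TimmBackbone.from_pretrained("resnet18", use_pretrained_backbone=False)
    pixel_values = torch.randn(1, 3, 224, 224)

    # With the default config, `forward` returns a `BackboneOutput` whose
    # `feature_maps` tuple contains the stages selected by `out_indices`
    # (only the final stage when `out_indices` is left unset).
    outputs = backbone(pixel_values)
    for feature_map in outputs.feature_maps:
        print(feature_map.shape)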
transformers/src/transformers/models/timm_backbone/modeling_timm_backbone.py/0
{ "file_path": "transformers/src/transformers/models/timm_backbone/modeling_timm_backbone.py", "repo_id": "transformers", "token_count": 2693 }
378
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License """Tokenization classes for UDOP model.""" import os import re import warnings from shutil import copyfile from typing import Any, Dict, List, Optional, Tuple, Union import sentencepiece as spm from ...tokenization_utils import PreTrainedTokenizer from ...tokenization_utils_base import ( AddedToken, BatchEncoding, EncodedInput, PreTokenizedInput, TextInput, TextInputPair, TruncationStrategy, ) from ...utils import PaddingStrategy, TensorType, add_end_docstrings, logging logger = logging.get_logger(__name__) SPIECE_UNDERLINE = "▁" UDOP_ENCODE_KWARGS_DOCSTRING = r""" add_special_tokens (`bool`, *optional*, defaults to `True`): Whether or not to encode the sequences with the special tokens relative to their model. padding (`bool`, `str` or [`~file_utils.PaddingStrategy`], *optional*, defaults to `False`): Activates and controls padding. Accepts the following values: - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `False`): Activates and controls truncation. Accepts the following values: - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided. - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size). max_length (`int`, *optional*): Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to `None`, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated. 
stride (`int`, *optional*, defaults to 0): If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens. pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta). return_tensors (`str` or [`~file_utils.TensorType`], *optional*): If set, will return tensors instead of list of python integers. Acceptable values are: - `'tf'`: Return TensorFlow `tf.constant` objects. - `'pt'`: Return PyTorch `torch.Tensor` objects. - `'np'`: Return Numpy `np.ndarray` objects. return_token_type_ids (`bool`, *optional*): Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer's default, defined by the `return_outputs` attribute. [What are token type IDs?](../glossary#token-type-ids) return_attention_mask (`bool`, *optional*): Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer's default, defined by the `return_outputs` attribute. [What are attention masks?](../glossary#attention-mask) return_overflowing_tokens (`bool`, *optional*, defaults to `False`): Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with `truncation_strategy = longest_first` or `True`, an error is raised instead of returning overflowing tokens. return_special_tokens_mask (`bool`, *optional*, defaults to `False`): Whether or not to return special tokens mask information. return_offsets_mapping (`bool`, *optional*, defaults to `False`): Whether or not to return `(char_start, char_end)` for each token. This is only available on fast tokenizers inheriting from [`PreTrainedTokenizerFast`], if using Python's tokenizer, this method will raise `NotImplementedError`. return_length (`bool`, *optional*, defaults to `False`): Whether or not to return the lengths of the encoded inputs. verbose (`bool`, *optional*, defaults to `True`): Whether or not to print more information and warnings. **kwargs: passed to the `self.tokenize()` method Return: [`BatchEncoding`]: A [`BatchEncoding`] with the following fields: - **input_ids** -- List of token ids to be fed to a model. [What are input IDs?](../glossary#input-ids) - **bbox** -- List of bounding boxes to be fed to a model. - **token_type_ids** -- List of token type ids to be fed to a model (when `return_token_type_ids=True` or if *"token_type_ids"* is in `self.model_input_names`). [What are token type IDs?](../glossary#token-type-ids) - **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names`). [What are attention masks?](../glossary#attention-mask) - **labels** -- List of labels to be fed to a model. (when `word_labels` is specified). - **overflowing_tokens** -- List of overflowing tokens sequences (when a `max_length` is specified and `return_overflowing_tokens=True`). - **num_truncated_tokens** -- Number of tokens truncated (when a `max_length` is specified and `return_overflowing_tokens=True`). 
- **special_tokens_mask** -- List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when `add_special_tokens=True` and `return_special_tokens_mask=True`). - **length** -- The length of the inputs (when `return_length=True`). """ VOCAB_FILES_NAMES = {"vocab_file": "spiece.model", "tokenizer_file": "tokenizer.json"} class UdopTokenizer(PreTrainedTokenizer): """ Adapted from [`LayoutXLMTokenizer`] and [`T5Tokenizer`]. Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: vocab_file (`str`): Path to the vocabulary file. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sequence token. <Tip> When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. </Tip> unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (`str`, *optional*, defaults to `"</s>"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. sep_token_box (`List[int]`, *optional*, defaults to `[1000, 1000, 1000, 1000]`): The bounding box to use for the special [SEP] token. pad_token_box (`List[int]`, *optional*, defaults to `[0, 0, 0, 0]`): The bounding box to use for the special [PAD] token. pad_token_label (`int`, *optional*, defaults to -100): The label to use for padding tokens. Defaults to -100, which is the `ignore_index` of PyTorch's CrossEntropyLoss. only_label_first_subword (`bool`, *optional*, defaults to `True`): Whether or not to only label the first subword, in case word labels are provided. additional_special_tokens (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`): Additional special tokens used by the tokenizer. sp_model_kwargs (`dict`, *optional*): Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest_size results. - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. legacy (`bool`, *optional*, defaults to `True`): Whether or not the `legacy` behaviour of the tokenizer should be used. Legacy is before the merge of #24622 which includes fixes to properly handle tokens that appear after special tokens. 
A simple example: - `legacy=True`: ```python >>> from transformers import T5Tokenizer >>> tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy=True) >>> tokenizer.encode("Hello <extra_id_0>.") [8774, 32099, 3, 5, 1] ``` - `legacy=False`: ```python >>> from transformers import T5Tokenizer >>> tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy=False) >>> tokenizer.encode("Hello <extra_id_0>.") # the extra space `[3]` is no longer here [8774, 32099, 5, 1] ``` Checkout the pull request and the issue [here](https://github.com/huggingface/transformers/pull/24565) for more details. add_prefix_space (`bool`, *optional*, defaults to `True`): Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. Attributes: sp_model (`SentencePieceProcessor`): The *SentencePiece* processor that is used for every conversion (string, tokens and IDs). """ vocab_files_names = VOCAB_FILES_NAMES model_input_names = ["input_ids", "attention_mask"] def __init__( self, vocab_file, eos_token="</s>", unk_token="<unk>", sep_token="</s>", pad_token="<pad>", sep_token_box=[1000, 1000, 1000, 1000], pad_token_box=[0, 0, 0, 0], pad_token_label=-100, only_label_first_subword=True, additional_special_tokens=None, sp_model_kwargs: Optional[Dict[str, Any]] = None, legacy=True, add_prefix_space=True, **kwargs, ) -> None: eos_token = AddedToken(eos_token, special=True) if isinstance(eos_token, str) else eos_token unk_token = AddedToken(unk_token, special=True) if isinstance(unk_token, str) else unk_token sep_token = AddedToken(sep_token, special=True) if isinstance(sep_token, str) else sep_token pad_token = AddedToken(pad_token, special=True) if isinstance(pad_token, str) else pad_token self.legacy = legacy self.add_prefix_space = add_prefix_space self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs self.vocab_file = vocab_file self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.Load(vocab_file) # additional properties self.sep_token_box = sep_token_box self.pad_token_box = pad_token_box self.pad_token_label = pad_token_label self.only_label_first_subword = only_label_first_subword super().__init__( eos_token=eos_token, unk_token=unk_token, sep_token=sep_token, pad_token=pad_token, sep_token_box=sep_token_box, pad_token_box=pad_token_box, pad_token_label=pad_token_label, only_label_first_subword=only_label_first_subword, additional_special_tokens=additional_special_tokens, sp_model_kwargs=self.sp_model_kwargs, legacy=legacy, add_prefix_space=add_prefix_space, **kwargs, ) @property def vocab_size(self): return len(self.sp_model) # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.get_vocab def get_vocab(self): vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} vocab.update(self.added_tokens_encoder) return vocab # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.get_special_tokens_mask def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. 
already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: return super().get_special_tokens_mask( token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True ) # normal case: some special tokens if token_ids_1 is None: return ([0] * len(token_ids_0)) + [1] return ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.get_sentinel_tokens def get_sentinel_tokens(self): return list( set(filter(lambda x: bool(re.search(r"<extra_id_\d+>", x)) is not None, self.additional_special_tokens)) ) # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.get_sentinel_token_ids def get_sentinel_token_ids(self): return [self.convert_tokens_to_ids(token) for token in self.get_sentinel_tokens()] # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer._add_eos_if_not_present def _add_eos_if_not_present(self, token_ids: List[int]) -> List[int]: """Do not add eos again if user already added it.""" if len(token_ids) > 0 and token_ids[-1] == self.eos_token_id: warnings.warn( f"This sequence already has {self.eos_token}. In future versions this behavior may lead to duplicated" " eos tokens being added." ) return token_ids else: return token_ids + [self.eos_token_id] # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.create_token_type_ids_from_sequences def create_token_type_ids_from_sequences( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make use of token type ids, therefore a list of zeros is returned. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros. """ eos = [self.eos_token_id] if token_ids_1 is None: return len(token_ids_0 + eos) * [0] return len(token_ids_0 + eos + token_ids_1 + eos) * [0] # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.build_inputs_with_special_tokens def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A sequence has the following format: - single sequence: `X </s>` - pair of sequences: `A </s> B </s>` Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. 
""" token_ids_0 = self._add_eos_if_not_present(token_ids_0) if token_ids_1 is None: return token_ids_0 else: token_ids_1 = self._add_eos_if_not_present(token_ids_1) return token_ids_0 + token_ids_1 # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.__getstate__ def __getstate__(self): state = self.__dict__.copy() state["sp_model"] = None return state def __setstate__(self, d): self.__dict__ = d self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.Load(self.vocab_file) # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.tokenize def tokenize(self, text: "TextInput", **kwargs) -> List[str]: """ Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the first token is special. """ if self.legacy or len(text) == 0: return super().tokenize(text, **kwargs) text = text.replace(SPIECE_UNDERLINE, " ") if self.add_prefix_space: text = SPIECE_UNDERLINE + text tokens = super().tokenize(text, **kwargs) if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE and tokens[1] in self.all_special_tokens: tokens = tokens[1:] return tokens # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer._tokenize def _tokenize(self, text, **kwargs): """ Returns a tokenized string. We de-activated the `add_dummy_prefix` option, thus the sentencepiece internals will always strip any SPIECE_UNDERLINE. For example: `self.sp_model.encode(f"{SPIECE_UNDERLINE}Hey", out_type = str)` will give `['H', 'e', 'y']` instead of `['▁He', 'y']`. Thus we always encode `f"{unk_token}text"` and strip the `unk_token`. Here is an example with `unk_token = "<unk>"` and `unk_token_length = 4`. `self.tokenizer.sp_model.encode("<unk> Hey", out_type = str)[4:]`. """ if self.legacy or not text.startswith((SPIECE_UNDERLINE, " ")): return self.sp_model.encode(text, out_type=str) # 1. Encode string + prefix ex: "<unk> Hey" tokens = self.sp_model.encode(self.unk_token + text, out_type=str) # 2. 
Remove self.unk_token from ['<','unk','>', '▁Hey'] return tokens[self.unk_token_length :] if len(tokens) >= self.unk_token_length else tokens def _convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" return self.sp_model.piece_to_id(token) def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" return self.sp_model.IdToPiece(index) # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.convert_tokens_to_string def convert_tokens_to_string(self, tokens): """Converts a sequence of tokens (string) in a single string.""" # since we manually add the prefix space, we have to remove it when decoding if tokens[0].startswith(SPIECE_UNDERLINE) and self.add_prefix_space: tokens[0] = tokens[0][1:] current_sub_tokens = [] out_string = "" prev_is_special = False for token in tokens: # make sure that special tokens are not decoded using sentencepiece model if token in self.all_special_tokens: if not prev_is_special: out_string += " " out_string += self.sp_model.decode(current_sub_tokens) + token prev_is_special = True current_sub_tokens = [] else: current_sub_tokens.append(token) prev_is_special = False out_string += self.sp_model.decode(current_sub_tokens) return out_string.strip() # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.save_vocabulary def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return out_vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] ) if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file): copyfile(self.vocab_file, out_vocab_file) elif not os.path.isfile(self.vocab_file): with open(out_vocab_file, "wb") as fi: content_spiece_model = self.sp_model.serialized_model_proto() fi.write(content_spiece_model) return (out_vocab_file,) @add_end_docstrings(UDOP_ENCODE_KWARGS_DOCSTRING) def __call__( self, text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None, text_pair: Optional[Union[PreTokenizedInput, List[PreTokenizedInput]]] = None, boxes: Union[List[List[int]], List[List[List[int]]]] = None, word_labels: Optional[Union[List[int], List[List[int]]]] = None, text_target: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None, text_pair_target: Optional[ Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] ] = None, **kwargs, ) -> BatchEncoding: if text is None and text_target is None: raise ValueError("You need to specify either `text` or `text_target`.") if text is not None: # The context manager will send the inputs as normal texts and not text_target, but we shouldn't change the # input mode in this case. 
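# Added descriptive note: `__call__` routes `text`/`boxes`/`word_labels` through `call_boxes` to build the encoder inputs; when `text_target` is given it temporarily switches the tokenizer into target mode, encodes the target text with the plain text path, switches back, and attaches the resulting `input_ids` as `labels`.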
if not self._in_target_context_manager: self._switch_to_input_mode() encodings = self.call_boxes(text=text, text_pair=text_pair, boxes=boxes, word_labels=word_labels, **kwargs) if text_target is not None: self._switch_to_target_mode() target_encodings = self._call_one(text=text_target, text_pair=text_pair_target, **kwargs) # Leave back tokenizer in input mode self._switch_to_input_mode() if text_target is None: return encodings elif text is None: return target_encodings else: encodings["labels"] = target_encodings["input_ids"] return encodings def call_boxes( self, text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]], text_pair: Optional[Union[PreTokenizedInput, List[PreTokenizedInput]]] = None, boxes: Union[List[List[int]], List[List[List[int]]]] = None, word_labels: Optional[Union[List[int], List[List[int]]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences with word-level normalized bounding boxes and optional labels. Args: text (`str`, `List[str]`, `List[List[str]]`): The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings (words of a single example or questions of a batch of examples) or a list of list of strings (batch of words). text_pair (`List[str]`, `List[List[str]]`): The sequence or batch of sequences to be encoded. Each sequence should be a list of strings (pretokenized string). boxes (`List[List[int]]`, `List[List[List[int]]]`): Word-level bounding boxes. Each bounding box should be normalized to be on a 0-1000 scale. word_labels (`List[int]`, `List[List[int]]`, *optional*): Word-level integer labels (for token classification tasks such as FUNSD, CORD). """ # Input type checking for clearer error def _is_valid_text_input(t): if isinstance(t, str): # Strings are fine return True elif isinstance(t, (list, tuple)): # List are fine as long as they are... if len(t) == 0: # ... empty return True elif isinstance(t[0], str): # ... list of strings return True elif isinstance(t[0], (list, tuple)): # ... list with an empty list or with a list of strings return len(t[0]) == 0 or isinstance(t[0][0], str) else: return False else: return False if text_pair is not None: # in case text + text_pair are provided, text = questions, text_pair = words if not _is_valid_text_input(text): raise ValueError("text input must of type `str` (single example) or `List[str]` (batch of examples). ") if not isinstance(text_pair, (list, tuple)): raise ValueError( "words must of type `List[str]` (single pretokenized example), " "or `List[List[str]]` (batch of pretokenized examples)." ) else: # in case only text is provided => must be words if not isinstance(text, (list, tuple)): raise ValueError( "Words must of type `List[str]` (single pretokenized example), " "or `List[List[str]]` (batch of pretokenized examples)." 
) if text_pair is not None: is_batched = isinstance(text, (list, tuple)) else: is_batched = isinstance(text, (list, tuple)) and text and isinstance(text[0], (list, tuple)) words = text if text_pair is None else text_pair if boxes is None: raise ValueError("You must provide corresponding bounding boxes") if is_batched: if len(words) != len(boxes): raise ValueError("You must provide words and boxes for an equal amount of examples") for words_example, boxes_example in zip(words, boxes): if len(words_example) != len(boxes_example): raise ValueError("You must provide as many words as there are bounding boxes") else: if len(words) != len(boxes): raise ValueError("You must provide as many words as there are bounding boxes") if is_batched: if text_pair is not None and len(text) != len(text_pair): raise ValueError( f"batch length of `text`: {len(text)} does not match batch length of `text_pair`:" f" {len(text_pair)}." ) batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text is_pair = bool(text_pair is not None) return self.batch_encode_plus_boxes( batch_text_or_text_pairs=batch_text_or_text_pairs, is_pair=is_pair, boxes=boxes, word_labels=word_labels, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) else: return self.encode_plus_boxes( text=text, text_pair=text_pair, boxes=boxes, word_labels=word_labels, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def batch_encode_plus_boxes( self, batch_text_or_text_pairs: Union[ List[TextInput], List[TextInputPair], List[PreTokenizedInput], ], is_pair: bool = None, boxes: Optional[List[List[List[int]]]] = None, word_labels: Optional[List[List[int]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ Tokenize and prepare for the model a list of sequences or a list of pairs of sequences. Args: batch_text_or_text_pairs (`List[str]`, `List[Tuple[str, str]]`, `List[List[str]]`, `List[Tuple[List[str], List[str]]]`, and for not-fast tokenizers, also `List[List[int]]`, `List[Tuple[List[int], List[int]]]`): Batch of sequences or pair of sequences to be encoded. 
This can be a list of string/string-sequences/int-sequences or a list of pair of string/string-sequences/int-sequence (see details in `encode_plus`). """ # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs, ) return self._batch_encode_plus_boxes( batch_text_or_text_pairs=batch_text_or_text_pairs, is_pair=is_pair, boxes=boxes, word_labels=word_labels, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, is_split_into_words=is_split_into_words, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def encode_boxes( self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None, boxes: Optional[List[List[int]]] = None, word_labels: Optional[List[List[int]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, return_tensors: Optional[Union[str, TensorType]] = None, **kwargs, ) -> List[int]: """ Args: Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary. Same as doing `self.convert_tokens_to_ids(self.tokenize(text))`. text (`str`, `List[str]` or `List[int]`): The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method). text_pair (`str`, `List[str]` or `List[int]`, *optional*): Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method). 
""" encoded_inputs = self.encode_plus_boxes( text, text_pair=text_pair, boxes=boxes, word_labels=word_labels, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, return_tensors=return_tensors, **kwargs, ) return encoded_inputs["input_ids"] def encode_plus_boxes( self, text: Union[TextInput, PreTokenizedInput], text_pair: Optional[PreTokenizedInput] = None, boxes: Optional[List[List[int]]] = None, word_labels: Optional[List[List[int]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: """ Tokenize and prepare for the model a sequence or a pair of sequences. <Tip warning={true}> This method is deprecated, `__call__` should be used instead. </Tip> Args: text (`str`, `List[str]` or (for non-fast tokenizers) `List[int]`): The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method). text_pair (`str`, `List[str]` or `List[int]`, *optional*): Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method). 
""" # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs, ) return self._encode_plus_boxes( text=text, text_pair=text_pair, boxes=boxes, word_labels=word_labels, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, is_split_into_words=is_split_into_words, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, **kwargs, ) def _batch_encode_plus_boxes( self, batch_text_or_text_pairs: Union[ List[TextInput], List[TextInputPair], List[PreTokenizedInput], ], is_pair: bool = None, boxes: Optional[List[List[List[int]]]] = None, word_labels: Optional[List[List[int]]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: if return_offsets_mapping: raise NotImplementedError( "return_offset_mapping is not available when using Python tokenizers. " "To use this feature, change your tokenizer to one deriving from " "transformers.PreTrainedTokenizerFast." 
) batch_outputs = self._batch_prepare_for_model_boxes( batch_text_or_text_pairs=batch_text_or_text_pairs, is_pair=is_pair, boxes=boxes, word_labels=word_labels, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=return_tensors, verbose=verbose, ) return BatchEncoding(batch_outputs) @add_end_docstrings(UDOP_ENCODE_KWARGS_DOCSTRING) def _batch_prepare_for_model_boxes( self, batch_text_or_text_pairs, is_pair: bool = None, boxes: Optional[List[List[int]]] = None, word_labels: Optional[List[List[int]]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[str] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_length: bool = False, verbose: bool = True, ) -> BatchEncoding: """ Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and manages a moving window (with user defined stride) for overflowing tokens Args: batch_ids_pairs: list of tokenized input ids or input ids pairs """ batch_outputs = {} for idx, example in enumerate(zip(batch_text_or_text_pairs, boxes)): batch_text_or_text_pair, boxes_example = example outputs = self.prepare_for_model_boxes( batch_text_or_text_pair[0] if is_pair else batch_text_or_text_pair, batch_text_or_text_pair[1] if is_pair else None, boxes_example, word_labels=word_labels[idx] if word_labels is not None else None, add_special_tokens=add_special_tokens, padding=PaddingStrategy.DO_NOT_PAD.value, # we pad in batch afterward truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=None, # we pad in batch afterward return_attention_mask=False, # we pad in batch afterward return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=None, # We convert the whole batch to tensors at the end prepend_batch_axis=False, verbose=verbose, ) for key, value in outputs.items(): if key not in batch_outputs: batch_outputs[key] = [] batch_outputs[key].append(value) batch_outputs = self.pad( batch_outputs, padding=padding_strategy.value, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, ) batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors) return batch_outputs def _encode_plus_boxes( self, text: Union[TextInput, PreTokenizedInput], text_pair: Optional[PreTokenizedInput] = None, boxes: Optional[List[List[int]]] = None, word_labels: Optional[List[int]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = 
TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs, ) -> BatchEncoding: if return_offsets_mapping: raise NotImplementedError( "return_offset_mapping is not available when using Python tokenizers. " "To use this feature, change your tokenizer to one deriving from " "transformers.PreTrainedTokenizerFast. " "More information on available tokenizers at " "https://github.com/huggingface/transformers/pull/2674" ) return self.prepare_for_model_boxes( text=text, text_pair=text_pair, boxes=boxes, word_labels=word_labels, add_special_tokens=add_special_tokens, padding=padding_strategy.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, prepend_batch_axis=True, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, verbose=verbose, ) @add_end_docstrings(UDOP_ENCODE_KWARGS_DOCSTRING) def prepare_for_model_boxes( self, text: Union[TextInput, PreTokenizedInput], text_pair: Optional[PreTokenizedInput] = None, boxes: Optional[List[List[int]]] = None, word_labels: Optional[List[int]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, prepend_batch_axis: bool = False, **kwargs, ) -> BatchEncoding: """ Prepares a sequence or a pair of sequences so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and manages a moving window (with user defined stride) for overflowing tokens. Word-level `boxes` are turned into token-level `bbox`. If provided, word-level `word_labels` are turned into token-level `labels`. The word label is used for the first token of the word, while remaining tokens are labeled with -100, such that they will be ignored by the loss function. Args: text (`str`, `List[str]`, `List[List[str]]`): The first sequence to be encoded. This can be a string, a list of strings or a list of list of strings. text_pair (`List[str]` or `List[int]`, *optional*): Optional second sequence to be encoded. This can be a list of strings (words of a single example) or a list of list of strings (words of a batch of examples). 
""" # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs, ) tokens = [] pair_tokens = [] token_boxes = [] pair_token_boxes = [] labels = [] if text_pair is None: if word_labels is None: # CASE 1: document image classification (training + inference) + CASE 2: token classification (inference) for word, box in zip(text, boxes): if len(word) < 1: # skip empty words continue word_tokens = self.tokenize(word) tokens.extend(word_tokens) token_boxes.extend([box] * len(word_tokens)) else: # CASE 2: token classification (training) for word, box, label in zip(text, boxes, word_labels): if len(word) < 1: # skip empty words continue word_tokens = self.tokenize(word) tokens.extend(word_tokens) token_boxes.extend([box] * len(word_tokens)) if self.only_label_first_subword: # Use the real label id for the first token of the word, and padding ids for the remaining tokens labels.extend([label] + [self.pad_token_label] * (len(word_tokens) - 1)) else: labels.extend([label] * len(word_tokens)) else: # CASE 3: document visual question answering (inference) # text = question # text_pair = words tokens = self.tokenize(text) token_boxes = [self.pad_token_box for _ in range(len(tokens))] for word, box in zip(text_pair, boxes): if len(word) < 1: # skip empty words continue word_tokens = self.tokenize(word) pair_tokens.extend(word_tokens) pair_token_boxes.extend([box] * len(word_tokens)) # Create ids + pair_ids ids = self.convert_tokens_to_ids(tokens) pair_ids = self.convert_tokens_to_ids(pair_tokens) if pair_tokens else None # Compute the total size of the returned encodings pair = bool(pair_ids is not None) len_ids = len(ids) len_pair_ids = len(pair_ids) if pair else 0 total_len = len_ids + len_pair_ids + (self.num_special_tokens_to_add(pair=pair) if add_special_tokens else 0) # Truncation: Handle max sequence length overflowing_tokens = [] overflowing_token_boxes = [] overflowing_labels = [] if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE and max_length and total_len > max_length: ( ids, token_boxes, pair_ids, pair_token_boxes, labels, overflowing_tokens, overflowing_token_boxes, overflowing_labels, ) = self.truncate_sequences( ids, token_boxes, pair_ids=pair_ids, pair_token_boxes=pair_token_boxes, labels=labels, num_tokens_to_remove=total_len - max_length, truncation_strategy=truncation_strategy, stride=stride, ) if return_token_type_ids and not add_special_tokens: raise ValueError( "Asking to return token_type_ids while setting add_special_tokens to False " "results in an undefined behavior. Please set add_special_tokens to True or " "set return_token_type_ids to None." 
) # Load from model defaults if return_token_type_ids is None: return_token_type_ids = "token_type_ids" in self.model_input_names if return_attention_mask is None: return_attention_mask = "attention_mask" in self.model_input_names encoded_inputs = {} if return_overflowing_tokens: encoded_inputs["overflowing_tokens"] = overflowing_tokens encoded_inputs["overflowing_token_boxes"] = overflowing_token_boxes encoded_inputs["overflowing_labels"] = overflowing_labels encoded_inputs["num_truncated_tokens"] = total_len - max_length # Add special tokens if add_special_tokens: sequence = self.build_inputs_with_special_tokens(ids, pair_ids) token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids) token_boxes = token_boxes + [self.sep_token_box] if pair_token_boxes: pair_token_boxes = pair_token_boxes + [self.sep_token_box] if labels: labels = labels + [self.pad_token_label] else: sequence = ids + pair_ids if pair else ids token_type_ids = [0] * len(ids) + ([0] * len(pair_ids) if pair else []) # Build output dictionary encoded_inputs["input_ids"] = sequence encoded_inputs["bbox"] = token_boxes + pair_token_boxes if return_token_type_ids: encoded_inputs["token_type_ids"] = token_type_ids if return_special_tokens_mask: if add_special_tokens: encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask(ids, pair_ids) else: encoded_inputs["special_tokens_mask"] = [0] * len(sequence) if labels: encoded_inputs["labels"] = labels # Check lengths self._eventual_warn_about_too_long_sequence(encoded_inputs["input_ids"], max_length, verbose) # Padding if padding_strategy != PaddingStrategy.DO_NOT_PAD or return_attention_mask: encoded_inputs = self.pad( encoded_inputs, max_length=max_length, padding=padding_strategy.value, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, ) if return_length: encoded_inputs["length"] = len(encoded_inputs["input_ids"]) batch_outputs = BatchEncoding( encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis ) return batch_outputs # Copied from transformers.models.layoutxlm.tokenization_layoutxlm.LayoutXLMTokenizer.truncate_sequences def truncate_sequences( self, ids: List[int], token_boxes: List[List[int]], pair_ids: Optional[List[int]] = None, pair_token_boxes: Optional[List[List[int]]] = None, labels: Optional[List[int]] = None, num_tokens_to_remove: int = 0, truncation_strategy: Union[str, TruncationStrategy] = "longest_first", stride: int = 0, ) -> Tuple[List[int], List[int], List[int]]: """ Truncates a sequence pair in-place following the strategy. Args: ids (`List[int]`): Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and `convert_tokens_to_ids` methods. token_boxes (`List[List[int]]`): Bounding boxes of the first sequence. pair_ids (`List[int]`, *optional*): Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize` and `convert_tokens_to_ids` methods. pair_token_boxes (`List[List[int]]`, *optional*): Bounding boxes of the second sequence. labels (`List[int]`, *optional*): Labels of the first sequence (for token classification tasks). num_tokens_to_remove (`int`, *optional*, defaults to 0): Number of tokens to remove using the truncation strategy. truncation_strategy (`str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `False`): The strategy to follow for truncation. 
Can be: - `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided. - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size). stride (`int`, *optional*, defaults to 0): If set to a positive number, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens. Returns: `Tuple[List[int], List[int], List[int]]`: The truncated `ids`, the truncated `pair_ids` and the list of overflowing tokens. """ if num_tokens_to_remove <= 0: return ids, token_boxes, pair_ids, pair_token_boxes, labels, [], [], [] if not isinstance(truncation_strategy, TruncationStrategy): truncation_strategy = TruncationStrategy(truncation_strategy) overflowing_tokens = [] overflowing_token_boxes = [] overflowing_labels = [] if truncation_strategy == TruncationStrategy.LONGEST_FIRST: for _ in range(num_tokens_to_remove): if pair_ids is None or len(ids) > len(pair_ids): if not overflowing_tokens: window_len = min(len(ids), stride + 1) else: window_len = 1 overflowing_tokens.extend(ids[-window_len:]) overflowing_token_boxes.extend(token_boxes[-window_len:]) overflowing_labels.extend(labels[-window_len:]) ids = ids[:-1] token_boxes = token_boxes[:-1] labels = labels[:-1] else: if not overflowing_tokens: window_len = min(len(pair_ids), stride + 1) else: window_len = 1 overflowing_tokens.extend(pair_ids[-window_len:]) overflowing_token_boxes.extend(pair_token_boxes[-window_len:]) pair_ids = pair_ids[:-1] pair_token_boxes = pair_token_boxes[:-1] elif truncation_strategy == TruncationStrategy.ONLY_FIRST: if len(ids) > num_tokens_to_remove: window_len = min(len(ids), stride + num_tokens_to_remove) overflowing_tokens = ids[-window_len:] overflowing_token_boxes = token_boxes[-window_len:] overflowing_labels = labels[-window_len:] ids = ids[:-num_tokens_to_remove] token_boxes = token_boxes[:-num_tokens_to_remove] labels = labels[:-num_tokens_to_remove] else: logger.error( f"We need to remove {num_tokens_to_remove} to truncate the input " f"but the first sequence has a length {len(ids)}. " f"Please select another truncation strategy than {truncation_strategy}, " "for instance 'longest_first' or 'only_second'." 
) elif truncation_strategy == TruncationStrategy.ONLY_SECOND and pair_ids is not None: if len(pair_ids) > num_tokens_to_remove: window_len = min(len(pair_ids), stride + num_tokens_to_remove) overflowing_tokens = pair_ids[-window_len:] overflowing_token_boxes = pair_token_boxes[-window_len:] pair_ids = pair_ids[:-num_tokens_to_remove] pair_token_boxes = pair_token_boxes[:-num_tokens_to_remove] else: logger.error( f"We need to remove {num_tokens_to_remove} to truncate the input " f"but the second sequence has a length {len(pair_ids)}. " f"Please select another truncation strategy than {truncation_strategy}, " "for instance 'longest_first' or 'only_first'." ) return ( ids, token_boxes, pair_ids, pair_token_boxes, labels, overflowing_tokens, overflowing_token_boxes, overflowing_labels, ) # Copied from transformers.models.layoutxlm.tokenization_layoutxlm.LayoutXLMTokenizer._pad def _pad( self, encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], max_length: Optional[int] = None, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, ) -> dict: """ Pad encoded inputs (on left/right and up to predefined length or max length in the batch) Args: encoded_inputs: Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). max_length: maximum length of the returned list and optionally padding length (see below). Will truncate by taking into account the special tokens. padding_strategy: PaddingStrategy to use for padding. - PaddingStrategy.LONGEST Pad to the longest sequence in the batch - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) - PaddingStrategy.DO_NOT_PAD: Do not pad The tokenizer padding sides are defined in self.padding_side: - 'left': pads on the left of the sequences - 'right': pads on the right of the sequences pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability `>= 7.5` (Volta). return_attention_mask: (optional) Set to False to avoid returning attention mask (default: set to model specifics) """ # Load from model defaults if return_attention_mask is None: return_attention_mask = "attention_mask" in self.model_input_names required_input = encoded_inputs[self.model_input_names[0]] if padding_strategy == PaddingStrategy.LONGEST: max_length = len(required_input) if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length # Initialize attention mask if not present. 
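# Added descriptive note: the padding branch below extends `input_ids`, `attention_mask`, `token_type_ids`, `bbox`, `labels` and `special_tokens_mask` up to `max_length` on the side given by `self.padding_side`, using `pad_token_id`, `pad_token_type_id`, `pad_token_box` and `pad_token_label` as the respective padding values.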
if return_attention_mask and "attention_mask" not in encoded_inputs: encoded_inputs["attention_mask"] = [1] * len(required_input) if needs_to_be_padded: difference = max_length - len(required_input) if self.padding_side == "right": if return_attention_mask: encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference if "token_type_ids" in encoded_inputs: encoded_inputs["token_type_ids"] = ( encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference ) if "bbox" in encoded_inputs: encoded_inputs["bbox"] = encoded_inputs["bbox"] + [self.pad_token_box] * difference if "labels" in encoded_inputs: encoded_inputs["labels"] = encoded_inputs["labels"] + [self.pad_token_label] * difference if "special_tokens_mask" in encoded_inputs: encoded_inputs["special_tokens_mask"] = encoded_inputs["special_tokens_mask"] + [1] * difference encoded_inputs[self.model_input_names[0]] = required_input + [self.pad_token_id] * difference elif self.padding_side == "left": if return_attention_mask: encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] if "token_type_ids" in encoded_inputs: encoded_inputs["token_type_ids"] = [self.pad_token_type_id] * difference + encoded_inputs[ "token_type_ids" ] if "bbox" in encoded_inputs: encoded_inputs["bbox"] = [self.pad_token_box] * difference + encoded_inputs["bbox"] if "labels" in encoded_inputs: encoded_inputs["labels"] = [self.pad_token_label] * difference + encoded_inputs["labels"] if "special_tokens_mask" in encoded_inputs: encoded_inputs["special_tokens_mask"] = [1] * difference + encoded_inputs["special_tokens_mask"] encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input else: raise ValueError("Invalid padding strategy:" + str(self.padding_side)) return encoded_inputs
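# ---------------------------------------------------------------------------------------
# Added usage sketch (not part of the upstream file). It illustrates how the slow
# `UdopTokenizer` turns word-level `boxes` and `word_labels` into token-level `bbox` and
# `labels` via `call_boxes` / `prepare_for_model_boxes` above. The checkpoint name
# "microsoft/udop-large" is an assumption for illustration; any repo that ships a
# compatible SentencePiece `spiece.model` would work. The snippet is guarded so that
# importing this module is unaffected; copy it into a standalone script to run it.
if __name__ == "__main__":
    from transformers import UdopTokenizer as _UdopTokenizer  # absolute import for script use

    tok = _UdopTokenizer.from_pretrained("microsoft/udop-large")
    words = ["Invoice", "number:", "12345"]
    boxes = [[48, 84, 156, 108], [160, 84, 254, 108], [260, 84, 330, 108]]  # 0-1000 normalized
    word_labels = [1, 0, 0]
    # Each word is split into subwords, its box is repeated once per subword, and with
    # `only_label_first_subword=True` only the first subword keeps the word label; the
    # remaining subwords receive `pad_token_label` (-100).
    enc = tok(words, boxes=boxes, word_labels=word_labels, padding="max_length", max_length=32, return_tensors="pt")
    print(enc["input_ids"].shape, enc["bbox"].shape, enc["labels"].shape)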
transformers/src/transformers/models/udop/tokenization_udop.py/0
{ "file_path": "transformers/src/transformers/models/udop/tokenization_udop.py", "repo_id": "transformers", "token_count": 32058 }
379