repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AT/distilroberta-base-finetuned-wikitext2 | AT | roberta | 27 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 942 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
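For readers who want to reproduce a run like this, a minimal sketch of how the listed values would map onto `transformers.TrainingArguments` (the `output_dir` below is an assumption; only the hyperparameter values come from this card):
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above; output_dir is assumed.
training_args = TrainingArguments(
    output_dir="distilroberta-base-finetuned-wikitext2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,       # and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=80.0,
)
```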
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
AVSilva/bertimbau-large-fine-tuned-md | AVSilva | bert | 13 | 3 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,028 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AVSilva/bertimbau-large-fine-tuned-sd | AVSilva | bert | 13 | 12 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,028 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AWTStress/stress_classifier | AWTStress | distilbert | 4 | 2 | transformers | 0 | text-classification | false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 764 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp_znj9o4r
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
AWTStress/stress_score | AWTStress | distilbert | 4 | 2 | transformers | 0 | text-classification | false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 765 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stress_score
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Pinwheel/wav2vec2-base-timit-demo-colab | Pinwheel | wav2vec2 | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,641 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4812
- Wer: 0.3557
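For context, a hedged sketch of the standard `transformers` CTC inference flow with which a WER like the one above is typically measured (the placeholder audio array is an assumption; real 16 kHz speech would be used in practice):
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Pinwheel/wav2vec2-base-timit-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("Pinwheel/wav2vec2-base-timit-demo-colab")

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # frame-level character logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))      # greedy CTC decoding to text
```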
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4668 | 4.0 | 500 | 1.3753 | 0.9895 |
| 0.6126 | 8.0 | 1000 | 0.4809 | 0.4350 |
| 0.2281 | 12.0 | 1500 | 0.4407 | 0.4033 |
| 0.1355 | 16.0 | 2000 | 0.4590 | 0.3765 |
| 0.0923 | 20.0 | 2500 | 0.4754 | 0.3707 |
| 0.0654 | 24.0 | 3000 | 0.4719 | 0.3557 |
| 0.0489 | 28.0 | 3500 | 0.4812 | 0.3557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Abirate/bert_fine_tuned_cola | Abirate | bert | 8 | 6 | transformers | 0 | text-classification | false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | false | true | 2,343 |
## Pretrained Model: BERT base model (cased)
BERT base model (cased) is a model pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English.
## Pretrained Model Description
BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:
- Masked language modeling (MLM)
- Next sentence prediction (NSP)
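As a quick illustration of the MLM objective (shown here on the original `bert-base-cased` checkpoint, not on this fine-tuned classifier; the example sentence is made up):
```python
from transformers import pipeline

# Fill-mask demonstrates the masked language modeling (MLM) pretraining objective.
unmasker = pipeline("fill-mask", model="bert-base-cased")
print(unmasker("The capital of France is [MASK].")[0])  # top prediction for the masked token
```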
## Fine-tuned Model Description: BERT fine-tuned on CoLA
The pretrained model can be fine-tuned on other NLP tasks. This BERT model has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in the GLUE benchmark.
By fine-tuning BERT on the CoLA dataset, the model is now able to classify a given sentence as grammatically and semantically acceptable or not acceptable.
## How to use?
###### Directly with a pipeline for a text-classification NLP task
```python
from transformers import pipeline
cola = pipeline('text-classification', model='Abirate/bert_fine_tuned_cola')
cola("Tunisia is a beautiful country")
[{'label': 'acceptable', 'score': 0.989352285861969}]
```
###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
import numpy as np
tokenizer = AutoTokenizer.from_pretrained('Abirate/bert_fine_tuned_cola')
model = TFAutoModelForSequenceClassification.from_pretrained("Abirate/bert_fine_tuned_cola")
text = "Tunisia is a beautiful country."
encoded_input = tokenizer(text, return_tensors='tf')
#The logits
output = model(encoded_input)
#Postprocessing
probas_output = tf.math.softmax(tf.squeeze(output['logits']), axis = -1)
class_preds = np.argmax(probas_output, axis = -1)
#Predicting the class acceptable or not acceptable
model.config.id2label[class_preds]
#Result
'acceptable'
``` |
ActivationAI/distilbert-base-uncased-finetuned-emotion | ActivationAI | distilbert | 12 | 12 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Accuracy: 0.928
- F1: 0.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8151 | 1.0 | 250 | 0.3043 | 0.907 | 0.9035 |
| 0.24 | 2.0 | 500 | 0.2128 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AdapterHub/bert-base-uncased-pf-anli_r3 | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['anli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,135 |
# Adapter `AdapterHub/bert-base-uncased-pf-anli_r3` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-anli_r3", source="hf")
model.active_adapters = adapter_name
```
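Once the adapter is active, a minimal inference sketch might look as follows (the premise/hypothesis pair is made up, and the attribute access on the output assumes the flex-head API of `adapter-transformers`; label names are not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-anli_r3", source="hf")
model.active_adapters = adapter_name

# ANLI pairs a premise with a hypothesis; the classification head scores their relation.
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
print(logits.argmax(dim=-1))  # predicted class index
```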
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-art | AdapterHub | bert | 6 | 0 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['art'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapter-transformers'] | false | true | true | 2,062 |
# Adapter `AdapterHub/bert-base-uncased-pf-art` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-art", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-boolq | AdapterHub | bert | 6 | 97 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['boolq'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:qa/boolq', 'adapter-transformers'] | false | true | true | 2,137 |
# Adapter `AdapterHub/bert-base-uncased-pf-boolq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-boolq", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-cola | AdapterHub | bert | 6 | 2 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:lingaccept/cola', 'adapter-transformers'] | false | true | true | 2,149 |
# Adapter `AdapterHub/bert-base-uncased-pf-cola` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cola", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-commonsense_qa | AdapterHub | bert | 6 | 47 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['commonsense_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapterhub:comsense/csqa', 'adapter-transformers'] | false | true | true | 2,102 |
# Adapter `AdapterHub/bert-base-uncased-pf-commonsense_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-commonsense_qa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-comqa | AdapterHub | bert | 6 | 2 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['com_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapter-transformers'] | false | true | true | 2,139 |
# Adapter `AdapterHub/bert-base-uncased-pf-comqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-comqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-conll2000 | AdapterHub | bert | 6 | 5 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['conll2000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:chunk/conll2000', 'adapter-transformers'] | false | true | true | 2,152 |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2000` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2000", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-conll2003 | AdapterHub | bert | 6 | 17 | adapter-transformers | 1 | token-classification | false | false | false | null | ['en'] | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:ner/conll2003', 'adapter-transformers'] | false | true | true | 2,148 |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-conll2003_pos | AdapterHub | bert | 6 | 5 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:pos/conll2003', 'adapter-transformers'] | false | true | true | 2,156 |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003_pos", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-copa | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapterhub:comsense/copa', 'adapter-transformers'] | false | true | true | 2,082 |
# Adapter `AdapterHub/bert-base-uncased-pf-copa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-copa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-cosmos_qa | AdapterHub | bert | 6 | 15 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['cosmos_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapterhub:comsense/cosmosqa', 'adapter-transformers'] | false | true | true | 2,100 |
# Adapter `AdapterHub/bert-base-uncased-pf-cosmos_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cosmos_qa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-cq | AdapterHub | bert | 6 | 4 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapterhub:qa/cq', 'adapter-transformers'] | false | true | true | 2,129 |
# Adapter `AdapterHub/bert-base-uncased-pf-cq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cq", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-drop | AdapterHub | bert | 6 | 42 | adapter-transformers | 2 | question-answering | false | false | false | null | ['en'] | ['drop'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapter-transformers'] | false | true | true | 2,133 |
# Adapter `AdapterHub/bert-base-uncased-pf-drop` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-drop", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-duorc_p | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['duorc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapter-transformers'] | false | true | true | 2,141 |
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_p` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_p", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-duorc_s | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['duorc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapter-transformers'] | false | true | true | 2,141 |
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_s` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_s", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-emo | AdapterHub | bert | 6 | 4 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['emo'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,125 |
# Adapter `AdapterHub/bert-base-uncased-pf-emo` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emo", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-emotion | AdapterHub | bert | 6 | 197 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,141 |
# Adapter `AdapterHub/bert-base-uncased-pf-emotion` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emotion", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-fce_error_detection | AdapterHub | bert | 6 | 6 | adapter-transformers | 1 | token-classification | false | false | false | null | ['en'] | ['fce_error_detection'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:ged/fce', 'adapter-transformers'] | false | true | true | 2,156 |
# Adapter `AdapterHub/bert-base-uncased-pf-fce_error_detection` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-fce_error_detection", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-hellaswag | AdapterHub | bert | 6 | 0 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['hellaswag'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapterhub:comsense/hellaswag', 'adapter-transformers'] | false | true | true | 2,102 |
# Adapter `AdapterHub/bert-base-uncased-pf-hellaswag` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hellaswag", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-hotpotqa | AdapterHub | bert | 6 | 52 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['hotpot_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapter-transformers'] | false | true | true | 2,151 |
# Adapter `AdapterHub/bert-base-uncased-pf-hotpotqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hotpotqa", source="hf")
model.active_adapters = adapter_name
```
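With the adapter active, extractive QA can be sanity-checked on a toy example. The question and context below are invented, and the span decoding assumes the head exposes `start_logits` and `end_logits` as in `transformers`.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "Which river flows through the city where the company was founded?"
context = "The company was founded in Basel. Basel lies on the river Rhine."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))  # predicted answer span
```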
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-imdb | AdapterHub | bert | 6 | 2 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:sentiment/imdb', 'adapter-transformers'] | false | true | true | 2,147 |
# Adapter `AdapterHub/bert-base-uncased-pf-imdb` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-imdb", source="hf")
model.active_adapters = adapter_name
```
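A minimal sentiment-classification sketch, continuing from the code above. The review text is made up, and treating the argmax index as negative/positive is an assumption; the actual label order is given by the head's `id2label` mapping.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
review = "A slow start, but the final hour is gripping and beautifully shot."
inputs = tokenizer(review, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)                # classification head is active
print(outputs.logits.argmax(dim=-1).item())  # predicted label id
```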
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-mit_movie_trivia | AdapterHub | bert | 6 | 7 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:ner/mit_movie_trivia', 'adapter-transformers'] | false | true | true | 2,176 |
# Adapter `AdapterHub/bert-base-uncased-pf-mit_movie_trivia` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mit_movie_trivia", source="hf")
model.active_adapters = adapter_name
```
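To see the tagger in action, a movie-related query can be passed through the model. This is an illustrative sketch: the query is invented, and the mapping from predicted ids to slot names (actor, plot, year, ...) has to be read from the head's label map.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
query = "show me the film where tom hanks is stranded on a desert island"
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
label_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, label_ids.tolist())))  # per-token slot label ids
```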
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-mnli | AdapterHub | bert | 6 | 9 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['multi_nli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:nli/multinli', 'adapter-transformers'] | false | true | true | 2,143 |
# Adapter `AdapterHub/bert-base-uncased-pf-mnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mnli", source="hf")
model.active_adapters = adapter_name
```
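For a quick check, a premise/hypothesis pair can be classified as follows. The sentences are invented, and the meaning of the predicted index (entailment, neutral or contradiction) follows the head's `id2label` config rather than anything stated on this card.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
premise = "A man is playing a guitar on stage."
hypothesis = "A musician is performing."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # 3-way NLI label id (see the head's id2label)
```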
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-mrpc | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:sts/mrpc', 'adapter-transformers'] | false | true | true | 2,135 |
# Adapter `AdapterHub/bert-base-uncased-pf-mrpc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mrpc", source="hf")
model.active_adapters = adapter_name
```
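A paraphrase check on two invented sentences might look like the sketch below; which index corresponds to "paraphrase" depends on the head's label mapping, so that reading is an assumption.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentence1 = "The company reported a sharp rise in quarterly profit."
sentence2 = "Quarterly profit at the firm rose sharply."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # paraphrase vs. not, per the head's id2label
```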
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-multirc | AdapterHub | bert | 6 | 11 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'adapterhub:rc/multirc', 'bert', 'adapter-transformers'] | false | true | true | 2,145 |
# Adapter `AdapterHub/bert-base-uncased-pf-multirc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-multirc", source="hf")
model.active_adapters = adapter_name
```
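MultiRC pairs a passage with a question and one candidate answer, and the head classifies whether that answer is correct. The exact way question and answer were concatenated during training is not documented on this card, so the encoding below (question and answer joined as the second segment) is only a guess.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
passage = "Tom left his umbrella at home. It started to rain on his way to work, and he arrived soaked."
question = "Why was Tom wet when he arrived?"
candidate = "He was caught in the rain without an umbrella."
inputs = tokenizer(passage, question + " " + candidate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # whether the candidate answer is judged correct
```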
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-newsqa | AdapterHub | bert | 6 | 24 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['newsqa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapter-transformers'] | false | true | true | 2,141 |
# Adapter `AdapterHub/bert-base-uncased-pf-newsqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-newsqa", source="hf")
model.active_adapters = adapter_name
```
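Inference works the same way as for the other extractive QA adapters; the news snippet below is invented and the decoding assumes `start_logits`/`end_logits` in the head output.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "Where did the storm make landfall?"
context = "Forecasters said the storm made landfall near Tampa early on Monday before weakening inland."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```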
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-pmb_sem_tagging | AdapterHub | bert | 6 | 6 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:semtag/pmb', 'adapter-transformers'] | false | true | true | 2,154 |
# Adapter `AdapterHub/bert-base-uncased-pf-pmb_sem_tagging` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
```
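The semantic tagger assigns one tag per token. The sketch below is illustrative only: the sentence is invented, and the PMB semantic tag behind each predicted id has to be looked up in the head's label map.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("The cat sat on the mat .", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
label_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, label_ids.tolist())))  # per-token semantic tag ids
```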
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-qnli | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:nli/qnli', 'adapter-transformers'] | false | true | true | 2,135 |
# Adapter `AdapterHub/bert-base-uncased-pf-qnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-qnli", source="hf")
model.active_adapters = adapter_name
```
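QNLI asks whether a sentence answers a given question, encoded as a sentence pair. The pair below is invented; the label semantics of the argmax come from the head's config.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "When was the bridge completed?"
sentence = "Construction of the bridge finished in 1937 after four years of work."
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # entailment vs. not_entailment, per id2label
```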
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-qqp | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'adapter-transformers', 'adapterhub:sts/qqp', 'bert'] | false | true | true | 2,131 |
# Adapter `AdapterHub/bert-base-uncased-pf-qqp` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-qqp", source="hf")
model.active_adapters = adapter_name
```
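Duplicate-question detection takes the two questions as a pair. The questions below are made up; which index means "duplicate" is defined by the head's label map, not by this card.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
q1 = "How can I improve my English speaking skills?"
q2 = "What is the best way to get better at speaking English?"
inputs = tokenizer(q1, q2, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # duplicate vs. not, per the head's id2label
```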
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-quail | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['quail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapter-transformers'] | false | true | true | 2,070 |
# Adapter `AdapterHub/bert-base-uncased-pf-quail` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quail", source="hf")
model.active_adapters = adapter_name
```
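QuAIL instances have a context, a question and four answer options. The sketch below is assumption-heavy: the example is invented, the context and question are joined into the first segment (the exact training-time format is not documented here), and the head is assumed to accept `(batch, num_choices, seq_len)` inputs.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
context = "Anna missed the last bus and had to walk home in the dark."
question = "Why did Anna walk home?"
answers = ["She missed the last bus.", "She wanted some exercise.", "Her car broke down.", "She lives next door."]
enc = tokenizer([f"{context} {question}"] * len(answers), answers, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())           # index of the chosen answer
```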
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-quartz | AdapterHub | bert | 6 | 2 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['quartz'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapter-transformers'] | false | true | true | 2,074 |
# Adapter `AdapterHub/bert-base-uncased-pf-quartz` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quartz", source="hf")
model.active_adapters = adapter_name
```
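QuaRTz questions come with two answer options (the accompanying knowledge sentence is omitted here). As above, this is only a sketch: the example is invented and the `(batch, num_choices, seq_len)` input shape for the multiple-choice head is assumed.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "As the air gets warmer, the amount of water vapor it can hold"
choices = ["increases.", "decreases."]
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # index of the predicted option
```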
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-quoref | AdapterHub | bert | 6 | 6 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['quoref'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapter-transformers'] | false | true | true | 2,141 |
# Adapter `AdapterHub/bert-base-uncased-pf-quoref` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quoref", source="hf")
model.active_adapters = adapter_name
```
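Like the other span-extraction adapters, this one can be probed with a toy question/passage pair; the example is invented and `start_logits`/`end_logits` are assumed in the head output.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "Who adopted the stray dog?"
context = "After weeks of feeding the stray dog outside her office, Maria finally adopted it."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```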
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-race | AdapterHub | bert | 6 | 10 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['race'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['adapterhub:rc/race', 'bert', 'adapter-transformers'] | false | true | true | 2,070 |
# Adapter `AdapterHub/bert-base-uncased-pf-race` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-race", source="hf")
model.active_adapters = adapter_name
```
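RACE pairs a passage and a question with four options. The sketch below is illustrative: the passage, question and options are invented, passage and question are concatenated into the first segment (an assumption about the preprocessing), and the head is expected to take `(batch, num_choices, seq_len)` inputs.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
article = "Lin Tao gets up at six every day and runs in the park before going to school."
question = "What does Lin Tao do before school?"
options = ["He runs in the park.", "He watches TV.", "He plays computer games.", "He sleeps late."]
enc = tokenizer([f"{article} {question}"] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # index of the predicted option
```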
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-record | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:rc/record', 'adapter-transformers'] | false | true | true | 2,141 |
# Adapter `AdapterHub/bert-base-uncased-pf-record` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-record", source="hf")
model.active_adapters = adapter_name
```
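ReCoRD is a cloze-style task; with a classification head it is commonly cast as judging whether a candidate entity correctly fills the placeholder in the query. The exact input format used for this adapter is not stated on the card, so the passage/filled-query encoding below is only a guess.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
passage = "The probe was built by Acme Corp. and launched from Florida in 2019."
filled_query = "The spacecraft launched in 2019 was built by Acme Corp."
inputs = tokenizer(passage, filled_query, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # whether the filled-in candidate is judged correct
```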
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-rotten_tomatoes | AdapterHub | bert | 6 | 12 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['rotten_tomatoes'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:sentiment/rotten_tomatoes', 'adapter-transformers'] | false | true | true | 2,191 |
# Adapter `AdapterHub/bert-base-uncased-pf-rotten_tomatoes` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rotten_tomatoes", source="hf")
model.active_adapters = adapter_name
```
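Inference mirrors the other sentiment adapters; the review below is invented and the label order is taken from the head's `id2label`.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("A witty, warm-hearted comedy that never overstays its welcome.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # predicted sentiment label id
```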
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-rte | AdapterHub | bert | 6 | 4 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:nli/rte', 'adapter-transformers'] | false | true | true | 2,131 |
# Adapter `AdapterHub/bert-base-uncased-pf-rte` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rte", source="hf")
model.active_adapters = adapter_name
```
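A two-way entailment check on an invented premise/hypothesis pair could look like this; the meaning of the predicted index comes from the head's label map.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
premise = "The museum reopened to visitors after a two-year renovation."
hypothesis = "The museum was closed for renovation."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # entailment vs. not, per id2label
```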
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-scicite | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['scicite'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,141 |
# Adapter `AdapterHub/bert-base-uncased-pf-scicite` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scicite", source="hf")
model.active_adapters = adapter_name
```
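SciCite classifies the intent of a citation sentence (for example background, method or result comparison). The citation context below is invented; which class each index denotes is stored in the head's config.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
citation = "We follow the evaluation protocol proposed by Smith et al. (2018)."
inputs = tokenizer(citation, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # citation intent class id
```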
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-scitail | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['scitail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:nli/scitail', 'adapter-transformers'] | false | true | true | 2,147 |
# Adapter `AdapterHub/bert-base-uncased-pf-scitail` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scitail", source="hf")
model.active_adapters = adapter_name
```
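SciTail is a two-way entailment task over science statements, encoded as a premise/hypothesis pair. The pair below is invented; the index-to-label mapping is taken from the head's config.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
premise = "Plants use sunlight to convert carbon dioxide and water into glucose."
hypothesis = "Photosynthesis produces glucose."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # entails vs. neutral, per id2label
```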
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-sick | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['sick'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'adapter-transformers', 'bert', 'adapterhub:nli/sick'] | false | true | true | 2,135 |
# Adapter `AdapterHub/bert-base-uncased-pf-sick` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sick", source="hf")
model.active_adapters = adapter_name
```
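The SICK head classifies the entailment relation between a sentence pair. The sentences below are invented; the three-way label order is an assumption resolved by the head's config.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentence_a = "A dog is chasing a ball across the lawn."
sentence_b = "An animal is playing outside."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # NLI label id, per the head's id2label
```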
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-snli | AdapterHub | bert | 6 | 11 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['snli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,129 |
# Adapter `AdapterHub/bert-base-uncased-pf-snli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-snli", source="hf")
model.active_adapters = adapter_name
```
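Inference follows the standard premise/hypothesis setup; the example is invented and the three NLI labels are resolved via the head's `id2label`.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
premise = "Two children are building a sandcastle on the beach."
hypothesis = "Kids are playing in the sand."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())  # 3-way NLI label id
```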
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-social_i_qa | AdapterHub | bert | 6 | 10 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['social_i_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapter-transformers'] | false | true | true | 2,094 |
# Adapter `AdapterHub/bert-base-uncased-pf-social_i_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-social_i_qa", source="hf")
model.active_adapters = adapter_name
```
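Social IQA provides a context, a question and three answer candidates. The sketch below is illustrative: the example is invented, context and question are merged into the first segment (an assumption about the preprocessing), and the multiple-choice head is assumed to take `(batch, num_choices, seq_len)` inputs.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
context = "Jordan stayed up all night helping a friend finish a project."
question = "How would the friend feel afterwards?"
answers = ["grateful", "betrayed", "indifferent"]
enc = tokenizer([f"{context} {question}"] * len(answers), answers, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())           # index of the chosen answer
```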
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-squad | AdapterHub | bert | 6 | 75 | adapter-transformers | 2 | question-answering | false | false | false | null | ['en'] | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapterhub:qa/squad1', 'adapter-transformers'] | false | true | true | 2,143 |
# Adapter `AdapterHub/bert-base-uncased-pf-squad` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name
```
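A quick span-extraction check with an invented question/context pair; the decoding assumes `start_logits`/`end_logits` in the head output, as in `transformers`.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "In what year was the university founded?"
context = "The university was founded in 1837 and is among the oldest in the region."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))  # predicted answer span
```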
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-squad_v2 | AdapterHub | bert | 6 | 126 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapterhub:qa/squad2', 'adapter-transformers'] | false | true | true | 2,149 |
# Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
```
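Usage is the same as for the SQuAD 1.1 adapter; the example below is invented. Note that SQuAD 2.0 also contains unanswerable questions, so a real pipeline would additionally compare the best span score against the no-answer ([CLS]) score, which this sketch omits.
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "Who designed the lighthouse?"
context = "The lighthouse was designed by the engineer Robert Stevenson and completed in 1811."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```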
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-sst2 | AdapterHub | bert | 6 | 13 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:sentiment/sst-2', 'adapter-transformers'] | false | true | true | 2,149 |
# Adapter `AdapterHub/bert-base-uncased-pf-sst2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2", source="hf")
model.active_adapters = adapter_name
```
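As a quick sanity check, a minimal sentiment-inference sketch continuing from the snippet above. The example sentence is made up, and the assumption that index 1 is the positive class should be verified against the label map stored with the head:

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = int(logits.argmax(dim=-1))
# Index 1 is assumed to be the positive class here; check the head config.
print("positive" if pred == 1 else "negative")
```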
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-stsb | AdapterHub | bert | 6 | 4 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:sts/sts-b', 'adapter-transformers'] | false | true | true | 2,137 |
# Adapter `AdapterHub/bert-base-uncased-pf-stsb` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-stsb", source="hf")
model.active_adapters = adapter_name
```
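For illustration, a minimal sketch continuing from the snippet above. It assumes the STS-B head produces a single regression logit per sentence pair, as is typical for this task; the example sentences are made up:

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a guitar.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# For a regression head this is a single value per pair, usually
# interpreted on the 0-5 STS-B similarity scale (higher = more similar).
print(logits)
```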
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-swag | AdapterHub | bert | 6 | 0 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['swag'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapter-transformers'] | false | true | true | 2,066 |
# Adapter `AdapterHub/bert-base-uncased-pf-swag` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-swag", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-trec | AdapterHub | bert | 6 | 17 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['trec'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,129 |
# Adapter `AdapterHub/bert-base-uncased-pf-trec` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-trec", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-ud_deprel | AdapterHub | bert | 6 | 2 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:deprel/ud_ewt', 'adapter-transformers'] | false | true | true | 2,148 |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_deprel` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_deprel", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-ud_en_ewt | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapterhub:dp/ud_ewt', 'adapter-transformers'] | false | true | true | 1,480 |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_en_ewt` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_en_ewt", source="hf", set_active=True)
```
## Architecture & Training
This adapter was trained using adapter-transformers' example script for dependency parsing.
See https://github.com/Adapter-Hub/adapter-transformers/tree/master/examples/dependency-parsing.
## Evaluation results
Scores achieved by dependency parsing adapters on the test set of UD English EWT after training:
| Model | UAS | LAS |
| --- | --- | --- |
| `bert-base-uncased` | 91.74 | 89.15 |
| `roberta-base` | 91.43 | 88.43 |
## Citation
<!-- Add some description here --> |
AdapterHub/bert-base-uncased-pf-ud_pos | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapterhub:pos/ud_ewt', 'adapter-transformers'] | false | true | true | 2,136 |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_pos", source="hf")
model.active_adapters = adapter_name
```
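For illustration, a minimal tagging sketch continuing from the snippet above. Mapping the predicted ids back to UPOS tag names depends on the label map stored with the head, so only the raw predictions are shown; the example sentence is made up:

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("The quick brown fox jumps over the lazy dog .", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, num_tags)

# One predicted tag id per (sub)word piece, including the special tokens.
print(logits.argmax(dim=-1))
```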
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-wic | AdapterHub | bert | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapterhub:wordsence/wic', 'adapter-transformers'] | false | true | true | 2,143 |
# Adapter `AdapterHub/bert-base-uncased-pf-wic` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wic", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-wikihop | AdapterHub | bert | 6 | 4 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'bert', 'adapterhub:qa/wikihop', 'adapter-transformers'] | false | true | true | 2,149 |
# Adapter `AdapterHub/bert-base-uncased-pf-wikihop` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wikihop", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-winogrande | AdapterHub | bert | 6 | 3 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['winogrande'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'adapterhub:comsense/winogrande', 'adapter-transformers'] | false | true | true | 2,106 |
# Adapter `AdapterHub/bert-base-uncased-pf-winogrande` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-winogrande", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/bert-base-uncased-pf-wnut_17 | AdapterHub | bert | 6 | 4 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['wnut_17'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,134 |
# Adapter `AdapterHub/bert-base-uncased-pf-wnut_17` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wnut_17", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-yelp_polarity | AdapterHub | bert | 6 | 16 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['yelp_polarity'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'bert', 'adapter-transformers'] | false | true | true | 2,165 |
# Adapter `AdapterHub/bert-base-uncased-pf-yelp_polarity` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-yelp_polarity", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bioASQyesno | AdapterHub | bart | 6 | 26 | adapter-transformers | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['adapterhub:qa/bioasq', 'adapter-transformers', 'bart'] | false | true | true | 1,189 |
# Adapter `AdapterHub/bioASQyesno` for facebook/bart-base
An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/bioasq](https://adapterhub.ml/explore/qa/bioasq/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("facebook/bart-base")
adapter_name = model.load_adapter("AdapterHub/bioASQyesno", source="hf", set_active=True)
```
## Architecture & Training
Trained for 15 epochs with early stopping, a learning rate of 1e-4, and a batch size of 4 on the yes-no questions of the bioASQ 8b dataset.
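As a rough illustration of that setup, here is a hedged configuration sketch using `TrainingArguments` and early stopping from Transformers; the output directory, the patience value, and everything not stated above are assumptions, not the original training script:

```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Mirrors the hyperparameters stated above; all other settings are guesses.
training_args = TrainingArguments(
    output_dir="bioasq-yesno-adapter",   # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    num_train_epochs=15,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,         # required for early stopping
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)  # assumed patience
```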
## Evaluation results
Achieved 75% accuracy on the test set of the bioASQ 8b dataset.
## Citation
<!-- Add some description here --> |
AdapterHub/narrativeqa | AdapterHub | bart | 6 | 47 | adapter-transformers | 0 | null | false | false | false | null | null | ['narrativeqa'] | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 | ['adapterhub:qa/narrativeqa', 'adapter-transformers', 'bart'] | false | true | true | 1,057 |
# Adapter `hSterz/narrativeqa` for facebook/bart-base
An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/narrativeqa](https://adapterhub.ml/explore/qa/narrativeqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("facebook/bart-base")
adapter_name = model.load_adapter("hSterz/narrativeqa", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
AdapterHub/roberta-base-pf-anli_r3 | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['anli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,110 |
# Adapter `AdapterHub/roberta-base-pf-anli_r3` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-anli_r3", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-art | AdapterHub | roberta | 6 | 0 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['art'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapter-transformers'] | false | true | true | 2,037 |
# Adapter `AdapterHub/roberta-base-pf-art` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-art", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-boolq | AdapterHub | roberta | 6 | 109 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['boolq'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:qa/boolq', 'adapter-transformers'] | false | true | true | 2,112 |
# Adapter `AdapterHub/roberta-base-pf-boolq` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-boolq", source="hf")
model.active_adapters = adapter_name
```
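For illustration, a minimal yes/no inference sketch continuing from the snippet above. The question and passage are made up, and which logit index corresponds to "yes" depends on the label map stored with the head:

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

question = "is the sky blue during the day"
passage = "The sky appears blue during the day because of Rayleigh scattering."

inputs = tokenizer(question, passage, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Predicted class index; map it to yes/no via the head's label map.
print(logits.argmax(dim=-1))
```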
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-cola | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:lingaccept/cola', 'adapter-transformers'] | false | true | true | 2,124 |
# Adapter `AdapterHub/roberta-base-pf-cola` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cola", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-commonsense_qa | AdapterHub | roberta | 6 | 24 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['commonsense_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapterhub:comsense/csqa', 'adapter-transformers'] | false | true | true | 2,077 |
# Adapter `AdapterHub/roberta-base-pf-commonsense_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-commonsense_qa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-comqa | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['com_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapter-transformers'] | false | true | true | 2,114 |
# Adapter `AdapterHub/roberta-base-pf-comqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-comqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-conll2000 | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['conll2000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:chunk/conll2000', 'adapter-transformers'] | false | true | true | 2,127 |
# Adapter `AdapterHub/roberta-base-pf-conll2000` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2000", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-conll2003 | AdapterHub | roberta | 6 | 15 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:ner/conll2003', 'adapter-transformers'] | false | true | true | 2,123 |
# Adapter `AdapterHub/roberta-base-pf-conll2003` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-conll2003_pos | AdapterHub | roberta | 6 | 3 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:pos/conll2003', 'adapter-transformers', 'token-classification'] | false | true | true | 2,131 |
# Adapter `AdapterHub/roberta-base-pf-conll2003_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003_pos", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-copa | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapterhub:comsense/copa', 'adapter-transformers'] | false | true | true | 2,057 |
# Adapter `AdapterHub/roberta-base-pf-copa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-copa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-cosmos_qa | AdapterHub | roberta | 6 | 13 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['cosmos_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapterhub:comsense/cosmosqa', 'adapter-transformers'] | false | true | true | 2,075 |
# Adapter `AdapterHub/roberta-base-pf-cosmos_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cosmos_qa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-cq | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapterhub:qa/cq', 'adapter-transformers'] | false | true | true | 2,104 |
# Adapter `AdapterHub/roberta-base-pf-cq` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cq", source="hf")
model.active_adapters = adapter_name
```
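The usage snippet above ends with loading the adapter. The following sketch (not part of the original card) shows one way the span-extraction head could be used for inference; the question and context are made up, and the `start_logits`/`end_logits` fields assume the standard question-answering head of `adapter-transformers`.

```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cq", source="hf")
model.active_adapters = adapter_name

# Invented question/context pair for illustration only.
question = "Who wrote the novel Dracula?"
context = "Dracula is an 1897 Gothic horror novel written by the Irish author Bram Stoker."

inputs = tokenizer(question, context, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token and decode the answer span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```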
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-drop | AdapterHub | roberta | 6 | 25 | adapter-transformers | 1 | question-answering | false | false | false | null | ['en'] | ['drop'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapter-transformers'] | false | true | true | 2,108 |
# Adapter `AdapterHub/roberta-base-pf-drop` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-drop", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-duorc_p | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['duorc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapter-transformers'] | false | true | true | 2,116 |
# Adapter `AdapterHub/roberta-base-pf-duorc_p` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-duorc_p", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-duorc_s | AdapterHub | roberta | 6 | 5 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['duorc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapter-transformers'] | false | true | true | 2,116 |
# Adapter `AdapterHub/roberta-base-pf-duorc_s` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-duorc_s", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-emo | AdapterHub | roberta | 6 | 7 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['emo'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,100 |
# Adapter `AdapterHub/roberta-base-pf-emo` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emo", source="hf")
model.active_adapters = adapter_name
```
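The card does not show how to run a prediction, so here is a minimal, hedged sketch of single-sentence inference with the classification head; the input sentence is invented, and the mapping from class index to emotion label is defined by the prediction head shipped with the adapter rather than by this code.

```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emo", source="hf")
model.active_adapters = adapter_name

# Invented utterance for illustration.
inputs = tokenizer("I just got the job, this is the best day ever!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the predicted class index and its probability; the index-to-emotion
# mapping comes from the head bundled with the adapter.
probs = torch.softmax(logits, dim=-1)[0]
pred = probs.argmax().item()
print(pred, probs[pred].item())
```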
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-emotion | AdapterHub | roberta | 6 | 37 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapter-transformers'] | false | true | true | 2,116 |
# Adapter `AdapterHub/roberta-base-pf-emotion` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emotion", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-fce_error_detection | AdapterHub | roberta | 6 | 4 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | ['fce_error_detection'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:ged/fce', 'adapter-transformers'] | false | true | true | 2,131 |
# Adapter `AdapterHub/roberta-base-pf-fce_error_detection` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-fce_error_detection", source="hf")
model.active_adapters = adapter_name
```
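As a rough sketch of how the tagging head could be applied (not part of the original card), the code below assigns a tag index to every token of a sentence; the example sentence is made up, and the meaning of each tag index is determined by the adapter's prediction head.

```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-fce_error_detection", source="hf")
model.active_adapters = adapter_name

# Invented learner sentence containing a grammatical error.
sentence = "Yesterday I goed to the market with my friend."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_tags)

# Print each subword token next to its predicted tag index.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
tags = logits.argmax(dim=-1)[0].tolist()
for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```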
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-hellaswag | AdapterHub | roberta | 6 | 4 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['hellaswag'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapterhub:comsense/hellaswag', 'adapter-transformers'] | false | true | true | 2,077 |
# Adapter `AdapterHub/roberta-base-pf-hellaswag` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-hellaswag", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-hotpotqa | AdapterHub | roberta | 6 | 32 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['hotpot_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapter-transformers'] | false | true | true | 2,126 |
# Adapter `AdapterHub/roberta-base-pf-hotpotqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-hotpotqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-imdb | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:sentiment/imdb', 'adapter-transformers'] | false | true | true | 2,122 |
# Adapter `AdapterHub/roberta-base-pf-imdb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf")
model.active_adapters = adapter_name
```
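Beyond the loading snippet above, the adapter-augmented model can also be wrapped in a standard `transformers` pipeline. The sketch below is an illustration under assumptions rather than an official recipe: the review text is invented, and the label names printed by the pipeline depend on the head configuration shipped with the adapter.

```python
from transformers import AutoTokenizer, AutoModelWithHeads, TextClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf")
model.active_adapters = adapter_name

# Wrap the model in a text-classification pipeline and score an invented review.
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
print(classifier("This film was a complete waste of two hours."))
```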
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-mit_movie_trivia | AdapterHub | roberta | 6 | 3 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:ner/mit_movie_trivia', 'adapter-transformers'] | false | true | true | 2,151 |
# Adapter `AdapterHub/roberta-base-pf-mit_movie_trivia` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mit_movie_trivia", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-mnli | AdapterHub | roberta | 6 | 9 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['multi_nli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:nli/multinli', 'adapter-transformers'] | false | true | true | 2,118 |
# Adapter `AdapterHub/roberta-base-pf-mnli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mnli", source="hf")
model.active_adapters = adapter_name
```
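Since MultiNLI is a sentence-pair task, inference feeds a premise/hypothesis pair through the classification head. The following sketch is illustrative only: the example pair is invented, and the order of the three entailment classes is fixed by the adapter's prediction head, not by this code.

```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mnli", source="hf")
model.active_adapters = adapter_name

# Invented premise/hypothesis pair.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities over the three NLI classes; the class order is defined by the
# head that ships with the adapter.
print(torch.softmax(logits, dim=-1)[0].tolist())
```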
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-mrpc | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:sts/mrpc', 'adapter-transformers'] | false | true | true | 2,110 |
# Adapter `AdapterHub/roberta-base-pf-mrpc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mrpc", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-multirc | AdapterHub | roberta | 6 | 12 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'adapterhub:rc/multirc', 'roberta', 'adapter-transformers'] | false | true | true | 2,120 |
# Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-multirc", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-newsqa | AdapterHub | roberta | 6 | 15 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['newsqa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapter-transformers'] | false | true | true | 2,116 |
# Adapter `AdapterHub/roberta-base-pf-newsqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-newsqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-pmb_sem_tagging | AdapterHub | roberta | 6 | 3 | adapter-transformers | 0 | token-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['token-classification', 'roberta', 'adapterhub:semtag/pmb', 'adapter-transformers'] | false | true | true | 2,129 |
# Adapter `AdapterHub/roberta-base-pf-pmb_sem_tagging` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-qnli | AdapterHub | roberta | 6 | 2 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:nli/qnli', 'adapter-transformers'] | false | true | true | 2,110 |
# Adapter `AdapterHub/roberta-base-pf-qnli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-qnli", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-qqp | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'adapter-transformers', 'adapterhub:sts/qqp', 'roberta'] | false | true | true | 2,106 |
# Adapter `AdapterHub/roberta-base-pf-qqp` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-qqp", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-quail | AdapterHub | roberta | 6 | 0 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['quail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapter-transformers'] | false | true | true | 2,045 |
# Adapter `AdapterHub/roberta-base-pf-quail` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quail", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-quartz | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['quartz'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'adapter-transformers'] | false | true | true | 2,049 |
# Adapter `AdapterHub/roberta-base-pf-quartz` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quartz", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-quoref | AdapterHub | roberta | 6 | 9 | adapter-transformers | 0 | question-answering | false | false | false | null | ['en'] | ['quoref'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'roberta', 'adapter-transformers'] | false | true | true | 2,116 |
# Adapter `AdapterHub/roberta-base-pf-quoref` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-quoref", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-race | AdapterHub | roberta | 6 | 3 | adapter-transformers | 0 | null | false | false | false | null | ['en'] | ['race'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['adapterhub:rc/race', 'roberta', 'adapter-transformers'] | false | true | true | 2,045 |
# Adapter `AdapterHub/roberta-base-pf-race` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-race", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
AdapterHub/roberta-base-pf-record | AdapterHub | roberta | 6 | 1 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:rc/record', 'adapter-transformers'] | false | true | true | 2,116 |
# Adapter `AdapterHub/roberta-base-pf-record` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-record", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-rotten_tomatoes | AdapterHub | roberta | 6 | 4 | adapter-transformers | 0 | text-classification | false | false | false | null | ['en'] | ['rotten_tomatoes'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'roberta', 'adapterhub:sentiment/rotten_tomatoes', 'adapter-transformers'] | false | true | true | 2,166 |
# Adapter `AdapterHub/roberta-base-pf-rotten_tomatoes` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-rotten_tomatoes", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |