repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
flboehm/reddit-bert-text4 | flboehm | bert | 14 | 6 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,247 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-bert-text4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4763
## Model description
More information needed
## Intended uses & limitations
More information needed
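Until the card is filled in, a minimal (untested) sketch using the standard `transformers` fill-mask pipeline, with an illustrative input sentence:
```python
from transformers import pipeline

# repo id taken from this row; BERT-style checkpoints use the [MASK] token
fill_mask = pipeline("fill-mask", model="flboehm/reddit-bert-text4")
print(fill_mask("This subreddit is full of [MASK] posts."))
```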
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1071 | 1.0 | 978 | 2.6170 |
| 2.6788 | 2.0 | 1956 | 2.5332 |
| 2.6112 | 3.0 | 2934 | 2.4844 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 647c6123bff3d42e1da30437e46e7138 |
Helsinki-NLP/opus-mt-ru-da | Helsinki-NLP | marian | 11 | 46 | transformers | 0 | translation | true | true | false | apache-2.0 | ['ru', 'da'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,997 | false |
### rus-dan
* source group: Russian
* target group: Danish
* OPUS readme: [rus-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.dan | 56.6 | 0.714 |
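A minimal translation sketch (untested) following the usual OPUS-MT pattern in `transformers`; the Russian input sentence is only illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-da"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Я люблю читать книги."]  # "I love reading books."
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```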
### System Info:
- hf_name: rus-dan
- source_languages: rus
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'da']
- src_constituents: {'rus'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: dan
- short_pair: ru-da
- chrF2_score: 0.7140000000000001
- bleu: 56.6
- brevity_penalty: 0.977
- ref_len: 11746.0
- src_name: Russian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: da
- prefer_old: False
- long_pair: rus-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| 428666dfbbcc9b758a5e416e3aad4a5e |
Salesforce/codegen-6B-nl | Salesforce | codegen | 9 | 990 | transformers | 1 | text-generation | true | false | false | bsd-3-clause | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,786 | false |
# CodeGen (CodeGen-NL 6B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-NL 6B** in the paper, where "NL" means it is pre-trained on the Pile and "6B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-NL 6B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained using multiple TPU-v4-512 instances by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts and of computing their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-nl")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
| b8914b5de95df3c94b51ad07effb2c43 |
facebook/s2t-wav2vec2-large-en-ar | facebook | speech-encoder-decoder | 10 | 16 | transformers | 5 | automatic-speech-recognition | true | false | false | mit | ['en', 'ar'] | ['covost2', 'librispeech_asr'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['audio', 'speech-translation', 'automatic-speech-recognition', 'speech2text2'] | false | true | true | 3,517 | false |
# S2T2-Wav2Vec2-CoVoST2-EN-AR-ST
`s2t-wav2vec2-large-en-ar` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in
[Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Arabic text translation.
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-ar", feature_extractor="facebook/s2t-wav2vec2-large-en-ar")
translation = asr(librispeech_en[0]["file"])
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-ar")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-ar")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Evaluation results
CoVoST-V2 test results for en-ar (BLEU score): **20.2**
For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-06678,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino and
Alexei Baevski and
Michael Auli and
Alexis Conneau},
title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation},
journal = {CoRR},
volume = {abs/2104.06678},
year = {2021},
url = {https://arxiv.org/abs/2104.06678},
archivePrefix = {arXiv},
eprint = {2104.06678},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bb08b09cdee07897c5a9852ed2d257ab |
zboxi7/finetuning-sentiment-model-3000-samples | zboxi7 | distilbert | 23 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,088 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1460
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 12f9a205fd5fe5d7a886d2e16559822a |
bochaowei/t5-small-finetuned-xsum-wei2 | bochaowei | t5 | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['xsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,423 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4131
- Rouge1: 29.2287
- Rouge2: 8.4073
- Rougel: 23.0934
- Rougelsum: 23.0954
- Gen Len: 18.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
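Since the card has no usage snippet, here is a minimal, untested summarization sketch with the `transformers` pipeline; the article text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-xsum-wei2")
article = (
    "The local council confirmed on Tuesday that the bridge will be closed for "
    "repairs until the end of the month, diverting traffic through the town centre."
)
# XSum-style models are trained to produce short, single-sentence summaries
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```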
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.633 | 1.0 | 17004 | 2.4131 | 29.2287 | 8.4073 | 23.0934 | 23.0954 | 18.8236 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| 1f7d305e7102f0416cbbd0e77b62e9c4 |
gokuls/tiny-bert-sst2-distilled-model | gokuls | bert | 16 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,660 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-distilled-model
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2592
- Accuracy: 0.8383
## Model description
More information needed
## Intended uses & limitations
More information needed
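A rough, untested inference sketch with the `transformers` text-classification pipeline (SST-2 is a binary sentiment task, but the exact label names depend on the checkpoint config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/tiny-bert-sst2-distilled-model")
print(classifier(["a gripping, beautifully shot film", "a tedious and predictable plot"]))
```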
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5303 | 1.0 | 4210 | 1.2542 | 0.8222 |
| 0.4503 | 2.0 | 8420 | 1.1260 | 0.8211 |
| 0.3689 | 3.0 | 12630 | 1.2325 | 0.8234 |
| 0.3122 | 4.0 | 16840 | 1.2533 | 0.8337 |
| 0.2764 | 5.0 | 21050 | 1.2726 | 0.8337 |
| 0.254 | 6.0 | 25260 | 1.2609 | 0.8337 |
| 0.2358 | 7.0 | 29470 | 1.2592 | 0.8383 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.10.1+cu113
- Datasets 1.15.1
- Tokenizers 0.12.1
| e1c9595c2e758f95c0dd06a80ff35ad0 |
fathyshalab/domain_transfer_clinic_credit_cards-massive_cooking-roberta-large-v1-2-4 | fathyshalab | roberta | 14 | 0 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,534 | false |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_cooking-roberta-large-v1-2-4
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_cooking-roberta-large-v1-2-4")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| ba27e99428b60b29a6d041854632502a |
ahmeddbahaa/mt5-base-finetuned-ar-wikilingua | ahmeddbahaa | mt5 | 10 | 4 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | ['wiki_lingua'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 2,194 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-ar-wikilingua
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6790
- Rouge-1: 19.46
- Rouge-2: 6.82
- Rouge-l: 17.57
- Gen Len: 18.83
- Bertscore: 70.18
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
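Expressed as `transformers.TrainingArguments`, the settings above correspond roughly to the following sketch (the output directory name is illustrative; the actual training script is not part of this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mt5-base-finetuned-ar-wikilingua",  # illustrative
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=8,
    label_smoothing_factor=0.1,
)
```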
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.9783 | 1.0 | 5111 | 4.0107 | 15.8 | 4.65 | 14.18 | 18.98 | 68.66 |
| 4.2093 | 2.0 | 10222 | 3.8664 | 16.46 | 5.17 | 15.08 | 18.91 | 68.5 |
| 4.0303 | 3.0 | 15333 | 3.7847 | 17.0 | 5.43 | 15.45 | 18.89 | 68.75 |
| 3.9165 | 4.0 | 20444 | 3.7405 | 17.03 | 5.5 | 15.45 | 18.86 | 68.78 |
| 3.8396 | 5.0 | 25555 | 3.7102 | 17.14 | 5.57 | 15.48 | 18.87 | 68.92 |
| 3.7825 | 6.0 | 30666 | 3.6944 | 17.64 | 5.73 | 15.96 | 18.82 | 69.14 |
| 3.7447 | 7.0 | 35777 | 3.6801 | 17.6 | 5.66 | 15.9 | 18.78 | 69.23 |
| 3.7203 | 8.0 | 40888 | 3.6790 | 17.94 | 5.81 | 16.21 | 18.81 | 69.29 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1b45bb2611e473746b8199787eebcbdd |
Alred/t5-small-finetuned-summarization-cnn | Alred | t5 | 13 | 3 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | ['cnn_dailymail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,441 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-summarization-cnn
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0105
- Rouge1: 24.4825
- Rouge2: 9.1573
- Rougel: 19.7135
- Rougelsum: 22.2551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 2.0389 | 1.0 | 718 | 2.0150 | 24.4413 | 9.1782 | 19.7202 | 22.2225 |
| 1.9497 | 2.0 | 1436 | 2.0105 | 24.4825 | 9.1573 | 19.7135 | 22.2551 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
| b396c1debf698a05de5b6d4b091f85d1 |
rahul77/t5-small-finetuned-xsum-rahul2 | rahul77 | t5 | 11 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,221 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-rahul2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 63 | 1.3966 | 24.7113 | 17.3364 | 22.3967 | 24.026 | 19.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| e8b59f6effd436b3bc50d7a64c3e27c7 |
ksenon07147/NLP_Opt350M | ksenon07147 | opt | 17 | 0 | transformers | 0 | text-generation | true | false | false | other | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,243 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP_Opt350M
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3806
## Model description
More information needed
## Intended uses & limitations
More information needed
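A minimal, untested generation sketch with the `transformers` text-generation pipeline; the prompt is only illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ksenon07147/NLP_Opt350M")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```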
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.453 | 1.0 | 849 | 3.3589 |
| 2.9744 | 2.0 | 1698 | 3.3594 |
| 2.7146 | 3.0 | 2547 | 3.3806 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
| bb88e4cf2306fe0e88c50745060235f8 |
brad1141/oldData_BERT | brad1141 | bert | 10 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,498 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oldData_BERT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2348 | 1.0 | 1125 | 1.0185 |
| 1.0082 | 2.0 | 2250 | 0.7174 |
| 0.699 | 3.0 | 3375 | 0.3657 |
| 0.45 | 4.0 | 4500 | 0.1880 |
| 0.2915 | 5.0 | 5625 | 0.1140 |
| 0.2056 | 6.0 | 6750 | 0.0708 |
| 0.1312 | 7.0 | 7875 | 0.0616 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 0e2a7a171822b2235e055b8e20877838 |
sahita/language-identification | sahita | null | 8 | 26 | speechbrain | 1 | audio-classification | true | false | false | apache-2.0 | ['multilingual', 'en', 'hi', 'ot'] | ['VoxLingua107'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107'] | false | true | true | 6,991 | false |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed.
The model can classify a speech utterance according to the language spoken.
It covers 3 different languages (English, Hindi, Other).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="sahita/language-identification", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
# (tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
# -3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
# -2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
# -3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
# -2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
# -2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
# -3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
# -2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
# -2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
# -3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
# -2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
# -4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
# -3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
# -2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
# -2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
# -2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
# -3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
# -2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
# -2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
# -2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
# -3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
# -2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as log-likelihoods that
# the given utterance belongs to the given language (i.e., the larger the better)
# The linear-scale likelihood can be retrieved using the following:
print(prediction[1].exp())
# tensor([0.9850])
# The identified language ISO code is given in prediction[3]
print(prediction[3])
# ['ot: Other']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
# torch.Size([1, 1, 256])
```
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
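For example, the GPU variant of the call above (same source and save directory):
```python
from speechbrain.pretrained import EncoderClassifier

language_id = EncoderClassifier.from_hparams(
    source="sahita/language-identification",
    savedir="tmp",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```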
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Its accuracy on smaller languages is probably quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
See the [SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/voxlingua107/recipes/VoxLingua107/lang_id).
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
### Referencing VoxLingua107
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
| 1a1d63261f67dcb28d29247fda113107 |
DeividasM/wav2vec2-large-xlsr-53-lithuanian | DeividasM | wav2vec2 | 9 | 17 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['lt'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 3,365 | false |
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.55 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
| d9d90f3e3e6345fd1372bfe34c43d907 |
KES/GEC-English | KES | t5 | 8 | 0 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text2text-generation', 'Guyanese Creole', 'Caribbean dialect'] | false | true | true | 923 | false |
# Guyanese English Creole to English Translator
This model is built on the pre-trained T5-base model. It was fine-tuned on a custom dataset for translation of Guyanese English Creole to English. This model will be updated periodically as more data is compiled. For more on the Caribbean English Creoles, check out the library [Caribe](https://pypi.org/project/Caribe/).
___
# Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KES/GEC-English")
model = AutoModelForSeq2SeqLM.from_pretrained("KES/GEC-English")
text = "Ah waan ah phone"
inputs = tokenizer("guy:"+text, truncation=True, return_tensors='pt')
output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
translation=tokenizer.batch_decode(output, skip_special_tokens=True)
print("".join(translation)) #translation: I want a phone.
```
___
| a086dd07e554557e9f4d296dfd3efa03 |
SaiNikhileshReddy/xlm-roberta-large-finetuned-ner | SaiNikhileshReddy | xlm-roberta | 9 | 13 | transformers | 0 | token-classification | true | false | false | mit | null | ['hi_ner_config'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,196 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the hi_ner_config dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2329
- eval_precision: 0.7110
- eval_recall: 0.6854
- eval_f1: 0.6980
- eval_accuracy: 0.9332
- eval_runtime: 162.3478
- eval_samples_per_second: 66.9
- eval_steps_per_second: 16.73
- epoch: 2.64
- step: 50198
## Model description
More information needed
## Intended uses & limitations
More information needed
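A minimal, untested inference sketch with the `transformers` token-classification pipeline; the model is fine-tuned on a Hindi NER configuration, so a Hindi sentence is used as an illustrative input:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SaiNikhileshReddy/xlm-roberta-large-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("नरेंद्र मोदी नई दिल्ली में रहते हैं।"))
```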
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| f4c214d9d50400ca474977ff7818b5d7 |
tarteel-ai/whisper-tiny-ar-quran | tarteel-ai | whisper | 30 | 13 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,832 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-ar-quran
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0928
- Wer: 7.0535
## Model description
More information needed
## Intended uses & limitations
More information needed
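A minimal, untested transcription sketch with the `transformers` ASR pipeline; the audio path is a placeholder, and Whisper checkpoints expect 16 kHz mono audio:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tarteel-ai/whisper-tiny-ar-quran")
print(asr("recitation.wav")["text"])  # path to a local recording (placeholder)
```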
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1766 | 0.05 | 500 | 0.2829 | 20.0236 |
| 0.1129 | 0.09 | 1000 | 0.1981 | 13.8364 |
| 0.0775 | 0.14 | 1500 | 0.1763 | 12.5450 |
| 0.0678 | 0.19 | 2000 | 0.1485 | 10.7302 |
| 0.0437 | 0.23 | 2500 | 0.1336 | 9.6693 |
| 0.0341 | 0.28 | 3000 | 0.1244 | 8.9602 |
| 0.0302 | 0.33 | 3500 | 0.1059 | 8.2224 |
| 0.0189 | 0.37 | 4000 | 0.1044 | 7.6902 |
| 0.0167 | 0.42 | 4500 | 0.0966 | 7.2643 |
| 0.0151 | 0.47 | 5000 | 0.0928 | 7.0535 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 69bb9b19fbd5fcb25dcd8b8a194251de |
sagawa/ZINC-t5-v2 | sagawa | t5 | 8 | 4 | transformers | 0 | text2text-generation | true | false | true | mit | null | ['sagawa/ZINC-canonicalized'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | true | true | true | 1,987 | false |
# ZINC-t5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/ZINC-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1228
- Accuracy: 0.9476
## Model description
We trained t5 on SMILES from ZINC using the task of masked-language modeling (MLM). Compared to ZINC-t5, ZINC-t5-v2 uses a character-level tokenizer, and it was also trained on ZINC.
## Intended uses & limitations
This model can be used to predict molecules' properties, reactions, or interactions with proteins, depending on how it is fine-tuned.
As an example, we fine-tuned this model to predict products. The model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5).
Using its encoder, we trained a regression model to predict a reaction yield. You can use this demo [here](https://huggingface.co/spaces/sagawa/predictyield-t5).
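As a rough sketch of the encoder-as-feature-extractor use mentioned above (assuming the repository's tokenizer loads via `AutoTokenizer`; the SMILES string is only an example):
```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("sagawa/ZINC-t5-v2")
encoder = T5EncoderModel.from_pretrained("sagawa/ZINC-t5-v2")

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, illustrative input
inputs = tokenizer(smiles, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, d_model)
print(hidden.shape)
```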
## Training and evaluation data
We downloaded [ZINC data](https://drive.google.com/drive/folders/1lSPCqh31zxTVEhuiPde7W3rZG8kPgp-z) and canonicalized them using RDKit. Then, we dropped duplicates. The total number of data is 22992522, and they were randomly split into train:validation=10:1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-03
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Step | Accuracy | Validation Loss |
|:-------------:|:------:|:--------:|:---------------:|
| 0.2090 | 100000 | 0.9264 | 0.1860 |
| 0.1628 | 200000 | 0.9349 | 0.1613 |
| 0.1632 | 300000 | 0.9395 | 0.1467 |
| 0.1451 | 400000 | 0.9435 | 0.1345 |
| 0.1311 | 500000 | 0.9465 | 0.1261 |
| 7807010734e1460562106c38dbc2f1c6 |
facebook/hubert-large-ls960-ft | facebook | hubert | 9 | 26,932 | transformers | 27 | automatic-speech-recognition | true | true | false | apache-2.0 | ['en'] | ['libri-light', 'librispeech_asr'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['speech', 'audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | true | true | true | 2,809 | false |
# Hubert-Large-Finetuned
[Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression)
The large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
The model is a fine-tuned version of [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k).
[Paper](https://arxiv.org/abs/2106.07447)
Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed
**Abstract**
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert .
# Usage
The model can be used for automatic-speech-recognition as follows:
```python
import torch
from transformers import Wav2Vec2Processor, HubertForCTC
from datasets import load_dataset
processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0])
# ->"A MAN SAID TO THE UNIVERSE SIR I EXIST"
```
| 72f2b3dfdcfe321de5de1fe72142d6db |
timm/convnext_tiny.fb_in22k_ft_in1k | timm | null | 4 | 518 | timm | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagenet-1k', 'imagenet-22k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'timm'] | false | true | true | 21,428 | false |
# Model card for convnext_tiny.fb_in22k_ft_in1k
A ConvNeXt image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.6
- GMACs: 4.5
- Activations (M): 13.4
- Image size: 224 x 224
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('convnext_tiny.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_tiny.fb_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for convnext_base:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_tiny.fb_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
### By Throughput (samples / sec)
All timing numbers are from eager-mode PyTorch 1.13 on an RTX 3090 with AMP; a minimal sketch of such a measurement follows the table.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
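The rough shape of such a throughput measurement can be sketched as below. The model name, batch size, and iteration counts are illustrative assumptions, not the exact harness used for the table above (the timm repository ships its own benchmarking script for that).
```python
import time
import torch
import timm

# Illustrative settings only; pick any model name from the table above.
model_name, batch_size, steps = "convnext_tiny.fb_in1k", 256, 50

model = timm.create_model(model_name, pretrained=False).cuda().eval()
x = torch.randn(batch_size, 3, 224, 224, device="cuda")

with torch.no_grad(), torch.autocast("cuda"):
    for _ in range(10):                     # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(steps):
        model(x)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{model_name}: {batch_size * steps / elapsed:.1f} samples/sec")
```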
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
| 8f9d8c9355b2cbe4ec90cdd8779c62fd |
Xxanderr/ScraperTrainer | Xxanderr | gpt2 | 17 | 4 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 888 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ScraperTrainer
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
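Since this is a fine-tuned GPT-2 checkpoint, it can presumably be used like any causal language model. A minimal generation sketch (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Xxanderr/ScraperTrainer")
print(generator("The scraper collects", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```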
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| b8afc79e8489762b252e911ee406aad3 |
GinaYang/xlm-roberta-base-finetuned-panx-en | GinaYang | xlm-roberta | 9 | 19 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
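A minimal inference sketch, assuming the standard `transformers` token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="GinaYang/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```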
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| cbc9cb369bd865aabb30a5b79af77f1e |
debbiesoon/longformer_summarise | debbiesoon | led | 13 | 29 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['scientific_papers'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,585 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer_summarise
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3003
- Rouge2 Precision: 0.1654
- Rouge2 Recall: 0.0966
- Rouge2 Fmeasure: 0.1118
## Model description
More information needed
## Intended uses & limitations
More information needed
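A minimal usage sketch, assuming the standard `transformers` seq2seq API (the generation settings shown are illustrative, not the ones used for the evaluation above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "debbiesoon/longformer_summarise"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a long (scientific) article to summarise
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```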
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 2.909 | 0.08 | 10 | 2.8969 | 0.09 | 0.1439 | 0.0953 |
| 2.615 | 0.16 | 20 | 2.6182 | 0.1232 | 0.0865 | 0.0924 |
| 2.581 | 0.24 | 30 | 2.4687 | 0.1357 | 0.0733 | 0.09 |
| 2.1294 | 0.32 | 40 | 2.5215 | 0.1495 | 0.0932 | 0.1044 |
| 2.8083 | 0.4 | 50 | 2.3870 | 0.1794 | 0.1054 | 0.1224 |
| 3.0704 | 0.48 | 60 | 2.3676 | 0.1572 | 0.0989 | 0.1108 |
| 2.4716 | 0.56 | 70 | 2.3554 | 0.1707 | 0.1039 | 0.1198 |
| 2.454 | 0.64 | 80 | 2.3411 | 0.1619 | 0.0943 | 0.1115 |
| 2.3046 | 0.72 | 90 | 2.3105 | 0.1547 | 0.0965 | 0.1116 |
| 1.7467 | 0.8 | 100 | 2.3417 | 0.1551 | 0.0877 | 0.1046 |
| 2.7696 | 0.88 | 110 | 2.3226 | 0.1543 | 0.0954 | 0.1085 |
| 2.4999 | 0.96 | 120 | 2.3003 | 0.1654 | 0.0966 | 0.1118 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
| ce94bbf07969f0e31070be6957b498c9 |
jonatasgrosman/exp_w2v2t_id_wav2vec2_s226 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'id'] | false | true | true | 456 | false | # exp_w2v2t_id_wav2vec2_s226
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
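A minimal transcription sketch with HuggingSound (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_id_wav2vec2_s226")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # any audio files resampled to 16 kHz
transcriptions = model.transcribe(audio_paths)
```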
| 005e0282d749df332c10820a6ba56d63 |
spacy/de_core_news_lg | spacy | null | 32 | 13 | spacy | 0 | token-classification | false | false | false | mit | ['de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 31,288 | false | ### Details: https://spacy.io/models/de#de_core_news_lg
German pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
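A minimal usage sketch, assuming the package has been installed (e.g. via `python -m spacy download de_core_news_lg`); the example sentence is illustrative:
```python
import spacy

nlp = spacy.load("de_core_news_lg")
# "The Siemens company was founded in Berlin in 1847."
doc = nlp("Die Firma Siemens wurde 1847 in Berlin gegründet.")

# named entities (LOC, MISC, ORG, PER)
for ent in doc.ents:
    print(ent.text, ent.label_)

# per-token POS tags and dependency relations
for token in doc:
    print(token.text, token.pos_, token.dep_)
```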
| Feature | Description |
| --- | --- |
| **Name** | `de_core_news_lg` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) |
| **Sources** | [TIGER Corpus](https://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html) (Brants, Sabine, Stefanie Dipper, Peter Eisenberg, Silvia Hansen, Esther König, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit)<br />[Tiger2Dep](https://www.ims.uni-stuttgart.de/forschung/ressourcen/werkzeuge/tiger2dep/) (Wolfgang Seeker)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (772 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$(`, `$,`, `$.`, `ADJA`, `ADJD`, `ADV`, `APPO`, `APPR`, `APPRART`, `APZR`, `ART`, `CARD`, `FM`, `ITJ`, `KOKOM`, `KON`, `KOUI`, `KOUS`, `NE`, `NN`, `NNE`, `PDAT`, `PDS`, `PIAT`, `PIS`, `PPER`, `PPOSAT`, `PPOSS`, `PRELAT`, `PRELS`, `PRF`, `PROAV`, `PTKA`, `PTKANT`, `PTKNEG`, `PTKVZ`, `PTKZU`, `PWAT`, `PWAV`, `PWS`, `TRUNC`, `VAFIN`, `VAIMP`, `VAINF`, `VAPP`, `VMFIN`, `VMINF`, `VMPP`, `VVFIN`, `VVIMP`, `VVINF`, `VVIZU`, `VVPP`, `XY`, `_SP` |
| **`morphologizer`** | `POS=PUNCT`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `POS=VERB\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Degree=Pos\|POS=ADV`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADP`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `POS=SCONJ`, `Case=Acc\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `POS=VERB\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PART`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=PROPN`, `POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=NUM`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADP`, `Gender=Neut\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PROPN`, 
`Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=SCONJ\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADP`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Degree=Cmp\|POS=ADV`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADP`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=X`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, 
`Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=SPACE`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=AUX\|VerbForm=Inf`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=AUX\|VerbForm=Part`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, 
`Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `Degree=Sup\|POS=ADV`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|POS=NOUN`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, 
`Case=Gen\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, 
`Case=Nom\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|POS=NOUN`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|POS=PROPN`, `Case=Gen\|Definite=Def\|POS=DET\|PronType=Art`, `Case=Gen\|POS=PROPN`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|POS=PRON\|PronType=Dem`, `Definite=Ind\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Neut\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Ind`, 
`Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `ac`, `adc`, `ag`, `ams`, `app`, `avc`, `cc`, `cd`, `cj`, `cm`, `cp`, `cvc`, `da`, `dep`, `dm`, `ep`, `ju`, `mnr`, `mo`, `ng`, `nk`, `nmc`, `oa`, `oc`, `og`, `op`, `par`, `pd`, `pg`, `ph`, `pm`, `pnc`, `punct`, `rc`, `re`, `rs`, `sb`, `sbp`, `svp`, `uc`, `vo` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.96 |
| `TOKEN_P` | 99.92 |
| `TOKEN_R` | 99.90 |
| `TOKEN_F` | 99.91 |
| `TAG_ACC` | 97.96 |
| `POS_ACC` | 98.41 |
| `MORPH_ACC` | 92.06 |
| `MORPH_MICRO_P` | 96.01 |
| `MORPH_MICRO_R` | 95.99 |
| `MORPH_MICRO_F` | 96.00 |
| `SENTS_P` | 95.18 |
| `SENTS_R` | 96.48 |
| `SENTS_F` | 95.41 |
| `DEP_UAS` | 92.66 |
| `DEP_LAS` | 90.78 |
| `LEMMA_ACC` | 97.91 |
| `ENTS_P` | 85.27 |
| `ENTS_R` | 84.44 |
| `ENTS_F` | 84.85 | | c5d28b6d137c9ff9b6db956fa5034dc0 |
Helsinki-NLP/opus-mt-tc-base-tr-uk | Helsinki-NLP | marian | 13 | 3 | transformers | 0 | translation | true | true | false | cc-by-4.0 | ['tr', 'uk'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['translation', 'opus-mt-tc'] | true | true | true | 5,252 | false | # opus-mt-tc-base-tr-uk
Neural machine translation model for translating from Turkish (tr) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-07
* source language(s):
* target language(s): ukr
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip)
* more information released models: [OPUS-MT tur-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"1000 yen yeterli mi?",
"Zürih, İsviçre'de bir şehirdir."
]
model_name = "Helsinki-NLP/opus-mt-tc-base-tr-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Чи достатньо 1000 ієн?
# Цюрих - місто в Швейцарії.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-tr-uk")
print(pipe("1000 yen yeterli mi?"))
# expected output: Чи достатньо 1000 ієн?
```
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| tur-ukr | tatoeba-test-v2021-08-07 | 0.63573 | 40.5 | 2520 | 13079 |
| tur-ukr | flores101-devtest | 0.49944 | 19.9 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 03:37:19 EET 2022
* port machine: LM0-400-22516.local
| a67b82c3537c9a5198392c1a98eb0413 |
Tirendaz/distilbert-base-uncased-finetuned-emotion | Tirendaz | distilbert | 14 | 7 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2243
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
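A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Tirendaz/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy with how this turned out!"))
```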
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.866 | 1.0 | 250 | 0.3365 | 0.896 | 0.8905 |
| 0.2626 | 2.0 | 500 | 0.2243 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 7e82126e676ecb6285d897d0edf521e7 |
Nobody138/xlm-roberta-base-finetuned-panx-en | Nobody138 | xlm-roberta | 10 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they might map onto `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
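A sketch of how these settings might map onto the Hugging Face `TrainingArguments`; the output directory is an assumed name, and the Adam betas/epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-en",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```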
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| a239a7bacf8cdf14503c957e07b35b79 |
KoichiYasuoka/deberta-base-thai-ud-goeswith | KoichiYasuoka | deberta-v2 | 10 | 444 | transformers | 0 | token-classification | true | false | false | apache-2.0 | ['th'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['thai', 'token-classification', 'pos', 'dependency-parsing'] | false | true | true | 2,713 | false |
# deberta-base-thai-ud-goeswith
## Model Description
This is a DeBERTa(V2) model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-base-thai](https://huggingface.co/KoichiYasuoka/deberta-base-thai).
## How to Use
```py
class UDgoeswith(object):
  def __init__(self,bert):
    from transformers import AutoTokenizer,AutoModelForTokenClassification
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForTokenClassification.from_pretrained(bert)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=self.tokenizer(text,return_offsets_mapping=True)
    v=w["input_ids"]
    x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)]
    with torch.no_grad():
      e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:]
    r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())]
    e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan)
    g=self.model.config.label2id["X|_|goeswith"]
    r=numpy.tri(e.shape[0])
    for i in range(e.shape[0]):
      for j in range(i+2,e.shape[1]):
        r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1
    e[:,:,g]+=numpy.where(r==0,0,numpy.nan)
    m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan)
    m[1:,1:]=numpy.nanmax(e,axis=2).transpose()
    p=numpy.zeros(m.shape)
    p[1:,1:]=numpy.nanargmax(e,axis=2).transpose()
    for i in range(1,m.shape[0]):
      m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan)
      m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)]
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    u="# text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/deberta-base-thai-ud-goeswith")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).
Or without ufal.chu-liu-edmonds:
```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-base-thai-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
| a449056c5c39d290a50b53e7000802cc |
ricardo-filho/bert_base_tcm_no_objeto_0.8 | ricardo-filho | bert | 19 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 9,334 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_tcm_no_objeto_0.8
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
- Criterio Julgamento Precision: 0.7444
- Criterio Julgamento Recall: 0.8684
- Criterio Julgamento F1: 0.8016
- Criterio Julgamento Number: 114
- Data Sessao Precision: 0.7297
- Data Sessao Recall: 0.9153
- Data Sessao F1: 0.8120
- Data Sessao Number: 59
- Modalidade Licitacao Precision: 0.9412
- Modalidade Licitacao Recall: 0.9697
- Modalidade Licitacao F1: 0.9552
- Modalidade Licitacao Number: 462
- Numero Exercicio Precision: 0.9018
- Numero Exercicio Recall: 0.9619
- Numero Exercicio F1: 0.9309
- Numero Exercicio Number: 210
- Valor Objeto Precision: 0.7778
- Valor Objeto Recall: 0.8537
- Valor Objeto F1: 0.8140
- Valor Objeto Number: 41
- Overall Precision: 0.8803
- Overall Recall: 0.9458
- Overall F1: 0.9119
- Overall Accuracy: 0.9983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.012 | 1.0 | 2863 | 0.0099 | 0.7059 | 0.8421 | 0.7680 | 114 | 0.7013 | 0.9153 | 0.7941 | 59 | 0.9366 | 0.9589 | 0.9476 | 462 | 0.9136 | 0.9571 | 0.9349 | 210 | 0.5902 | 0.8780 | 0.7059 | 41 | 0.8583 | 0.9368 | 0.8958 | 0.9974 |
| 0.0095 | 2.0 | 5726 | 0.0076 | 0.8095 | 0.8947 | 0.8500 | 114 | 0.6935 | 0.7288 | 0.7107 | 59 | 0.9346 | 0.9589 | 0.9466 | 462 | 0.9054 | 0.9571 | 0.9306 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.8901 | 0.9323 | 0.9107 | 0.9981 |
| 0.005 | 3.0 | 8589 | 0.0091 | 0.7574 | 0.9035 | 0.8240 | 114 | 0.6471 | 0.9322 | 0.7639 | 59 | 0.9371 | 0.9675 | 0.9521 | 462 | 0.9091 | 0.9524 | 0.9302 | 210 | 0.7660 | 0.8780 | 0.8182 | 41 | 0.8715 | 0.9492 | 0.9087 | 0.9978 |
| 0.0042 | 4.0 | 11452 | 0.0076 | 0.7444 | 0.8684 | 0.8016 | 114 | 0.7297 | 0.9153 | 0.8120 | 59 | 0.9412 | 0.9697 | 0.9552 | 462 | 0.9018 | 0.9619 | 0.9309 | 210 | 0.7778 | 0.8537 | 0.8140 | 41 | 0.8803 | 0.9458 | 0.9119 | 0.9983 |
| 0.004 | 5.0 | 14315 | 0.0100 | 0.7373 | 0.7632 | 0.7500 | 114 | 0.7534 | 0.9322 | 0.8333 | 59 | 0.9124 | 0.9697 | 0.9402 | 462 | 0.9196 | 0.9810 | 0.9493 | 210 | 0.76 | 0.9268 | 0.8352 | 41 | 0.8724 | 0.9413 | 0.9055 | 0.9979 |
| 0.0041 | 6.0 | 17178 | 0.0103 | 0.7377 | 0.7895 | 0.7627 | 114 | 0.75 | 0.8644 | 0.8031 | 59 | 0.9492 | 0.9697 | 0.9593 | 462 | 0.92 | 0.9857 | 0.9517 | 210 | 0.7872 | 0.9024 | 0.8409 | 41 | 0.8919 | 0.9402 | 0.9154 | 0.9980 |
| 0.002 | 7.0 | 20041 | 0.0092 | 0.7984 | 0.8684 | 0.8319 | 114 | 0.68 | 0.8644 | 0.7612 | 59 | 0.9471 | 0.9697 | 0.9583 | 462 | 0.9196 | 0.9810 | 0.9493 | 210 | 0.7872 | 0.9024 | 0.8409 | 41 | 0.8918 | 0.9492 | 0.9196 | 0.9983 |
| 0.0014 | 8.0 | 22904 | 0.0100 | 0.8033 | 0.8596 | 0.8305 | 114 | 0.7612 | 0.8644 | 0.8095 | 59 | 0.9532 | 0.9697 | 0.9614 | 462 | 0.9186 | 0.9667 | 0.9420 | 210 | 0.8222 | 0.9024 | 0.8605 | 41 | 0.9049 | 0.9447 | 0.9244 | 0.9983 |
| 0.0015 | 9.0 | 25767 | 0.0108 | 0.7787 | 0.8333 | 0.8051 | 114 | 0.7067 | 0.8983 | 0.7910 | 59 | 0.9513 | 0.9719 | 0.9615 | 462 | 0.9107 | 0.9714 | 0.9401 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.8943 | 0.9458 | 0.9194 | 0.9984 |
| 0.0008 | 10.0 | 28630 | 0.0112 | 0.7934 | 0.8421 | 0.8170 | 114 | 0.7222 | 0.8814 | 0.7939 | 59 | 0.9533 | 0.9719 | 0.9625 | 462 | 0.9193 | 0.9762 | 0.9469 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.9012 | 0.9470 | 0.9235 | 0.9984 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 0260721f35671450970a311a020a1b3f |
kit-nlp/bert-base-japanese-sentiment-cyberbullying | kit-nlp | bert | 9 | 177 | transformers | 1 | text-classification | true | false | true | cc-by-sa-4.0 | ['ja'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,227 | false |
# bert-base-japanese-sentiment-cyberbullying
This is a BERT Base model for the Japanese language finetuned for automatic cyberbullying detection.
The model was based on [daigo's BERT Base for Japanese sentiment analysis](https://huggingface.co/daigo/bert-base-japanese-sentiment), and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset".
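A minimal usage sketch with the `transformers` text-classification pipeline (the input sentence is illustrative, and the exact label names are model-specific; the underlying Japanese tokenizer may additionally require `fugashi` and `ipadic`):

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="kit-nlp/bert-base-japanese-sentiment-cyberbullying",
)
# Illustrative (harmless) Japanese input; real inputs would be social media comments.
print(detector("また会えるのを楽しみにしています。"))
```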
## Licenses
The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
## Citations
Please cite this model using the following citation.
```
@inproceedings{tanabe2022bert-base-cyberbullying,
title={北見工業大学 テキスト情報処理研究室 BERT Base ネットいじめ検出モデル (Daigo ver.)},
author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人},
publisher={HuggingFace},
year={2022},
url = "https://huggingface.co/kit-nlp/bert-base-japanese-sentiment-cyberbullying"
}
```
| f3b078fc87b9a90136d5a7bf39de91eb |
danielsaggau/lbert_scotus_classsification | danielsaggau | bert | 10 | 5 | transformers | 0 | text-classification | true | false | false | cc-by-sa-4.0 | null | ['lex_glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 934 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lbert_scotus_classsification
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 4cf7cb517c694ae5008f1841621e39ba |
kontur-ai/sbert_punc_case_ru | kontur-ai | bert | 10 | 344 | transformers | 10 | token-classification | true | false | false | apache-2.0 | ['ru'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PyTorch', 'Transformers'] | false | true | true | 1,445 | false |
# SbertPuncCase
SbertPuncCase is a punctuation and case restoration model for Russian. The model can insert periods, commas, and question marks,
and determine the case of each word: lowercase, capitalized (first letter uppercase), or entirely uppercase.
The model was developed to restore text produced by speech recognition, so it works on lowercase input strings.
It is built on top of [sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru).
Text transcripts of interviews were used as training data.
# How it works
1. The text is lowercased and split into words.
2. The words are split into tokens.
3. The model (by analogy with an NER task) predicts a class for each token, out of 12 classes: (3+1) punctuation marks × 3 case variants.
4. A decoding function restores the text according to the predicted classes (a minimal sketch follows below).
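For illustration, a minimal decoding sketch. The label names below (`LOWER`/`TITLE`/`UPPER` for case, `NONE`/`PERIOD`/`COMMA`/`QUESTION` for punctuation) are hypothetical stand-ins for the 12 classes; the real names live in `sbertpunccase.py`:

```python
# Hypothetical label scheme "<CASE>_<PUNCT>": 3 case variants x 4 punctuation marks = 12 classes.
PUNCT = {"NONE": "", "PERIOD": ".", "COMMA": ",", "QUESTION": "?"}

def apply_case(word: str, case: str) -> str:
    if case == "TITLE":   # first letter uppercase
        return word.capitalize()
    if case == "UPPER":   # whole word uppercase
        return word.upper()
    return word           # "LOWER": keep as-is

def decode(words, labels):
    # words: lowercased input tokens; labels: one predicted class per word
    out = [apply_case(w, l.split("_", 1)[0]) + PUNCT[l.split("_", 1)[1]]
           for w, l in zip(words, labels)]
    return " ".join(out)

print(decode(["привет", "как", "дела"], ["TITLE_COMMA", "LOWER_NONE", "LOWER_QUESTION"]))
# Привет, как дела?
```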
# How to use
The model code is located in the file `sbert-punc-case-ru/sbertpunccase.py`.
For a quick installation, you can use the command:
```
pip install git+https://huggingface.co/kontur-ai/sbert_punc_case_ru
```
Using the model:
```
from sbert_punc_case_ru import SbertPuncCase
model = SbertPuncCase()
model.punctuate("sbert punc case расставляет точки запятые и знаки вопроса вам нравится")
```
# Authors
[Almira Murtazina](https://github.com/almiradreamer)
[Alexander Abugaliev](https://github.com/Squire-tomsk)
WillHeld/en-bert-xnli | WillHeld | bert | 13 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['xnli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 909 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-bert-xnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the xnli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
| 488e14250d86984eea14a6f6427aedbc |
christofid/pgt | christofid | gpt2 | 10 | 2 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,509 | false | ### PGT
PGT is a GPT-2 prompt-based model trained to facilitate 3 patent generation-related tasks, namely: *part-of-patent generation*, *part-of-patent editing* and *patent coherence check*. For more information about the dataset and the training procedure we refer the reader to [our paper](https://openreview.net/pdf?id=dLHtwZKvJmE).
The task specification is taken place by appending a short sentence at the end of a given input. The general format is:
`input <|sep|> task specific prompt <|sep|>`
In all cases, the generated output ends with the special token <|endoftext|> to facilitate postprocessing.
### Supported tasks
**Part-of-patent generation** attempts to generate a part of a patent given as input another, already existing part of it. The model has been trained to perform title-to-abstract, abstract-to-claim as well as their inverse generations. For the claim case, the model was only exposed to independent claims during the training. Input example for part-of-patent generation for the abstract-to-title case:
`An interesting patent abstract. <|sep|> Given the above abstract, suggest a title <|sep|>`
**Part-of-patent editing** attempts to suggest alternatives for some highlighted parts of a patent abstract or claim. These parts are marked in the input with the special [MASK] token. The expected size of these masked parts can range from a single word to a small phrase. If more than one mask is given in the input, the generated suggestions are distinguished in the output by the special <|mask_sep|> token. Input example for part-of-patent editing working on a claim input:
`An interesting patent claim with a [MASK] part. <|sep|> Replace the [MASK] tokens in the above claim <|sep|>`
The **coherence check** assesses the quality of a patent by examining whether two given parts of a patent could belong to the same patent in terms of content and syntax. The input patent parts can be title, abstract or claim. The expected output is Yes or No. Input example for the coherence check task having as input a title and a claim:
`A patent title <|sep|> An interesting patent claim. <|sep|> Do the above title and claim belong to the same patent? <|sep|>`
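A sketch of running the coherence check end to end (the title and claim strings are placeholders, and greedy decoding is an illustrative choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("christofid/pgt")
model = AutoModelForCausalLM.from_pretrained("christofid/pgt")

prompt = ("Automated patent generation <|sep|> "
          "1. A system comprising a language model trained on patent text. <|sep|> "
          "Do the above title and claim belong to the same patent? <|sep|>")

ids = tokenizer.encode(prompt, return_tensors="pt")
out = model.generate(ids, do_sample=False, max_length=ids.shape[1] + 5)
answer = tokenizer.decode(out[0, ids.shape[1]:]).split("<|endoftext|>")[0].strip()
print(answer)  # expected: "Yes" or "No"
```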
Further prompts and tasks can be tried in a zero-shot fashion.
The model and the tasks are also integrated and available via the [GT4SD python library](https://github.com/GT4SD/gt4sd-core/blob/main/notebooks/explore-pgt.ipynb).
### Example
A full example of part-of-patent generation
```
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("christofid/pgt")
model = AutoModelForCausalLM.from_pretrained("christofid/pgt")
text = "Automated patent generation <|sep|> Given the above title, suggest an abstract <|sep|>"
text_encoded = tokenizer.encode(text, return_tensors="pt")
generated = model.generate(text_encoded, do_sample=True, top_k=50, num_return_sequences = 3, max_length=512)
generated_text = [tokenizer.decode(case).split("<|endoftext|>")[0].strip() for case in generated]
```
### BibTeX entry and citation info
```
@inproceedings{christofidellis2022pgt,
title={PGT: a prompt based generative transformer for the patent domain},
author={Christofidellis, Dimitrios and Torres, Antonio Berrios and Dave, Ashish and Roveri, Manuel and Schmidt, Kristin and Swaminathan, Sarath and Vandierendonck, Hans and Zubarev, Dmitry and Manica, Matteo},
booktitle={ICML 2022 Workshop on Knowledge Retrieval and Language Models},
year={2022}
}
```
| 394ca01da22102c5b387849658e762ce |
niclas/model_en | niclas | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 5,250 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_en
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8610
- Wer: 0.2641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 6.3443 | 3.05 | 250 | 3.0966 | 1.0 |
| 2.9847 | 6.1 | 500 | 3.0603 | 1.0 |
| 2.9263 | 9.15 | 750 | 2.9131 | 1.0 |
| 2.2584 | 12.19 | 1000 | 1.4318 | 0.6575 |
| 1.2603 | 15.24 | 1250 | 1.1964 | 0.4994 |
| 0.9182 | 18.29 | 1500 | 1.1494 | 0.4485 |
| 0.7462 | 21.34 | 1750 | 1.2171 | 0.4357 |
| 0.6129 | 24.39 | 2000 | 1.0557 | 0.3468 |
| 0.5364 | 27.44 | 2250 | 1.1069 | 0.4222 |
| 0.4607 | 30.48 | 2500 | 1.3270 | 0.3370 |
| 0.4139 | 33.53 | 2750 | 1.1814 | 0.3658 |
| 0.3587 | 36.58 | 3000 | 1.2423 | 0.3419 |
| 0.321 | 39.63 | 3250 | 1.2931 | 0.3211 |
| 0.2961 | 42.68 | 3500 | 1.1409 | 0.3315 |
| 0.2635 | 45.73 | 3750 | 1.4537 | 0.3241 |
| 0.2498 | 48.78 | 4000 | 1.2643 | 0.3192 |
| 0.2352 | 51.82 | 4250 | 1.2789 | 0.3278 |
| 0.2193 | 54.87 | 4500 | 1.4220 | 0.3021 |
| 0.2068 | 57.92 | 4750 | 1.3567 | 0.3713 |
| 0.2055 | 60.97 | 5000 | 1.5375 | 0.3051 |
| 0.198 | 64.02 | 5250 | 1.2676 | 0.2782 |
| 0.1835 | 67.07 | 5500 | 1.3905 | 0.2825 |
| 0.1655 | 70.12 | 5750 | 1.7000 | 0.2978 |
| 0.1677 | 73.17 | 6000 | 1.4250 | 0.2812 |
| 0.1522 | 76.22 | 6250 | 1.4220 | 0.2941 |
| 0.1522 | 79.27 | 6500 | 1.5195 | 0.3021 |
| 0.1344 | 82.32 | 6750 | 1.3749 | 0.2996 |
| 0.1298 | 85.36 | 7000 | 1.6663 | 0.2849 |
| 0.1293 | 88.41 | 7250 | 1.4564 | 0.2892 |
| 0.1264 | 91.46 | 7500 | 1.4373 | 0.2935 |
| 0.1243 | 94.51 | 7750 | 1.6572 | 0.2972 |
| 0.1141 | 97.56 | 8000 | 1.4936 | 0.2892 |
| 0.1086 | 100.61 | 8250 | 1.5231 | 0.2868 |
| 0.1056 | 103.65 | 8500 | 1.3733 | 0.2763 |
| 0.098 | 106.7 | 8750 | 1.4887 | 0.2923 |
| 0.0984 | 109.75 | 9000 | 1.3779 | 0.2923 |
| 0.0916 | 112.8 | 9250 | 1.4868 | 0.2604 |
| 0.0881 | 115.85 | 9500 | 1.7991 | 0.2996 |
| 0.0846 | 118.9 | 9750 | 1.5845 | 0.2849 |
| 0.0861 | 121.95 | 10000 | 1.6684 | 0.2794 |
| 0.0806 | 124.99 | 10250 | 1.5774 | 0.3039 |
| 0.0822 | 128.05 | 10500 | 1.5928 | 0.2886 |
| 0.0788 | 131.1 | 10750 | 1.6158 | 0.2880 |
| 0.0704 | 134.15 | 11000 | 1.7679 | 0.2941 |
| 0.0721 | 137.19 | 11250 | 1.7055 | 0.2629 |
| 0.0723 | 140.24 | 11500 | 1.5473 | 0.2653 |
| 0.0676 | 143.29 | 11750 | 1.8963 | 0.2745 |
| 0.0665 | 146.34 | 12000 | 1.6367 | 0.2739 |
| 0.0618 | 149.39 | 12250 | 1.6757 | 0.2745 |
| 0.0595 | 152.44 | 12500 | 1.5900 | 0.2745 |
| 0.056 | 155.48 | 12750 | 1.5362 | 0.2794 |
| 0.0587 | 158.53 | 13000 | 1.4616 | 0.2684 |
| 0.0519 | 161.58 | 13250 | 1.6867 | 0.2549 |
| 0.0569 | 164.63 | 13500 | 1.8294 | 0.2574 |
| 0.0497 | 167.68 | 13750 | 1.7844 | 0.2868 |
| 0.0531 | 170.73 | 14000 | 1.7564 | 0.2770 |
| 0.0489 | 173.78 | 14250 | 1.5811 | 0.2629 |
| 0.0524 | 176.82 | 14500 | 1.6925 | 0.2684 |
| 0.0431 | 179.87 | 14750 | 1.7236 | 0.2653 |
| 0.0457 | 182.92 | 15000 | 1.7460 | 0.2512 |
| 0.045 | 185.97 | 15250 | 1.8096 | 0.2610 |
| 0.0402 | 189.02 | 15500 | 1.8795 | 0.2635 |
| 0.0529 | 192.07 | 15750 | 1.8310 | 0.2616 |
| 0.0396 | 195.12 | 16000 | 1.8380 | 0.2635 |
| 0.0432 | 198.17 | 16250 | 1.8610 | 0.2641 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.13.3
- Tokenizers 0.10.3
| ca7602bd34b9aba50cdb813a1bc1ffc8 |
fimster/whisper-small-sv-SE | fimster | whisper | 12 | 9 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sv'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['i-dont-know-what-im-doing', 'generated_from_trainer'] | true | true | true | 1,482 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small sv-SE - Lab 2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3278
- Wer: 19.7736
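A minimal inference sketch using the ASR pipeline (the audio path is a placeholder; when given a file path, the pipeline decodes and resamples the audio itself):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="fimster/whisper-small-sv-SE")
print(asr("swedish_sample.wav")["text"])  # placeholder audio file
```

For clips longer than the model's 30-second window, passing `chunk_length_s=30` to the pipeline call lets it transcribe in chunks.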
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1378 | 1.29 | 1000 | 0.2953 | 21.4165 |
| 0.0475 | 2.59 | 2000 | 0.2913 | 20.2495 |
| 0.0186 | 3.88 | 3000 | 0.3027 | 19.8193 |
| 0.0042 | 5.17 | 4000 | 0.3278 | 19.7736 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 660ad473cf5e98d33917fcdd6cbc3dae |
moaiz237/wav2vec2-base-timit-moaiz_exp2_new | moaiz237 | wav2vec2 | 12 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_exp2_new
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6849
- Wer: 0.5396
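A minimal CTC inference sketch (the WAV path is a placeholder; input audio must be mono 16 kHz, matching wav2vec2-base):

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "moaiz237/wav2vec2-base-timit-moaiz_exp2_new"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # placeholder file, resampled to 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```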
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1266 | 13.89 | 500 | 1.0233 | 0.7034 |
| 0.5928 | 27.78 | 1000 | 0.6849 | 0.5396 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 1a087cd002266e67a48efbe9ea754507 |
responsibility-framing/predict-perception-bert-blame-assassin | responsibility-framing | bert | 12 | 19 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 7,663 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-assassin
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5128
- Rmse: 1.0287
- Rmse Blame::a L'assassino: 1.0287
- Mae: 0.8883
- Mae Blame::a L'assassino: 0.8883
- R2: 0.5883
- R2 Blame::a L'assassino: 0.5883
- Cos: 0.6522
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5795
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------------------------:|:------:|:------------------------:|:------:|:-----------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0184 | 1.0 | 15 | 1.2219 | 1.5879 | 1.5879 | 1.4308 | 1.4308 | 0.0191 | 0.0191 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan |
| 0.9214 | 2.0 | 30 | 1.0927 | 1.5017 | 1.5017 | 1.3634 | 1.3634 | 0.1227 | 0.1227 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan |
| 0.7809 | 3.0 | 45 | 0.8206 | 1.3013 | 1.3013 | 1.1808 | 1.1808 | 0.3412 | 0.3412 | 0.4783 | 0.0 | 0.5 | 0.3819 | nan |
| 0.6593 | 4.0 | 60 | 0.5894 | 1.1029 | 1.1029 | 1.0145 | 1.0145 | 0.5268 | 0.5268 | 0.7391 | 0.0 | 0.5 | 0.6408 | nan |
| 0.4672 | 5.0 | 75 | 0.4759 | 0.9910 | 0.9910 | 0.8868 | 0.8868 | 0.6180 | 0.6180 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.3356 | 6.0 | 90 | 0.4220 | 0.9332 | 0.9332 | 0.8083 | 0.8083 | 0.6612 | 0.6612 | 0.6522 | 0.0 | 0.5 | 0.4249 | nan |
| 0.2782 | 7.0 | 105 | 0.4477 | 0.9612 | 0.9612 | 0.8046 | 0.8046 | 0.6406 | 0.6406 | 0.6522 | 0.0 | 0.5 | 0.6101 | nan |
| 0.2075 | 8.0 | 120 | 0.4389 | 0.9518 | 0.9518 | 0.8050 | 0.8050 | 0.6476 | 0.6476 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.1725 | 9.0 | 135 | 0.4832 | 0.9985 | 0.9985 | 0.8356 | 0.8356 | 0.6121 | 0.6121 | 0.7391 | 0.0 | 0.5 | 0.6616 | nan |
| 0.1642 | 10.0 | 150 | 0.4368 | 0.9494 | 0.9494 | 0.8060 | 0.8060 | 0.6493 | 0.6493 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.1172 | 11.0 | 165 | 0.4538 | 0.9677 | 0.9677 | 0.8174 | 0.8174 | 0.6357 | 0.6357 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.104 | 12.0 | 180 | 0.4672 | 0.9819 | 0.9819 | 0.8384 | 0.8384 | 0.6249 | 0.6249 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0822 | 13.0 | 195 | 0.4401 | 0.9530 | 0.9530 | 0.8107 | 0.8107 | 0.6467 | 0.6467 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0755 | 14.0 | 210 | 0.4464 | 0.9598 | 0.9598 | 0.8251 | 0.8251 | 0.6416 | 0.6416 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0801 | 15.0 | 225 | 0.4834 | 0.9988 | 0.9988 | 0.8604 | 0.8604 | 0.6119 | 0.6119 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.053 | 16.0 | 240 | 0.4846 | 1.0001 | 1.0001 | 0.8651 | 0.8651 | 0.6109 | 0.6109 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0573 | 17.0 | 255 | 0.4970 | 1.0128 | 1.0128 | 0.8743 | 0.8743 | 0.6010 | 0.6010 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0571 | 18.0 | 270 | 0.4803 | 0.9956 | 0.9956 | 0.8503 | 0.8503 | 0.6144 | 0.6144 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0483 | 19.0 | 285 | 0.4936 | 1.0093 | 1.0093 | 0.8740 | 0.8740 | 0.6037 | 0.6037 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0414 | 20.0 | 300 | 0.5138 | 1.0297 | 1.0297 | 0.8943 | 0.8943 | 0.5875 | 0.5875 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0513 | 21.0 | 315 | 0.5240 | 1.0399 | 1.0399 | 0.9050 | 0.9050 | 0.5793 | 0.5793 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0499 | 22.0 | 330 | 0.5275 | 1.0434 | 1.0434 | 0.9048 | 0.9048 | 0.5765 | 0.5765 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0423 | 23.0 | 345 | 0.5350 | 1.0508 | 1.0508 | 0.8872 | 0.8872 | 0.5705 | 0.5705 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0447 | 24.0 | 360 | 0.4963 | 1.0120 | 1.0120 | 0.8754 | 0.8754 | 0.6016 | 0.6016 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0364 | 25.0 | 375 | 0.5009 | 1.0167 | 1.0167 | 0.8809 | 0.8809 | 0.5979 | 0.5979 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0412 | 26.0 | 390 | 0.5060 | 1.0219 | 1.0219 | 0.8781 | 0.8781 | 0.5938 | 0.5938 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0297 | 27.0 | 405 | 0.5027 | 1.0185 | 1.0185 | 0.8838 | 0.8838 | 0.5964 | 0.5964 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0416 | 28.0 | 420 | 0.5071 | 1.0230 | 1.0230 | 0.8867 | 0.8867 | 0.5929 | 0.5929 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0327 | 29.0 | 435 | 0.5124 | 1.0283 | 1.0283 | 0.8883 | 0.8883 | 0.5887 | 0.5887 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0383 | 30.0 | 450 | 0.5128 | 1.0287 | 1.0287 | 0.8883 | 0.8883 | 0.5883 | 0.5883 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| 0626fb33eb37e68553212cf0fc7fb835 |
jonatasgrosman/exp_w2v2t_nl_unispeech-ml_s498 | jonatasgrosman | unispeech | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['nl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'nl'] | false | true | true | 500 | false | # exp_w2v2t_nl_unispeech-ml_s498
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
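A minimal transcription sketch with HuggingSound (the file paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_nl_unispeech-ml_s498")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```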
| 7cc8f935f8c6e2295d50dd09aa9da188 |
aXhyra/emotion_trained_1234567 | aXhyra | distilbert | 10 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['tweet_eval'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,401 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9051
- F1: 0.7302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6480 | 0.7231 |
| No log | 2.0 | 408 | 0.6114 | 0.7403 |
| 0.5045 | 3.0 | 612 | 0.7592 | 0.7311 |
| 0.5045 | 4.0 | 816 | 0.9051 | 0.7302 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| e31124506264fea723ad937d347ab17d |
jonatasgrosman/exp_w2v2t_de_r-wav2vec2_s460 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'de'] | false | true | true | 462 | false | # exp_w2v2t_de_r-wav2vec2_s460
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 53bb6ded9dd6b3fbb50b67cf95cc2092 |
comehu/sm64-ost | comehu | null | 13 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'music', 'generation', 'tensorflow'] | false | true | true | 1,034 | false |
# Musika Model: musika_sm64_ost
## Model provided by: comehu
Pretrained musika_sm64_ost model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained musika_sm64_ost model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
| 96af76f74e7d707b279551f8c3dbe229 |
tiennvcs/bert-large-uncased-finetuned-docvqa | tiennvcs | bert | 12 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 7,214 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-docvqa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.5228 | 0.05 | 1000 | 2.6645 |
| 2.4909 | 0.1 | 2000 | 2.8985 |
| 2.1679 | 0.16 | 3000 | 2.3551 |
| 1.9451 | 0.21 | 4000 | 2.2226 |
| 1.6814 | 0.26 | 5000 | 2.1590 |
| 1.8868 | 0.31 | 6000 | 2.6197 |
| 1.6618 | 0.36 | 7000 | 2.3632 |
| 1.8313 | 0.41 | 8000 | 2.4519 |
| 1.7017 | 0.47 | 9000 | 2.2682 |
| 1.8169 | 0.52 | 10000 | 2.4486 |
| 1.7074 | 0.57 | 11000 | 2.3862 |
| 1.7674 | 0.62 | 12000 | 2.1801 |
| 1.8134 | 0.67 | 13000 | 2.3032 |
| 1.8334 | 0.73 | 14000 | 2.4205 |
| 1.6819 | 0.78 | 15000 | 2.2398 |
| 1.5846 | 0.83 | 16000 | 2.3834 |
| 1.6758 | 0.88 | 17000 | 1.9683 |
| 1.6303 | 0.93 | 18000 | 2.3297 |
| 1.5652 | 0.98 | 19000 | 2.0581 |
| 1.3045 | 1.04 | 20000 | 2.4950 |
| 1.2393 | 1.09 | 21000 | 2.6622 |
| 1.1526 | 1.14 | 22000 | 2.3749 |
| 1.2631 | 1.19 | 23000 | 2.3915 |
| 1.1846 | 1.24 | 24000 | 2.2592 |
| 1.2731 | 1.3 | 25000 | 2.4239 |
| 1.3057 | 1.35 | 26000 | 2.2920 |
| 1.134 | 1.4 | 27000 | 2.3107 |
| 1.2017 | 1.45 | 28000 | 2.4271 |
| 1.2202 | 1.5 | 29000 | 2.1814 |
| 1.2179 | 1.56 | 30000 | 2.3365 |
| 1.2359 | 1.61 | 31000 | 2.1256 |
| 1.1964 | 1.66 | 32000 | 2.1720 |
| 1.269 | 1.71 | 33000 | 2.4363 |
| 1.1812 | 1.76 | 34000 | 2.2372 |
| 1.2187 | 1.81 | 35000 | 2.2318 |
| 1.1805 | 1.87 | 36000 | 2.3693 |
| 1.1458 | 1.92 | 37000 | 2.5128 |
| 1.1958 | 1.97 | 38000 | 2.1311 |
| 0.8924 | 2.02 | 39000 | 2.4635 |
| 0.869 | 2.07 | 40000 | 2.8231 |
| 0.8333 | 2.13 | 41000 | 2.6762 |
| 0.9194 | 2.18 | 42000 | 2.4588 |
| 0.8089 | 2.23 | 43000 | 2.6443 |
| 0.8612 | 2.28 | 44000 | 2.4300 |
| 0.7981 | 2.33 | 45000 | 2.7418 |
| 0.9765 | 2.38 | 46000 | 2.6543 |
| 0.8646 | 2.44 | 47000 | 2.5990 |
| 1.0316 | 2.49 | 48000 | 2.4625 |
| 0.9862 | 2.54 | 49000 | 2.4691 |
| 1.027 | 2.59 | 50000 | 2.4156 |
| 0.9412 | 2.64 | 51000 | 2.4204 |
| 0.9353 | 2.7 | 52000 | 2.4933 |
| 0.9509 | 2.75 | 53000 | 2.4708 |
| 0.9351 | 2.8 | 54000 | 2.5351 |
| 0.9968 | 2.85 | 55000 | 2.2506 |
| 1.025 | 2.9 | 56000 | 2.6317 |
| 1.627 | 2.95 | 57000 | 2.7843 |
| 0.9294 | 3.01 | 58000 | 2.9396 |
| 0.6043 | 3.06 | 59000 | 3.1560 |
| 0.7903 | 3.11 | 60000 | 2.8330 |
| 0.7373 | 3.16 | 61000 | 2.9422 |
| 0.6499 | 3.21 | 62000 | 3.0948 |
| 0.6411 | 3.27 | 63000 | 2.7900 |
| 0.625 | 3.32 | 64000 | 2.5268 |
| 0.6264 | 3.37 | 65000 | 2.8701 |
| 0.6143 | 3.42 | 66000 | 3.2544 |
| 0.6286 | 3.47 | 67000 | 2.6208 |
| 0.739 | 3.53 | 68000 | 2.8107 |
| 0.5981 | 3.58 | 69000 | 2.8073 |
| 0.6502 | 3.63 | 70000 | 2.6293 |
| 0.6548 | 3.68 | 71000 | 2.9501 |
| 0.7243 | 3.73 | 72000 | 2.7917 |
| 0.598 | 3.78 | 73000 | 2.9341 |
| 0.6159 | 3.84 | 74000 | 2.7629 |
| 0.5905 | 3.89 | 75000 | 2.6441 |
| 0.6393 | 3.94 | 76000 | 2.6660 |
| 0.677 | 3.99 | 77000 | 2.7616 |
| 0.3281 | 4.04 | 78000 | 3.6873 |
| 0.4524 | 4.1 | 79000 | 3.3441 |
| 0.3994 | 4.15 | 80000 | 3.3129 |
| 0.4686 | 4.2 | 81000 | 3.1813 |
| 0.5293 | 4.25 | 82000 | 2.9088 |
| 0.3961 | 4.3 | 83000 | 3.0765 |
| 0.4406 | 4.35 | 84000 | 3.1254 |
| 0.401 | 4.41 | 85000 | 3.2415 |
| 0.4594 | 4.46 | 86000 | 3.0691 |
| 0.4523 | 4.51 | 87000 | 3.0493 |
| 0.4719 | 4.56 | 88000 | 3.1352 |
| 0.4895 | 4.61 | 89000 | 2.8991 |
| 0.423 | 4.67 | 90000 | 3.1738 |
| 0.3984 | 4.72 | 91000 | 3.1862 |
| 0.4206 | 4.77 | 92000 | 3.1213 |
| 0.4587 | 4.82 | 93000 | 3.0030 |
| 0.381 | 4.87 | 94000 | 3.3218 |
| 0.4138 | 4.92 | 95000 | 3.1529 |
| 0.4003 | 4.98 | 96000 | 3.1375 |
| 0.2098 | 5.03 | 97000 | 3.7443 |
| 0.2334 | 5.08 | 98000 | 3.7359 |
| 0.2534 | 5.13 | 99000 | 3.7814 |
| 0.3067 | 5.18 | 100000 | 3.7128 |
| 0.2363 | 5.24 | 101000 | 3.6091 |
| 0.2652 | 5.29 | 102000 | 3.4015 |
| 0.3311 | 5.34 | 103000 | 3.4793 |
| 0.2344 | 5.39 | 104000 | 3.6792 |
| 0.2741 | 5.44 | 105000 | 3.5385 |
| 0.2896 | 5.5 | 106000 | 3.8118 |
| 0.2071 | 5.55 | 107000 | 3.8690 |
| 0.3023 | 5.6 | 108000 | 3.7087 |
| 0.3299 | 5.65 | 109000 | 3.4925 |
| 0.1943 | 5.7 | 110000 | 3.6739 |
| 0.2488 | 5.75 | 111000 | 3.7614 |
| 0.3138 | 5.81 | 112000 | 3.5156 |
| 0.2555 | 5.86 | 113000 | 3.6056 |
| 0.2918 | 5.91 | 114000 | 3.6533 |
| 0.2751 | 5.96 | 115000 | 3.6367 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.8.0+cu101
- Datasets 1.11.0
- Tokenizers 0.10.3
| 125c7fb70134c0c57c9ffd98c2286544 |
Helsinki-NLP/opus-mt-ig-fi | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-ig-fi
* source languages: ig
* target languages: fi
* OPUS readme: [ig-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.fi | 23.5 | 0.451 |
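A minimal usage sketch with MarianMT in `transformers` (the source sentence is an illustrative Igbo greeting):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ig-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Kedu ka ị mere?"], return_tensors="pt", padding=True)  # "How are you?"
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```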
| 25772c498f3977cf84807d5f1377a8d7 |
SiddharthaM/mbert-profane-final | SiddharthaM | bert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,192 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-profane-final
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4464
- Accuracy: 0.8983
- Precision: 0.8135
- Recall: 0.8120
- F1: 0.8128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.2313 | 0.9154 | 0.8687 | 0.8010 | 0.8294 |
| 0.3077 | 2.0 | 592 | 0.2223 | 0.9125 | 0.8473 | 0.8205 | 0.8330 |
| 0.3077 | 3.0 | 888 | 0.2137 | 0.9259 | 0.8784 | 0.8379 | 0.8563 |
| 0.2102 | 4.0 | 1184 | 0.2334 | 0.9163 | 0.8483 | 0.8417 | 0.8449 |
| 0.2102 | 5.0 | 1480 | 0.2737 | 0.9068 | 0.8305 | 0.8242 | 0.8273 |
| 0.1533 | 6.0 | 1776 | 0.3214 | 0.8964 | 0.8034 | 0.8510 | 0.8239 |
| 0.1092 | 7.0 | 2072 | 0.3409 | 0.9002 | 0.8115 | 0.8414 | 0.8252 |
| 0.1092 | 8.0 | 2368 | 0.3849 | 0.9049 | 0.8322 | 0.8066 | 0.8185 |
| 0.0775 | 9.0 | 2664 | 0.4408 | 0.8983 | 0.8113 | 0.8215 | 0.8162 |
| 0.0775 | 10.0 | 2960 | 0.4464 | 0.8983 | 0.8135 | 0.8120 | 0.8128 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
| 343e48a039e74b44ed64413e8a161cf9 |
figfig/whisper-small-en | figfig | whisper | 26 | 55 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['figfig/restaurant_order_test'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,464 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# restaurant_test_model
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the test_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5435
- Wer: 78.5714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 10.0 | 10 | 2.2425 | 7.1429 |
| No log | 20.0 | 20 | 0.6651 | 0.0 |
| 2.4375 | 30.0 | 30 | 0.5776 | 35.7143 |
| 2.4375 | 40.0 | 40 | 0.5435 | 78.5714 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| d19cd7eb4579f5390db5c1cadd6f0c72 |
Hayoung/my_awesome_ko_en_model | Hayoung | t5 | 60 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,337 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_ko_en_model
This model is a fine-tuned version of [KETI-AIR/ke-t5-small](https://huggingface.co/KETI-AIR/ke-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| No log | 1.0 | 67 | nan | 0.0 | 19.0 |
| No log | 2.0 | 134 | nan | 0.0 | 19.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.9.0+cu111
- Datasets 2.7.1
- Tokenizers 0.13.2
| ed7bf59befcc6b7328d2268f2364859c |
cafeai/cafe_aesthetic | cafeai | beit | 9 | 9,275 | transformers | 12 | image-classification | true | false | false | agpl-3.0 | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 1,617 | false |
# Info
Since people are downloading this and I don't know why, I'll add some information. This model is an image classifier fine-tuned on `microsoft/beit-base-patch16-384`.
Its purpose is to be used in the dataset conditioning step for the [Waifu Diffusion project](https://huggingface.co/hakurei/waifu-diffusion), a fine-tune effort for Stable Diffusion. As WD1.4 is planned to have a *significantly large dataset* (~15m images), it is infeasible to analyze every image manually to determine whether or not it should be included in the final training dataset. This image classifier is trained on approximately 3.5k real-life and anime/manga images. Its purpose is to remove aesthetically worthless images from our dataset by classifying them as "`not_aesthetic`". The image classifier was trained to **err on the side of caution** and will generally tend to include images unless they are in a "manga-like" format, have messy lines and/or are sketches, or include an unacceptable amount of text (namely text that covers the primary subject of the image). The idea is that certain images will hurt an SD fine-tune.
Note: This classifier is not perfect, just like every other classifier out there. However, with a sufficiently large dataset, any imperfections or misclassifications should average themselves out due to the Law of Large Numbers.
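As a concrete sketch, dataset filtering could look like the following (the 0.5 threshold is an assumption; only the `not_aesthetic` label name is confirmed above):

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="cafeai/cafe_aesthetic")

def keep_image(path: str, threshold: float = 0.5) -> bool:
    # Drop the image only if the classifier is confident it is "not_aesthetic".
    scores = {d["label"]: d["score"] for d in classifier(Image.open(path))}
    return scores.get("not_aesthetic", 0.0) < threshold

print(keep_image("sample.jpg"))  # True -> keep in the training set
```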
You can test out the classifier [here](https://huggingface.co/spaces/cafeai/cafe_aesthetic_demo), along with some other classifiers for the project.
# License
Released under the aGPLv3. Use the model as you wish for any purpose. If you make changes, share the changes. | e4501fb3676335705fc70b72e88bca03 |
mictiong85/wav2vec2-base-timit-demo-colab | mictiong85 | wav2vec2 | 12 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,640 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4635
- Wer: 0.3357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6808 | 4.0 | 500 | 1.5478 | 1.0481 |
| 0.835 | 8.0 | 1000 | 0.4611 | 0.4703 |
| 0.3013 | 12.0 | 1500 | 0.4327 | 0.3887 |
| 0.1741 | 16.0 | 2000 | 0.4073 | 0.3677 |
| 0.1309 | 20.0 | 2500 | 0.4306 | 0.3595 |
| 0.1097 | 24.0 | 3000 | 0.4318 | 0.3475 |
| 0.0825 | 28.0 | 3500 | 0.4635 | 0.3357 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| fc5aaab529b28aec8bece9d332d16de3 |
Hetarth/marian-finetuned-hi-hinglish | Hetarth | marian | 9 | 3 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,361 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# marian-finetuned-hi-hinglish
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1869
- Validation Loss: 4.0607
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 279, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1869 | 4.0607 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| 3b73a83f456e3e03f2ea7fdcacdf9c6a |
Qiliang/distilbart-xsum-12-3-whole_summary_chatGPT_and_tweetsum | Qiliang | bart | 13 | 718 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,692 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-12-3-whole_summary_chatGPT_and_tweetsum
This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-3](https://huggingface.co/sshleifer/distilbart-xsum-12-3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7952
- Rouge1: 45.7353
- Rouge2: 29.1566
- Rougel: 45.8429
- Rougelsum: 45.7353
- Gen Len: 16.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 397 | 2.8069 | 42.233 | 23.7538 | 39.2701 | 39.2701 | 17.0 |
| 2.8673 | 2.0 | 794 | 2.7736 | 48.2389 | 29.6927 | 43.5004 | 43.5004 | 17.4 |
| 1.8043 | 3.0 | 1191 | 2.7952 | 45.7353 | 29.1566 | 45.8429 | 45.7353 | 16.6 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
| 6ec2ef595911bb8e3c272a373222ee00 |
google/tapas-mini-finetuned-wtq | google | tapas | 8 | 44 | transformers | 1 | table-question-answering | true | true | false | apache-2.0 | ['en'] | ['wikitablequestions'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['tapas', 'table-question-answering'] | false | true | true | 7,105 | false |
# TAPAS mini model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_mini` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
**MINI** | **noreset** | **0.2783** | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
**MINI** | **reset** | **0.2854** | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and an aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the format of SQA using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
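As a quick, hedged sanity check of that inductive bias on this checkpoint, the flag can be read off the config (a minimal sketch; the attribute name is taken from the card text above):

```python
from transformers import TapasConfig

config = TapasConfig.from_pretrained("google/tapas-mini-finetuned-wtq")
# True means the cell selection head may only pick cells from a single column
print(config.select_one_column)
```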
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 6335a70a7c99c7112601cfb62c2c768c |
ajtamayoh/Negation_Scope_Detection_SFU_Spanish_NLP-CIC-WFU_DisTEMIST_fine_tuned | ajtamayoh | bert | 12 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,000 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Negation_Scope_Detection_SFU_Spanish_NLP-CIC-WFU_DisTEMIST_fine_tuned
This model is a fine-tuned version of [ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT](https://huggingface.co/ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3219
- Precision: 0.7403
- Recall: 0.7571
- F1: 0.7486
- Accuracy: 0.9518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.2142 | 0.5227 | 0.6497 | 0.5793 | 0.9267 |
| No log | 2.0 | 144 | 0.2019 | 0.625 | 0.7062 | 0.6631 | 0.9420 |
| No log | 3.0 | 216 | 0.3089 | 0.6444 | 0.6554 | 0.6499 | 0.9432 |
| No log | 4.0 | 288 | 0.2376 | 0.6952 | 0.7345 | 0.7143 | 0.9478 |
| No log | 5.0 | 360 | 0.2876 | 0.7037 | 0.7514 | 0.7268 | 0.9538 |
| No log | 6.0 | 432 | 0.3077 | 0.7278 | 0.7401 | 0.7339 | 0.9534 |
| 0.091 | 7.0 | 504 | 0.3219 | 0.7403 | 0.7571 | 0.7486 | 0.9518 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 7d56278fbffbe58f84d98e40ae8ee2ca |
sd-concepts-library/nouns-glasses | sd-concepts-library | null | 9 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,105 | false | ### nouns glasses on Stable Diffusion
This is the `<nouns glasses>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




| 3c4fbe623b7ff31a3c04cdf4ef6fd747 |
Oleksandr2003/QA_model | Oleksandr2003 | xlm-roberta | 29 | 20 | transformers | 0 | question-answering | true | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,263 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_model
This model is a fine-tuned version of [ukr-models/xlm-roberta-base-uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5806 | 1.0 | 549 | 1.4431 |
| 1.3526 | 2.0 | 1098 | 1.2543 |
| 1.0814 | 3.0 | 1647 | 1.2761 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1e8adb1103e9d4386625d28abb0b260e |
gchhablani/bert-large-cased-finetuned-mrpc | gchhablani | bert | 71 | 16 | transformers | 1 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,678 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6441 | 1.0 | 917 | 0.6370 | 0.6838 | 0.8122 | 0.7480 |
| 0.6451 | 2.0 | 1834 | 0.6553 | 0.6838 | 0.8122 | 0.7480 |
| 0.6428 | 3.0 | 2751 | 0.6332 | 0.6838 | 0.8122 | 0.7480 |
| 0.6476 | 4.0 | 3668 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.6499 | 5.0 | 4585 | 0.6274 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 509ae55760aee726d70cdc10041cf0a4 |
Huyen2310/Vin-P2-14000 | Huyen2310 | whisper | 15 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['vi'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,012 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HuyenNguyen
This model is a fine-tuned version of [Huyen2310/FPT-S15000](https://huggingface.co/Huyen2310/FPT-S15000) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| cb81e78601f4f454c4bba93b16281612 |
PrimeQA/listqa_nq-task-xlm-roberta-large | PrimeQA | xlm-roberta | 9 | 0 | transformers | 0 | null | true | false | false | apache-2.0 | ['multilingual'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['MRC', 'Natural Questions List', 'xlm-roberta-large'] | false | true | true | 2,587 | false |
# Model description
An XLM-RoBERTa reading comprehension model for List Question Answering using a fine-tuned [xlm-roberta-large](https://huggingface.co/xlm-roberta-large/) model that is further fine-tuned on the list questions in the [Natural Questions](https://huggingface.co/datasets/natural_questions) dataset.
## Intended uses & limitations
You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model we used, xlm-roberta-large, may be present in our fine-tuned model, listqa_nq-task-xlm-roberta-large.
## Usage
You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [listqa.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/listqa.ipynb).
### BibTeX entry and citation info
```bibtex
@article{kwiatkowski-etal-2019-natural,
title = "Natural Questions: A Benchmark for Question Answering Research",
author = "Kwiatkowski, Tom and
Palomaki, Jennimaria and
Redfield, Olivia and
Collins, Michael and
Parikh, Ankur and
Alberti, Chris and
Epstein, Danielle and
Polosukhin, Illia and
Devlin, Jacob and
Lee, Kenton and
Toutanova, Kristina and
Jones, Llion and
Kelcey, Matthew and
Chang, Ming-Wei and
Dai, Andrew M. and
Uszkoreit, Jakob and
Le, Quoc and
Petrov, Slav",
journal = "Transactions of the Association for Computational Linguistics",
volume = "7",
year = "2019",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q19-1026",
doi = "10.1162/tacl_a_00276",
pages = "452--466",
}
```
```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
author = {Alexis Conneau and
Kartikay Khandelwal and
Naman Goyal and
Vishrav Chaudhary and
Guillaume Wenzek and
Francisco Guzm{\'{a}}n and
Edouard Grave and
Myle Ott and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {Unsupervised Cross-lingual Representation Learning at Scale},
journal = {CoRR},
volume = {abs/1911.02116},
year = {2019},
url = {http://arxiv.org/abs/1911.02116},
eprinttype = {arXiv},
eprint = {1911.02116},
timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| 1995f4856a4208f495e76ee86f9cf565 |
jonatasgrosman/exp_w2v2t_zh-cn_r-wav2vec2_s79 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['zh-CN'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'zh-CN'] | false | true | true | 467 | false | # exp_w2v2t_zh-cn_r-wav2vec2_s79
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
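A minimal transcription sketch with that tool (assuming the `huggingsound` package is installed; the audio path is a placeholder):

```python
# pip install huggingsound
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_zh-cn_r-wav2vec2_s79")
audio_paths = ["/path/to/sample.wav"]  # 16kHz recordings

transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```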
| c8aec4187a5d5a9f4de5e7296faf4d31 |
microsoft/git-base-textcaps | microsoft | git | 10 | 145 | transformers | 0 | image-to-text | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'image-captioning'] | false | true | true | 3,031 | false |
# GIT (GenerativeImage2Text), base-sized, fine-tuned on TextCaps
GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text).
Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a large number of (image, text) pairs.
The goal for the model is simply to predict the next text token, given the image tokens and previous text tokens.
The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token.

This allows the model to be used for tasks like:
- image and video captioning
- visual question answering (VQA) on images and videos
- even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text).
## Intended uses & limitations
You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html).
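For convenience, here is a minimal captioning sketch using the standard `transformers` API (the image URL is only a placeholder):

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-textcaps")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-textcaps")

# any RGB image works; this URL is a placeholder
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```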
## Training data
From the paper:
> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions
(CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016),
Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B
data following a similar collection procedure in Hu et al. (2021a).
Note, however, that this is for the model referred to as "GIT" in the paper, which is not open-sourced.
This checkpoint is "GIT-base", which is a smaller variant of GIT trained on 10 million image-text pairs.
Next, the model was fine-tuned on TextCaps.
See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.
### Preprocessing
We refer to the original repo regarding details for preprocessing during training.
During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
## Evaluation results
For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100). | d320e2f76b7384f76075809cae0fbc98 |
KarelDO/bert-base-uncased.CEBaB_confounding.uniform.absa.5-class.seed_44 | KarelDO | bert | 14 | 2 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['OpenTable'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,124 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased.CEBaB_confounding.uniform.absa.5-class.seed_44
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4180
- Accuracy: 0.8827
- Macro-f1: 0.8804
- Weighted-macro-f1: 0.8826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 44
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
| 2f08e2bf9de5c2e483c65f7da8ef502e |
bigscience/mt0-large | bigscience | mt5 | 8 | 956 | transformers | 8 | text-generation | true | false | false | apache-2.0 | ['af', 'am', 'ar', 'az', 'be', 'bg', 'bn', 'ca', 'ceb', 'co', 'cs', 'cy', 'da', 'de', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fil', 'fr', 'fy', 'ga', 'gd', 'gl', 'gu', 'ha', 'haw', 'hi', 'hmn', 'ht', 'hu', 'hy', 'ig', 'is', 'it', 'iw', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lb', 'lo', 'lt', 'lv', 'mg', 'mi', 'mk', 'ml', 'mn', 'mr', 'ms', 'mt', 'my', 'ne', 'nl', 'no', 'ny', 'pa', 'pl', 'ps', 'pt', 'ro', 'ru', 'sd', 'si', 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'st', 'su', 'sv', 'sw', 'ta', 'te', 'tg', 'th', 'tr', 'uk', 'und', 'ur', 'uz', 'vi', 'xh', 'yi', 'yo', 'zh', 'zu'] | ['bigscience/xP3', 'mc4'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | [] | true | true | true | 8,932 | false |

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th>
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
</tr>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
# Training
## Model
- **Architecture:** Same as [mt5-large](https://huggingface.co/google/mt5-large), also refer to the `config.json` file
- **Finetuning steps:** 25000
- **Finetuning tokens:** 4.62 billion
- **Precision:** bfloat16
## Hardware
- **TPUs:** TPUv4-64
## Software
- **Orchestration:** [T5X](https://github.com/google-research/t5x)
- **Neural networks:** [Jax](https://github.com/google/jax)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 7ab4a8168099b69ac0fd5562e798494d |
ontocord/vlt5 | ontocord | t5 | 7 | 15 | transformers | 0 | null | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,475 | false | Install lumi:
```
git clone https://github.com/ontocord/lumi
pip install transformers sentencepiece
```
Load the models:
```
from lumi.modeling_vlt5 import *
from lumi.tokenization_vlt5 import *
from lumi.modeling_dalle import *
import torch
minidalle = DalleModel.from_pretrained("ontocord/minidalle").eval().half().to('cuda')
vlt5 = VLT5.from_pretrained("ontocord/vlt5").eval().half().to('cuda')
vlt5_tokenizer = VLT5Tokenizer.from_pretrained("ontocord/vlt5")
```
Use:
```
text="""A woman riding a black horse next to a blue fence in central park"""
img = minidalle.generate(
text=text,
image_output=True,
token_output=False
)
print (vlt5_image2text(vlt5, vlt5_tokenizer, "caption:", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: what is she riding?", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: what is the color of the fence?", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: how many horses are there?", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: is it a man or woman riding the horse?", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they at the beach?", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they at the city?", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they at the park?", img)["text"])
print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they in space?", img)["text"])
``` | 3944b9af25690dfedbbb6d055d4fbf44 |
sasi2400/GFMgenderDetection | sasi2400 | bert | 17 | 39 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'gender'] | true | true | true | 1,278 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GFMgenderDetection
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4328
- Accuracy: 0.7971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4591 | 1.0 | 4567 | 0.4502 | 0.7841 |
| 0.3915 | 2.0 | 9134 | 0.4328 | 0.7971 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2 | 60d56a2a92cae79ed203026c5bc8ffe7 |
nguyenkhoa2407/xlm-roberta-base-NER-favsbot | nguyenkhoa2407 | xlm-roberta | 10 | 11 | transformers | 0 | token-classification | true | false | false | mit | null | ['favsbot'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,091 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-NER-favsbot
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the favsbot dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0572
- Precision: 0.5556
- Recall: 0.4722
- F1: 0.5105
- Accuracy: 0.6900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 4 | 2.4303 | 0.1448 | 0.3556 | 0.2058 | 0.1855 |
| No log | 2.0 | 8 | 2.3220 | 0.1465 | 0.3556 | 0.2075 | 0.1991 |
| No log | 3.0 | 12 | 2.1842 | 0.2486 | 0.2389 | 0.2436 | 0.4593 |
| No log | 4.0 | 16 | 1.9552 | 0.4 | 0.0111 | 0.0216 | 0.4367 |
| No log | 5.0 | 20 | 1.6989 | 0.0 | 0.0 | 0.0 | 0.4321 |
| No log | 6.0 | 24 | 1.6532 | 0.5 | 0.0056 | 0.0110 | 0.4344 |
| No log | 7.0 | 28 | 1.5724 | 0.3649 | 0.15 | 0.2126 | 0.5045 |
| No log | 8.0 | 32 | 1.5164 | 0.3654 | 0.2111 | 0.2676 | 0.5271 |
| No log | 9.0 | 36 | 1.4448 | 0.4203 | 0.1611 | 0.2329 | 0.5090 |
| No log | 10.0 | 40 | 1.3922 | 0.4833 | 0.1611 | 0.2417 | 0.5158 |
| No log | 11.0 | 44 | 1.3409 | 0.5395 | 0.2278 | 0.3203 | 0.5498 |
| No log | 12.0 | 48 | 1.2831 | 0.5824 | 0.2944 | 0.3911 | 0.5950 |
| No log | 13.0 | 52 | 1.2269 | 0.5714 | 0.3556 | 0.4384 | 0.6335 |
| No log | 14.0 | 56 | 1.1766 | 0.5625 | 0.4 | 0.4675 | 0.6606 |
| No log | 15.0 | 60 | 1.1408 | 0.5540 | 0.4278 | 0.4828 | 0.6674 |
| No log | 16.0 | 64 | 1.1159 | 0.56 | 0.4667 | 0.5091 | 0.6810 |
| No log | 17.0 | 68 | 1.0908 | 0.5658 | 0.4778 | 0.5181 | 0.6855 |
| No log | 18.0 | 72 | 1.0722 | 0.5658 | 0.4778 | 0.5181 | 0.6923 |
| No log | 19.0 | 76 | 1.0615 | 0.5592 | 0.4722 | 0.5120 | 0.6900 |
| No log | 20.0 | 80 | 1.0572 | 0.5556 | 0.4722 | 0.5105 | 0.6900 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| ca9fed22bccd1e3a9aff487761813635 |
cjvt/t5-sl-small | cjvt | t5 | 8 | 139 | transformers | 0 | text2text-generation | true | false | false | cc-by-sa-4.0 | ['sl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 866 | false |
# t5-sl-small
t5-sl-small model is a Slovene T5 model. It has 8 encoder and 8 decoder layers, in total about 60 million parameters.
It was trained for 5 epochs on the following corpora:
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
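Since this is a plain T5 checkpoint, loading follows the standard `transformers` pattern. A minimal sketch follows; as a pretrained-only model, raw generation is just a sanity check and downstream use normally requires fine-tuning:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("cjvt/t5-sl-small")
model = T5ForConditionalGeneration.from_pretrained("cjvt/t5-sl-small")

inputs = tokenizer("Danes je lep dan.", return_tensors="pt")  # "Today is a nice day."
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```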
## Evaluation
The model is described in detail and evaluated in our paper ["*Sequence to sequence pretraining for a less-resourced Slovenian language*"](https://arxiv.org/abs/2207.13988)
## Changelog
2022-07-21: updated with v2 of the model, the old one is still accessible at [cjvt/legacy-t5-sl-small](https://huggingface.co/cjvt/legacy-t5-sl-small).
2022-09-21: added fast tokenizer (Huggingface's TokenizerFast class, the tokenization remains the same) | c6cf7a8be24bcec26def1db182ac1cbb |
jonatasgrosman/exp_w2v2t_pl_vp-nl_s632 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pl'] | false | true | true | 469 | false | # exp_w2v2t_pl_vp-nl_s632
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 4a684bad2ba145fe92849ba8bf6d8540 |
bigcode/santacoder-megatron | bigcode | null | 4 | 0 | null | 1 | text-generation | false | false | false | openrail | ['code'] | ['bigcode/the-stack'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | true | true | true | 3,000 | false |
# SantaCoder

Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces/bigcode/santacoder-demo).
# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
# Model Summary
This is the Megatron-version of [SantaCoder](https://huggingface.co/bigcode/santacoder).
We refer the reader to the [SantaCoder model page](https://huggingface.co/bigcode/santacoder) for full documentation about this model
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [🎅SantaCoder: Don't reach for the stars!🌟](https://t.co/YV3pzUbYOr)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** Python, Java, and JavaScript
# Use
## Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well.
You should phrase prompts as they occur in source code, such as comments (e.g. `# the following function computes the sqrt`), or write a function signature and docstring and let the model complete the function body.
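A hedged illustration of that prompting style follows. Note that this repository holds Megatron-format weights, so the sketch assumes the `transformers`-compatible sibling checkpoint [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) instead:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"  # transformers-compatible sibling of this repo
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# phrase the prompt as source code, not as a natural-language instruction
prompt = "# the following function computes the sqrt\ndef sqrt(x):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))
```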
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/santacoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code in Python, Java, and JavaScript. The predominant natural language in the source code is English, although other languages are also present. As such, the model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 600K
- **Pretraining tokens:** 236 billion
- **Precision:** float16
## Hardware
- **GPUs:** 96 Tesla V100
- **Training time:** 6.2 days
- **Total FLOPS:** 2.1 x 10e21
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
| e1a3050abee5b7538807a12b298ae35f |
G80/detr-resnet-50_finetuned_cppe5 | G80 | detr | 9 | 15 | transformers | 0 | object-detection | true | false | false | apache-2.0 | null | ['cppe-5'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 982 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 278e6e232b5d3c3aff83e8f51a0197fc |
hfl/rbt4-h312 | hfl | bert | 6 | 986 | transformers | 3 | fill-mask | true | true | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert'] | false | true | true | 914 | false |
# Please use 'Bert' related functions to load this model!
## Chinese small pre-trained model MiniRBT
To further promote research and development in Chinese information processing, we release MiniRBT, a small Chinese pre-trained model built with our self-developed knowledge distillation tool TextBrewer, combining Whole Word Masking and knowledge distillation techniques.
This repository is developed based on: https://github.com/iflytek/MiniRBT
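Concretely, "Bert-related functions" means the standard BERT classes in `transformers`. A minimal sketch (the input sentence is illustrative):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("hfl/rbt4-h312")
model = BertModel.from_pretrained("hfl/rbt4-h312")

inputs = tokenizer("今天天气真好", return_tensors="pt")  # "The weather is great today"
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, 312)
```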
You may also be interested in:
- Chinese LERT: https://github.com/ymcui/LERT
- Chinese PERT: https://github.com/ymcui/PERT
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/iflytek/HFL-Anthology | 18f94a0889631c7789df00803d171966 |
Ruth/gbert-large-germaner | Ruth | bert | 19 | 11 | transformers | 0 | token-classification | false | true | false | mit | ['de'] | ['germaner'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | true | true | true | 975 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-large-germaner
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on the germaner dataset.
It achieves the following results on the evaluation set:
- precision: 0.8693
- recall: 0.8856
- f1: 0.8774
- accuracy: 0.9784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 5
- train_batch_size: 8
- eval_batch_size: 8
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True
### Framework versions
- Transformers 4.18.0
- Datasets 1.18.0
- Tokenizers 0.12.1
| eecbbd633538ae01d52f5c4b5f06b3cc |
qBob/BART_corrector | qBob | bart | 29 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,503 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART_corrector
This model is a fine-tuned version of [ainize/bart-base-cnn](https://huggingface.co/ainize/bart-base-cnn) on a homemade dataset. Each sample of the dataset is an English sentence duplicated 10 times, with random errors (7%) injected into each copy.
It achieves the following results on the evaluation set:
- Loss: 0.0025
- Rouge1: 81.4214
- Rouge2: 80.2027
- Rougel: 81.4202
- Rougelsum: 81.4241
- Gen Len: 19.3962
## Model description
More information needed
## Intended uses & limitations
The goal of this model is to correct a sentence, given several versions of it with various mistakes.
Text sample :
_TheIdeSbgn of thh Eiffel Toweg is aYtribeted to Ma. . ahd design of The Eijfel Tower is attribQtedBto ta. . The designYof the EifZel Tower Vs APtWibuteQ to Ma. . The xeQign oC the EiffelXTower ik attributed to Ma. . ghebFesign of theSbiffel TJwer is atMributed to Ma. . The desOBn of thQ Eiffel ToweP isfattributnd toBMa. . The design of the EBfUel Fower is JtAriOuted tx Ma. . The design of Jhe ENffel LoweF is aptrVbuted Lo Ma. . The deslgX of the lPffel Towermis attributedhtohMa. . The desRgn of thekSuffel Tower is Ttkribufed to Ma. ._
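A hedged sketch of running the model on such an input with plain `transformers` (the generation settings are assumptions, not the authors' configuration):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("qBob/BART_corrector")
model = AutoModelForSeq2SeqLM.from_pretrained("qBob/BART_corrector")

# ten noisy copies of the same sentence, as in the sample above (abbreviated here)
noisy = "TheIdeSbgn of thh Eiffel Toweg is aYtribeted to Ma. . ahd design of The Eijfel Tower is attribQtedBto ta. ."
inputs = tokenizer(noisy, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```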
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0071 | 1.0 | 2365 | 0.0039 | 81.3664 | 80.0861 | 81.3601 | 81.3667 | 19.3967 |
| 0.0033 | 2.0 | 4730 | 0.0029 | 81.3937 | 80.1548 | 81.3902 | 81.3974 | 19.3961 |
| 0.0018 | 3.0 | 7095 | 0.0029 | 81.3838 | 80.1404 | 81.385 | 81.3878 | 19.3965 |
| 0.001 | 4.0 | 9460 | 0.0025 | 81.4214 | 80.2027 | 81.4202 | 81.4241 | 19.3962 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 91705ed3e252783718ecb340e2ee37f6 |
elRivx/megaPals2.1 | elRivx | null | 3 | 0 | null | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 1,522 | false |
**megaPals2.1**
Hi guys! Do you remember vintage superhero animated series? Do you like the 70s style? This Stable Diffusion 2.1 embedding is for you! A quick recommendation: the magic word for your prompts is megaPals.
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/wZmw8Xr.png width=30% height=30%>
<img src=https://imgur.com/JJGBmT8.png width=30% height=30%>
<img src=https://imgur.com/0Nr4IJm.png width=30% height=30%>
<img src=https://imgur.com/rRN9r1N.png width=30% height=30%>
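A hedged `diffusers` sketch for trying the embedding (the base checkpoint and the embedding file layout are assumptions; the card itself only states it is an SD 2.1 embedding):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# assumes the repo ships an embedding file that load_textual_inversion understands
pipe.load_textual_inversion("elRivx/megaPals2.1")

image = pipe("megaPals, a 70s superhero team striking a pose").images[0]
image.save("megapals.png")
```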
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| 352219c7aa6c5b977187f36f163bd59e |
gokuls/bert-tiny-Massive-intent-KD-BERT_and_distilBERT | gokuls | bert | 15 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['massive'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,948 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-Massive-intent-KD-BERT_and_distilBERT
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3729
- Accuracy: 0.8470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 15.1159 | 1.0 | 720 | 12.8257 | 0.2253 |
| 12.9949 | 2.0 | 1440 | 10.9891 | 0.4304 |
| 11.3865 | 3.0 | 2160 | 9.5622 | 0.5032 |
| 10.0553 | 4.0 | 2880 | 8.3700 | 0.5539 |
| 8.9431 | 5.0 | 3600 | 7.4127 | 0.6104 |
| 8.0135 | 6.0 | 4320 | 6.6185 | 0.6286 |
| 7.1987 | 7.0 | 5040 | 5.9517 | 0.6818 |
| 6.5168 | 8.0 | 5760 | 5.3879 | 0.7118 |
| 5.9352 | 9.0 | 6480 | 4.9426 | 0.7275 |
| 5.4299 | 10.0 | 7200 | 4.5637 | 0.7413 |
| 5.0017 | 11.0 | 7920 | 4.2379 | 0.7585 |
| 4.5951 | 12.0 | 8640 | 3.9699 | 0.7678 |
| 4.2849 | 13.0 | 9360 | 3.7416 | 0.7737 |
| 3.991 | 14.0 | 10080 | 3.5502 | 0.7865 |
| 3.7455 | 15.0 | 10800 | 3.4090 | 0.7900 |
| 3.5315 | 16.0 | 11520 | 3.3053 | 0.7914 |
| 3.345 | 17.0 | 12240 | 3.1670 | 0.8003 |
| 3.1767 | 18.0 | 12960 | 3.0739 | 0.8013 |
| 3.0322 | 19.0 | 13680 | 2.9927 | 0.8047 |
| 2.8864 | 20.0 | 14400 | 2.9366 | 0.8037 |
| 2.7728 | 21.0 | 15120 | 2.8666 | 0.8091 |
| 2.6732 | 22.0 | 15840 | 2.8146 | 0.8126 |
| 2.5726 | 23.0 | 16560 | 2.7588 | 0.8195 |
| 2.493 | 24.0 | 17280 | 2.7319 | 0.8273 |
| 2.4183 | 25.0 | 18000 | 2.6847 | 0.8249 |
| 2.3526 | 26.0 | 18720 | 2.6317 | 0.8323 |
| 2.2709 | 27.0 | 19440 | 2.6071 | 0.8288 |
| 2.2125 | 28.0 | 20160 | 2.5982 | 0.8323 |
| 2.1556 | 29.0 | 20880 | 2.5546 | 0.8337 |
| 2.1042 | 30.0 | 21600 | 2.5278 | 0.8318 |
| 2.054 | 31.0 | 22320 | 2.5005 | 0.8411 |
| 2.0154 | 32.0 | 23040 | 2.4891 | 0.8347 |
| 1.9785 | 33.0 | 23760 | 2.4633 | 0.8367 |
| 1.9521 | 34.0 | 24480 | 2.4451 | 0.8421 |
| 1.9247 | 35.0 | 25200 | 2.4370 | 0.8416 |
| 1.8741 | 36.0 | 25920 | 2.4197 | 0.8446 |
| 1.8659 | 37.0 | 26640 | 2.4081 | 0.8406 |
| 1.8367 | 38.0 | 27360 | 2.3979 | 0.8426 |
| 1.8153 | 39.0 | 28080 | 2.3758 | 0.8451 |
| 1.7641 | 40.0 | 28800 | 2.3729 | 0.8470 |
| 1.7608 | 41.0 | 29520 | 2.3683 | 0.8460 |
| 1.7647 | 42.0 | 30240 | 2.3628 | 0.8446 |
| 1.7656 | 43.0 | 30960 | 2.3492 | 0.8470 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| 93359ec93646a2db81bd760c9a3c94fa |
jonatasgrosman/exp_w2v2t_fa_xlsr-53_s204 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fa'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fa'] | false | true | true | 461 | false | # exp_w2v2t_fa_xlsr-53_s204
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 3067a847c980b48e3cc70a419139c92c |
Yotta/XpCoDir2 | Yotta | bert | 6 | 1 | transformers | 0 | feature-extraction | true | false | false | apache-2.0 | null | ['XpCo'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 884 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XpCoDir2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the XpCoDataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0
- Datasets 2.0.0
- Tokenizers 0.10.3
| d0ab9816a07d03c87016e3d8b005c95e |
gary109/wav2vec2-base-finetuned-ks | gary109 | wav2vec2 | 10 | 3 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | null | ['superb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,560 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0981
- Accuracy: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6641 | 1.0 | 399 | 0.5522 | 0.9337 |
| 0.2698 | 2.0 | 798 | 0.2015 | 0.9715 |
| 0.1839 | 3.0 | 1197 | 0.1195 | 0.9793 |
| 0.1582 | 4.0 | 1596 | 0.1039 | 0.9791 |
| 0.1425 | 5.0 | 1995 | 0.0981 | 0.9801 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| 1869ae818e73ea96b728516ad0ac8bb1 |
augustocsc/gpt-m0 | augustocsc | gpt2 | 7 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,103 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-m0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7384 | 0.61 | 500 | 1.6251 |
| 0.0325 | 1.22 | 1000 | 0.0146 |
| 0.0104 | 1.83 | 1500 | 0.0094 |
| 0.008 | 2.44 | 2000 | 0.0074 |
| 0.0061 | 3.05 | 2500 | 0.0058 |
| 0.0057 | 3.66 | 3000 | 0.0050 |
| 0.0059 | 4.27 | 3500 | 0.0050 |
| 0.0047 | 4.88 | 4000 | 0.0050 |
| 0.0043 | 5.49 | 4500 | 0.0045 |
| 0.0043 | 6.11 | 5000 | 0.0045 |
| 0.0036 | 6.72 | 5500 | 0.0043 |
| 0.0038 | 7.33 | 6000 | 0.0041 |
| 0.0034 | 7.94 | 6500 | 0.0044 |
| 0.0036 | 8.55 | 7000 | 0.0040 |
| 0.0032 | 9.16 | 7500 | 0.0039 |
| 0.0033 | 9.77 | 8000 | 0.0037 |
| 0.0032 | 10.38 | 8500 | 0.0036 |
| 0.0029 | 10.99 | 9000 | 0.0035 |
| 0.003 | 11.6 | 9500 | 0.0035 |
| 0.0027 | 12.21 | 10000 | 0.0036 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 3cd3343d9b949805f3ca7e6d2722eca8 |
Qiliang/bart-large-cnn-samsum-ElectrifAi_v8.3 | Qiliang | bart | 13 | 303 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,679 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum-ElectrifAi_v8.3
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8755
- Rouge1: 60.4165
- Rouge2: 41.6463
- Rougel: 50.9083
- Rougelsum: 59.2499
- Gen Len: 109.7
## Model description
More information needed
## Intended uses & limitations
More information needed
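As a starting point, the checkpoint can be used like its base model for dialogue summarization via the `transformers` pipeline. This is a minimal sketch; the dialogue below is illustrative and not taken from the training data.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Qiliang/bart-large-cnn-samsum-ElectrifAi_v8.3")

# Illustrative SAMSum-style chat transcript.
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```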
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 20 | 0.9037 | 57.105 | 36.4038 | 46.3683 | 55.8701 | 99.15 |
| No log | 2.0 | 40 | 0.8759 | 58.7016 | 39.3877 | 47.444 | 57.4063 | 113.8 |
| No log | 3.0 | 60 | 0.8755 | 60.4165 | 41.6463 | 50.9083 | 59.2499 | 109.7 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.2
| 7bcc5af26b44366c99f5708309644aa7 |
ankurani/roberta-base-finetuned-ner | ankurani | roberta | 11 | 3 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 892 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| cac06f7ae16ac5a84cc3cd111963ecb6 |
Helsinki-NLP/opus-mt-sv-hu | Helsinki-NLP | marian | 10 | 15 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 | false |
### opus-mt-sv-hu
* source languages: sv
* target languages: hu
* OPUS readme: [sv-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sv.hu | 44.6 | 0.660 |
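For quick experimentation, the converted checkpoint can be loaded with the `transformers` MarianMT classes. This is a minimal sketch; the Swedish example sentence is illustrative.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-hu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ["Jag talar inte ungerska."]  # Swedish: "I do not speak Hungarian."
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```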
| 2550a84beec69f4f5d20641af1e82c2f |
gokuls/distilbert_add_GLUE_Experiment_stsb_192 | gokuls | distilbert | 17 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,238 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_stsb_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2659
- Pearson: nan
- Spearmanr: nan
- Combined Score: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 7.0456 | 1.0 | 23 | 4.3280 | nan | nan | nan |
| 4.7979 | 2.0 | 46 | 3.4200 | nan | nan | nan |
| 3.7359 | 3.0 | 69 | 2.7494 | nan | nan | nan |
| 2.9308 | 4.0 | 92 | 2.3396 | nan | nan | nan |
| 2.3776 | 5.0 | 115 | 2.2659 | nan | nan | nan |
| 2.1865 | 6.0 | 138 | 2.3171 | nan | nan | nan |
| 2.1731 | 7.0 | 161 | 2.3598 | nan | nan | nan |
| 2.1793 | 8.0 | 184 | 2.4690 | 0.1389 | 0.1432 | 0.1410 |
| 2.1725 | 9.0 | 207 | 2.3589 | 0.0899 | 0.0808 | 0.0854 |
| 2.1621 | 10.0 | 230 | 2.3156 | 0.0853 | 0.0802 | 0.0827 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
| 4c1470bf9c5b8c9383761add56255bbd |
eslamxm/mt5-base-finetuned-persian | eslamxm | mt5 | 13 | 5 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | ['xlsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'persian', 'generated_from_trainer'] | true | true | true | 1,892 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-persian
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6086
- Rouge-1: 22.02
- Rouge-2: 7.41
- Rouge-l: 18.95
- Gen Len: 19.0
- Bertscore: 69.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 7.2823 | 0.96 | 19 | 3.9800 | 19.78 | 5.57 | 16.24 | 19.0 | 68.19 |
| 4.7334 | 1.96 | 38 | 3.7620 | 20.92 | 7.49 | 18.27 | 18.91 | 68.72 |
| 4.3891 | 2.96 | 57 | 3.6349 | 21.07 | 7.66 | 18.53 | 18.96 | 69.73 |
| 4.2 | 3.96 | 76 | 3.6315 | 19.63 | 6.49 | 16.61 | 19.0 | 69.15 |
| 3.9202 | 4.96 | 95 | 3.6086 | 21.2 | 6.8 | 17.06 | 19.0 | 69.48 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| c8eeae615dca0d970068294e52d0d2be |
jhu-clsp/LegalBert | jhu-clsp | bert | 9 | 6 | transformers | 0 | null | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,383 | false | # Model description
LegalBert is a BERT-base-cased model fine-tuned on a subset of the `case.law` corpus. Further details can be found in this paper:
[A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering](http://ceur-ws.org/Vol-2645/paper5.pdf)
Nils Holzenberger, Andrew Blair-Stanek and Benjamin Van Durme
*Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop, 24 August 2020*
# Usage
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("jhu-clsp/LegalBert")
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/LegalBert")
```
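The loaded encoder can then be used to obtain contextual embeddings, e.g. for downstream statutory-reasoning tasks. This is a minimal sketch; the example sentence is illustrative.
```python
import torch

# Illustrative legal-domain sentence; standard BERT-style tokenization applies.
inputs = tokenizer("The taxpayer filed a joint return for the taxable year.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```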
# Citation
```
@inproceedings{holzenberger20dataset,
author = {Nils Holzenberger and
Andrew Blair{-}Stanek and
Benjamin Van Durme},
title = {A Dataset for Statutory Reasoning in Tax Law Entailment and Question
Answering},
booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2020
co-located with the 26th {ACM} {SIGKDD} International Conference on
Knowledge Discovery {\&} Data Mining {(KDD} 2020), Virtual Workshop,
August 24, 2020},
series = {{CEUR} Workshop Proceedings},
volume = {2645},
pages = {31--38},
publisher = {CEUR-WS.org},
year = {2020},
url = {http://ceur-ws.org/Vol-2645/paper5.pdf},
}
```
| a63983cde61f6f83f4da77b5baee6998 |
huawei-noah/AutoTinyBERT-S4 | huawei-noah | null | 5 | 1 | transformers | 0 | null | true | false | false | other | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 561 | false | Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow the default setting of architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks) in BERT. In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. | 2361910815607479c0006311d517de5e |
bayartsogt/whisper-small-mn-12 | bayartsogt | whisper | 19 | 23 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['mn'] | ['mozilla-foundation/common_voice_11_0', 'google/fleurs', 'bayartsogt/ulaanbal-v0', 'bayartsogt/youtube-mongolian-v1'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'hf-asr-leaderboard', 'generated_from_multiple_datasets'] | true | true | true | 3,061 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-mn-12
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2949
- Wer: 32.3301
- Cer: 13.3493
## Model description
More information needed
## Intended uses & limitations
More information needed
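As a starting point, the checkpoint can be used for Mongolian speech recognition through the `transformers` ASR pipeline. This is a minimal sketch; `sample_mn.wav` is a placeholder for a 16 kHz mono recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bayartsogt/whisper-small-mn-12")

# "sample_mn.wav" is a placeholder path to a 16 kHz mono Mongolian recording.
print(asr("sample_mn.wav")["text"])
```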
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 25000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.3012 | 1.05 | 1000 | 0.3749 | 43.2379 | 17.6739 |
| 0.2171 | 2.11 | 2000 | 0.3012 | 36.7435 | 15.2029 |
| 0.1732 | 3.16 | 3000 | 0.2823 | 33.4225 | 13.7561 |
| 0.145 | 4.21 | 4000 | 0.2822 | 32.4995 | 13.2436 |
| 0.1159 | 5.27 | 5000 | 0.2949 | 32.3301 | 13.3493 |
| 0.0863 | 6.32 | 6000 | 0.3116 | 32.7234 | 13.3892 |
| 0.0685 | 7.38 | 7000 | 0.3343 | 32.4776 | 13.3077 |
| 0.0506 | 8.43 | 8000 | 0.3584 | 33.3952 | 13.7736 |
| 0.0336 | 9.48 | 9000 | 0.3861 | 33.7011 | 13.8493 |
| 0.0215 | 10.54 | 10000 | 0.4193 | 33.7011 | 14.0140 |
| 0.0141 | 11.59 | 11000 | 0.4463 | 34.0343 | 14.0298 |
| 0.0089 | 12.64 | 12000 | 0.4660 | 33.6137 | 13.8052 |
| 0.0057 | 13.7 | 13000 | 0.4913 | 33.9797 | 13.9849 |
| 0.0039 | 14.75 | 14000 | 0.5078 | 33.9906 | 14.0656 |
| 0.0033 | 15.81 | 15000 | 0.5244 | 33.7721 | 13.9192 |
| 0.0024 | 16.86 | 16000 | 0.5358 | 33.7612 | 13.7910 |
| 0.0018 | 17.91 | 17000 | 0.5469 | 33.6465 | 13.8468 |
| 0.0013 | 18.97 | 18000 | 0.5614 | 33.6683 | 13.7553 |
| 0.0014 | 20.02 | 19000 | 0.5707 | 33.6574 | 13.8884 |
| 0.0006 | 21.07 | 20000 | 0.5835 | 34.0671 | 14.0764 |
| 0.0007 | 22.13 | 21000 | 0.5927 | 33.9742 | 14.0772 |
| 0.0005 | 23.18 | 22000 | 0.5994 | 34.0398 | 14.0290 |
| 0.0004 | 24.24 | 23000 | 0.6067 | 33.9469 | 13.9217 |
| 0.0003 | 25.29 | 24000 | 0.6109 | 33.9688 | 13.9591 |
| 0.0003 | 26.34 | 25000 | 0.6130 | 33.8267 | 13.8360 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 5e29a034c1b0d08647e37279f76d88ac |
Helsinki-NLP/opus-mt-es-zai | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-es-zai
* source languages: es
* target languages: zai
* OPUS readme: [es-zai](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-zai/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.zai | 20.8 | 0.426 |
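For quick experimentation, the converted checkpoint can also be run through the `transformers` translation pipeline. This is a minimal sketch; the Spanish input sentence is illustrative.
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-zai")

# Illustrative Spanish input; the output is Isthmus Zapotec.
print(translator("Dios es amor.")[0]["translation_text"])
```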
| 66664ce80639b0fe1d1319e9e66068c7 |
osyvokon/xslr-commonvoice | osyvokon | wav2vec2 | 18 | 8 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['tr'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'common_voice', 'generated_from_trainer'] | true | true | true | 2,265 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xslr-commonvoice
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3835
- Wer: 0.3450
## Model description
More information needed
## Intended uses & limitations
More information needed
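As a starting point, the model can be used for Turkish speech recognition with the standard CTC decoding loop. This is a minimal sketch; `sample_tr.wav` is a placeholder for a 16 kHz mono Turkish recording.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "osyvokon/xslr-commonvoice"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample_tr.wav" is a placeholder path; the audio must be 16 kHz mono.
speech, _ = sf.read("sample_tr.wav")
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```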
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 3.5761 | 1.0 |
| No log | 1.83 | 200 | 3.0512 | 0.9999 |
| No log | 2.75 | 300 | 1.0185 | 0.8188 |
| No log | 3.67 | 400 | 0.5936 | 0.6411 |
| 3.2139 | 4.59 | 500 | 0.4986 | 0.5267 |
| 3.2139 | 5.5 | 600 | 0.4327 | 0.4732 |
| 3.2139 | 6.42 | 700 | 0.4227 | 0.4462 |
| 3.2139 | 7.34 | 800 | 0.4213 | 0.4291 |
| 3.2139 | 8.26 | 900 | 0.4016 | 0.4033 |
| 0.22 | 9.17 | 1000 | 0.3987 | 0.3825 |
| 0.22 | 10.09 | 1100 | 0.4065 | 0.3867 |
| 0.22 | 11.01 | 1200 | 0.3929 | 0.3842 |
| 0.22 | 11.93 | 1300 | 0.3775 | 0.3687 |
| 0.22 | 12.84 | 1400 | 0.3891 | 0.3536 |
| 0.1005 | 13.76 | 1500 | 0.3850 | 0.3492 |
| 0.1005 | 14.68 | 1600 | 0.3823 | 0.3441 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
| 2defe4c1ce477680e7a469371923d236 |
Shubham09/complete_Wav2Vec2_500 | Shubham09 | wav2vec2 | 17 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,039 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# complete_Wav2Vec2_500
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cpu
- Datasets 1.18.3
- Tokenizers 0.11.0
| a289840734a8db155f1bdc8314b50f1f |
sd-concepts-library/dovin-baan | sd-concepts-library | null | 26 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,631 | false | ### Magic: The Gathering Stable Diffusion Textual Inversion Embeddings
Check out all MTG-related models [here](https://darioft.github.io/stable-diffusion-textual-inversion-mtg-models/)!
### dovin-baan on Stable Diffusion
This is the `<dovin-baan>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
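Alternatively, the embedding can be loaded directly with `diffusers` (version 0.14 or later, which provides `load_textual_inversion`). This is a minimal sketch; the choice of `runwayml/stable-diffusion-v1-5` as the base model and the prompt are illustrative assumptions.
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model choice is an assumption; any SD 1.x checkpoint compatible with the
# concept's token embedding should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/dovin-baan")

image = pipe("a portrait of <dovin-baan>, fantasy art").images[0]
image.save("dovin-baan.png")
```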
Here is the new concept you will be able to use as an `object`:

















| a82399e75ebb248f044f1ab9a1a4c491 |
jhaochenz/finetuned_distilgpt2_sst2_negation0.001_pretrainedTrue_epochs3 | jhaochenz | gpt2 | 17 | 1 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,266 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_distilgpt2_sst2_negation0.001_pretrainedTrue_epochs3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6836 | 1.0 | 1322 | 3.2638 |
| 2.5043 | 2.0 | 2644 | 3.2590 |
| 2.4514 | 3.0 | 3966 | 3.2638 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
| 480b175952f1227698cb80139bb12658 |
gababas/m3rrw3 | gababas | null | 16 | 2 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 415 | false | ### m3rrw3 Dreambooth model trained by gababas with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| d3f2447f45cbecd55d58a6ac5be614d3 |
KeaponLaffin/tippy | KeaponLaffin | null | 18 | 11 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 612 | false | ### Tippy Dreambooth model trained by KeaponLaffin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
| 9c720d329e0b404a8c37cad494c383cc |
espnet/Shinji_Watanabe_librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best | espnet | null | 10 | 1 | espnet | 0 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['en'] | ['librispeech'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 1,862 | false | ## Example ESPnet2 ASR model
### `Shinji_Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best`
♻️ Imported from https://zenodo.org/record/4030677/
This model was trained by Shinji Watanabe using the librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
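Until the official snippet is added, inference along the lines of other ESPnet2 ASR cards might look as follows. This is a sketch under assumptions: `espnet` and `espnet_model_zoo` are installed, `speech.wav` is a placeholder for a 16 kHz mono recording, and the exact API can vary between ESPnet releases.
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and builds the pretrained ASR model (requires espnet_model_zoo).
speech2text = Speech2Text.from_pretrained(
    "espnet/Shinji_Watanabe_librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best"
)

speech, rate = soundfile.read("speech.wav")  # placeholder 16 kHz mono file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```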
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 2d3c0619b667e7a31e4b9f90f54780da |
Zeynabrz/movie_recommender | Zeynabrz | null | 8 | 0 | null | 0 | null | false | false | false | zlib | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 4,907 | false |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
| 5cf4c6c82ca7f379d52993debbd383ed |
google/t5-efficient-tiny-ff12000 | google | t5 | 12 | 8 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,263 | false |
# T5-Efficient-TINY-FF12000 (Deep-Narrow version)
T5-Efficient-TINY-FF12000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-tiny-ff12000** - is of model type **Tiny** with the following variations:
- **ff** is **12000**
It has **61.72** million parameters and thus requires *ca.* **246.87 MB** of memory in full precision (*fp32*)
or **123.44 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
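Independent of the training script chosen above, the checkpoint itself loads like any other T5 model. This is a minimal sketch for inspecting the model before fine-tuning:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

checkpoint = "google/t5-efficient-tiny-ff12000"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# Sanity check of the parameter count reported above (~61.72M).
print(f"{model.num_parameters() / 1e6:.2f}M parameters")
```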
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 2b010a61efdeb5b81297707ac41295fc |
doc2query/stackexchange-title-body-t5-base-v1 | doc2query | t5 | 11 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['flax-sentence-embeddings/stackexchange_title_body_jsonl'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,841 | false |
# doc2query/stackexchange-title-body-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: The model can be used to generate training data for learning an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-title-body-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 550k training steps. For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. The output text was generated with up to 64 word pieces.
The model was trained on (title, question_body) pairs from StackExchange.
| 96894d3f86bdd5df6b9c837fb1c9e2e2 |
sameearif88/wav2vec2-base-timit-demo-colab3 | sameearif88 | wav2vec2 | 14 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,341 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8480
- Wer: 0.5608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7977 | 13.89 | 500 | 1.6491 | 0.8257 |
| 0.7393 | 27.78 | 1000 | 0.8480 | 0.5608 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| d7392021e2ba0504ea0f88d92dcfe0b5 |