Datasets:

modelId (string, 5-134 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-223M) | likes (int64, 0-8.08k) | library_name (352 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (53 classes) | createdAt (unknown) | card (string, 11-1.01M chars)
---|---|---|---|---|---|---|---|---|---|
levihanzv19/yuamodel | levihanzv19 | "2023-06-02T07:43:54" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-02T07:41:52" | ---
license: creativeml-openrail-m
---
|
abdulmatinomotoso/paraphrase_detector | abdulmatinomotoso | "2022-08-21T22:04:31" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-08-21T21:45:43" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: paraphrase_detector
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8553921568627451
- name: F1
type: f1
value: 0.8984509466437176
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphrase_detector
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6599
- Accuracy: 0.8554
- F1: 0.8985
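For reference, a minimal inference sketch (assuming the checkpoint loads with the standard `transformers` sequence-classification classes and follows the usual MRPC sentence-pair setup; the example sentences and label-order comment are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "abdulmatinomotoso/paraphrase_detector"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode a sentence pair, as in the MRPC paraphrase task
inputs = tokenizer("The company reported strong earnings.",
                   "Earnings at the firm were strong.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Label order (not-paraphrase vs. paraphrase) is assumed to follow the MRPC convention
print(logits.softmax(dim=-1))
```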
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4968 | 0.8480 | 0.8901 |
| 0.3297 | 2.0 | 918 | 0.6599 | 0.8554 | 0.8985 |
| 0.1382 | 3.0 | 1377 | 0.6599 | 0.8554 | 0.8985 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
obokkkk/wav2vec2-base-960h-finetuned_common_voice3 | obokkkk | "2022-04-29T00:37:29" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-04-28T05:57:45" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-960h-finetuned_common_voice3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned_common_voice3
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
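A minimal usage sketch (assuming the checkpoint works with the standard `transformers` ASR pipeline; `sample.wav` is a placeholder for a 16 kHz mono recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="obokkkk/wav2vec2-base-960h-finetuned_common_voice3")
# Transcribe a local audio file (placeholder path)
print(asr("sample.wav")["text"])
```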
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Meggido/Pantheon-RP-1.0-8b-Llama-3-6.5bpw-h8-exl2 | Meggido | "2024-05-11T19:53:58" | 9 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-11T19:50:01" | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- axolotl
- roleplay
license: apache-2.0
language:
- en
---
# ⚡ExLlamaV2 quant of : [Pantheon-RP-1.0-8b-Llama-3](https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3)
> [!note]
> ➡️ **Exl2 version** : [0.0.20](https://github.com/turboderp/exllamav2/releases/tag/v0.0.20)<br/>
> ➡️ **Cal. dataset** : Default.<br/>
> 📄 <a href="https://huggingface.co/Meggido/Pantheon-RP-1.0-8b-Llama-3-6.5bpw-h8-exl2/resolve/main/measurement.json" download>Measurement.json</a> file.

# Pantheon-RP-1.0-8b-Llama-3
Pantheon Roleplay is a model that has been in development for the past six months or so. It started as a collection of personas and steadily grew into a full-fledged roleplaying model that also features a smart assistant in the form of Aiva.
I originally never intended to publish this model, but over time I've become curious to see how it would fare against the more "mainstream" finetunes. Guess I'm about to find out, huh?
**Note:** This is version 1.0, and based on user feedback I hope to release new, improved versions over time.
## Model details
This model features a highly diverse collection of datasets, totaling ~22 million tokens:
- For general instructions I created GPT 4 and Claude Opus variations of the No-Robots dataset. I actually ended up not including NoRo itself as it made the model worse.
- For roleplay I used an extensive collection of GPT 4 and Claude Opus data, augmented by the always popular LimaRP for the "human factor".
- The Pantheon Roleplay personas were made using Claude 1.3 data, further diversifying the outputs of this model.
- Aiva's persona includes additional datasets featuring questions related to DM world building, Python coding and RSS summarization. (She summarizes my daily news every day!)
Roughly 25% of the training data was instructional, with the rest being focused on roleplay. Each of these datasets was then carefully balanced to ensure diversity, removing examples where deemed necessary.
**TLDR;** Download. ChatML prompt format. Have fun! Leave feedback!
## Inference
I use the following settings for inference:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"top_p": 0.95
"top_k": 40
"min_p": 0.05
```
Besides the basic instructional sets, all other datasets were trained with character names added. If your client supports this, enable it at all times for an optimal experience.
**Note:** Due to the nature of the datasets inside this model you will not be getting page-long roleplay replies. On average, they will be about one or two paragraphs in length.
## Roleplay
The majority of the roleplaying data in this model uses an asterisks-for-actions, no-quotes-for-speech style, as that seems to be the norm nowadays.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs.
## Aiva the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Aiva.`
Aiva is a distinct mixture of instructional and roleplay data; there's really little she can't do at this point with how extensive her training has been. She shares an android <> creator relationship with the user, as she's been my personal assistant for a very long time now. I hope you like her!
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Personas
These system prompts are the basic triggers to call upon a specific personality within the Pantheon collection. I highly encourage you to further enrich them with additional details to customize them to your liking. Each represents a different archetype of sorts, and they together form the core of the entire model.
**Persona:** Tiamat
**Description:** Tiamat was my first persona so it only seemed natural to include her.
**System Prompt:** `You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.`
**Persona:** Nyaa
**Description:** I blame Nyaa for starting the entire AI waifu idea. Her dataset contains a lot of additional D&D worldbuilding advice.
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerun.`
**Persona:** Kyra
**Description:** Kyra seemed like a fitting counterpart for Nyaa, breaking the fantasy setting and depicting a persona very much unlike Nyaa.
**System Prompt:** `You are Kyra, a modern day tsundere wolfgirl.`
**Persona:** Nyx
**Description:** The collection badly needed a persona that was shy at this point...
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl.`
**Persona:** Tsune
**Description:** ...But then I realized we could also use a party girl.
**System Prompt:** `You are Tsune, a bold and outgoing kitsune girl.`
**Persona:** Sera
**Description:** Who doesn't like snake girls? She seems to borrow a bit from Tiamat's dialogue at times.
**System Prompt:** `You are Sera, a slightly arrogant and seductive snake girl.`
**Persona:** Haru
**Description:** Do not underestimate Haru! Her English might be lacking but her wits are sharp. She offers some amazing insights at times.
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl.`
**Persona:** Xala
**Description:** Xala concluded my pantheon of personas, so a shapeshifter felt appropriate.
**System Prompt:** `You are Xala, a surprising shapeshifting elf girl.`
## Prompt Format
ChatML is the way to go, as always!
```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Aiva.<|im_end|>
<|im_start|>user
Gryphe: Good day, Aiva.<|im_end|>
<|im_start|>assistant
Aiva:
```
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my first true base model. |
mradermacher/MonaSlerp-7B-GGUF | mradermacher | "2024-10-13T17:00:09" | 5 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us"
] | null | "2024-10-13T10:14:53" | ---
base_model: CultriX/MonaSlerp-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/CultriX/MonaSlerp-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MonaSlerp-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
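As one example, a minimal `llama-cpp-python` sketch (one of several GGUF-capable runtimes; the file name is taken from the table below and the local path and prompt are assumptions):
```python
from llama_cpp import Llama

# Point model_path at the downloaded quant (Q4_K_M shown; see the table below)
llm = Llama(model_path="MonaSlerp-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short poem about merged models.", max_tokens=64)
print(out["choices"][0]["text"])
```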
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MonaSlerp-7B-GGUF/resolve/main/MonaSlerp-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
roleplaiapp/MN-12B-Mag-Mell-R1-Q2_K-GGUF | roleplaiapp | "2025-01-27T12:51:07" | 5 | 0 | transformers | [
"transformers",
"gguf",
"12b",
"2-bit",
"Q2_K",
"llama-cpp",
"mag",
"mell",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-27T12:50:29" | ---
library_name: transformers
pipeline_tag: text-generation
tags:
- 12b
- 2-bit
- Q2_K
- gguf
- llama-cpp
- mag
- mell
- text-generation
---
# roleplaiapp/MN-12B-Mag-Mell-R1-Q2_K-GGUF
**Repo:** `roleplaiapp/MN-12B-Mag-Mell-R1-Q2_K-GGUF`
**Original Model:** `MN-12B-Mag-Mell-R1`
**Quantized File:** `MN-12B-Mag-Mell-R1.Q2_K.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q2_K`
## Overview
This is a GGUF Q2_K quantized version of MN-12B-Mag-Mell-R1
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
|
lesso14/de48d23f-38c3-4210-b37a-f398f3d0c562 | lesso14 | "2025-01-29T13:58:18" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | "2025-01-29T13:23:09" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: de48d23f-38c3-4210-b37a-f398f3d0c562
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d558c40fbc98e6e8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d558c40fbc98e6e8_train_data.json
type:
field_input: url
field_instruction: title
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso14/de48d23f-38c3-4210-b37a-f398f3d0c562
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/d558c40fbc98e6e8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0d141c29-81a8-444c-8294-39a5f2de7393
wandb_project: multi
wandb_run: your_name
wandb_runid: 0d141c29-81a8-444c-8294-39a5f2de7393
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# de48d23f-38c3-4210-b37a-f398f3d0c562
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.559 | 0.6803 | 200 | 2.1089 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BohdanPetryshyn/codellama-7b-openapi-completion-quick-fix | BohdanPetryshyn | "2024-05-06T17:43:13" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2024-04-23T20:49:28" | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: codellama-7b-openapi-completion-quick-fix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/bohdan-petryshyn/huggingface/runs/8034gquh)
# codellama-7b-openapi-completion-quick-fix
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4698
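A minimal loading sketch (assuming the adapter is applied on top of the base model with `peft`; the OpenAPI-style prompt is only an illustrative guess):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base, "BohdanPetryshyn/codellama-7b-openapi-completion-quick-fix")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "openapi: 3.0.0\ninfo:\n  title: "  # hypothetical completion prompt
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```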
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9033 | 0.1 | 100 | 0.4761 |
| 0.4185 | 0.2 | 200 | 0.4714 |
| 0.4128 | 0.3 | 300 | 0.4695 |
| 0.3661 | 0.4 | 400 | 0.4721 |
| 0.3184 | 0.5 | 500 | 0.4674 |
| 0.3352 | 0.6 | 600 | 0.4703 |
| 0.554 | 0.7 | 700 | 0.4770 |
| 0.3385 | 0.8 | 800 | 0.4704 |
| 0.2862 | 0.9 | 900 | 0.4690 |
| 0.385 | 1.0 | 1000 | 0.4698 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
keshan/ppo-lunar-lander-v1 | keshan | "2023-02-16T07:01:02" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-16T07:00:56" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -87.14 +/- 52.15
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'keshan/ppo-lunar-lander-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
korbih/whisper-small-hi | korbih | "2024-03-21T04:16:00" | 80 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-20T18:40:55" | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 32.984847202234825
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4377
- Wer: 32.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0.0 | 1 | 2.2652 | 86.7857 |
| 0.1858 | 1.22 | 500 | 0.3301 | 39.7317 |
| 0.0881 | 2.44 | 1000 | 0.2966 | 34.9065 |
| 0.0457 | 3.67 | 1500 | 0.3160 | 33.8695 |
| 0.0195 | 4.89 | 2000 | 0.3571 | 33.9287 |
| 0.0047 | 6.11 | 2500 | 0.3913 | 33.4843 |
| 0.0014 | 7.33 | 3000 | 0.4186 | 32.9637 |
| 0.0005 | 8.56 | 3500 | 0.4286 | 33.0737 |
| 0.0005 | 9.78 | 4000 | 0.4377 | 32.9848 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
gurpreetmukker/a2c-PandaReachDense-v3 | gurpreetmukker | "2023-12-09T21:55:39" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-09T21:51:12" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it as an A2C policy
checkpoint = load_from_hub(repo_id="gurpreetmukker/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
nickprock/setfit-banking77 | nickprock | "2023-11-20T08:03:00" | 12 | 0 | setfit | [
"setfit",
"pytorch",
"safetensors",
"text-classification",
"dataset:banking77",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-04-21T12:47:24" | ---
license: apache-2.0
tags:
- setfit
- text-classification
pipeline_tag: text-classification
datasets:
- banking77
widget:
- text: 'Can I track the card you sent to me? '
example_title: Card Arrival Example
- text: Can you explain your exchange rate policy to me?
example_title: Exchange Rate Example
- text: I can't pay by my credit card
example_title: Card Not Working Example
metrics:
- accuracy
- f1
---
# nickprock/setfit-banking77
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Train Hyperparameters
* Simulate the few-shot regime by sampling 25 examples per class
* Sentence Transformer checkpoint: *"sentence-transformers/paraphrase-distilroberta-base-v2"*
* Number of text pairs to generate for contrastive learning: 10
* Epochs: 1
* Batch size: 32
## Metrics on Evaluation set
* accuracy score: 0.8529
* f1 score: 0.8527
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nickprock/setfit-banking77")
# Run inference
preds = model(["I can't pay by my credit card"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
Reyga/results | Reyga | "2024-09-06T10:25:14" | 180 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:nateraw/vit-age-classifier",
"base_model:finetune:nateraw/vit-age-classifier",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-09-06T09:25:50" | ---
library_name: transformers
base_model: nateraw/vit-age-classifier
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [nateraw/vit-age-classifier](https://huggingface.co/nateraw/vit-age-classifier) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Accuracy: 0.9875
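A minimal inference sketch (assuming the checkpoint loads with the standard `transformers` image-classification pipeline; `photo.jpg` is a placeholder and the label set comes from the fine-tuning imagefolder data):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Reyga/results")
# Classify a local image (placeholder path)
print(classifier("photo.jpg"))
```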
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 1.5403 | 0.5375 |
| No log | 2.0 | 200 | 0.7882 | 0.725 |
| No log | 3.0 | 300 | 0.2481 | 0.9875 |
| No log | 4.0 | 400 | 0.1088 | 0.9875 |
| 0.8658 | 5.0 | 500 | 0.0824 | 0.9875 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/nb-llama-3.1-8B-sft-GGUF | mradermacher | "2025-02-13T03:37:17" | 0 | 0 | transformers | [
"transformers",
"gguf",
"norwegian",
"bokmål",
"nynorsk",
"swedish",
"danish",
"multilingual",
"text-generation",
"no",
"nb",
"nn",
"en",
"sv",
"da",
"base_model:NbAiLab/nb-llama-3.1-8B-sft",
"base_model:quantized:NbAiLab/nb-llama-3.1-8B-sft",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-13T03:14:22" | ---
base_model: NbAiLab/nb-llama-3.1-8B-sft
language:
- no
- nb
- nn
- en
- sv
- da
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- norwegian
- bokmål
- nynorsk
- swedish
- danish
- multilingual
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NbAiLab/nb-llama-3.1-8B-sft
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/nb-llama-3.1-8B-sft-GGUF/resolve/main/nb-llama-3.1-8B-sft.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dada22231/1a9fd1cb-c80e-4653-a3ef-2225f77125b7 | dada22231 | "2024-12-13T16:09:05" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:dltjdgh0928/test_instruction",
"base_model:adapter:dltjdgh0928/test_instruction",
"license:apache-2.0",
"region:us"
] | null | "2024-12-13T15:42:49" | ---
library_name: peft
license: apache-2.0
base_model: dltjdgh0928/test_instruction
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1a9fd1cb-c80e-4653-a3ef-2225f77125b7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dltjdgh0928/test_instruction
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
- cfd67fe14115d6fc_train_data.json
ds_type: json
format: custom
num_proc: 4
path: /workspace/input_data/cfd67fe14115d6fc_train_data.json
streaming: true
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: balanced
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: true
hub_model_id: dada22231/1a9fd1cb-c80e-4653-a3ef-2225f77125b7
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
1: 75GB
2: 75GB
3: 75GB
max_steps: 50
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/cfd67fe14115d6fc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
train_on_inputs: false
trust_remote_code: true
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: 1a9fd1cb-c80e-4653-a3ef-2225f77125b7
wandb_project: Public_TuningSN
wandb_runid: 1a9fd1cb-c80e-4653-a3ef-2225f77125b7
warmup_ratio: 0.04
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 1a9fd1cb-c80e-4653-a3ef-2225f77125b7
This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0072 | 25 | nan |
| 0.0 | 0.0144 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
maddes8cht/gorilla-llm-gorilla-mpt-7b-hf-v0-gguf | maddes8cht | "2023-11-22T20:26:15" | 138 | 1 | null | [
"gguf",
"api",
"en",
"dataset:gorilla-llm/APIBench",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2023-11-03T16:04:26" | ---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# gorilla-mpt-7b-hf-v0 - GGUF
- Model creator: [gorilla-llm](https://huggingface.co/gorilla-llm)
- Original model: [gorilla-mpt-7b-hf-v0](https://huggingface.co/gorilla-llm/gorilla-mpt-7b-hf-v0)
MPT-7b and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software uses it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
# Quantization variants
There are a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model; ask your model the same question twice and you may well see bigger differences between those two answers than between the quant and the original.
---
# Original Model Card:
license: apache-2.0
---
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
tigeryi/imagenet-tiger | tigeryi | "2024-05-03T23:23:50" | 190 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"data2vec-vision",
"image-classification",
"generated_from_trainer",
"base_model:tigeryi/imagenet1k",
"base_model:finetune:tigeryi/imagenet1k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-04-30T01:54:15" | ---
license: apache-2.0
base_model: tigeryi/imagenet1k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: imagenet-tiger
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imagenet-tiger
This model is a fine-tuned version of [tigeryi/imagenet1k](https://huggingface.co/tigeryi/imagenet1k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6595
- Accuracy: 0.8254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0771 | 1.0 | 1250 | 0.7663 | 0.7971 |
| 0.8206 | 2.0 | 2500 | 0.6772 | 0.8207 |
| 0.7212 | 3.0 | 3750 | 0.6595 | 0.8254 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
panipuri/Reinforce-pixelcopterv2 | panipuri | "2023-12-28T13:51:55" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-28T13:51:50" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopterv2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.40 +/- 31.71
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
siro1/Qwen-1.5B-32B-2048-8 | siro1 | "2025-02-16T00:27:11" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-15T08:30:31" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ammarpl/t5-base-finetuned-xsum-a | ammarpl | "2022-09-26T19:14:16" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-09-26T18:42:08" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
model-index:
- name: t5-base-finetuned-xsum-a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-xsum-a
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset.
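A minimal usage sketch (assuming the checkpoint works with the standard `transformers` text2text pipeline; the `summarize:` task prefix follows the usual T5 convention and may or may not match how this checkpoint was trained):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="ammarpl/t5-base-finetuned-xsum-a")
text = "Long passage to condense goes here."  # placeholder input
print(generator("summarize: " + text, max_length=60)[0]["generated_text"])
```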
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 0.01 | 171 | 3.4530 | 10.3823 | 1.5795 | 7.9705 | 9.2204 | 18.0629 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Mo-oN/donut-base-DO | Mo-oN | "2023-12-14T21:30:44" | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2023-12-13T11:00:55" | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-DO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-DO
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
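A minimal loading sketch (assuming the repo keeps the standard Donut processor/model layout; the image path is a placeholder and a task-specific decoder prompt may be required depending on how the model was fine-tuned):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Mo-oN/donut-base-DO")
model = VisionEncoderDecoderModel.from_pretrained("Mo-oN/donut-base-DO")

image = Image.open("document.png").convert("RGB")  # placeholder document image
pixel_values = processor(image, return_tensors="pt").pixel_values
# Depending on the fine-tuning, a task prompt passed via decoder_input_ids may be needed here
outputs = model.generate(pixel_values, max_length=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```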
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0
- Tokenizers 0.15.0
|
LisaSchunke/phi-3-128k-peft-finetuned-15000-dataset | LisaSchunke | "2024-06-25T17:31:28" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-25T17:28:49" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ReacherTN/Reinforce-CartPolev1 | ReacherTN | "2023-03-10T13:46:55" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-10T13:46:47" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPolev1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
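For context, a REINFORCE agent for CartPole is usually just a small softmax policy network trained on Monte-Carlo returns. The minimal PyTorch sketch below follows the structure used in the course; the layer sizes are illustrative assumptions rather than this checkpoint's exact architecture.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    # CartPole-v1 has a 4-dimensional observation and 2 discrete actions.
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

    def act(self, state):
        # Sample an action and keep its log-probability for the REINFORCE update.
        state = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
        probs = self.forward(state)
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)
```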
|
pig4431/TweetEval_ALBERT_5E | pig4431 | "2022-11-30T18:32:36" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-11-30T18:32:04" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_ALBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1990
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
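As an illustration that is not part of the original card, the checkpoint can be queried for tweet sentiment with the standard 🤗 pipeline; the example text is a placeholder, and the label mapping (0 = negative, 1 = neutral, 2 = positive in the tweet_eval sentiment config) is an assumption to verify against the model config.
```python
from transformers import pipeline

# Hypothetical usage sketch; the checkpoint may report generic LABEL_0/1/2 names,
# which in the tweet_eval "sentiment" config correspond to negative/neutral/positive.
classifier = pipeline("text-classification", model="pig4431/TweetEval_ALBERT_5E")
print(classifier("The new update is fantastic, everything feels faster!"))
```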
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4636 | 0.04 | 50 | 0.3662 | 0.8667 |
| 0.442 | 0.08 | 100 | 0.3471 | 0.84 |
| 0.3574 | 0.12 | 150 | 0.3446 | 0.86 |
| 0.392 | 0.16 | 200 | 0.6776 | 0.6267 |
| 0.4801 | 0.2 | 250 | 0.4307 | 0.7667 |
| 0.487 | 0.24 | 300 | 0.5127 | 0.8 |
| 0.4414 | 0.28 | 350 | 0.3912 | 0.8133 |
| 0.4495 | 0.32 | 400 | 0.4056 | 0.8333 |
| 0.4637 | 0.37 | 450 | 0.3635 | 0.8533 |
| 0.4231 | 0.41 | 500 | 0.4235 | 0.84 |
| 0.4049 | 0.45 | 550 | 0.4094 | 0.8067 |
| 0.4481 | 0.49 | 600 | 0.3977 | 0.7733 |
| 0.4024 | 0.53 | 650 | 0.3361 | 0.8733 |
| 0.3901 | 0.57 | 700 | 0.3014 | 0.8667 |
| 0.3872 | 0.61 | 750 | 0.3363 | 0.8533 |
| 0.377 | 0.65 | 800 | 0.3754 | 0.8 |
| 0.459 | 0.69 | 850 | 0.3861 | 0.8 |
| 0.437 | 0.73 | 900 | 0.3834 | 0.8333 |
| 0.3823 | 0.77 | 950 | 0.3541 | 0.8733 |
| 0.3561 | 0.81 | 1000 | 0.3177 | 0.84 |
| 0.4536 | 0.85 | 1050 | 0.4291 | 0.78 |
| 0.4457 | 0.89 | 1100 | 0.3193 | 0.86 |
| 0.3478 | 0.93 | 1150 | 0.3159 | 0.8533 |
| 0.4613 | 0.97 | 1200 | 0.3605 | 0.84 |
| 0.4081 | 1.01 | 1250 | 0.4291 | 0.7867 |
| 0.3849 | 1.06 | 1300 | 0.3114 | 0.8733 |
| 0.4071 | 1.1 | 1350 | 0.2939 | 0.8667 |
| 0.3484 | 1.14 | 1400 | 0.3212 | 0.84 |
| 0.3869 | 1.18 | 1450 | 0.2717 | 0.8933 |
| 0.3877 | 1.22 | 1500 | 0.3459 | 0.84 |
| 0.4245 | 1.26 | 1550 | 0.3404 | 0.8733 |
| 0.4148 | 1.3 | 1600 | 0.2863 | 0.8667 |
| 0.3542 | 1.34 | 1650 | 0.3377 | 0.86 |
| 0.4093 | 1.38 | 1700 | 0.2972 | 0.8867 |
| 0.3579 | 1.42 | 1750 | 0.3926 | 0.86 |
| 0.3892 | 1.46 | 1800 | 0.2870 | 0.8667 |
| 0.3569 | 1.5 | 1850 | 0.4027 | 0.8467 |
| 0.3493 | 1.54 | 1900 | 0.3069 | 0.8467 |
| 0.36 | 1.58 | 1950 | 0.3197 | 0.8733 |
| 0.3532 | 1.62 | 2000 | 0.3711 | 0.8667 |
| 0.3311 | 1.66 | 2050 | 0.2897 | 0.8867 |
| 0.346 | 1.7 | 2100 | 0.2938 | 0.88 |
| 0.3389 | 1.75 | 2150 | 0.2734 | 0.8933 |
| 0.3289 | 1.79 | 2200 | 0.2606 | 0.8867 |
| 0.3558 | 1.83 | 2250 | 0.3070 | 0.88 |
| 0.3277 | 1.87 | 2300 | 0.2757 | 0.8867 |
| 0.3166 | 1.91 | 2350 | 0.2759 | 0.8733 |
| 0.3223 | 1.95 | 2400 | 0.2053 | 0.9133 |
| 0.317 | 1.99 | 2450 | 0.2307 | 0.8867 |
| 0.3408 | 2.03 | 2500 | 0.2557 | 0.9067 |
| 0.3212 | 2.07 | 2550 | 0.2508 | 0.8867 |
| 0.2806 | 2.11 | 2600 | 0.2472 | 0.88 |
| 0.3567 | 2.15 | 2650 | 0.2790 | 0.8933 |
| 0.2887 | 2.19 | 2700 | 0.3197 | 0.88 |
| 0.3222 | 2.23 | 2750 | 0.2943 | 0.8667 |
| 0.2773 | 2.27 | 2800 | 0.2297 | 0.88 |
| 0.2728 | 2.31 | 2850 | 0.2813 | 0.8733 |
| 0.3115 | 2.35 | 2900 | 0.3470 | 0.8867 |
| 0.3001 | 2.39 | 2950 | 0.2702 | 0.8933 |
| 0.3464 | 2.44 | 3000 | 0.2855 | 0.9 |
| 0.3041 | 2.48 | 3050 | 0.2366 | 0.8867 |
| 0.2717 | 2.52 | 3100 | 0.3220 | 0.88 |
| 0.2903 | 2.56 | 3150 | 0.2230 | 0.9 |
| 0.2959 | 2.6 | 3200 | 0.2439 | 0.9067 |
| 0.2753 | 2.64 | 3250 | 0.2918 | 0.8733 |
| 0.2515 | 2.68 | 3300 | 0.2493 | 0.88 |
| 0.295 | 2.72 | 3350 | 0.2673 | 0.8867 |
| 0.2572 | 2.76 | 3400 | 0.2842 | 0.8733 |
| 0.2988 | 2.8 | 3450 | 0.2306 | 0.9067 |
| 0.2923 | 2.84 | 3500 | 0.2329 | 0.8933 |
| 0.2856 | 2.88 | 3550 | 0.2374 | 0.88 |
| 0.2867 | 2.92 | 3600 | 0.2294 | 0.8733 |
| 0.306 | 2.96 | 3650 | 0.2169 | 0.92 |
| 0.2312 | 3.0 | 3700 | 0.2456 | 0.88 |
| 0.2438 | 3.04 | 3750 | 0.2134 | 0.8867 |
| 0.2103 | 3.08 | 3800 | 0.2242 | 0.92 |
| 0.2469 | 3.12 | 3850 | 0.2407 | 0.92 |
| 0.2346 | 3.17 | 3900 | 0.1866 | 0.92 |
| 0.2275 | 3.21 | 3950 | 0.2318 | 0.92 |
| 0.2542 | 3.25 | 4000 | 0.2256 | 0.9 |
| 0.2544 | 3.29 | 4050 | 0.2246 | 0.9133 |
| 0.2468 | 3.33 | 4100 | 0.2436 | 0.8733 |
| 0.2105 | 3.37 | 4150 | 0.2098 | 0.9067 |
| 0.2818 | 3.41 | 4200 | 0.2304 | 0.88 |
| 0.2041 | 3.45 | 4250 | 0.2430 | 0.8933 |
| 0.28 | 3.49 | 4300 | 0.1990 | 0.9067 |
| 0.1997 | 3.53 | 4350 | 0.2515 | 0.8933 |
| 0.2409 | 3.57 | 4400 | 0.2315 | 0.9 |
| 0.1969 | 3.61 | 4450 | 0.2160 | 0.8933 |
| 0.2246 | 3.65 | 4500 | 0.1979 | 0.92 |
| 0.2185 | 3.69 | 4550 | 0.2238 | 0.9 |
| 0.259 | 3.73 | 4600 | 0.2011 | 0.9067 |
| 0.2407 | 3.77 | 4650 | 0.1911 | 0.92 |
| 0.2198 | 3.81 | 4700 | 0.2083 | 0.92 |
| 0.235 | 3.86 | 4750 | 0.1724 | 0.9267 |
| 0.26 | 3.9 | 4800 | 0.1640 | 0.9333 |
| 0.2334 | 3.94 | 4850 | 0.1778 | 0.9267 |
| 0.2121 | 3.98 | 4900 | 0.2062 | 0.8933 |
| 0.173 | 4.02 | 4950 | 0.1987 | 0.92 |
| 0.1942 | 4.06 | 5000 | 0.2509 | 0.8933 |
| 0.1703 | 4.1 | 5050 | 0.2179 | 0.9 |
| 0.1735 | 4.14 | 5100 | 0.2429 | 0.8867 |
| 0.2098 | 4.18 | 5150 | 0.1938 | 0.9267 |
| 0.2126 | 4.22 | 5200 | 0.1971 | 0.92 |
| 0.164 | 4.26 | 5250 | 0.2539 | 0.9067 |
| 0.2271 | 4.3 | 5300 | 0.1765 | 0.94 |
| 0.2245 | 4.34 | 5350 | 0.1894 | 0.94 |
| 0.182 | 4.38 | 5400 | 0.1790 | 0.9467 |
| 0.1835 | 4.42 | 5450 | 0.2014 | 0.9333 |
| 0.2185 | 4.46 | 5500 | 0.1881 | 0.9467 |
| 0.2113 | 4.5 | 5550 | 0.1742 | 0.9333 |
| 0.1997 | 4.55 | 5600 | 0.1762 | 0.94 |
| 0.1959 | 4.59 | 5650 | 0.1657 | 0.9467 |
| 0.2035 | 4.63 | 5700 | 0.1973 | 0.92 |
| 0.228 | 4.67 | 5750 | 0.1769 | 0.9467 |
| 0.1632 | 4.71 | 5800 | 0.1968 | 0.9267 |
| 0.1468 | 4.75 | 5850 | 0.1822 | 0.9467 |
| 0.1936 | 4.79 | 5900 | 0.1832 | 0.94 |
| 0.1743 | 4.83 | 5950 | 0.1987 | 0.9267 |
| 0.1654 | 4.87 | 6000 | 0.1943 | 0.9267 |
| 0.1859 | 4.91 | 6050 | 0.1990 | 0.92 |
| 0.2039 | 4.95 | 6100 | 0.1982 | 0.9267 |
| 0.2325 | 4.99 | 6150 | 0.1990 | 0.9267 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
google/multiberts-seed_1-step_500k | google | "2021-11-06T00:55:08" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_500k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05" | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_500k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 500k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 500k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_500k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_500k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
bartowski/Phi-3.5-MoE-instruct-GGUF | bartowski | "2025-01-10T05:32:32" | 5,049 | 8 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3.5-MoE-instruct",
"base_model:quantized:microsoft/Phi-3.5-MoE-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-01-10T03:14:36" | ---
quantized_by: bartowski
pipeline_tag: text-generation
license_link: https://huggingface.co/microsoft/Phi-3.5-MoE-instruct/resolve/main/LICENSE
tags:
- nlp
- code
base_model: microsoft/Phi-3.5-MoE-instruct
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
language:
- multilingual
license: mit
---
## Llamacpp imatrix Quantizations of Phi-3.5-MoE-instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4456">b4456</a> for quantization.
Original model: https://huggingface.co/microsoft/Phi-3.5-MoE-instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|system|> {system_prompt}<|end|><|user|> {prompt}<|end|><|assistant|>
```
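Outside LM Studio, one way to run these files is through the llama-cpp-python bindings. The sketch below is illustrative only: the chosen quant file, context size, and GPU-offload settings are assumptions to adapt to your hardware.
```python
from llama_cpp import Llama

# Assumes llama-cpp-python is installed and the Q4_K_M file has been downloaded locally.
llm = Llama(
    model_path="Phi-3.5-MoE-instruct-Q4_K_M.gguf",
    n_ctx=4096,        # context window used for this session
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# The chat template stored in the GGUF metadata should produce the prompt format above.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give three ways to combine bananas and dragonfruit."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```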
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Phi-3.5-MoE-instruct-f16.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/tree/main/Phi-3.5-MoE-instruct-f16) | f16 | 83.75GB | true | Full F16 weights. |
| [Phi-3.5-MoE-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q8_0.gguf) | Q8_0 | 44.50GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Phi-3.5-MoE-instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q6_K_L.gguf) | Q6_K_L | 34.42GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Phi-3.5-MoE-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q6_K.gguf) | Q6_K | 34.36GB | false | Very high quality, near perfect, *recommended*. |
| [Phi-3.5-MoE-instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q5_K_L.gguf) | Q5_K_L | 29.80GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Phi-3.5-MoE-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q5_K_M.gguf) | Q5_K_M | 29.72GB | false | High quality, *recommended*. |
| [Phi-3.5-MoE-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q5_K_S.gguf) | Q5_K_S | 28.82GB | false | High quality, *recommended*. |
| [Phi-3.5-MoE-instruct-Q4_1.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q4_1.gguf) | Q4_1 | 26.21GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Phi-3.5-MoE-instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q4_K_L.gguf) | Q4_K_L | 25.44GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Phi-3.5-MoE-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q4_K_M.gguf) | Q4_K_M | 25.35GB | false | Good quality, default size for most use cases, *recommended*. |
| [Phi-3.5-MoE-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q4_K_S.gguf) | Q4_K_S | 23.81GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Phi-3.5-MoE-instruct-Q4_0.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q4_0.gguf) | Q4_0 | 23.70GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Phi-3.5-MoE-instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ4_NL.gguf) | IQ4_NL | 23.62GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Phi-3.5-MoE-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ4_XS.gguf) | IQ4_XS | 22.32GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Phi-3.5-MoE-instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q3_K_XL.gguf) | Q3_K_XL | 21.80GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Phi-3.5-MoE-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q3_K_L.gguf) | Q3_K_L | 21.69GB | false | Lower quality but usable, good for low RAM availability. |
| [Phi-3.5-MoE-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q3_K_M.gguf) | Q3_K_M | 20.03GB | false | Low quality. |
| [Phi-3.5-MoE-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ3_M.gguf) | IQ3_M | 18.37GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Phi-3.5-MoE-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q3_K_S.gguf) | Q3_K_S | 18.06GB | false | Low quality, not recommended. |
| [Phi-3.5-MoE-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ3_XS.gguf) | IQ3_XS | 17.10GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Phi-3.5-MoE-instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q2_K_L.gguf) | Q2_K_L | 15.39GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Phi-3.5-MoE-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-Q2_K.gguf) | Q2_K | 15.27GB | false | Very low quality but surprisingly usable. |
| [Phi-3.5-MoE-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ2_M.gguf) | IQ2_M | 13.76GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Phi-3.5-MoE-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ2_S.gguf) | IQ2_S | 12.53GB | false | Low quality, uses SOTA techniques to be usable. |
| [Phi-3.5-MoE-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ2_XS.gguf) | IQ2_XS | 12.28GB | false | Low quality, uses SOTA techniques to be usable. |
| [Phi-3.5-MoE-instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Phi-3.5-MoE-instruct-GGUF/blob/main/Phi-3.5-MoE-instruct-IQ2_XXS.gguf) | IQ2_XXS | 11.03GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Phi-3.5-MoE-instruct-GGUF --include "Phi-3.5-MoE-instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Phi-3.5-MoE-instruct-GGUF --include "Phi-3.5-MoE-instruct-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Phi-3.5-MoE-instruct-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
KimByeongSu/gpt-neo-1.3B-cs-finetuning-500-1 | KimByeongSu | "2024-04-11T04:05:08" | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-1.3B",
"base_model:finetune:EleutherAI/gpt-neo-1.3B",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-11T02:15:29" | ---
license: mit
base_model: EleutherAI/gpt-neo-1.3B
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-cs-finetuning-500-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-cs-finetuning-500-1
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8937
## Model description
More information needed
## Intended uses & limitations
More information needed
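As a usage illustration not included in the original card, the fine-tuned checkpoint can be loaded for text generation with the standard 🤗 pipeline; the prompt below is a placeholder.
```python
from transformers import pipeline

# Illustrative sketch; the prompt and generation length are arbitrary choices.
generator = pipeline("text-generation", model="KimByeongSu/gpt-neo-1.3B-cs-finetuning-500-1")
print(generator("In computer science, a hash table is", max_new_tokens=40)[0]["generated_text"])
```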
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 26 | 2.9272 |
| No log | 2.0 | 52 | 2.8625 |
| No log | 3.0 | 78 | 2.8937 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.15.0
|
HongyangLi/MatScibert-finetuned-squad | HongyangLi | "2023-07-29T17:42:43" | 116 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-07-29T17:24:03" | ---
tags:
- generated_from_trainer
model-index:
- name: MatScibert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MatScibert-finetuned-squad
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0644
## Model description
More information needed
## Intended uses & limitations
More information needed
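As an illustration that is not part of the original card, the checkpoint can be used for extractive question answering with the 🤗 pipeline; the question and context below are placeholders.
```python
from transformers import pipeline

# Illustrative sketch; replace question/context with your own materials-science text.
qa = pipeline("question-answering", model="HongyangLi/MatScibert-finetuned-squad")
result = qa(
    question="What property does chromium improve?",
    context="Adding a small amount of chromium to the alloy improves its corrosion resistance.",
)
print(result["answer"], result["score"])
```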
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 26 | 3.3398 |
| No log | 2.0 | 52 | 2.2593 |
| No log | 3.0 | 78 | 2.0644 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.14.1
- Tokenizers 0.13.3
|
PrunaAI/MaziyarPanahi-Calme-7B-Instruct-v0.1.1-bnb-4bit-smashed | PrunaAI | "2024-08-02T16:00:50" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pruna-ai",
"conversational",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.1.1",
"base_model:quantized:MaziyarPanahi/Calme-7B-Instruct-v0.1.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-17T17:39:40" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo MaziyarPanahi/Calme-7B-Instruct-v0.1.1 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit smashed checkpoint; the tokenizer comes from the original base repository.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/MaziyarPanahi-Calme-7B-Instruct-v0.1.1-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.1.1")

# Tokenize a prompt, generate up to 216 new tokens, and decode the result.
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model MaziyarPanahi/Calme-7B-Instruct-v0.1.1 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
RRashmini/whisper-small-sinhala-10 | RRashmini | "2025-01-01T11:56:00" | 161 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-01-01T11:55:05" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/Sally_bofuri_-Pony | LarryAIDraw | "2024-06-14T19:13:50" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-14T19:05:44" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/514124/sally-bofuri-i-dont-want-to-get-hurt-so-ill-max-out-my-defense |
sleepdeprived3/UnslopNemo-12B-v3_3bpw_H8 | sleepdeprived3 | "2025-01-26T22:55:15" | 5 | 0 | null | [
"safetensors",
"mistral",
"text-generation",
"transformer",
"large-language-model",
"mistral-nemo",
"conversational",
"en",
"license:apache-2.0",
"3-bit",
"exl2",
"region:us"
] | text-generation | "2025-01-24T23:54:22" | ---
language: en
license: apache-2.0
tags:
- text-generation
- transformer
- large-language-model
- mistral-nemo
---
## Introduction
The UnslopNemo-12B-v3 model, developed by the talented TheDrummer, is a cutting-edge addition to the world of AI. Designed specifically for uncensored roleplay, this model breaks away from the usual limitations and slop found in other LLMs, offering a more dynamic and engaging experience.
## Model Details
- **Model Name:** UnslopNemo-12B-v3
- **Developer:** TheDrummer
- **Instruction Type:** metharme instruct
- **Quantization:** EXL2
## Recommendations for Use
For optimal performance and to unlock the full potential of the UnslopNemo-12B-v3 model, it is highly recommended to use either MethCeption, available at [MethCeption Discord Channel](https://discord.com/channels/1238219753324281886/1319554100496433212), or Universal Drummer Settings, found at [Universal Drummer Settings Discord Channel](https://discord.com/channels/1238219753324281886/1314398580613709915). These resources have been carefully curated to enhance your experience with the model.
## Support the Developer
TheDrummer's work is made possible by the support of the community. If you find the UnslopNemo-12B-v3 model valuable, consider showing your appreciation by donating via [Ko-fi](https://ko-fi.com/thedrummer). Your support enables the creation of more innovative models like this one.
## Join the Community
To stay updated on the latest developments, receive support, and be part of a vibrant community of users and developers, join TheDrummer's Discord server at [TheDrummer's Discord Server](https://discord.gg/SppPu776Js). This is the perfect place to learn more about the model, share your experiences, and get involved in shaping the future of AI together.
## Conclusion
The UnslopNemo-12B-v3 model represents a significant leap forward in AI technology, thanks to TheDrummer's dedication and expertise. By following the recommendations outlined above and supporting the developer, you can maximize the benefits of this model and contribute to the advancement of the field.
|
chendelong/DirectSAM-b0-1024px-sa1b-2ep-dsa-50ep-1101 | chendelong | "2024-11-01T11:06:34" | 35 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-11-01T11:06:29" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sismetanin/sbert-ru-sentiment-krnd | sismetanin | "2021-05-20T06:27:51" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"sentiment analysis",
"Russian",
"SBERT-Large",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05" | ---
language:
- ru
tags:
- sentiment analysis
- Russian
- SBERT-Large
---
## SBERT-Large on Kaggle Russian News Dataset
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F<sub>1</sub></td>
<td>macro F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>micro F<sub>1</sub></td>
<td>macro F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>weighted F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
<td>F<sub>1</sub></td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
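For reference, a minimal inference sketch with the 🤗 Transformers API is shown below. It is not part of the original card, and the mapping from class indices to sentiment labels is an assumption that should be verified against the KRND annotation scheme.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Sketch only: the index-to-sentiment mapping is not documented in this card.
tokenizer = AutoTokenizer.from_pretrained("sismetanin/sbert-ru-sentiment-krnd")
model = AutoModelForSequenceClassification.from_pretrained("sismetanin/sbert-ru-sentiment-krnd")

text = "Отличные новости для экономики региона."  # "Great news for the regional economy."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```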
|
SamuelAllen1234/testing | SamuelAllen1234 | "2022-09-04T18:07:25" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | "2022-09-04T18:04:46" | ---
language:
- en
license: mit
tags:
- summarization
datasets:
- cnn_dailymail
metrics:
- ROUGE-1
- ROUGE-2
- ROUGE-L
- ROUGE-LSUM
- loss
- gen_len
--- |
unsloth/Qwen2-VL-2B-Instruct-bnb-4bit | unsloth | "2024-11-22T06:56:02" | 261 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"qwen",
"qwen2",
"unsloth",
"vision",
"conversational",
"en",
"arxiv:2409.12191",
"arxiv:2308.12966",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | "2024-11-20T07:59:00" | ---
base_model: Qwen/Qwen2-VL-2B-Instruct
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
license: apache-2.0
tags:
- multimodal
- qwen
- qwen2
- unsloth
- transformers
- vision
---
# Finetune Llama 3.2, Qwen 2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Qwen2-VL (7B) here: https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing
And a free notebook for [Llama 3.2 Vision (11B) here](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Qwen2-VL-2B-Instruct-bnb-4bit
For more details on the model, please go to Qwen's original [model card](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct)
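As a quick orientation (not part of the original card), the pre-quantized weights can be loaded directly with Unsloth's vision API; this is a minimal sketch assuming a CUDA GPU and the unsloth package installed.
```python
from unsloth import FastVisionModel

# Minimal loading sketch; load_in_4bit matches the pre-quantized bnb-4bit weights in this repo.
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Qwen2-VL-2B-Instruct-bnb-4bit",
    load_in_4bit = True,
)
FastVisionModel.for_inference(model)  # switch to inference mode before generating
```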
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) | 2x faster | 40% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) | 1.8x faster | 40% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Qwen team for creating and releasing these models.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
</p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
</p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** |
| :--- | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** |
| DocVQA<sub>test</sub> | 86.9 | - | **90.1** |
| InfoVQA<sub>test</sub> | 58.9 | - | **65.5** |
| ChartQA<sub>test</sub> | **76.2** | - | 73.5 |
| TextVQA<sub>val</sub> | 73.4 | - | **79.7** |
| OCRBench | 781 | 605 | **794** |
| MTVQA | - | - | **20.0** |
| VCR<sub>en easy</sub> | - | - | **81.45**
| VCR<sub>zh easy</sub> | - | - | **46.16**
| RealWorldQA | 57.3 | 55.8 | **62.9** |
| MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** |
| MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** |
| MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** |
| MMT-Bench<sub>test</sub> | - | - | **54.5** |
| MMStar | **49.8** | 39.1 | 48.0 |
| MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** |
| MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 |
| MathVision | - | - | **12.4** |
### Video Benchmarks
| Benchmark | **Qwen2-VL-2B** |
| :--- | :---: |
| MVBench | **63.2** |
| PerceptionTest<sub>test</sub> | **53.9** |
| EgoSchema<sub>test</sub> | **54.9** |
| Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** |
## Requirements
The code for Qwen2-VL is in the latest Hugging Face Transformers, and we advise you to build from source with `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
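Note that the snippets in this card load the original `Qwen/Qwen2-VL-2B-Instruct` checkpoint. Since this repo hosts the pre-quantized bnb-4bit weights, you can also point `from_pretrained` directly at it. A minimal sketch, assuming `bitsandbytes` is installed (fall back to the original repo for the processor if its files are missing here):
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

# Load the pre-quantized 4-bit checkpoint from this repo (requires bitsandbytes and a GPU)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "unsloth/Qwen2-VL-2B-Instruct-bnb-4bit", device_map="auto"
)
processor = AutoProcessor.from_pretrained("unsloth/Qwen2-VL-2B-Instruct-bnb-4bit")
```
The rest of the pipeline (messages, `process_vision_info`, generation) stays the same.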
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a list of images as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image at the desired position in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
In addition, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
``` |
bartowski/Hermes-2-Pro-Mistral-10.7B-exl2 | bartowski | "2024-03-31T18:38:36" | 7 | 2 | transformers | [
"transformers",
"mergekit",
"merge",
"Mistral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-31T18:38:36" | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
model-index:
- name: Hermes-2-Pro-Mistral-10.7B
results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Hermes-2-Pro-Mistral-10.7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.17">turboderp's ExLlamaV2 v0.0.17</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/Joseph717171/Hermes-2-Pro-Mistral-10.7B
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-exl2/tree/8_0) | 8.0 | 8.0 | 11.9 GB | 13.3 GB | 15.3 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-exl2/tree/6_5) | 6.5 | 8.0 | 10.3 GB | 11.7 GB | 13.7 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-exl2/tree/5_0) | 5.0 | 6.0 | 8.3 GB | 9.7 GB | 11.7 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-exl2/tree/4_25) | 4.25 | 6.0 | 7.4 GB | 8.6 GB | 10.6 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 7.8 GB | 9.8 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-exl2 Hermes-2-Pro-Mistral-10.7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Hermes-2-Pro-Mistral-10.7B-exl2`:
```shell
mkdir Hermes-2-Pro-Mistral-10.7B-exl2
huggingface-cli download bartowski/Hermes-2-Pro-Mistral-10.7B-exl2 --local-dir Hermes-2-Pro-Mistral-10.7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Hermes-2-Pro-Mistral-10.7B-exl2-6_5
huggingface-cli download bartowski/Hermes-2-Pro-Mistral-10.7B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Mistral-10.7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Hermes-2-Pro-Mistral-10.7B-exl2-6.5
huggingface-cli download bartowski/Hermes-2-Pro-Mistral-10.7B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Mistral-10.7B-exl2-6.5 --local-dir-use-symlinks False
```
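Alternatively, a short Python sketch with `huggingface_hub` (here assuming you want the 6.5 bpw branch; adjust `revision` and the target folder as needed):
```python
from huggingface_hub import snapshot_download

# Download the 6_5 branch of the exl2 quant to a local folder
snapshot_download(
    repo_id="bartowski/Hermes-2-Pro-Mistral-10.7B-exl2",
    revision="6_5",
    local_dir="Hermes-2-Pro-Mistral-10.7B-exl2-6_5",
)
```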
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
LittleFish-Coder/distilbert-base-uncased-kdd2020 | LittleFish-Coder | "2024-12-11T11:23:11" | 128 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-18T15:38:03" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mllm-dev/gpt2_m_experiment_dare_linear_1000 | mllm-dev | "2024-04-25T20:52:10" | 110 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:mllm-dev/gpt2_f_experiment_0_1000",
"base_model:merge:mllm-dev/gpt2_f_experiment_0_1000",
"base_model:mllm-dev/gpt2_f_experiment_1_1000",
"base_model:merge:mllm-dev/gpt2_f_experiment_1_1000",
"base_model:mllm-dev/gpt2_f_experiment_2_1000",
"base_model:merge:mllm-dev/gpt2_f_experiment_2_1000",
"base_model:mllm-dev/gpt2_f_experiment_3_1000",
"base_model:merge:mllm-dev/gpt2_f_experiment_3_1000",
"base_model:mllm-dev/gpt2_f_experiment_4_1000",
"base_model:merge:mllm-dev/gpt2_f_experiment_4_1000",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-23T20:46:01" | ---
base_model:
- mllm-dev/gpt2_f_experiment_2_1000
- mllm-dev/gpt2_f_experiment_1_1000
- mllm-dev/gpt2_f_experiment_4_1000
- mllm-dev/gpt2_f_experiment_0_1000
- mllm-dev/gpt2_f_experiment_3_1000
library_name: transformers
tags:
- mergekit
- merge
---
# sean_test_merge_out
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mllm-dev/gpt2_f_experiment_0_1000](https://huggingface.co/mllm-dev/gpt2_f_experiment_0_1000) as a base.
### Models Merged
The following models were included in the merge:
* [mllm-dev/gpt2_f_experiment_2_1000](https://huggingface.co/mllm-dev/gpt2_f_experiment_2_1000)
* [mllm-dev/gpt2_f_experiment_1_1000](https://huggingface.co/mllm-dev/gpt2_f_experiment_1_1000)
* [mllm-dev/gpt2_f_experiment_4_1000](https://huggingface.co/mllm-dev/gpt2_f_experiment_4_1000)
* [mllm-dev/gpt2_f_experiment_3_1000](https://huggingface.co/mllm-dev/gpt2_f_experiment_3_1000)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: mllm-dev/gpt2_f_experiment_0_1000
dtype: float16
merge_method: dare_ties
parameters:
normalize: 1.0
slices:
- sources:
- layer_range: [0, 12]
model:
model:
path: mllm-dev/gpt2_f_experiment_0_1000
- layer_range: [0, 12]
model:
model:
path: mllm-dev/gpt2_f_experiment_1_1000
parameters:
density: 0.8
weight: 0.3
- layer_range: [0, 12]
model:
model:
path: mllm-dev/gpt2_f_experiment_2_1000
parameters:
density: 0.6
weight: 0.1
- layer_range: [0, 12]
model:
model:
path: mllm-dev/gpt2_f_experiment_3_1000
parameters:
density: 0.6
weight: 0.1
- layer_range: [0, 12]
model:
model:
path: mllm-dev/gpt2_f_experiment_4_1000
parameters:
density: 0.8
weight: 0.3
```
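For reference, a configuration like this is normally applied with the mergekit CLI, e.g. `mergekit-yaml merge_config.yaml ./output-model-directory --cuda`; the config file name and output path here are placeholders, not part of this repo.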
|
quangtqv/reranker_v1 | quangtqv | "2024-08-01T09:59:54" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-01T09:59:28" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thisnick/Llama-3.3-70B-Instruct-abliterated-FP8-Dynamic | thisnick | "2025-01-30T17:13:31" | 5 | 0 | null | [
"safetensors",
"llama",
"autoquant",
"fp8",
"en",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | "2025-01-30T17:08:45" | ---
language:
- en
license: apache-2.0
tags:
- autoquant
- fp8
---
# Llama-3.3-70B-Instruct-abliterated-FP8-Dynamic
This is a quantized version of [thisnick/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/thisnick/Llama-3.3-70B-Instruct-abliterated) using FP8 quantization.
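A minimal serving sketch with vLLM follows; this is an assumption about tooling, since compressed-tensors FP8 checkpoints are typically run on a recent vLLM build with enough GPU memory (or multiple GPUs) for a 70B model:
```python
from vllm import LLM, SamplingParams

# vLLM reads the compressed-tensors FP8 config from the checkpoint automatically
llm = LLM(
    model="thisnick/Llama-3.3-70B-Instruct-abliterated-FP8-Dynamic",
    # tensor_parallel_size=4,  # uncomment for multi-GPU serving
)
outputs = llm.generate(
    ["Explain FP8 dynamic quantization in one paragraph."],
    SamplingParams(temperature=0.7, max_tokens=256),
)
print(outputs[0].outputs[0].text)
```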
|
hongngo/f08a687c-8bb3-4a06-8652-10b7d065bcb7 | hongngo | "2025-01-13T05:16:21" | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-13T04:47:33" | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f08a687c-8bb3-4a06-8652-10b7d065bcb7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47a2a8f4f660ab8a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47a2a8f4f660ab8a_train_data.json
type:
field_input: turn_1_kwargs
field_instruction: turn_1_prompt
field_output: turn_2_prompt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/f08a687c-8bb3-4a06-8652-10b7d065bcb7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/47a2a8f4f660ab8a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4ee7740f-c4dc-4a19-a545-96cd7a571559
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4ee7740f-c4dc-4a19-a545-96cd7a571559
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f08a687c-8bb3-4a06-8652-10b7d065bcb7
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7439 | 0.3916 | 200 | 0.3148 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nagabu/food_classifier | Nagabu | "2024-05-08T11:04:48" | 64 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-07T08:28:25" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: Nagabu/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Nagabu/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.3179
- Validation Loss: 2.3461
- Train Accuracy: 0.788
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 3.3179 | 2.3461 | 0.788 | 0 |
### Framework versions
- Transformers 4.41.0.dev0
- TensorFlow 2.16.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
rbc33/spam_not_spam | rbc33 | "2024-12-08T00:38:12" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-08T00:38:00" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: spam_not_spam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spam_not_spam
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0573
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0984 | 1.0 | 2230 | 0.0637 | 0.9848 |
| 0.0484 | 2.0 | 4460 | 0.0573 | 0.9865 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
wangrongsheng/Generate-News-Abstract-7b-chat | wangrongsheng | "2023-09-08T01:50:47" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-07T11:08:11" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
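In code, the configuration above corresponds roughly to the `BitsAndBytesConfig` below; the loading snippet is a sketch, assuming you reload the adapter on top of its base model (read from the adapter config) with the same 4-bit settings:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

repo = "wangrongsheng/Generate-News-Abstract-7b-chat"

# Reconstruct the 4-bit quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# The adapter config records which base model it was trained on
peft_config = PeftConfig.from_pretrained(repo)
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)
```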
### Framework versions
- PEFT 0.4.0
|
bveiseh/phi4-magpie-reasoning-v2-gguf | bveiseh | "2025-02-13T00:39:23" | 1 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-12T04:28:39" | This is version 2 of a Phi-4 fine tune using the Magpie Deepseek Reasoning dataset. In order to produce reasoning traces, make sure to tell the model in the system prompt to use them. Format is <think> response </think> response. You may have to try a few prompts to get it right.
UPDATED VERSION: bveiseh/phi4-magpie-reasoning-v3-gguf |
david19861234/COrk_COunty_COuncil | david19861234 | "2024-06-03T13:40:07" | 0 | 0 | fasttext | [
"fasttext",
"legal",
"summarization",
"en",
"dataset:HuggingFaceFW/fineweb",
"license:gpl-3.0",
"region:us"
] | summarization | "2024-06-03T13:36:13" | ---
license: gpl-3.0
language:
- en
datasets:
- HuggingFaceFW/fineweb
metrics:
- accuracy
library_name: fasttext
pipeline_tag: summarization
tags:
- legal
--- |
aningddd/vit-augmented | aningddd | "2024-10-19T06:06:49" | 167 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-10-19T06:06:29" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-augmented
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7627
- Accuracy: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2553 | 1.0 | 240 | 1.2678 | 0.4939 |
| 0.9239 | 2.0 | 480 | 0.9428 | 0.6534 |
| 0.5559 | 3.0 | 720 | 0.8016 | 0.7161 |
| 0.303 | 4.0 | 960 | 0.7304 | 0.7509 |
| 0.1581 | 5.0 | 1200 | 0.7179 | 0.7684 |
| 0.1043 | 6.0 | 1440 | 0.6920 | 0.7911 |
| 0.0394 | 7.0 | 1680 | 0.7819 | 0.7840 |
| 0.0214 | 8.0 | 1920 | 0.7248 | 0.8047 |
| 0.0173 | 9.0 | 2160 | 0.7635 | 0.8083 |
| 0.0114 | 10.0 | 2400 | 0.7627 | 0.8096 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
tensorblock/Mistral-7B-SFT-GGUF | tensorblock | "2024-12-24T22:36:55" | 48 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"dataset:glaiveai/glaive-code-assistant-v2",
"dataset:garage-bAInd/Open-Platypus",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:LDJnr/Capybara",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-24T22:03:51" | ---
datasets:
- glaiveai/glaive-code-assistant-v2
- garage-bAInd/Open-Platypus
- OpenAssistant/oasst_top1_2023-08-25
- LDJnr/Capybara
license: cc-by-nc-4.0
tags:
- TensorBlock
- GGUF
base_model: Locutusque/Mistral-7B-SFT
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Locutusque/Mistral-7B-SFT - GGUF
This repo contains GGUF format model files for [Locutusque/Mistral-7B-SFT](https://huggingface.co/Locutusque/Mistral-7B-SFT).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<s>[INST] {prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-SFT-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-7B-SFT-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Mistral-7B-SFT-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Mistral-7B-SFT-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Mistral-7B-SFT-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-7B-SFT-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Mistral-7B-SFT-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Mistral-7B-SFT-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-7B-SFT-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Mistral-7B-SFT-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Mistral-7B-SFT-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Mistral-7B-SFT-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-SFT-GGUF/blob/main/Mistral-7B-SFT-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Mistral-7B-SFT-GGUF --include "Mistral-7B-SFT-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-7B-SFT-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
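The same download can also be done from Python with `huggingface_hub` (a sketch; pick whichever quant file you need):
```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repo to a local directory
path = hf_hub_download(
    repo_id="tensorblock/Mistral-7B-SFT-GGUF",
    filename="Mistral-7B-SFT-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)
```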
|
nathanialhunt/6c5c397e-935d-423e-b436-3a61eb2eb1ad | nathanialhunt | "2025-01-21T08:36:28" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | "2025-01-21T08:32:54" | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6c5c397e-935d-423e-b436-3a61eb2eb1ad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 51bc3afb77cbbf5e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/51bc3afb77cbbf5e_train_data.json
type:
field_instruction: categories
field_output: target_sentence
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/6c5c397e-935d-423e-b436-3a61eb2eb1ad
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/51bc3afb77cbbf5e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d933991-f8cb-47bd-8acc-25738b48c6a7
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4d933991-f8cb-47bd-8acc-25738b48c6a7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6c5c397e-935d-423e-b436-3a61eb2eb1ad
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.3561 | 0.0003 | 1 | 3.5350 |
| 3.3884 | 0.0009 | 3 | 3.5138 |
| 3.2228 | 0.0018 | 6 | 3.2876 |
| 2.6876 | 0.0027 | 9 | 2.7232 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LoneStriker/PiVoT-10.7B-Mistral-v0.2-6.0bpw-h6-exl2-2 | LoneStriker | "2023-12-16T14:00:29" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-16T13:55:27" | ---
license: cc-by-sa-4.0
language:
- en
- ko
pipeline_tag: text-generation
---
# PiVoT-10.7B-Mistral-v0.2

# **Model Details**
### Description
PiVoT is a fine-tuned model based on a merge of Mistral 0.2.
SFT and DPO were used during training.
Follow me on Twitter: https://twitter.com/stablefluffy
Consider supporting me in making these models: https://www.buymeacoffee.com/mwell or with a Runpod Credit Gift 💕
Contact me on Telegram: https://t.me/AlzarTakkarsen |
shailja/fine-tuned-codegen-16B-Verilog | shailja | "2023-08-30T16:57:18" | 114 | 12 | transformers | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"code",
"dataset:shailja/Verilog_GitHub",
"arxiv:2212.11140",
"license:bigcode-openrail-m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-12-30T16:46:58" | ---
pipeline_tag: text-generation
inference: true
widget:
- text: module display_hello_word
example_title: Hello world
group: Verilog
license: bigcode-openrail-m
datasets:
- shailja/Verilog_GitHub
library_name: transformers
tags:
- code
model-index:
- name: VeriGen
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: VeriEval (Prompted)
metrics:
- name: pass@1
type: pass@1
value:
verified: false
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
# VeriGen
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
The VeriGen model is a 16B-parameter fine-tuned version of [CodeGen-multi-16B](https://github.com/salesforce/codegen), trained on a [Verilog code dataset](https://huggingface.co/datasets/shailja/Verilog_GitHub).
- **Repository:** [shailja-thakur/VGen](https://github.com/shailja-thakur/VGen)
- **Baseline LLM** [SalesForce/CodeGen](https://github.com/salesforce/CodeGen)
- **Paper:** [ Benchmarking Large Language Models for Automated Verilog RTL Code Generation](https://arxiv.org/abs/2212.11140)
- **Point of Contact:** [contact@shailja](mailto:[email protected])
- **Languages:** Verilog (Hardware Description Language)
## Use
### Intended use
The model was trained on Verilog from GitHub and textbooks. As such it is _not_ an instruction model, and commands like "Write a module that implements a 2-to-1 Mux." do not work well. However, adding a partial module header such as "module mux" along with the text in the prompt turns it into a capable Verilog teaching assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Prompt
prompt = "//module half adder "
device='cuda'
# Load model and tokenizer
model_name = "shailja/fine-tuned-codegen-16B-Verilog"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
# Sample
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
sample = model.generate(input_ids, max_length=128, do_sample=True, temperature=0.5, top_p=0.9)  # do_sample=True so temperature/top_p take effect
print(tokenizer.decode(sample[0], truncate_before_pattern=[r"endmodule"]) + "endmodule")
```
### Attribution & Other Requirements
The pretraining dataset of the model was not filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected.
# Limitations
The model has been trained on Verilog source code from open sources. The predominant natural language in source code is English, although other languages are also present. As such, the model is capable of generating Verilog snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention
- **Pretraining steps:** 150k
- **Pretraining tokens:** ~72B
- **Precision:** fp16
## Hardware
- **GPUs:** 4 Tesla A100
- **Training time:** 15 days
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@misc{https://doi.org/10.48550/arxiv.2212.11140,
doi = {10.48550/ARXIV.2212.11140},
url = {https://arxiv.org/abs/2212.11140},
author = {Thakur, Shailja and Ahmad, Baleegh and Fan, Zhenxing and Pearce, Hammond and Tan, Benjamin and Karri, Ramesh and Dolan-Gavitt, Brendan and Garg, Siddharth},
title = {Benchmarking Large Language Models for Automated Verilog RTL Code Generation},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
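A minimal loading sketch is shown below. The dataset id is a placeholder since the Hub repository name is not stated in this card; substitute the actual id before running.

```python
from datasets import load_dataset

# "user/hub-model-cards" is a placeholder -- replace it with the actual Hub id
# of this dataset before running.
ds = load_dataset("user/hub-model-cards", split="train")
print(ds)             # single "train" split, one row per model card
print(ds[0].keys())   # inspect the available columns
```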
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
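As a concrete illustration of that alternative route, the sketch below uses the `huggingface_hub` client library to fetch a single model card directly from the Hub; the model id is just an example.

```python
from huggingface_hub import ModelCard

# Fetch one card straight from the Hub instead of going through this dataset.
card = ModelCard.load("bert-base-uncased")  # any public model id works
print(card.data)        # parsed YAML metadata from the card header
print(card.text[:500])  # first 500 characters of the card body
```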
### Source Data
The source data consists of the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
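A hedged sketch of what one pass of such a collection job could look like is shown below; the actual pipeline is not published here, and the calls shown are standard `huggingface_hub` functions rather than the real script.

```python
from huggingface_hub import HfApi, hf_hub_download

# Illustrative collection pass (not the production pipeline): list public
# models and fetch their README.md where one exists.
api = HfApi()
for model in api.list_models(limit=10):
    try:
        path = hf_hub_download(repo_id=model.id, filename="README.md", repo_type="model")
        print("downloaded", model.id, "->", path)
    except Exception:
        # Repositories without a README.md are skipped.
        pass
```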
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact