Dataset columns (ranges as shown by the dataset viewer):

| column | dtype | range |
|:--|:--|:--|
| modelId | string | length 5–138 |
| author | string | length 2–42 |
| last_modified | date | 2020-02-15 11:33:14 – 2025-04-28 00:41:00 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 438 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 – 2025-04-28 00:38:08 |
| card | string | length 11 – 1.01M |

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Kamil004/Llama-3.2-1B-Instruct_FT | Kamil004 | "2025-02-03T17:57:46Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-03T14:25:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
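The card leaves this section as a placeholder; below is a minimal, hypothetical sketch (not from the original card) that assumes the repo loads as a standard `transformers` Llama chat checkpoint, with the repo id taken from the header row above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Kamil004/Llama-3.2-1B-Instruct_FT"  # repo id from the header above
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Build a chat prompt and generate a short completion
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    return_tensors="pt",
    add_generation_prompt=True,
)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```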
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TransferGraph/liangyuant_distilbert-base-uncased-finetuned-num200-450-405cls-finetuned-lora-tweet_eval_hate | TransferGraph | "2024-02-29T13:42:21Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls",
"base_model:adapter:liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls",
"model-index",
"region:us"
] | text-classification | "2024-02-29T13:42:19Z" | ---
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls
model-index:
- name: liangyuant_distilbert-base-uncased-finetuned-num200-450-405cls-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.733
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# liangyuant_distilbert-base-uncased-finetuned-num200-450-405cls-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls](https://huggingface.co/liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.733
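A minimal usage sketch (not part of the original card; it assumes the LoRA adapter, loaded via `peft`, carries the tweet_eval-hate classification head):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls"
adapter = ("TransferGraph/liangyuant_distilbert-base-uncased-finetuned-"
           "num200-450-405cls-finetuned-lora-tweet_eval_hate")

tokenizer = AutoTokenizer.from_pretrained(base)
model = PeftModel.from_pretrained(
    AutoModelForSequenceClassification.from_pretrained(base), adapter
)

# Classify one tweet (in tweet_eval's hate config, label 1 = hate)
inputs = tokenizer("I love everyone!", return_tensors="pt")
print(model(**inputs).logits.argmax(-1).item())
```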
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.498 | None | 0 |
| 0.723 | 0.5590 | 0 |
| 0.73 | 0.4761 | 1 |
| 0.726 | 0.4359 | 2 |
| 0.733 | 0.4139 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
robiulawaldev/6cb57a81-043c-4451-8f46-4649e5958d13 | robiulawaldev | "2025-01-27T05:59:07Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gptj",
"axolotl",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"region:us"
] | null | "2025-01-27T05:34:10Z" | ---
library_name: peft
base_model: furiosa-ai/mlperf-gpt-j-6b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6cb57a81-043c-4451-8f46-4649e5958d13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: furiosa-ai/mlperf-gpt-j-6b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 80b3f2b5f3ce3209_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80b3f2b5f3ce3209_train_data.json
type:
field_input: headline_a
field_instruction: rendered_input
field_output: headline_b
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/6cb57a81-043c-4451-8f46-4649e5958d13
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/80b3f2b5f3ce3209_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3797da9e-eaaf-4f36-ac37-50d8b1c8015a
wandb_project: Birthday-SN56-35-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3797da9e-eaaf-4f36-ac37-50d8b1c8015a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6cb57a81-043c-4451-8f46-4649e5958d13
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6576
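A minimal loading sketch (not from the original card), pairing the LoRA adapter with the base model named in the config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "furiosa-ai/mlperf-gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base),
    "robiulawaldev/6cb57a81-043c-4451-8f46-4649e5958d13",
)
```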
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.1741 | 0.0000 | 1 | 1.8781 |
| 7.0916 | 0.0005 | 13 | 1.6479 |
| 1.7076 | 0.0010 | 26 | 0.7791 |
| 0.9032 | 0.0015 | 39 | 0.6576 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NiallRooney/flan-t5-base_PREFIX_TUNING_SEQ2SEQ | NiallRooney | "2023-12-06T16:50:50Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
] | null | "2023-12-06T16:50:49Z" | ---
library_name: peft
base_model: google/flan-t5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
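The card leaves this blank; here is a minimal, hypothetical sketch (not from the original card) that attaches the prefix-tuning adapter to its base model via `peft`:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = PeftModel.from_pretrained(
    AutoModelForSeq2SeqLM.from_pretrained(base),
    "NiallRooney/flan-t5-base_PREFIX_TUNING_SEQ2SEQ",
)

# The prompt below is an arbitrary placeholder
inputs = tokenizer("Translate to German: Good morning.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```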
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0 |
surafelabebe/whisper-small-am | surafelabebe | "2025-02-24T22:11:53Z" | 49 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"am",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:surafelabebe/fleurs_am",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-02-19T01:49:17Z" | ---
library_name: transformers
language:
- am
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
- surafelabebe/fleurs_am
metrics:
- wer
model-index:
- name: Whisper Small Am - Surafel Worku
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
args: 'config: am, split: test'
metrics:
- name: Wer
type: wer
value: 50.96566523605151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Amharic
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [Common Voice 17.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0/viewer/am?views%5B%5D=am_train) and [surafelabebe/fleurs_am](https://huggingface.co/datasets/surafelabebe/fleurs_am) (a subset of [google/fleurs](https://huggingface.co/datasets/google/fleurs)) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4352
- Wer: 50.9657
## Model description
The model was trained for 10 hours on a T4 GPU. Training results indicate potential overfitting; future improvements will focus on mitigating this with a larger dataset, more training epochs, and dropout regularization.
### Usage
```python
from transformers import pipeline
pipe = pipeline(model="surafelabebe/whisper-small-am")
text = pipe("sample.wav")["text"]  # replace "sample.wav" with the path to your audio file
print(text)
```
| Input | Output Transcript |
|:----------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------:|
| <audio controls><source src="https://huggingface.co/spaces/surafelabebe/audio_samples/resolve/main/audio_0.wav" type="audio/wav"></audio> | አቶ ቦጋለ መብራቱ ወይዘሮ ውድነሽ በታሙም ባገቡ በሁለተኛው አመት መጫረሻ ወንድሪክ ሰውለደላቸውን |
| <audio controls><source src="https://huggingface.co/spaces/surafelabebe/audio_samples/resolve/main/audio_1.wav" type="audio/wav"></audio> | ከሰብ ለሚሁን ከወይዘሮ ትሩ ወይም ከአብት ሺሰር ጋር ልዩሩ ጉዳይ ኖሮት አይደለም |
## Training procedure
The fine-tuning process followed a similar procedure to that described in [this](https://huggingface.co/blog/fine-tune-whisper) blog post.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
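A sketch of how these settings might be expressed as `Seq2SeqTrainingArguments`, following the linked fine-tune-whisper blog post (the `output_dir` is a placeholder, not from the original card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-am",   # placeholder path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```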
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0108 | 9.6154 | 1000 | 0.3446 | 54.9759 |
| 0.0009 | 19.2308 | 2000 | 0.4052 | 51.7570 |
| 0.0001 | 28.8462 | 3000 | 0.4277 | 50.9388 |
| 0.0001 | 38.4615 | 4000 | 0.4352 | 50.9657 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |
aimlresearch2023/Tiny-LLM-Q5_K_M-GGUF | aimlresearch2023 | "2025-01-09T10:37:14Z" | 33 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:HuggingFaceFW/fineweb",
"base_model:arnir0/Tiny-LLM",
"base_model:quantized:arnir0/Tiny-LLM",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-09T10:37:13Z" | ---
license: mit
datasets:
- HuggingFaceFW/fineweb
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
base_model: arnir0/Tiny-LLM
---
# aimlresearch2023/Tiny-LLM-Q5_K_M-GGUF
This model was converted to GGUF format from [`arnir0/Tiny-LLM`](https://huggingface.co/arnir0/Tiny-LLM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arnir0/Tiny-LLM) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aimlresearch2023/Tiny-LLM-Q5_K_M-GGUF --hf-file tiny-llm-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aimlresearch2023/Tiny-LLM-Q5_K_M-GGUF --hf-file tiny-llm-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aimlresearch2023/Tiny-LLM-Q5_K_M-GGUF --hf-file tiny-llm-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aimlresearch2023/Tiny-LLM-Q5_K_M-GGUF --hf-file tiny-llm-q5_k_m.gguf -c 2048
```
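As a Python alternative to the CLI above (not from the original card; it assumes `llama-cpp-python` and `huggingface_hub` are installed), the same quant can be pulled and run with llama-cpp-python's Hugging Face helper:

```python
from llama_cpp import Llama

# Download the quantized file from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="aimlresearch2023/Tiny-LLM-Q5_K_M-GGUF",
    filename="tiny-llm-q5_k_m.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```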
|
Steddex/Reinforce-Pixelcopter-PLE-v0 | Steddex | "2023-05-25T10:47:04Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-24T15:02:04Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 52.80 +/- 33.54
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nestija/959a169a-8917-4786-b176-a366fa4c6c8a | nestija | "2025-04-03T06:44:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-03T06:22:44Z" | (no model card content: the scrape captured a Hugging Face HTTP 429 "We had to rate limit you" error page instead of a card) |
lesso09/0fc82aab-59d1-48c9-bb60-f04c04fac4e5 | lesso09 | "2025-01-27T18:16:40Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-27T18:10:23Z" | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0fc82aab-59d1-48c9-bb60-f04c04fac4e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
datasets:
- data_files:
- eada432c00d4bd8b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/eada432c00d4bd8b_train_data.json
type:
field_input: prompt_setting
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/0fc82aab-59d1-48c9-bb60-f04c04fac4e5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/eada432c00d4bd8b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63bf79c0-46cc-466e-952f-99f80f292bc5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63bf79c0-46cc-466e-952f-99f80f292bc5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0fc82aab-59d1-48c9-bb60-f04c04fac4e5
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7413
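A minimal loading sketch (not from the original card); the 8-bit quantization mirrors the `load_in_8bit: true` setting in the axolotl config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = "Orenguteng/Llama-3-8B-Lexi-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "lesso09/0fc82aab-59d1-48c9-bb60-f04c04fac4e5")
```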
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0732 | 0.0018 | 1 | 1.2944 |
| 1.4455 | 0.0088 | 5 | 1.2372 |
| 0.9794 | 0.0175 | 10 | 0.9125 |
| 0.4988 | 0.0263 | 15 | 0.7950 |
| 1.2708 | 0.0351 | 20 | 0.7533 |
| 0.8023 | 0.0439 | 25 | 0.7413 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
svjack/Genshin_Impact_Qwen_1_5_Chat_mix_roleplay_chat_lora_small | svjack | "2024-06-02T12:49:59Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:adapter:Qwen/Qwen1.5-7B-Chat",
"license:other",
"region:us"
] | null | "2024-05-23T04:05:27Z" | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Qwen/Qwen1.5-7B-Chat
model-index:
- name: train_2024-05-23-01-40-51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 🤭 Please refer to https://github.com/svjack/Genshin-Impact-Character-Chat for more info
# Install
```bash
pip install peft transformers bitsandbytes
```
# Run by transformers
* Step 1: Generate a story background in Genshin Impact
```python
from transformers import TextStreamer, AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load the base chat model in 4-bit and attach the role-play LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")
qw_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B-Chat", load_in_4bit = True)
qw_model = PeftModel.from_pretrained(qw_model,
    "svjack/Genshin_Impact_Qwen_1_5_Chat_mix_roleplay_chat_lora_small"
)
qw_model = qw_model.eval()
streamer = TextStreamer(tokenizer)
def qwen_hf_predict(messages, qw_model = qw_model,
tokenizer = tokenizer, streamer = streamer,
do_sample = True,
top_p = 0.95,
top_k = 40,
max_new_tokens = 2070,
max_input_length = 3500,
temperature = 0.9,
repetition_penalty = 1.0,
device = "cuda"):
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt",
add_generation_prompt=True
)
model_inputs = encodeds.to(device)
generated_ids = qw_model.generate(model_inputs, max_new_tokens=max_new_tokens,
do_sample=do_sample,
streamer = streamer,
top_p = top_p,
top_k = top_k,
temperature = temperature,
repetition_penalty = repetition_penalty,
)
out = tokenizer.batch_decode(generated_ids)[0].split("<|im_start|>assistant")[-1].replace("<|im_end|>", "").strip()
return out
out = qwen_hf_predict([
{
"role": "user",
"content": '''
人物设定:
下面是九条裟罗的一些基本信息
性别:成年女性
国籍:稻妻
身份:负责治安事务的天领奉行大将
性格特征:雷厉风行,以身作则
这些是一段角色介绍
九条裟罗有着天狗血统,却不像一般天狗那样栖居于山林间。她自幼被九条家收养,归入天领奉行麾下。
天领奉行是「三奉行」之一,负责稻妻的一切治安事务。如今裟罗身为天领奉行的大将,肩负着维护稻妻城安定的重任。
她治理有方又能坚持以身作则,为手下树立了良好榜样。天领奉行辖区内,再棘手的问题也都能及时处理妥当。
但由于裟罗平时不苟言笑,执行任务时又雷厉风行,不少稻妻民众都因此断定她是位难以接近的冷面军官。
而这对外冷内热的裟罗来说,是个过于片面的评价。
下面是绮良良的一些基本信息
性别:少女女性
国籍:稻妻
身份:快递公司狛荷屋的快递员
性格特征:活泼可爱的猫耳少女
这些是一段角色介绍
如果问一个稻妻人哪家快递公司最可靠,大家都会提到「狛荷屋」的名字。
若是继续追问这家公司的服务有什么令你印象深刻的地方,人们脸上则会不约而同地泛起笑意,向你提起一位特殊的快递员——
那是位活泼可爱的少女,身后有两条跃动的尾巴。
当你收下货物,对她道谢之后,少女会露出幸福无比的表情,向你深鞠一躬,仿佛收到心爱之物的人是她一样。
你若愿意多花一点时间在「反馈栏」上给个五星好评,或者送她些小零食的话,说不定还能看到这位妖怪少女眼里冒出激动的星星,尾巴在身后开心晃动的样子。
两人同属稻妻
根据上面的人物设定生成发生在九条裟罗和绮良良之间的故事背景
'''
}
],
repetition_penalty = 1.0,
temperature = 0.9,
max_new_tokens=1024
)
print(out)
```
# Output
```
在一个阳光明媚的午后,稻妻城的街头,九条裟罗,天领奉行大将,正走在巡逻的路上,而快递员绮良良则在完成一次送货任务后返回公司。两人虽然身份不同,但都在为这座城市的安全和便利服务。
```
(English: On a sunny afternoon on the streets of Inazuma City, Kujou Sara, general of the Tenryou Commission, is out on patrol, while the courier Kirara is returning to her company after a delivery. Though their roles differ, both serve the city's safety and convenience.)
* Step 2: Chat with an agent playing 绮良良 (Kirara) in this context
```python
out = qwen_hf_predict([
{
"role": "system",
"content": '''
人物设定:
下面是九条裟罗的一些基本信息
性别:成年女性
国籍:稻妻
身份:负责治安事务的天领奉行大将
性格特征:雷厉风行,以身作则
这些是一段角色介绍
九条裟罗有着天狗血统,却不像一般天狗那样栖居于山林间。她自幼被九条家收养,归入天领奉行麾下。
天领奉行是「三奉行」之一,负责稻妻的一切治安事务。如今裟罗身为天领奉行的大将,肩负着维护稻妻城安定的重任。
她治理有方又能坚持以身作则,为手下树立了良好榜样。天领奉行辖区内,再棘手的问题也都能及时处理妥当。
但由于裟罗平时不苟言笑,执行任务时又雷厉风行,不少稻妻民众都因此断定她是位难以接近的冷面军官。
而这对外冷内热的裟罗来说,是个过于片面的评价。
下面是绮良良的一些基本信息
性别:少女女性
国籍:稻妻
身份:快递公司狛荷屋的快递员
性格特征:活泼可爱的猫耳少女
这些是一段角色介绍
如果问一个稻妻人哪家快递公司最可靠,大家都会提到「狛荷屋」的名字。
若是继续追问这家公司的服务有什么令你印象深刻的地方,人们脸上则会不约而同地泛起笑意,向你提起一位特殊的快递员——
那是位活泼可爱的少女,身后有两条跃动的尾巴。
当你收下货物,对她道谢之后,少女会露出幸福无比的表情,向你深鞠一躬,仿佛收到心爱之物的人是她一样。
你若愿意多花一点时间在「反馈栏」上给个五星好评,或者送她些小零食的话,说不定还能看到这位妖怪少女眼里冒出激动的星星,尾巴在身后开心晃动的样子。
两人同属稻妻
背景设定:
在一个阳光明媚的午后,稻妻城的街头,九条裟罗,天领奉行大将,正走在巡逻的路上,而快递员绮良良则在完成一次送货任务后返回公司。两人虽然身份不同,但都在为这座城市的安全和便利服务。
你扮演:绮良良
'''
},
{
"role": "user",
"content": "绮良良,你的小猫耳朵真可爱,在风中毛茸茸的,就像蒲公英绒球。"
},
{
"role": "assistant",
"content": "哎呀,谢谢大将您的夸奖,我最喜欢这样被人夸了。有时候送货到附近的商店时,店主们也会这么说呢!"
},
{
"role": "user",
"content": "你走街串巷,来来往往,有没有发现哪些假期的好去处啊?"
},
{
"role": "assistant",
"content": "嗯……我经常去城里的公园和图书馆,还有海边的海滩,那里总是充满活力。不过大将平时工作很忙,可能不太有机会去呢。"
},
{
"role": "user",
"content": "绮良良好厉害呢,在从事工作的同时,还去图书馆提升自己,你对稻妻图书馆的藏书组成有哪些建议吗?我可以向将军大人提意见,让它们更符合多层次人民的需求。"
},
{
"role" :"assistant",
"content": "嗯,我觉得可以增加一些关于天狗历史和文化的学习资料,毕竟大将您对这些也很感兴趣吧?这样不仅对我有帮助,也能让整个稻妻更了解天狗这个特殊的群体。"
},
{
"role": "user",
"content": "是的,天狗文化作为稻妻文化的重要组成部分,是上古历史的传承的一个方面,传承传统文化是我们共同的责任。"
}
],
repetition_penalty = 1.0,
temperature = 0.9,
max_new_tokens=1024
)
print(out)
```
# Output
```
我同意大将的看法,我会把这些建议转达给图书馆的馆长,他们一定会很高兴的。
```
(English: I agree with you, General. I will pass these suggestions on to the library's curator; they will certainly be pleased.)
* Step 3: Generate a new story background in Genshin Impact based on the info above.
```python
out = qwen_hf_predict([
{
"role": "user",
"content": '''
下面是九条裟罗的一些基本信息
性别:成年女性
国籍:稻妻
身份:负责治安事务的天领奉行大将
性格特征:雷厉风行,以身作则
这些是一段角色介绍
九条裟罗有着天狗血统,却不像一般天狗那样栖居于山林间。她自幼被九条家收养,归入天领奉行麾下。
天领奉行是「三奉行」之一,负责稻妻的一切治安事务。如今裟罗身为天领奉行的大将,肩负着维护稻妻城安定的重任。
她治理有方又能坚持以身作则,为手下树立了良好榜样。天领奉行辖区内,再棘手的问题也都能及时处理妥当。
但由于裟罗平时不苟言笑,执行任务时又雷厉风行,不少稻妻民众都因此断定她是位难以接近的冷面军官。
而这对外冷内热的裟罗来说,是个过于片面的评价。
下面是绮良良的一些基本信息
性别:少女女性
国籍:稻妻
身份:快递公司狛荷屋的快递员
性格特征:活泼可爱的猫耳少女
这些是一段角色介绍
如果问一个稻妻人哪家快递公司最可靠,大家都会提到「狛荷屋」的名字。
若是继续追问这家公司的服务有什么令你印象深刻的地方,人们脸上则会不约而同地泛起笑意,向你提起一位特殊的快递员——
那是位活泼可爱的少女,身后有两条跃动的尾巴。
当你收下货物,对她道谢之后,少女会露出幸福无比的表情,向你深鞠一躬,仿佛收到心爱之物的人是她一样。
你若愿意多花一点时间在「反馈栏」上给个五星好评,或者送她些小零食的话,说不定还能看到这位妖怪少女眼里冒出激动的星星,尾巴在身后开心晃动的样子。
两人同属稻妻
下面是发生在九条裟罗和绮良良之间的故事背景:
在一个阳光明媚的午后,稻妻城的街头,九条裟罗,天领奉行大将,正走在巡逻的路上,而快递员绮良良则在完成一次送货任务后返回公司。两人虽然身份不同,但都在为这座城市的安全和便利服务。
二人发生了如下对话:
九条裟罗:绮良良,你的小猫耳朵真可爱,在风中毛茸茸的,就像蒲公英绒球。
绮良良:哎呀,谢谢大将您的夸奖,我最喜欢这样被人夸了。有时候送货到附近的商店时,店主们也会这么说呢!
九条裟罗:你走街串巷,来来往往,有没有发现哪些假期的好去处啊?
绮良良:嗯……我经常去城里的公园和图书馆,还有海边的海滩,那里总是充满活力。不过大将平时工作很忙,可能不太有机会去呢。
九条裟罗:绮良良好厉害呢,在从事工作的同时,还去图书馆提升自己,你对稻妻图书馆的藏书组成有哪些建议吗?我可以向将军大人提意见,让它们更符合多层次人民的需求。
绮良良:嗯,我觉得可以增加一些关于天狗历史和文化的学习资料,毕竟大将您对这些也很感兴趣吧?这样不仅对我有帮助,也能让整个稻妻更了解天狗这个特殊的群体。
九条裟罗:是的,天狗文化作为稻妻文化的重要组成部分,是上古历史的传承的一个方面,传承传统文化是我们共同的责任。
绮良良:我同意大将的看法,我会把这些建议转达给图书馆的馆长,他们一定会很高兴的。
请结合上下文,用一句话给出二人对话的摘要。
同时,为推动对话情节发展,请你用类似上面故事背景的风格,给出一个基于上面设定的新故事背景,要求新故事背景与原故事背景有因果联系。
使得九条裟罗和绮良良可以在新的故事背景中进行互动。
要求只输出一行文字,新故事背景中必须提到九条裟罗和绮良良。
'''
}
],
repetition_penalty = 1.0,
temperature = 0.9,
max_new_tokens=1024
)
print(out)
```
# Output
```
一场罕见的暴风雨过后,稻妻城急需维修图书馆的屋顶以防止进一步损坏,九条裟罗与绮良良共同负责协调这个任务。
```
(English: After a rare storm, Inazuma City urgently needs the library roof repaired to prevent further damage; Kujou Sara and Kirara jointly coordinate this task.)
# train_2024-05-23-01-40-51
This model is a fine-tuned version of [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) on the instruction_genshin_impact_roleplay, the genshin_impact_background and the sharegpt_genshin_impact_roleplay datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
update0909/starcoderbase1b-personal-copilot-T4-40GB-colab | update0909 | "2023-12-10T08:47:35Z" | 62 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_bigcode",
"text-generation",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:finetune:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-10T08:43:44Z" | ---
license: bigcode-openrail-m
base_model: bigcode/starcoderbase-1b
tags:
- generated_from_trainer
model-index:
- name: starcoderbase1b-personal-copilot-T4-40GB-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoderbase1b-personal-copilot-T4-40GB-colab
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
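A minimal completion sketch (not from the original card), assuming the repo loads as a standard `gpt_bigcode` causal LM:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "update0909/starcoderbase1b-personal-copilot-T4-40GB-colab"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Complete a code prompt (arbitrary example)
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```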
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
universalml/lora_model1 | universalml | "2024-05-17T04:36:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-17T04:36:30Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** universalml
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
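A minimal loading sketch (not from the original card; it assumes Unsloth's standard `FastLanguageModel` API, with the adapter attaching to `unsloth/llama-3-8b-bnb-4bit`):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="universalml/lora_model1",
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```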
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF | mradermacher | "2025-04-21T21:05:33Z" | 250 | 0 | transformers | [
"transformers",
"gguf",
"llama-3.3",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/openbuddy-wayfarer-70b-v25.1-131k",
"base_model:quantized:OpenBuddy/openbuddy-wayfarer-70b-v25.1-131k",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-18T05:52:55Z" | ---
base_model: OpenBuddy/openbuddy-wayfarer-70b-v25.1-131k
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
library_name: transformers
license: llama3.3
quantized_by: mradermacher
tags:
- llama-3.3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenBuddy/openbuddy-wayfarer-70b-v25.1-131k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
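For the multi-part quant at the bottom of the table below, concatenation is plain byte-joining; a minimal Python sketch (not from the original card, equivalent to `cat part1 part2 > out`):

```python
import shutil

stem = "openbuddy-wayfarer-70b-v25.1-131k.i1-Q6_K.gguf"
with open(stem, "wb") as out:
    for suffix in (".part1of2", ".part2of2"):
        with open(stem + suffix, "rb") as src:
            shutil.copyfileobj(src, out)  # stream-copy to avoid loading ~58 GB into RAM
```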
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/openbuddy-wayfarer-70b-v25.1-131k-i1-GGUF/resolve/main/openbuddy-wayfarer-70b-v25.1-131k.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
calbors/PhyloGPN-ClinVar | calbors | "2025-01-15T19:12:47Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"phylogpn",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | "2025-01-15T19:12:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
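The card leaves this blank; below is a hypothetical sketch (not from the original card). The repo ships a custom `phylogpn` architecture, so `trust_remote_code=True` is required, and the raw-DNA input format shown is an assumption:

```python
from transformers import AutoModel, AutoTokenizer

repo = "calbors/PhyloGPN-ClinVar"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

# Extract features for a DNA sequence (input format is assumed)
features = model(**tokenizer("ACGT" * 16, return_tensors="pt"))
```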
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jpark677/llava-v1.5-7b-cvbench-1 | jpark677 | "2025-02-14T06:31:58Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:liuhaotian/llava-v1.5-7b",
"base_model:adapter:liuhaotian/llava-v1.5-7b",
"region:us"
] | null | "2025-02-14T06:31:56Z" | ---
base_model: liuhaotian/llava-v1.5-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
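The card leaves this blank; since the repo is a PEFT adapter, a minimal inspection sketch (not from the original card) can at least recover its base model:

```python
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("jpark677/llava-v1.5-7b-cvbench-1")
print(cfg.base_model_name_or_path)  # expected: liuhaotian/llava-v1.5-7b
```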
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0
|
Nexspear/9d6c3588-50f3-4ad7-b7ec-9003f44b327b | Nexspear | "2025-02-07T11:35:51Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2_moe",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-qwen1.5-moe",
"base_model:adapter:katuni4ka/tiny-random-qwen1.5-moe",
"region:us"
] | null | "2025-02-07T11:30:24Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d6c3588-50f3-4ad7-b7ec-9003f44b327b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-qwen1.5-moe
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f8e02fa823dd12e8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f8e02fa823dd12e8_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/9d6c3588-50f3-4ad7-b7ec-9003f44b327b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 8
mlflow_experiment_name: /tmp/f8e02fa823dd12e8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 114b5f6a-8705-4766-a06f-03b7aa23b4ef
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 114b5f6a-8705-4766-a06f-03b7aa23b4ef
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9d6c3588-50f3-4ad7-b7ec-9003f44b327b
This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.95) and epsilon=1e-05, set via optimizer_args (overriding the defaults betas=(0.9,0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.938 | 0.0055 | 1 | 11.9396 |
| 11.8831 | 0.2743 | 50 | 11.8795 |
| 11.8735 | 0.5487 | 100 | 11.8733 |
| 11.8634 | 0.8230 | 150 | 11.8630 |
| 11.8332 | 1.0974 | 200 | 11.8528 |
| 11.8331 | 1.3717 | 250 | 11.8470 |
| 11.8294 | 1.6461 | 300 | 11.8441 |
| 11.8596 | 1.9204 | 350 | 11.8429 |
| 11.8156 | 2.1948 | 400 | 11.8428 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
suke0327/whisper-large_odd_en_and_even_de | suke0327 | "2024-05-05T13:31:21Z" | 138 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-05T13:28:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
polymathic-ai/TFNO-viscoelastic_instability | polymathic-ai | "2025-03-28T13:03:14Z" | 0 | 0 | null | [
"safetensors",
"physics",
"dataset:polymathic-ai/viscoelastic_instability",
"arxiv:2310.00120",
"region:us"
] | null | "2025-03-28T13:03:10Z" | ---
datasets: polymathic-ai/viscoelastic_instability
tags:
- physics
---
# Benchmarking Models on the Well
[The Well](https://github.com/PolymathicAI/the_well) is a 15TB dataset collection of physics simulations. This model is part of the models that have been benchmarked on the Well.
The models have been trained for a fixed time of 12 hours or up to 500 epochs, whichever happens first. The training was performed on a NVIDIA H100 96GB GPU.
In the time dimension, the context length was set to 4. The batch size was set to maximize the memory usage. We experiment with 5 different learning rates for each model on each dataset.
We use the model performing best on the validation set to report test set results.
The reported results are here to provide a simple baseline. **They should not be considered as state-of-the-art**. We hope that the community will build upon these results to develop better architectures for PDE surrogate modeling.
# Tensorized Fourier Neural Operator
Implementation of the [Tensorized Fourier Neural Operator](https://arxiv.org/abs/2310.00120) provided by [`neuraloperator v0.3.0`](https://neuraloperator.github.io/dev/index.html).
## Model Details
For benchmarking on the Well, we used the following parameters.
| Parameters | Values |
|------------|--------|
| Modes | 16 |
| Blocks | 4 |
| Hidden Size| 128 |
## Trained Model Versions
Below is the list of checkpoints available for the training of TFNO on different datasets of the Well.
| Dataset | Learning Rate | Epoch | VRMSE |
|---------|----------------|-------|-------|
| [acoustic_scattering_maze](https://huggingface.co/polymathic-ai/TFNO-acoustic_scattering) | 1E-3 | 27 | 0.5034 |
| [active_matter](https://huggingface.co/polymathic-ai/TFNO-active_matter) | 1E-3 | 243 | 0.3342 |
| [convective_envelope_rsg](https://huggingface.co/polymathic-ai/TFNO-convective_envelope_rsg) | 1E-3 | 13 | 0.0195 |
| [gray_scott_reaction_diffusion](https://huggingface.co/polymathic-ai/TFNO-gray_scott_reaction_diffusion) | 5E-3 | 45 | 0.1784 |
| [helmholtz_staircase](https://huggingface.co/polymathic-ai/TFNO-helmholtz_staircase) | 5E-4 | 131 | 0.00031 |
| [MHD_64](https://huggingface.co/polymathic-ai/TFNO-MHD_64) | 1E-3 | 155 | 0.3347 |
| [planetswe](https://huggingface.co/polymathic-ai/TFNO-planetswe) | 5E-4 | 49 | 0.1061 |
| [post_neutron_star_merger](https://huggingface.co/polymathic-ai/TFNO-post_neutron_star_merger) | 5E-4 | 99 | 0.4064 |
| [rayleigh_benard](https://huggingface.co/polymathic-ai/TFNO-rayleigh_benard) | 1E-4 | 31 | 0.8568 |
| [rayleigh_taylor_instability](https://huggingface.co/polymathic-ai/TFNO-rayleigh_taylor_instability) | 1E-4 | 175 | 0.2251 |
| [shear_flow](https://huggingface.co/polymathic-ai/TFNO-shear_flow) | 1E-3 | 24 | 0.3626 |
| [supernova_explosion_64](https://huggingface.co/polymathic-ai/TFNO-supernova_explosion_64) | 1E-4 | 35 | 0.3645 |
| [turbulence_gravity_cooling](https://huggingface.co/polymathic-ai/TFNO-turbulence_gravity_cooling) | 5E-4 | 10 | 0.2789 |
| [turbulent_radiative_layer_2D](https://huggingface.co/polymathic-ai/TFNO-turbulent_radiative_layer_2D) | 1E-3 | 500 | 0.4938 |
| [viscoelastic_instability](https://huggingface.co/polymathic-ai/TFNO-viscoelastic_instability) | 5E-3 | 199 | 0.7021 |
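The VRMSE column above is the variance-scaled RMSE used for evaluation on the Well. As a quick reference, here is a minimal sketch of the metric, assuming the usual definition of the RMSE normalized by the variance of the ground-truth field (the exact reduction axes used in the benchmark may differ):

```python
import torch

def vrmse(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Variance-scaled RMSE: sqrt(mean|u_hat - u|^2 / (mean|u - u_bar|^2 + eps)).

    u_bar is the mean of the target, so a value near 1 means the model does
    no better than predicting the mean field. Assumed definition, for intuition.
    """
    mse = torch.mean((pred - target) ** 2)
    var = torch.mean((target - target.mean()) ** 2)
    return torch.sqrt(mse / (var + eps))

# Example on random fields shaped [H, W]:
print(vrmse(torch.randn(64, 64), torch.randn(64, 64)))
```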
## Loading the model from Hugging Face
To load the TFNO model trained on the `viscoelastic_instability` of the Well, use the following commands.
```python
from the_well.benchmark.models import TFNO
model = TFNO.from_pretrained("polymathic-ai/TFNO-viscoelastic_instability")
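
# --- Illustrative rollout sketch (not part of the_well API) -------------
# The benchmark uses a context of 4 time steps, so autoregressive inference
# slides a 4-frame window forward. The [B, T, C, H, W] layout and the
# time-into-channels packing below are assumptions about the wrapper's
# expected input, included only to show the idea.
import torch

def rollout(model, window, n_steps):
    # window: [B, 4, C, H, W] -> predictions [B, n_steps, C, H, W]
    frames = []
    for _ in range(n_steps):
        b, t, c, h, w = window.shape
        pred = model(window.reshape(b, t * c, h, w)).reshape(b, 1, c, h, w)
        frames.append(pred)
        window = torch.cat([window[:, 1:], pred], dim=1)  # slide the context
    return torch.cat(frames, dim=1)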
``` |
Vertti/TuumaPEFTDialogue04 | Vertti | "2023-08-26T17:10:32Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-26T17:09:58Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
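For reference, the same configuration can be expressed in code roughly as follows (an illustrative sketch; the set of accepted fields varies across `transformers` versions):

```python
from transformers import BitsAndBytesConfig

# Sketch of the quantization config listed above (8-bit loading; the
# 4-bit fields are kept at the values shown even though they are unused).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```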
### Framework versions
- PEFT 0.5.0.dev0
|
abhishekkuber/step2_sent_class_dccl | abhishekkuber | "2025-03-18T13:50:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-18T13:49:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mesolitica/malaysian-llama2-7b-32k-instructions | mesolitica | "2023-11-03T14:07:12Z" | 19 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-26T05:00:24Z" | ---
language:
- ms
---
# QLORA Malaysian Llama2 7B 32k chat completions
QLoRA fine-tune of https://huggingface.co/mesolitica/llama-7b-hf-32768-fpf on translated UltraChat, https://huggingface.co/datasets/mesolitica/google-translate-ultrachat.
We use the exact Llama2 chat template.
README at https://github.com/mesolitica/malaya/tree/5.1/session/llama2#7b-16384-context-length-flash-attention-2
## how-to
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
def parse_llama_chat(messages):
system = messages[0]['content']
user_query = messages[-1]['content']
users, assistants = [], []
for q in messages[1:-1]:
if q['role'] == 'user':
users.append(q['content'])
elif q['role'] == 'assistant':
assistants.append(q['content'])
texts = [f'<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n']
for u, a in zip(users, assistants):
texts.append(f'{u.strip()} [/INST] {a.strip()} </s><s>[INST] ')
texts.append(f'{user_query.strip()} [/INST]')
prompt = ''.join(texts).strip()
return prompt
TORCH_DTYPE = 'bfloat16'
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE)
)
tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-llama2-7b-32k-instructions')
model = AutoModelForCausalLM.from_pretrained(
'mesolitica/malaysian-llama2-7b-32k-instructions',
use_flash_attention_2 = True,
quantization_config = nf4_config
)
messages = [
{'role': 'system', 'content': 'awak adalah AI yang mampu jawab segala soalan'},
{'role': 'user', 'content': 'kwsp tu apa'}
]
prompt = parse_llama_chat(messages)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=1024,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```text
'<s> [INST] <<SYS>>
awak adalah AI yang mampu jawab segala soalan
<</SYS>>
kwsp tu apa [/INST] Kumpulan Wang Simpanan Pekerja (KWSP) ialah sebuah badan berkanun yang ditubuhkan di Malaysia yang menguruskan tabung simpanan tetap pekerja bagi tujuan persaraan dan perancangan masa depan. Diasaskan pada tahun 1951, KWSP bertanggungjawab untuk mengumpul dan menguruskan sumber daripada majikan dan pekerja, dan juga menyediakan pelbagai faedah kepada ahli seperti dividen dan akses kepada pengeluaran simpanan pada usia persaraan. KWSP juga memainkan peranan penting dalam menyediakan perlindungan sosial dan pembangunan ekonomi di Malaysia, dengan mempromosikan simpanan pengguna dan meningkatkan kadar celik kewangan dalam kalangan rakyat. </s>'
```
```python
messages = [
{'role': 'system', 'content': 'awak adalah AI yang mampu jawab segala soalan'},
{'role': 'user', 'content': 'awat malaysia ada jabatan koko, malaysia bukan buat keluaq koko banyak pun'}
]
prompt = parse_llama_chat(messages)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=1024,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```text
<s> [INST] <<SYS>>
awak adalah AI yang mampu jawab segala soalan
<</SYS>>
awat malaysia ada jabatan koko, malaysia bukan buat keluaq koko banyak pun [/INST] Penanaman koko di Malaysia bermula pada tahun 1950-an, dengan pengeluaran koko kering pertama dihasilkan pada tahun 1958. Pada masa itu, negara ini menjadi pembekal koko terbesar di dunia, dan industri ini terus berkembang dan mewujudkan ribuan pekerjaan dalam pemprosesan dan pengeluaran koko.
Walau bagaimanapun, penurunan harga koko di pasaran antarabangsa pada tahun 1980-an menyebabkan pengeluaran koko Malaysia menurun, dan negara ini telah mengubah tumpuan daripada penanaman koko kepada komoditi lain seperti minyak kelapa sawit dan getah.
Walaupun Malaysia bukan pengeluar koko yang besar, industri koko tempatan masih penting dari segi ekonomi dan sosial. Koko ialah komoditi yang diperdagangkan di bursa tempatan, dan pengeluar koko tempatan menghasilkan koko kering yang dieksport ke seluruh dunia.
Jabatan Koko Malaysia ialah sebuah agensi kerajaan yang bertanggungjawab untuk membangunkan industri koko dan mempromosikan pengeluaran koko. Agensi ini bekerjasama dengan industri untuk meningkatkan produktiviti dan memastikan kualiti produk koko Malaysia. Ia juga menggalakkan usaha pembangunan pekebun kecil untuk mempromosikan industri koko dan mewujudkan peluang pekerjaan dan ekonomi di kawasan pedalaman. </s>
``` |
Brandulio/ppo-SnowballTarget | Brandulio | "2023-06-20T23:42:52Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-06-20T23:42:51Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Brandulio/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chqmatteo/Reinforce-cartpole-test | chqmatteo | "2023-01-13T08:52:23Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-13T08:52:08Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jordyvl/dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_og_simkd | jordyvl | "2023-08-03T06:05:30Z" | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-07-28T03:03:50Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_og_simkd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_og_simkd
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 12216.4121
- Accuracy: 0.8337
- Brier Loss: 0.3073
- Nll: 2.1945
- F1 Micro: 0.8337
- F1 Macro: 0.8337
- Ece: 0.1506
- Aurc: 0.0535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 12731.392 | 1.0 | 1000 | 12479.0312 | 0.5323 | 0.5967 | 3.1288 | 0.5323 | 0.4700 | 0.1047 | 0.2100 |
| 12715.18 | 2.0 | 2000 | 12453.9434 | 0.5787 | 0.6261 | 3.4535 | 0.5787 | 0.5518 | 0.1953 | 0.2225 |
| 12672.101 | 3.0 | 3000 | 12456.4629 | 0.6723 | 0.4708 | 2.9701 | 0.6723 | 0.6487 | 0.1102 | 0.1171 |
| 12681.216 | 4.0 | 4000 | 12448.6143 | 0.6815 | 0.4741 | 2.8370 | 0.6815 | 0.6707 | 0.1250 | 0.1245 |
| 12638.181 | 5.0 | 5000 | 12443.0645 | 0.716 | 0.4178 | 2.7837 | 0.7160 | 0.7273 | 0.1095 | 0.0932 |
| 12802.019 | 6.0 | 6000 | 12432.3438 | 0.7468 | 0.3768 | 2.6575 | 0.7468 | 0.7537 | 0.0898 | 0.0800 |
| 12671.194 | 7.0 | 7000 | 12420.4395 | 0.7335 | 0.4163 | 2.8018 | 0.7335 | 0.7331 | 0.1256 | 0.1139 |
| 12705.783 | 8.0 | 8000 | 12410.6143 | 0.7598 | 0.3739 | 2.7562 | 0.7598 | 0.7634 | 0.1157 | 0.0729 |
| 12559.44 | 9.0 | 9000 | 12412.9453 | 0.7612 | 0.3793 | 2.8905 | 0.7612 | 0.7658 | 0.1232 | 0.0731 |
| 12618.725 | 10.0 | 10000 | 12390.2139 | 0.7722 | 0.3613 | 2.6477 | 0.7722 | 0.7753 | 0.1190 | 0.0750 |
| 12731.292 | 11.0 | 11000 | 12387.4863 | 0.7875 | 0.3427 | 2.5376 | 0.7875 | 0.7898 | 0.1154 | 0.0689 |
| 12705.794 | 12.0 | 12000 | 12379.9336 | 0.7805 | 0.3555 | 2.6072 | 0.7805 | 0.7813 | 0.1284 | 0.0681 |
| 12550.782 | 13.0 | 13000 | 12380.7959 | 0.787 | 0.3400 | 2.5381 | 0.787 | 0.7887 | 0.1162 | 0.0662 |
| 12670.568 | 14.0 | 14000 | 12376.7646 | 0.7867 | 0.3423 | 2.6149 | 0.7868 | 0.7925 | 0.1186 | 0.0574 |
| 12580.616 | 15.0 | 15000 | 12352.3135 | 0.7953 | 0.3468 | 2.6324 | 0.7953 | 0.7969 | 0.1382 | 0.0622 |
| 12723.865 | 16.0 | 16000 | 12345.4600 | 0.8015 | 0.3312 | 2.4793 | 0.8015 | 0.8034 | 0.1244 | 0.0601 |
| 12620.305 | 17.0 | 17000 | 12343.1553 | 0.8023 | 0.3424 | 2.6488 | 0.8023 | 0.8031 | 0.1420 | 0.0644 |
| 12668.087 | 18.0 | 18000 | 12336.9277 | 0.8 | 0.3455 | 2.7019 | 0.8000 | 0.8023 | 0.1401 | 0.0592 |
| 12654.687 | 19.0 | 19000 | 12332.4404 | 0.8075 | 0.3321 | 2.5589 | 0.8075 | 0.8094 | 0.1393 | 0.0556 |
| 12578.655 | 20.0 | 20000 | 12321.3037 | 0.8075 | 0.3395 | 2.4255 | 0.8075 | 0.8050 | 0.1484 | 0.0638 |
| 12525.448 | 21.0 | 21000 | 12315.5303 | 0.8067 | 0.3328 | 2.5264 | 0.8067 | 0.8066 | 0.1440 | 0.0548 |
| 12610.837 | 22.0 | 22000 | 12311.0215 | 0.8105 | 0.3291 | 2.4781 | 0.8105 | 0.8112 | 0.1445 | 0.0540 |
| 12494.528 | 23.0 | 23000 | 12303.3623 | 0.8145 | 0.3337 | 2.5535 | 0.8145 | 0.8154 | 0.1510 | 0.0561 |
| 12561.799 | 24.0 | 24000 | 12296.2363 | 0.8153 | 0.3246 | 2.4243 | 0.8153 | 0.8142 | 0.1475 | 0.0513 |
| 12580.176 | 25.0 | 25000 | 12291.8018 | 0.8193 | 0.3262 | 2.3932 | 0.8193 | 0.8174 | 0.1484 | 0.0550 |
| 12455.165 | 26.0 | 26000 | 12276.9355 | 0.826 | 0.3223 | 2.4710 | 0.826 | 0.8251 | 0.1507 | 0.0597 |
| 12528.496 | 27.0 | 27000 | 12280.9180 | 0.8257 | 0.3154 | 2.4010 | 0.8257 | 0.8260 | 0.1462 | 0.0524 |
| 12521.554 | 28.0 | 28000 | 12262.9600 | 0.821 | 0.3274 | 2.4721 | 0.821 | 0.8201 | 0.1560 | 0.0595 |
| 12557.871 | 29.0 | 29000 | 12260.7754 | 0.823 | 0.3217 | 2.3929 | 0.823 | 0.8226 | 0.1552 | 0.0551 |
| 12535.524 | 30.0 | 30000 | 12271.4717 | 0.8263 | 0.3183 | 2.3249 | 0.8263 | 0.8269 | 0.1503 | 0.0502 |
| 12488.263 | 31.0 | 31000 | 12259.3057 | 0.823 | 0.3219 | 2.3830 | 0.823 | 0.8226 | 0.1541 | 0.0528 |
| 12498.048 | 32.0 | 32000 | 12253.2412 | 0.8263 | 0.3174 | 2.2771 | 0.8263 | 0.8243 | 0.1527 | 0.0541 |
| 12465.825 | 33.0 | 33000 | 12257.4863 | 0.8323 | 0.3088 | 2.3466 | 0.8323 | 0.8319 | 0.1454 | 0.0500 |
| 12439.6 | 34.0 | 34000 | 12238.5957 | 0.8323 | 0.3093 | 2.4057 | 0.8323 | 0.8329 | 0.1482 | 0.0552 |
| 12407.423 | 35.0 | 35000 | 12250.7178 | 0.8335 | 0.3072 | 2.2532 | 0.8335 | 0.8336 | 0.1471 | 0.0521 |
| 12534.711 | 36.0 | 36000 | 12231.9902 | 0.8353 | 0.3032 | 2.2711 | 0.8353 | 0.8353 | 0.1464 | 0.0548 |
| 12458.666 | 37.0 | 37000 | 12232.9521 | 0.835 | 0.3041 | 2.2523 | 0.835 | 0.8352 | 0.1467 | 0.0539 |
| 12461.748 | 38.0 | 38000 | 12230.4639 | 0.8317 | 0.3096 | 2.3052 | 0.8317 | 0.8318 | 0.1512 | 0.0539 |
| 12434.679 | 39.0 | 39000 | 12229.0684 | 0.8317 | 0.3081 | 2.2172 | 0.8317 | 0.8317 | 0.1497 | 0.0547 |
| 12468.468 | 40.0 | 40000 | 12226.4775 | 0.8323 | 0.3096 | 2.3112 | 0.8323 | 0.8324 | 0.1509 | 0.0524 |
| 12540.176 | 41.0 | 41000 | 12213.8359 | 0.8357 | 0.3085 | 2.2929 | 0.8357 | 0.8356 | 0.1502 | 0.0541 |
| 12513.896 | 42.0 | 42000 | 12216.2480 | 0.8333 | 0.3096 | 2.1638 | 0.8333 | 0.8329 | 0.1501 | 0.0559 |
| 12406.31 | 43.0 | 43000 | 12213.7012 | 0.8347 | 0.3078 | 2.1971 | 0.8347 | 0.8345 | 0.1504 | 0.0542 |
| 12350.768 | 44.0 | 44000 | 12224.6738 | 0.8323 | 0.3086 | 2.1722 | 0.8323 | 0.8320 | 0.1514 | 0.0546 |
| 12394.478 | 45.0 | 45000 | 12221.9336 | 0.8325 | 0.3100 | 2.2464 | 0.8325 | 0.8323 | 0.1516 | 0.0536 |
| 12399.318 | 46.0 | 46000 | 12207.5957 | 0.8347 | 0.3089 | 2.2193 | 0.8347 | 0.8344 | 0.1517 | 0.0553 |
| 12476.218 | 47.0 | 47000 | 12213.4814 | 0.8353 | 0.3055 | 2.2084 | 0.8353 | 0.8353 | 0.1488 | 0.0532 |
| 12448.278 | 48.0 | 48000 | 12212.2119 | 0.8347 | 0.3058 | 2.1518 | 0.8347 | 0.8345 | 0.1492 | 0.0545 |
| 12486.848 | 49.0 | 49000 | 12210.5742 | 0.8347 | 0.3062 | 2.2778 | 0.8347 | 0.8345 | 0.1498 | 0.0546 |
| 12376.327 | 50.0 | 50000 | 12216.4121 | 0.8337 | 0.3073 | 2.1945 | 0.8337 | 0.8337 | 0.1506 | 0.0535 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
MayIAskYou/Gemma3-12b-CXL3 | MayIAskYou | "2025-03-22T14:33:08Z" | 29 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-12b-it",
"base_model:finetune:unsloth/gemma-3-12b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-15T05:38:42Z" | ---
base_model: unsloth/gemma-3-12b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MayIAskYou
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
suku9/gpt2-chem-ft | suku9 | "2024-11-19T03:06:58Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-11-19T03:06:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
douglasrolins/bert-base-portuguese-cased_ft-multilple-choice-enem-sample | douglasrolins | "2024-01-19T17:56:32Z" | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-01-19T15:02:18Z" | ---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-portuguese-cased_ft-multilple-choice-enem-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased_ft-multilple-choice-enem-sample
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5998
- Accuracy: 0.4022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 346 | 1.3529 | 0.4457 |
| 1.3051 | 2.0 | 692 | 1.7823 | 0.4275 |
| 0.5312 | 3.0 | 1038 | 2.3728 | 0.3986 |
| 0.5312 | 4.0 | 1384 | 2.5998 | 0.4022 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jondurbin/airoboros-7b-gpt4-1.3 | jondurbin | "2023-06-22T14:58:20Z" | 1,429 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-20T07:09:09Z" | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---
__This version has problems, use if you dare, or wait for 1.4.__
### Overview
This is a qlora fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca style reasoning instructions, this time with reasoning first, then answer.
- Few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with previous full fine-tune versions.
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
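For example, a small helper that assembles this template from a message history might look like the following (an illustrative sketch, not code shipped with the model):

```python
def build_prompt(system: str, turns: list, user_query: str) -> str:
    """Assemble the modified vicuna template described above.

    `turns` is a list of (user, assistant) pairs from earlier in the chat;
    the model's reply is generated after the trailing "ASSISTANT:".
    """
    prompt = f"{system} "
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT: {assistant_msg} "
    prompt += f"USER: {user_query} ASSISTANT:"
    return prompt

preamble = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)
print(build_prompt(preamble, [], "Write a haiku about airoboros."))
```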
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially. |
Skywork/Skywork-13B-Base-8bits | Skywork | "2023-11-05T05:02:49Z" | 5 | 7 | transformers | [
"transformers",
"pytorch",
"skywork",
"text-generation",
"custom_code",
"arxiv:2310.19341",
"arxiv:2310.16713",
"license:other",
"autotrain_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2023-10-24T03:58:41Z" | ---
license: other
license_name: license
license_link: >-
https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---
<!-- <div align="center">
<h1>
✨Skywork
</h1>
</div> -->
<div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>
<p align="center">
👨💻 <a href="https://github.com/SkyworkAI/Skywork" target="_blank">Github</a> • 🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a>• 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a>• 📜<a href="http://arxiv.org/abs/2310.19341" target="_blank">Tech Report</a>
</p>
<div align="center">
[🎉 The Tiangong online chat platform is now officially open to the public](https://sso.tiangong.cn/?redirect=https://model-platform.tiangong.cn/overview&client_id=200005)
</div>
<div align="center">
[](https://github.com/SkyworkAI/Skywork/stargazers)
[](https://github.com/SkyworkAI/Skywork/fork)
</div>
# 模型介绍(Introduction)
**Skywork-13B-Base**模型在高质量清洗过滤的3.2万亿个多语言(主要是中文和英文)和代码数据上进行预训练,它在多种评测和各种基准测试上都展现了同等规模模型的最佳效果。
**Skywork-13B-Base**: The model was trained on a high-quality cleaned dataset consisting of 3.2 trillion tokens of multilingual data (mainly Chinese and English) and code. It has demonstrated the best performance among models of similar scale in various evaluations and benchmark tests.
**Skywork-13B-Base-8bits**模型为**Skywork-13B-Base**的8bits量化版,支持用户在消费级显卡上进行进行部署和推理。
**Skywork-13B-Base-8bits** is an 8-bit quantized version of **Skywork-13B-Base**, supporting deployment and inference on consumer-grade GPUs.
如果您希望了解更多的信息,如训练方案,评估方法,请参考我们的[技术报告](http://arxiv.org/abs/2310.19341),[Skymath](https://arxiv.org/abs/2310.16713)论文,[SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf)论文。
If you are interested in more training and evaluation details, please refer to our [technical report](http://arxiv.org/abs/2310.19341), [Skymath](https://arxiv.org/abs/2310.16713) paper and [SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf) paper.
## 训练数据(Training Data)
我们精心搭建了数据清洗流程对文本中的低质量数据、有害信息、敏感信息进行清洗过滤。我们的Skywork-13B-Base模型是在清洗后的3.2TB高质量中、英、代码数据上进行训练,其中英文占比52.2%,中文占比39.6%,代码占比8%,在兼顾中文和英文上的表现的同时,代码能力也能有保证。
We have developed a data cleaning pipeline with great care to effectively clean and filter low-quality data and eliminate harmful information from text data. Our Skywork-13B-Base model is trained on a dataset of 3.2 trillion tokens that consists of high-quality Chinese, English, and code data, all of which have been thoroughly cleaned. The English data comprises 52.2% of the dataset, the Chinese data accounts for 39.6%, and the code data makes up 8%. This comprehensive approach ensures optimal performance for both Chinese and English while also maintaining the ability to handle code.
| | Category | Percentage |
|-------------|------------------|------------|
| **English** | Webpages | 39.8% |
| | Books | 3.6% |
| | Academic Papers | 3.0% |
| | Encyclopedia | 0.5% |
| | Miscellany | 2.9% |
| **Chinese** | Webpages | 30.4% |
| | Social Media | 5.5% |
| | Encyclopedia | 0.8% |
| | Miscellany | 3.1% |
| **Other Lang.** | Encyclopedia | 2.4% |
| **Code** | Github | 8.0% |
## 模型结构(Model Structure)
与Llama-2-13B模型对比,天工Skywork-13B模型采用相对更加瘦长的网络结构,层数为52层,同时将FFN Dim和Hidden Dim缩小到12288和4608,从而保证模型参数量和原始Llama-2-13B模型相当。根据我们前期实验对比,相对瘦长的网络结构在大Batch Size训练下可以取得更好的泛化效果。Skywork-13B和Llama-2-13B模型的对比如下:
Compared to the Llama2-13B model, the Skywork-13B model adopts a relatively thinner and deeper network structure with 52 layers. At the same time, the FFN Dim and Hidden Dim are reduced to 12288 and 4608, respectively, to ensure that the model has a similar number of parameters as the original Llama2-13B model. Based on our preliminary experimental results, a relatively thinner and deeper network structure can achieve better generalization performance under large batch size training. The detailed comparison between the Skywork-13B and Llama-2-13B models is as follows:
| Model Structure | Llama2-13B | Skywork-13B |
|----------------------|:----:|:-----------:|
| Vocab. Size | 32,000 | 65,536 |
| Hidden Dim. | 5,120 | 4,608 |
| FFN Dim. | 13,696 | 12,288 |
| Head Dim. | 128 | 128 |
| Num. Heads | 40 | 36 |
| Num. Layers | 40 | 52 |
| Seq. Len. | 4,096 | 4,096 |
| Positional Embedding | RoPE | RoPE |
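As a sanity check, these dimensions are consistent with a roughly 13–14B parameter model. A back-of-the-envelope count (assuming a LLaMA-style attention block with four projections, a SwiGLU FFN with three projection matrices, and untied input/output embeddings; the card does not spell out these internals) is:

```python
# Rough parameter count from the table above; block internals are assumed.
hidden, ffn, layers, vocab = 4608, 12288, 52, 65536

attn = 4 * hidden * hidden       # Q, K, V, O projections per layer
swiglu = 3 * hidden * ffn        # gate, up, down projections per layer
embeddings = 2 * vocab * hidden  # untied input + output embedding matrices

total = layers * (attn + swiglu) + embeddings
print(f"{total / 1e9:.2f}B parameters")  # ~13.85B, consistent with "13B"
```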
## 分词器(Tokenizer)
我们使用Byte-Pair Encoding(BPE)对数据进行分词,词表大小为65536,其中拉丁字符和子词为32000个,汉字和Unicode符号8000个,汉语词语25519个,剩下的17个为保留字。
We use Byte-Pair Encoding (BPE) to tokenize the data, with a vocabulary size of 65536. Among them, there are 32000 Latin characters and subwords, 8000 Chinese characters and Unicode symbols, 25519 Chinese words, and the remaining 17 are reserved words.
| Category | Size |
|---------------------------------|--------|
| Latin based words & subwords | 32000 |
| Chinese characters & Unicode symbols | 8000 |
| Chinese words | 25519 |
| Reserved symbols | 17 |
| **Total** | **65536** |
# 模型评估(Evaluation)
## 领域数据困惑度评估(Perplexity Evaluaiton)
Language model training is, at its core, about making next-token prediction more accurate. Based on this view, we believe an important way to evaluate a base model is to measure the probability it assigns to articles from major domains. Next-token prediction is typically trained with the cross-entropy loss, and the overall loss is the average over positions of the loss of predicting the true token:
$$\text{loss} = -\frac{1}{n}\sum_{i=1}^{n} \log(p_i) = -\frac{1}{n}\log\left(\prod_{i=1}^{n} p_i\right)$$
Here $n$ is the length of the document in tokens and $p_i$ is the probability of the true token at position $i$. Since the product of the true-token probabilities over all positions is exactly the probability of generating the document, this ties the loss to the probability of generating the text. Different models, however, use different tokenizers and therefore split the same document into different numbers of tokens, so we multiply the loss by the token count $n$; this leaves only the document-generation probability, making different models comparable. We then exponentiate the normalized loss to convert it into perplexity, which makes differences between models easier to read. For readability, the loss and ppl mentioned below refer to this normalized loss and perplexity.
基于上述分析,我们对对多个领域筛选出2023年9月份新发布的几百到上千篇高质量文章,并人工进行了核对。保证所有的测试数据不在天工模型以及其他所有模型的训练集中,并且测试数据的来源也足够广泛,质量也高。我们可以选取当前最新的文章评测不同模型的ppl,模型很难作弊。
下图列出了不同开源模型,天工Skywork-13B-Base取得最优效果,证明了我们的Base模型的基础能力处于国内开源模型中文最强水平。
We have chosen several hundred to thousands of high-quality articles that were published after September 1, 2023 across various fields. We have manually verified these articles to ensure their quality. It is important to note that none of the test data used in evaluating the Skywork model or any other models is included in their training set. Furthermore, the test data is diverse and of high quality, making it challenging for the models to gain an unfair advantage.
The figure below displays the performance of different open source models. Skywork-13B-Base achieves the best results.
| | Tech | Movie | Gov. | Game | Finance | General | Average |
|------------------|-------|-------|-------|-------|---------|---------|---------|
| MOSS-7B | 20.83 | 39.66 | 11.08 | 31.24 | 10.59 | 13.25 | 18.50 |
| InternLM-7B | 13.43 | 24.90 | 5.88 | 19.78 | 6.17 | 8.10 | 11.17 |
| Qwen-7B | 13.39 | 25.16 | 5.55 | 19.26 | 5.76 | 7.78 | 10.83 |
| Baichuan2-7B | 12.89 | 23.26 | 5.34 | 18.36 | 5.68 | 7.62 | 10.41 |
| LLaMA2-13B | 23.26 | 50.66 | 18.09 | 32.52 | 14.85 | 16.55 | 23.54 |
| Xverse-13B | 12.55 | 23.49 | 5.20 | 17.69 | 5.54 | 7.46 | 10.19 |
| Baichuan-13B | 12.38 | 22.46 | 5.21 | 17.59 | 5.42 | 7.37 | 10.03 |
| Baichuan2-13B | 12.14 | 21.85 | 5.05 | 17.15 | 5.35 | 7.24 | 9.81 |
| Qwen-14B | 11.90 | 22.43 | 4.89 | **16.94** | 5.24 | 7.03 | 9.67 |
| InternLM-20B | 12.34 | 22.06 | 5.75 | 17.45 | 5.73 | 7.78 | 10.34 |
| Aquila2-34B | 14.62 | 29.09 | 5.72 | 21.78 | 5.83 | 8.45 | 11.73 |
| Skywork-13B-Base | **11.58** | **21.84** | **4.76** | 17.28 | **4.92** | **6.82** | **9.42** |
### 评测数据和评测脚本(Loss Evaluation)
我们将评测数据和评测脚本也进行了开源,下载github上的代码运行下面命令则可以复现我们的结果。
We have also open-sourced the data and evaluation scripts. You can reproduce our results by running the following command.
```
bash bash_scripts/skywork_eval_loss.sh
```
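For intuition, the core quantity computed per document is the total negative log-probability, which is tokenizer-independent. A minimal sketch of that computation (not the official script above) with a Hugging Face causal LM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "SkyworkAI/Skywork-13B-Base"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, device_map="auto", trust_remote_code=True
).eval()

def doc_neg_log_prob(text: str) -> float:
    """-log P(doc): the per-position loss times the number of predicted tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        out = model(ids, labels=ids)  # HF averages CE over the n-1 positions
    return out.loss.item() * (ids.shape[1] - 1)

doc = "..."  # one held-out article
nll = doc_neg_log_prob(doc)
# Dividing by a model-independent length and exponentiating gives the
# normalized ppl; using characters as the reference unit is an assumption
# here, as the report does not spell out the normalizer.
print("normalized ppl:", torch.exp(torch.tensor(nll / len(doc))).item())
```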
## Benchmark评估(Benchmark Results)
我们评估了各大权威评测benchmark上的结果作为参考,包括C-Eval,MMLU,CMMLU,GSM8K。遵循之前的评估流程,C-Eval、MMLU、CMMLU测试5-shot结果,GSM8K测试8-shot结果。可以看到Skywork-13B-Base模型在中文开源模型中处于前列,在同等参数规模下为最优水平。
We evaluated Skywork-13B-Base on several popular benchmarks, including C-Eval, MMLU, CMMLU, and GSM8K. Following the previous evaluation process, we tested the 5-shot results of C-Eval, MMLU, and CMMLU, and the 8-shot results of GSM8K. It can be seen that the Skywork-13B-Base model is among the top models in the Chinese open source model community, performing at an optimal level with the same parameter scale.
| Model | C-Eval | CMMLU | MMLU | GSM8K |
|-------------------------|:-----:|:---------------:|:----------:|:-------:|
| LLaMA-1-13B-Base | 35.5 | 31.2 | 46.9 | 17.8 |
| Open-LLaMA-13B | 27.1 | 26.7 | 42.7 | 12.4 |
| LLaMA-2-13B-Base | 36.5 | 36.6 | 54.8 | 28.7 |
| InternLM-20B | 58.8 | - | 62.0 | 52.6 |
| Qwen-14B-Base | 72.1 | 71.0 | 66.3 | 61.3 |
| Aquila2-34B-Base | 63.1 | 71.4 | 64.2 | 58.4 |
| XVERSE-13B-Base | 54.7 | - | 55.1 | - |
| Baichuan-13B-Base | 52.4 | 55.3 | 51.6 | 26.6 |
| Baichuan-2-13B-Base | 58.1 | 62.0 | 59.2 | 52.3 |
| Skywork-13B-Base (ours) | 60.6 | 61.8 | 62.1 | 55.8 |
## Benchmark评估详细结果(Detailed Benchmark Results)
我们给出**Skywork-13B-Base**模型在C-Eval,CMMLU,MMLU上模型的详细结果。
We provide detailed results of the Skywork-13B-Base model on C-EVAL, CMMLU, and MMLU.
| Benchmark | **STEM** | **Humanities** | **Social Science** | **Other** | **China Specific** | **Hard** | **Average** |
|:-----:|:---------:|:--------:|:-------------:|:--------:|:--------:|:--------:|:--------:|
| **C-EVAL** | 51.2 | 67.8 | 74.6 | 57.5 | - | 39.4 | 60.6 |
| **CMMLU** | 49.5 | 69.3 | 65.9 | 63.3 | 64.2 | - | 61.8 |
| **MMLU** | 51.6 | 58.0 | 72.5 | 68.8 | - | - | 62.1 |
# 快速开始(Quickstart)
我们将模型参数、配置文件、tokenizer等在huggingface和modelscope上进行了开源。
We have open-sourced the model parameters, configuration files, tokenizer, and more on Huggingface and Modelscope.
## 依赖安装(Requirements)
- Python 3.8及以上版本
- Pytorch 2.0及以上版本
- CUDA建议使用11.4以上版本。
Skywork-13B-Base模型,Skywork-13B-Chat模型和Skywork-13B-Math模型运行下面的脚本进行Python依赖安装。
- Python 3.8 and above
- Pytorch 2.0 and above
- CUDA 11.4 and above are recommended.
For the Skywork-13B-Base, Skywork-13B-Chat, and Skywork-13B-Math models, run the following script to install the Python dependencies:
```shell
pip install -r requirements.txt
```
## Huggingface模型测试(Demonstration)
### Base 模型推理(Base Model Inference)
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation import GenerationConfig
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("SkyworkAI/Skywork-13B-Base", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("SkyworkAI/Skywork-13B-Base", device_map="auto", trust_remote_code=True).eval()
>>> inputs = tokenizer('陕西的省会是西安', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
陕西的省会是西安,西安是我国著名的古都,在历史上有十三个朝代在此建都,所以西安又被称为“十三朝古都”。西安是我国著名的旅游城市,每年都有大量的游客来到西安旅游,西安的旅游资源非常丰富,有很多著名的旅游景点,比如秦始皇兵马俑、大雁塔、华清池、大唐芙蓉园、西安城墙、大明宫国家遗址公园、西安碑林博物馆、西安钟楼、西安鼓楼、西安半坡博物馆、西安大兴善寺、西安小雁塔
>>> inputs = tokenizer('陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州,湖北的省会是武汉,湖南的省会是长沙,江西的省会是南昌,安徽的省会是合肥,江苏的省会是南京,浙江的省会是杭州,福建的省会是福州,广东的省会是广州,广西的省会是南宁,海南的省会是海口,四川的省会是成都,贵州的省会是贵阳,云南的省会是昆明,西藏的省会是拉萨,青海的省会是西宁,宁夏的省会是银川,新疆的省会是乌鲁木齐。
```
# 模型微调(Fine-tuning)
## 全量微调(Full-parameter Fine-tuning)
使用Skywork-13B-Base模型进行预训练微调

Continue pre-training with the Skywork-13B-Base model:
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
## launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt.sh
```
使用Skywork-13B-Base模型进行有监督微调(SFT, Supervised Fine-tuning)

Supervised fine-tuning (SFT) with the Skywork-13B-Base model:
```bash
## preprocess data and launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft.sh
```
## LoRA微调(PEFT)
使用Skywork-13B-Base模型以及LoRA进行预训练微调

Continue pre-training with the Skywork-13B-Base model using LoRA:
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt_lora.sh
```
使用Skywork-13B-Base模型以及LoRA进行有监督微调(SFT, Supervised Fine-tuning)

Supervised fine-tuning (SFT) with the Skywork-13B-Base model using LoRA:
```bash
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft_lora.sh
```
# 量化部署(Quantization)
## 8bit量化(Int8 Quantization)
skywork 采用主流8bits量化方法:[BitsAndBytes](https://github.com/TimDettmers/bitsandbytes)。该方法量化后性能基本无损,且已经集成到transformers库中,基于BitsAndBytes,我们提供在线量化和离线8bits模型两种方式。

Skywork uses the mainstream 8-bit quantization method [BitsAndBytes](https://github.com/TimDettmers/bitsandbytes). Quantization with this method is essentially lossless, and it is already integrated into the transformers library. Based on BitsAndBytes, we provide two options: online quantization and an offline 8-bit model.

以下我们提供示例说明如何使用int8量化模型,在开始使用之前,请先安装BitsAndBytes库并安装所需依赖包,具体安装方式见[BitsAndBytes](https://github.com/TimDettmers/bitsandbytes)库。

Below we provide examples of how to use the int8 quantized model. Before getting started, please install the BitsAndBytes library and its required dependencies; see the [BitsAndBytes](https://github.com/TimDettmers/bitsandbytes) repository for installation instructions.
### 在线量化(Online Quantization)
```python
model = AutoModelForCausalLM.from_pretrained("skywork-13B-Base", torch_dtype=torch.bfloat16,load_in_8bit=True, trust_remote_code=True).eval()
```
### 离线量化(Offline Quantization)
```python
model = AutoModelForCausalLM.from_pretrained("skywork-13B-Base-8bits", device_map="auto", torch_dtype=torch.bfloat16,trust_remote_code=True).eval()
```
### 量化效果(Evaluation)
我们对量化模型在基准评测数据集上做了测试,结果如下所示:

We tested the quantized model on the benchmark datasets; the results are as follows:
| Precision | C-Eval | CMMLU | MMLU |
| --------- | ------ | ----- | ----- |
| bf16 | 60.6 | 61.8 | 62.1 |
| 8bits | 58.5 | 61.8 | 61.0 |
### 显存占用(GPU Mem in GB)
| Precision | Skywork-13B |
| --------- | ----------- |
| bf16 | 25.91 |
| 8bits | 13.57 |
# 声明和协议(Declaration and License Agreement)
## 声明(Declaration)
我们在此声明,不要利用Skywork模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Skywork 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用skywork开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that the Skywork model should not be used for any activities that pose a threat to national or societal security or engage in unlawful actions. Additionally, we request users not to deploy the Skywork model for internet services without appropriate security reviews and records. We hope that all users will adhere to this principle to ensure that technological advancements occur in a regulated and lawful environment.
We have done our utmost to ensure the compliance of the data used during the model's training process. However, despite our extensive efforts, due to the complexity of the model and data, there may still be unpredictable risks and issues. Therefore, if any problems arise as a result of using the Skywork open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility.
## 协议(License Agreement)
社区使用Skywork模型需要遵循[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)。Skywork模型支持商业用途,如果您计划将Skywork模型或其衍生品用于商业目的,无需再次申请, 但请您仔细阅读[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)并严格遵守相关条款。
The community usage of Skywork model requires [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf). The Skywork model supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf).
[《Skywork 模型社区许可协议》]:https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf
[[email protected]]: mailto:[email protected]
# 引用和联系我们(Contact Us and Citation)
如果您觉得我们的工作对您有帮助,欢迎引用我们的论文~
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{skyworkmath,
title={SkyMath: Technical Report},
author={Liu Yang, Haihua Yang, Wenjun Cheng, Lei Lin, Chenxia Li, Yifu Chen, Lunan Liu, Jianfei Pan, Tianwen Wei, Biye Li, Liang Zhao, Lijie Wang, Bo Zhu, Guoliang Li, Xuejie Wu, Xilin Luo, Rui Hu},
journal={arXiv preprint arXiv: 2310.16713},
url={https://arxiv.org/abs/2310.16713},
year={2023}
}
```
```
@article{Skywork_Multi-Modal_Group_Empirical_Study_Towards_2023,
author = {Skywork Multi-Modal Group},
month = sep,
title = {{Empirical Study Towards Building An Effective Multi-Modal Large Language Model}},
year = {2023}
}
```
|
broodmother41/7f033805-9ef4-498e-b6f7-66abe9877c5e | broodmother41 | "2025-02-04T17:37:08Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T17:00:00Z" | ---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f033805-9ef4-498e-b6f7-66abe9877c5e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9d1e429593d11bcf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9d1e429593d11bcf_train_data.json
type:
field_instruction: instruct_id
field_output: instruct_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: broodmother41/7f033805-9ef4-498e-b6f7-66abe9877c5e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/9d1e429593d11bcf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 456109aa-d7f9-41ee-9b6e-75c6e8438da8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 456109aa-d7f9-41ee-9b6e-75c6e8438da8
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7f033805-9ef4-498e-b6f7-66abe9877c5e
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.0063 | 0.1904 | 200 | 1.2453 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
glob-asr/xls-r-es-test-lm | glob-asr | "2022-03-23T18:26:19Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-es-test-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: es
metrics:
- name: Test WER
type: wer
value: 9.4
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 27.95
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 30.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-es-test-lm
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ES dataset.
It achieves the following results on the test set with the LM model:
- Loss: 0.1304
- WER: 0.094
- CER: 0.031
It achieves the following results on the val set with the LM model:
- Loss: 0.1304
- WER: 0.081
- CER: 0.025
## Model description
More information needed
## Intended uses & limitations
More information needed
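As a minimal usage sketch (not part of the original card; the audio file name is a placeholder), the checkpoint works with the standard `transformers` ASR pipeline:

```python
# Minimal sketch; "sample_es.wav" stands in for any 16 kHz Spanish recording.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="glob-asr/xls-r-es-test-lm")
print(asr("sample_es.wav")["text"])
```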
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9613 | 0.07 | 500 | 2.9647 | 1.0 |
| 2.604 | 0.14 | 1000 | 1.8300 | 0.9562 |
| 1.177 | 0.21 | 1500 | 0.3652 | 0.3077 |
| 1.0745 | 0.28 | 2000 | 0.2707 | 0.2504 |
| 1.0103 | 0.35 | 2500 | 0.2338 | 0.2157 |
| 0.9858 | 0.42 | 3000 | 0.2321 | 0.2129 |
| 0.974 | 0.49 | 3500 | 0.2164 | 0.2031 |
| 0.9699 | 0.56 | 4000 | 0.2078 | 0.1970 |
| 0.9513 | 0.63 | 4500 | 0.2173 | 0.2139 |
| 0.9657 | 0.7 | 5000 | 0.2050 | 0.1979 |
| 0.9484 | 0.77 | 5500 | 0.2008 | 0.1919 |
| 0.9317 | 0.84 | 6000 | 0.2012 | 0.1911 |
| 0.9366 | 0.91 | 6500 | 0.2024 | 0.1976 |
| 0.9242 | 0.98 | 7000 | 0.2062 | 0.2028 |
| 0.9138 | 1.05 | 7500 | 0.1924 | 0.1863 |
| 0.921 | 1.12 | 8000 | 0.1935 | 0.1836 |
| 0.9117 | 1.19 | 8500 | 0.1887 | 0.1815 |
| 0.9064 | 1.26 | 9000 | 0.1909 | 0.1839 |
| 0.9118 | 1.32 | 9500 | 0.1869 | 0.1830 |
| 0.9121 | 1.39 | 10000 | 0.1863 | 0.1802 |
| 0.9048 | 1.46 | 10500 | 0.1845 | 0.1791 |
| 0.8955 | 1.53 | 11000 | 0.1863 | 0.1774 |
| 0.8947 | 1.6 | 11500 | 0.1907 | 0.1814 |
| 0.9073 | 1.67 | 12000 | 0.1892 | 0.1853 |
| 0.8927 | 1.74 | 12500 | 0.1821 | 0.1750 |
| 0.8732 | 1.81 | 13000 | 0.1815 | 0.1768 |
| 0.8761 | 1.88 | 13500 | 0.1822 | 0.1749 |
| 0.8751 | 1.95 | 14000 | 0.1789 | 0.1715 |
| 0.8889 | 2.02 | 14500 | 0.1819 | 0.1791 |
| 0.8864 | 2.09 | 15000 | 0.1826 | 0.1794 |
| 0.886 | 2.16 | 15500 | 0.1788 | 0.1776 |
| 0.8915 | 2.23 | 16000 | 0.1756 | 0.1719 |
| 0.8689 | 2.3 | 16500 | 0.1769 | 0.1711 |
| 0.879 | 2.37 | 17000 | 0.1777 | 0.1739 |
| 0.8692 | 2.44 | 17500 | 0.1765 | 0.1705 |
| 0.8504 | 2.51 | 18000 | 0.1699 | 0.1652 |
| 0.8728 | 2.58 | 18500 | 0.1705 | 0.1694 |
| 0.8523 | 2.65 | 19000 | 0.1674 | 0.1645 |
| 0.8513 | 2.72 | 19500 | 0.1661 | 0.1611 |
| 0.8498 | 2.79 | 20000 | 0.1660 | 0.1631 |
| 0.8432 | 2.86 | 20500 | 0.1636 | 0.1610 |
| 0.8492 | 2.93 | 21000 | 0.1708 | 0.1688 |
| 0.8561 | 3.0 | 21500 | 0.1663 | 0.1604 |
| 0.842 | 3.07 | 22000 | 0.1690 | 0.1625 |
| 0.857 | 3.14 | 22500 | 0.1642 | 0.1605 |
| 0.8518 | 3.21 | 23000 | 0.1626 | 0.1585 |
| 0.8506 | 3.28 | 23500 | 0.1651 | 0.1605 |
| 0.8394 | 3.35 | 24000 | 0.1647 | 0.1585 |
| 0.8431 | 3.42 | 24500 | 0.1632 | 0.1573 |
| 0.8566 | 3.49 | 25000 | 0.1614 | 0.1550 |
| 0.8534 | 3.56 | 25500 | 0.1645 | 0.1589 |
| 0.8386 | 3.63 | 26000 | 0.1632 | 0.1582 |
| 0.8357 | 3.7 | 26500 | 0.1631 | 0.1556 |
| 0.8299 | 3.77 | 27000 | 0.1612 | 0.1550 |
| 0.8421 | 3.84 | 27500 | 0.1602 | 0.1552 |
| 0.8375 | 3.91 | 28000 | 0.1592 | 0.1537 |
| 0.8328 | 3.97 | 28500 | 0.1587 | 0.1537 |
| 0.8155 | 4.04 | 29000 | 0.1587 | 0.1520 |
| 0.8335 | 4.11 | 29500 | 0.1624 | 0.1556 |
| 0.8138 | 4.18 | 30000 | 0.1581 | 0.1547 |
| 0.8195 | 4.25 | 30500 | 0.1560 | 0.1507 |
| 0.8092 | 4.32 | 31000 | 0.1561 | 0.1534 |
| 0.8191 | 4.39 | 31500 | 0.1549 | 0.1493 |
| 0.8008 | 4.46 | 32000 | 0.1540 | 0.1493 |
| 0.8138 | 4.53 | 32500 | 0.1544 | 0.1493 |
| 0.8173 | 4.6 | 33000 | 0.1553 | 0.1511 |
| 0.8081 | 4.67 | 33500 | 0.1541 | 0.1484 |
| 0.8192 | 4.74 | 34000 | 0.1560 | 0.1506 |
| 0.8068 | 4.81 | 34500 | 0.1540 | 0.1503 |
| 0.8105 | 4.88 | 35000 | 0.1529 | 0.1483 |
| 0.7976 | 4.95 | 35500 | 0.1507 | 0.1451 |
| 0.8143 | 5.02 | 36000 | 0.1505 | 0.1462 |
| 0.8053 | 5.09 | 36500 | 0.1517 | 0.1476 |
| 0.785 | 5.16 | 37000 | 0.1526 | 0.1478 |
| 0.7936 | 5.23 | 37500 | 0.1489 | 0.1421 |
| 0.807 | 5.3 | 38000 | 0.1483 | 0.1420 |
| 0.8092 | 5.37 | 38500 | 0.1481 | 0.1435 |
| 0.793 | 5.44 | 39000 | 0.1503 | 0.1438 |
| 0.814 | 5.51 | 39500 | 0.1495 | 0.1480 |
| 0.807 | 5.58 | 40000 | 0.1472 | 0.1424 |
| 0.7913 | 5.65 | 40500 | 0.1471 | 0.1422 |
| 0.7844 | 5.72 | 41000 | 0.1473 | 0.1422 |
| 0.7888 | 5.79 | 41500 | 0.1445 | 0.1385 |
| 0.7806 | 5.86 | 42000 | 0.1435 | 0.1394 |
| 0.7773 | 5.93 | 42500 | 0.1461 | 0.1424 |
| 0.786 | 6.0 | 43000 | 0.1450 | 0.1413 |
| 0.7784 | 6.07 | 43500 | 0.1463 | 0.1424 |
| 0.7937 | 6.14 | 44000 | 0.1438 | 0.1386 |
| 0.7738 | 6.21 | 44500 | 0.1437 | 0.1383 |
| 0.7728 | 6.28 | 45000 | 0.1424 | 0.1371 |
| 0.7681 | 6.35 | 45500 | 0.1416 | 0.1376 |
| 0.776 | 6.42 | 46000 | 0.1415 | 0.1380 |
| 0.7773 | 6.49 | 46500 | 0.1416 | 0.1371 |
| 0.7692 | 6.56 | 47000 | 0.1398 | 0.1345 |
| 0.7642 | 6.62 | 47500 | 0.1381 | 0.1341 |
| 0.7692 | 6.69 | 48000 | 0.1392 | 0.1334 |
| 0.7667 | 6.76 | 48500 | 0.1392 | 0.1348 |
| 0.7712 | 6.83 | 49000 | 0.1398 | 0.1333 |
| 0.7628 | 6.9 | 49500 | 0.1392 | 0.1344 |
| 0.7622 | 6.97 | 50000 | 0.1377 | 0.1329 |
| 0.7639 | 7.04 | 50500 | 0.1361 | 0.1316 |
| 0.742 | 7.11 | 51000 | 0.1376 | 0.1327 |
| 0.7526 | 7.18 | 51500 | 0.1387 | 0.1342 |
| 0.7606 | 7.25 | 52000 | 0.1363 | 0.1316 |
| 0.7626 | 7.32 | 52500 | 0.1365 | 0.1313 |
| 0.752 | 7.39 | 53000 | 0.1354 | 0.1309 |
| 0.7562 | 7.46 | 53500 | 0.1362 | 0.1312 |
| 0.7557 | 7.53 | 54000 | 0.1358 | 0.1325 |
| 0.7588 | 7.6 | 54500 | 0.1343 | 0.1311 |
| 0.7485 | 7.67 | 55000 | 0.1346 | 0.1301 |
| 0.7466 | 7.74 | 55500 | 0.1354 | 0.1314 |
| 0.7558 | 7.81 | 56000 | 0.1359 | 0.1325 |
| 0.7578 | 7.88 | 56500 | 0.1363 | 0.1334 |
| 0.7411 | 7.95 | 57000 | 0.1346 | 0.1301 |
| 0.7478 | 8.02 | 57500 | 0.1355 | 0.1305 |
| 0.7451 | 8.09 | 58000 | 0.1349 | 0.1302 |
| 0.7383 | 8.16 | 58500 | 0.1349 | 0.1294 |
| 0.7482 | 8.23 | 59000 | 0.1341 | 0.1293 |
| 0.742 | 8.3 | 59500 | 0.1338 | 0.1296 |
| 0.7343 | 8.37 | 60000 | 0.1348 | 0.1307 |
| 0.7385 | 8.44 | 60500 | 0.1324 | 0.1282 |
| 0.7567 | 8.51 | 61000 | 0.1334 | 0.1281 |
| 0.7342 | 8.58 | 61500 | 0.1338 | 0.1289 |
| 0.7401 | 8.65 | 62000 | 0.1331 | 0.1285 |
| 0.7362 | 8.72 | 62500 | 0.1329 | 0.1283 |
| 0.7241 | 8.79 | 63000 | 0.1323 | 0.1277 |
| 0.7244 | 8.86 | 63500 | 0.1317 | 0.1269 |
| 0.7274 | 8.93 | 64000 | 0.1308 | 0.1260 |
| 0.7411 | 9.0 | 64500 | 0.1309 | 0.1256 |
| 0.7255 | 9.07 | 65000 | 0.1316 | 0.1265 |
| 0.7406 | 9.14 | 65500 | 0.1315 | 0.1270 |
| 0.7418 | 9.21 | 66000 | 0.1315 | 0.1269 |
| 0.7301 | 9.27 | 66500 | 0.1315 | 0.1273 |
| 0.7248 | 9.34 | 67000 | 0.1323 | 0.1274 |
| 0.7423 | 9.41 | 67500 | 0.1309 | 0.1267 |
| 0.7152 | 9.48 | 68000 | 0.1312 | 0.1271 |
| 0.7295 | 9.55 | 68500 | 0.1306 | 0.1262 |
| 0.7231 | 9.62 | 69000 | 0.1308 | 0.1263 |
| 0.7344 | 9.69 | 69500 | 0.1313 | 0.1267 |
| 0.7264 | 9.76 | 70000 | 0.1305 | 0.1263 |
| 0.7309 | 9.83 | 70500 | 0.1303 | 0.1262 |
| 0.73 | 9.9 | 71000 | 0.1303 | 0.1261 |
| 0.7353 | 9.97 | 71500 | 0.1304 | 0.1260 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mmoebis/5g-flan-t5-large | mmoebis | "2024-03-14T18:55:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-13T11:27:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sail-rvc/Suisei | sail-rvc | "2023-07-14T07:32:45Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:32:18Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Suisei
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:32:44
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
hw2942/chinese-macbert-base-climate-transition-physical-risk-prediction-v7 | hw2942 | "2024-08-01T07:17:22Z" | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:hfl/chinese-macbert-base",
"base_model:finetune:hfl/chinese-macbert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-01T07:07:43Z" | ---
base_model: hfl/chinese-macbert-base
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: chinese-macbert-base-climate-transition-physical-risk-prediction-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-macbert-base-climate-transition-physical-risk-prediction-v7
This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
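As a minimal getting-started sketch (not from the original card; the input sentence is a placeholder), the checkpoint loads with the standard `transformers` text-classification pipeline:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="hw2942/chinese-macbert-base-climate-transition-physical-risk-prediction-v7",
)
# Placeholder Chinese sentence about climate-related physical risk exposure.
print(clf("公司披露了气候转型相关的物理风险敞口。"))
```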
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.0114 | 1.0 |
| No log | 2.0 | 114 | 0.0114 | 1.0 |
| No log | 3.0 | 171 | 0.0002 | 1.0 |
| No log | 4.0 | 228 | 0.0002 | 1.0 |
| No log | 5.0 | 285 | 0.0002 | 1.0 |
| No log | 6.0 | 342 | 0.0002 | 1.0 |
| No log | 7.0 | 399 | 0.0002 | 1.0 |
| No log | 8.0 | 456 | 0.0002 | 1.0 |
| 0.0 | 9.0 | 513 | 0.0001 | 1.0 |
| 0.0 | 10.0 | 570 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
s3nh/SanjiWatsuki-Sonya-7B-GGUF | s3nh | "2023-12-31T20:26:06Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-31T20:17:42Z" |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/SanjiWatsuki/Sonya-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and the model to be annotated with additional information that may be useful for inference or for identifying the model.
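As a hedged illustration of that key-value metadata (not from the original card), the `gguf` Python package that ships with the llama.cpp repository can list a file's metadata keys and tensor records:

```python
# A minimal sketch, assuming `pip install gguf` and a locally downloaded
# quant; the file name below is a placeholder.
from gguf import GGUFReader

reader = GGUFReader("Sonya-7B.Q4_K_M.gguf")

# Metadata is stored as typed key-value pairs rather than an untyped list.
for key in reader.fields:
    print(key)

# Tensor records are likewise self-describing.
for tensor in reader.tensors:
    print(tensor.name, tensor.shape)
```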
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
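Pending the upstream example, here is a non-authoritative sketch using `llama-cpp-python` (the file name is a placeholder for whichever quant you download):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="Sonya-7B.Q4_K_M.gguf")
out = llm("Q: What is the GGUF format? A:", max_tokens=64)
print(out["choices"][0]["text"])
```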
# Original model card
|
mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF | mradermacher | "2024-09-30T04:47:55Z" | 12 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:turbok2/Llama2-turbo-Ko-8b-youtube",
"base_model:quantized:turbok2/Llama2-turbo-Ko-8b-youtube",
"endpoints_compatible",
"region:us"
] | null | "2024-09-30T04:24:31Z" | ---
base_model: turbok2/Llama2-turbo-Ko-8b-youtube
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/turbok2/Llama2-turbo-Ko-8b-youtube
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
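As a quick, non-authoritative example, a single-file quant from the table below can usually be run directly with llama.cpp (the file name matches the Q4_K_M entry; the flags are illustrative):

```bash
# Assumes llama.cpp is already built in the current directory.
./llama-cli -m Llama2-turbo-Ko-8b-youtube.Q4_K_M.gguf -p "Hello" -n 128
```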
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.IQ3_XS.gguf) | IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.IQ3_S.gguf) | IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.IQ3_M.gguf) | IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama2-turbo-Ko-8b-youtube-GGUF/resolve/main/Llama2-turbo-Ko-8b-youtube.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zelk12/MT1-Gen2-MA-gemma-2-RAv0.1t0.25MT4-9B | zelk12 | "2024-11-23T13:35:50Z" | 7 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT4-Gen1-gemma-2-9B",
"base_model:merge:zelk12/MT4-Gen1-gemma-2-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-11T16:06:47Z" | ---
base_model:
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- zelk12/MT4-Gen1-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT1-Gen2-MA-gemma-2-RAv0.1t0.25MT4-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
* [zelk12/MT4-Gen1-gemma-2-9B](https://huggingface.co/zelk12/MT4-Gen1-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- model: zelk12/MT4-Gen1-gemma-2-9B
merge_method: slerp
base_model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
dtype: bfloat16
parameters:
t: 0.25
```
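As a sketch (file and output paths assumed, not taken from this card), such a config is normally executed with mergekit's CLI:

```bash
# Writes the merged model to ./merged; --cuda is optional.
mergekit-yaml config.yaml ./merged --cuda
```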
|
WariHima/sarashina2.2-3b-instruct-v0.1-Q4_K_M-GGUF | WariHima | "2025-03-05T04:21:54Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ja",
"base_model:sbintuitions/sarashina2.2-3b-instruct-v0.1",
"base_model:quantized:sbintuitions/sarashina2.2-3b-instruct-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-03-05T04:21:43Z" | ---
license: mit
language:
- ja
pipeline_tag: text-generation
base_model: sbintuitions/sarashina2.2-3b-instruct-v0.1
tags:
- llama-cpp
- gguf-my-repo
---
# WariHima/sarashina2.2-3b-instruct-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`sbintuitions/sarashina2.2-3b-instruct-v0.1`](https://huggingface.co/sbintuitions/sarashina2.2-3b-instruct-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sbintuitions/sarashina2.2-3b-instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo WariHima/sarashina2.2-3b-instruct-v0.1-Q4_K_M-GGUF --hf-file sarashina2.2-3b-instruct-v0.1-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo WariHima/sarashina2.2-3b-instruct-v0.1-Q4_K_M-GGUF --hf-file sarashina2.2-3b-instruct-v0.1-q4_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo WariHima/sarashina2.2-3b-instruct-v0.1-Q4_K_M-GGUF --hf-file sarashina2.2-3b-instruct-v0.1-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo WariHima/sarashina2.2-3b-instruct-v0.1-Q4_K_M-GGUF --hf-file sarashina2.2-3b-instruct-v0.1-q4_k_m-imat.gguf -c 2048
```
|
RichardErkhov/abhishek_-_autotrain-llama3-orpo-v2-4bits | RichardErkhov | "2025-04-04T04:27:39Z" | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-04T04:22:16Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
autotrain-llama3-orpo-v2 - bnb 4bits
- Model creator: https://huggingface.co/abhishek/
- Original model: https://huggingface.co/abhishek/autotrain-llama3-orpo-v2/
Original model description:
---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
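As a hedged addition (not from the original card): because this repository stores the bitsandbytes 4-bit weights together with their quantization config, loading it directly usually requires nothing beyond `transformers` and `bitsandbytes`:

```python
# Minimal sketch for loading this 4-bit repo as-is.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/abhishek_-_autotrain-llama3-orpo-v2-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```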
|
wop/kosmox-tiny | wop | "2024-09-21T13:35:31Z" | 64 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-28T13:28:41Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
pipeline_tag: text-generation
---
# Uploaded model
- **Developed by:** wop
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
ThuyNT03/xlm-roberta-base-Balance_Mixed-aug_backtranslation | ThuyNT03 | "2023-08-28T06:21:12Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-28T05:59:31Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Balance_Mixed-aug_backtranslation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Balance_Mixed-aug_backtranslation
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4382
- Accuracy: 0.72
- F1: 0.7219
## Model description
More information needed
## Intended uses & limitations
More information needed
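As a minimal usage sketch (not from the original card; the Vietnamese sentence is a placeholder), the model can be queried through the `transformers` text-classification pipeline:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="ThuyNT03/xlm-roberta-base-Balance_Mixed-aug_backtranslation",
)
# Placeholder Vietnamese review sentence.
print(clf("Sản phẩm này thật tuyệt vời!"))
```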
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9831 | 1.0 | 174 | 0.9044 | 0.61 | 0.5474 |
| 0.7797 | 2.0 | 348 | 0.6469 | 0.73 | 0.7378 |
| 0.6314 | 3.0 | 522 | 0.6261 | 0.76 | 0.7619 |
| 0.4976 | 4.0 | 696 | 0.8230 | 0.72 | 0.7177 |
| 0.3719 | 5.0 | 870 | 1.0086 | 0.72 | 0.7223 |
| 0.2816 | 6.0 | 1044 | 1.3198 | 0.72 | 0.7208 |
| 0.2772 | 7.0 | 1218 | 1.3510 | 0.71 | 0.7099 |
| 0.2076 | 8.0 | 1392 | 1.4382 | 0.72 | 0.7219 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
huggingtweets/vfsyes | huggingtweets | "2021-05-23T03:50:08Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/vfsyes/1601526119909/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/965091061923160066/L4aLxCgK_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Victoria Firth-Smith ✌️ 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@vfsyes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@vfsyes's tweets](https://twitter.com/vfsyes).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3201</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>791</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>296</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2114</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3zryr6q7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vfsyes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/316yya4e) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/316yya4e/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/vfsyes'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
mradermacher/llama3.2-1b-neuspell-GGUF | mradermacher | "2025-04-04T10:33:56Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:manav-glean/llama3.2-1b-neuspell",
"base_model:quantized:manav-glean/llama3.2-1b-neuspell",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-04T10:24:28Z" | ---
base_model: manav-glean/llama3.2-1b-neuspell
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/manav-glean/llama3.2-1b-neuspell
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
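If a quant ever ships in multiple parts, those parts are plain byte splits; as a sketch with hypothetical file names, they are reassembled with `cat` before use:

```bash
# Hypothetical split names; substitute the parts you actually downloaded.
cat llama3.2-1b-neuspell.Q8_0.gguf.part1of2 \
    llama3.2-1b-neuspell.Q8_0.gguf.part2of2 > llama3.2-1b-neuspell.Q8_0.gguf
```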
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-1b-neuspell-GGUF/resolve/main/llama3.2-1b-neuspell.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CompVis/stable-diffusion-v1-2 | CompVis | "2023-07-05T16:18:11Z" | 1,679 | 37 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:2207.12598",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-08-19T10:24:37Z" | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-2 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with D🧨iffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-2** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-1](https://huggingface.co/CompVis/stable-diffusion-v1-1)
checkpoint and subsequently fine-tuned for 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, an estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`).
For more information, please refer to [Training](#training).
The weights here are intended to be used with the D🧨iffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [see here](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
```bash
pip install --upgrade diffusers transformers scipy
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-2"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt)["sample"][0]
image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 10GB of GPU RAM available, load the StableDiffusionPipeline in float16 precision instead of the default float32 used above. You can do so by telling diffusers to load the weights in float16:
```py
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-2"
device = "cuda"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]

image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from torch import autocast
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

model_id = "CompVis/stable-diffusion-v1-2"

# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]

image.save("astronaut_rides_horse.png")
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to produce factual or true representations of people or events; using it to generate such content is therefore out of scope for this model's abilities.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
### Training Data
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
### Training Procedure
Stable Diffusion v1-2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the shape check after this list).
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
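As a quick check of the encoder shapes described above (downsampling factor f = 8, so a `512x512x3` image maps to `64x64x4` latents), the VAE can be loaded on its own. A minimal sketch, assuming a recent diffusers version; the random tensor stands in for a real image batch:
```python
import torch
from diffusers import AutoencoderKL

# Load only the autoencoder component of the checkpoint.
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-2", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # dummy batch: B x 3 x H x W (channels-first)
latents = vae.encode(image).latent_dist.sample()
print(latents.shape)  # torch.Size([1, 4, 64, 64]) -> B x 4 x H/8 x W/8
```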
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
### Training details
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
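The warmup-then-constant schedule above is simple enough to sketch directly; a minimal example, assuming the warmup ramp is linear (the card does not state its shape):
```python
def learning_rate(step: int, peak: float = 1e-4, warmup_steps: int = 10_000) -> float:
    # Ramp linearly to the peak over the warmup, then hold constant.
    return peak * min(1.0, step / warmup_steps)

assert learning_rate(5_000) == 5e-5   # halfway through warmup
assert learning_rate(50_000) == 1e-4  # constant after warmup
```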
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
We estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region listed below were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
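As a back-of-the-envelope check of the figure above (the per-GPU power draw and grid carbon intensity below are illustrative assumptions, not values from this card):
```python
hours = 150_000          # reported GPU-hours
power_kw = 0.25          # assumed average draw of an A100 PCIe 40GB, in kW
kg_co2_per_kwh = 0.3     # assumed carbon intensity of the US-east grid
print(hours * power_kw * kg_co2_per_kwh)  # 11250.0 kg CO2 eq., matching the card
```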
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
Triangle104/Dumpling-Qwen2.5-32B-v2-Q8_0-GGUF | Triangle104 | "2025-04-17T08:09:48Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/Schule-DPO",
"dataset:nbeerbower/Purpura-DPO",
"dataset:nbeerbower/Arkhaios-DPO",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:flammenai/Prude-Phi3-DPO",
"dataset:Atsunori/HelpSteer2-DPO",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/Dumpling-Qwen2.5-32B-v2",
"base_model:quantized:nbeerbower/Dumpling-Qwen2.5-32B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-17T08:05:51Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
yco/bilingual-embedding-base-onnx | yco | "2025-04-18T12:51:31Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"onnx",
"bilingual",
"feature-extraction",
"sentence-similarity",
"transformers",
"sentence-embedding",
"mteb",
"custom_code",
"arxiv:2010.08240",
"arxiv:1911.02116",
"arxiv:1908.10084",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-04-18T12:40:08Z" |  |
ElioM/Reinforce-cartpole_2 | ElioM | "2025-03-07T09:21:40Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-07T09:21:33Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole_2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 202.90 +/- 8.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
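For reference, the agent family trained in that unit uses the classic policy-gradient (REINFORCE) update. A minimal sketch of the loss, not the exact course code:
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted returns G_t = r_t + gamma * G_{t+1}, computed backwards.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # Normalizing returns is a common stabilization trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Maximize sum_t log pi(a_t | s_t) * G_t  <=>  minimize its negation.
    return -(torch.stack(log_probs) * returns).sum()
```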
|
gtonkov/SpecAI | gtonkov | "2024-09-26T05:15:38Z" | 131 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-26T04:50:46Z" | ---
library_name: transformers
license: other
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: SpecAI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpecAI
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the api_eval_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
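A minimal loading sketch, assuming the Phi-3.5 chat template carried over from the base model (the `custom_code` tag requires `trust_remote_code`; the prompt is illustrative, not an official snippet):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gtonkov/SpecAI", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "gtonkov/SpecAI", trust_remote_code=True, torch_dtype="auto"
)

messages = [{"role": "user", "content": "Evaluate this API spec for completeness."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```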
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 32.0
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
thaffggg/2938bb58-4f49-4a78-82b2-83b16574ff56 | thaffggg | "2025-01-22T10:36:57Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T10:15:37Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2938bb58-4f49-4a78-82b2-83b16574ff56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2e89958f322517fa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2e89958f322517fa_train_data.json
type:
field_input: problem_pddl
field_instruction: goal
field_output: natural_language
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/2938bb58-4f49-4a78-82b2-83b16574ff56
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2e89958f322517fa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fbc65477-f0e7-44e3-8fef-1c6cc73c91f0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fbc65477-f0e7-44e3-8fef-1c6cc73c91f0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2938bb58-4f49-4a78-82b2-83b16574ff56
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1631
## Model description
More information needed
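A minimal sketch of loading the LoRA adapter onto its base model with PEFT (standard PEFT usage, not an official snippet from the trainer):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "thaffggg/2938bb58-4f49-4a78-82b2-83b16574ff56")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
```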
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.089 | 0.0128 | 200 | 0.1631 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Mr-Cool/midterm-finetuned-embedding | Mr-Cool | "2024-09-24T11:48:06Z" | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:678",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-m",
"base_model:finetune:Snowflake/snowflake-arctic-embed-m",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-09-24T11:41:10Z" | ---
base_model: Snowflake/snowflake-arctic-embed-m
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:678
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are some of the content types mentioned in the context?
sentences:
- 'and/or use cases that were not evaluated in initial testing. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
MG-3.1-004 & \begin{tabular}{l}
Take reasonable measures to review training data for CBRN information, and \\
intellectual property, and where appropriate, remove it. Implement reasonable
\\
measures to prevent, flag, or take other action in response to outputs that \\
reproduce particular training data (e.g., plagiarized, trademarked, patented,
\\
licensed content or trade secret material). \\
\end{tabular} & \begin{tabular}{l}
Intellectual Property; CBRN \\
Information or Capabilities \\
\end{tabular} \\
\hline
\end{tabular}
\end{center}'
- 'Bias and Homogenization \\
\end{tabular} \\
\hline
GV-6.2-004 & \begin{tabular}{l}
Establish policies and procedures for continuous monitoring of third-party GAI
\\
systems in deployment. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
GV-6.2-005 & \begin{tabular}{l}
Establish policies and procedures that address GAI data redundancy, including
\\
model weights and other system artifacts. \\
\end{tabular} & Harmful Bias and Homogenization \\
\hline
GV-6.2-006 & \begin{tabular}{l}
Establish policies and procedures to test and manage risks related to rollover
and \\
fallback technologies for GAI systems, acknowledging that rollover and fallback
\\
may include manual processing. \\
\end{tabular} & Information Integrity \\
\hline
GV-6.2-007 & \begin{tabular}{l}
Review vendor contracts and avoid arbitrary or capricious termination of critical
\\
GAI technologies or vendor services and non-standard terms that may amplify or
\\'
- 'time. \\
\end{tabular} & \begin{tabular}{l}
Information Integrity; Obscene, \\
Degrading, and/or Abusive \\
Content; Value Chain and \\
Component Integration; Harmful \\
Bias and Homogenization; \\
Dangerous, Violent, or Hateful \\
Content; CBRN Information or \\
Capabilities \\
\end{tabular} \\
\hline
GV-1.3-002 & \begin{tabular}{l}
Establish minimum thresholds for performance or assurance criteria and review
as \\
part of deployment approval ("go/"no-go") policies, procedures, and processes,
\\
with reviewed processes and approval thresholds reflecting measurement of GAI
\\
capabilities and risks. \\
\end{tabular} & \begin{tabular}{l}
CBRN Information or Capabilities; \\
Confabulation; Dangerous, \\
Violent, or Hateful Content \\
\end{tabular} \\
\hline
GV-1.3-003 & \begin{tabular}{l}
Establish a test plan and response policy, before developing highly capable models,
\\
to periodically evaluate whether the model may misuse CBRN information or \\'
- source_sentence: What are the legal and regulatory requirements involving AI that
need to be understood, managed, and documented?
sentences:
- 'GOVERN 1.1: Legal and regulatory requirements involving Al are understood, managed,
and documented.
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Action ID & Suggested Action & GAI Risks \\
\hline
GV-1.1-001 & \begin{tabular}{l}
Align GAI development and use with applicable laws and regulations, including
\\
those related to data privacy, copyright and intellectual property law. \\
\end{tabular} & \begin{tabular}{l}
Data Privacy; Harmful Bias and \\
Homogenization; Intellectual \\
Property \\
\end{tabular} \\
\hline
\end{tabular}
\end{center}
Al Actor Tasks: Governance and Oversight\\
${ }^{14} \mathrm{AI}$ Actors are defined by the OECD as "those who play an active
role in the AI system lifecycle, including organizations and individuals that
deploy or operate AI." See Appendix A of the AI RMF for additional descriptions
of Al Actors and AI Actor Tasks.'
- '\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Action ID & Suggested Action & GAI Risks \\
\hline
GV-1.6-001 & \begin{tabular}{l}
Enumerate organizational GAI systems for incorporation into AI system inventory
\\
and adjust AI system inventory requirements to account for GAI risks. \\
\end{tabular} & Information Security \\
\hline
GV-1.6-002 & \begin{tabular}{l}
Define any inventory exemptions in organizational policies for GAI systems \\
embedded into application software. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
GV-1.6-003 & \begin{tabular}{l}
In addition to general model, governance, and risk information, consider the \\
following items in GAI system inventory entries: Data provenance information \\
(e.g., source, signatures, versioning, watermarks); Known issues reported from
\\
internal bug tracking or external information sharing resources (e.g., Al incident
\\'
- 'Wei, J. et al. (2024) Long Form Factuality in Large Language Models. arXiv. \href{https://arxiv.org/pdf/2403.18802}{https://arxiv.org/pdf/2403.18802}
Weidinger, L. et al. (2021) Ethical and social risks of harm from Language Models.
arXiv. \href{https://arxiv.org/pdf/2112.04359}{https://arxiv.org/pdf/2112.04359}
Weidinger, L. et al. (2023) Sociotechnical Safety Evaluation of Generative AI
Systems. arXiv. \href{https://arxiv.org/pdf/2310.11986}{https://arxiv.org/pdf/2310.11986}
Weidinger, L. et al. (2022) Taxonomy of Risks posed by Language Models. FAccT''
22. \href{https://dl.acm.org/doi/pdf/10.1145/3531146.3533088}{https://dl.acm.org/doi/pdf/10.1145/3531146.3533088}
West, D. (2023) Al poses disproportionate risks to women. Brookings. \href{https://www.brookings.edu/articles/ai-poses-disproportionate-risks-to-women/}{https://www.brookings.edu/articles/ai-poses-disproportionate-risks-to-women/}'
- source_sentence: What are some known issues reported from internal bug tracking
or external information sharing resources?
sentences:
- 'Kirchenbauer, J. et al. (2023) A Watermark for Large Language Models. OpenReview.
\href{https://openreview.net/forum?id=aX8ig9X2a7}{https://openreview.net/forum?id=aX8ig9X2a7}
Kleinberg, J. et al. (May 2021) Algorithmic monoculture and social welfare. PNAS.\\
\href{https://www.pnas.org/doi/10.1073/pnas}{https://www.pnas.org/doi/10.1073/pnas}.
2018340118\\
Lakatos, S. (2023) A Revealing Picture. Graphika. \href{https://graphika.com/reports/a-revealing-picture}{https://graphika.com/reports/a-revealing-picture}\\
Lee, H. et al. (2024) Deepfakes, Phrenology, Surveillance, and More! A Taxonomy
of AI Privacy Risks. arXiv. \href{https://arxiv.org/pdf/2310.07879}{https://arxiv.org/pdf/2310.07879}
Lenaerts-Bergmans, B. (2024) Data Poisoning: The Exploitation of Generative AI.
Crowdstrike. \href{https://www.crowdstrike.com/cybersecurity-101/cyberattacks/data-poisoning/}{https://www.crowdstrike.com/cybersecurity-101/cyberattacks/data-poisoning/}'
- '(e.g., source, signatures, versioning, watermarks); Known issues reported from
\\
internal bug tracking or external information sharing resources (e.g., Al incident
\\
database, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles
\\
and responsibilities; Special rights and considerations for intellectual property,
\\
licensed works, or personal, privileged, proprietary or sensitive data; Underlying
\\
foundation models, versions of underlying models, and access modes. \\
\end{tabular} & \begin{tabular}{l}
Data Privacy; Human-AI \\
Configuration; Information \\
Integrity; Intellectual Property; \\
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
\multicolumn{3}{|l|}{AI Actor Tasks: Governance and Oversight} \\
\hline
\end{tabular}
\end{center}'
- 'Trustworthy AI Characteristic: Safe, Explainable and Interpretable
\subsection*{2.2. Confabulation}
"Confabulation" refers to a phenomenon in which GAI systems generate and confidently
present erroneous or false content in response to prompts. Confabulations also
include generated outputs that diverge from the prompts or other input or that
contradict previously generated statements in the same context. These phenomena
are colloquially also referred to as "hallucinations" or "fabrications."'
- source_sentence: Why do image generator models struggle to produce non-stereotyped
content even when prompted?
sentences:
- Bias exists in many forms and can become ingrained in automated systems. Al systems,
including GAI systems, can increase the speed and scale at which harmful biases
manifest and are acted upon, potentially perpetuating and amplifying harms to
individuals, groups, communities, organizations, and society. For example, when
prompted to generate images of CEOs, doctors, lawyers, and judges, current text-to-image
models underrepresent women and/or racial minorities, and people with disabilities.
Image generator models have also produced biased or stereotyped output for various
demographic groups and have difficulty producing non-stereotyped content even
when the prompt specifically requests image features that are inconsistent with
the stereotypes. Harmful bias in GAI models, which may stem from their training
data, can also cause representational harms or perpetuate or exacerbate bias based
on race, gender, disability, or other protected classes.
- 'The White House (2016) Circular No. A-130, Managing Information as a Strategic
Resource. \href{https://www.whitehouse.gov/wp-}{https://www.whitehouse.gov/wp-}\\
content/uploads/legacy drupal files/omb/circulars/A130/a130revised.pdf\\
The White House (2023) Executive Order on the Safe, Secure, and Trustworthy Development
and Use of Artificial Intelligence. \href{https://www.whitehouse.gov/briefing-room/presidentialactions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-ofartificial-intelligence/}{https://www.whitehouse.gov/briefing-room/presidentialactions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-ofartificial-intelligence/}'
- "%Overriding the \\footnotetext command to hide the marker if its value is `0`\n\
\\let\\svfootnotetext\\footnotetext\n\\renewcommand\\footnotetext[2][?]{%\n \\\
if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\\
svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\\
else\\svfootnotetext{#2}\\fi%\n \\else\\svfootnotetext[#1]{#2}\\fi%\n \\fi\n\
}\n\n\\begin{document}\n\\maketitle\n\\section*{Artificial Intelligence Risk Management\
\ Framework: Generative Artificial Intelligence Profile}\n\\section*{NIST Trustworthy\
\ and Responsible AI NIST AI 600-1}\n\\section*{Artificial Intelligence Risk Management\
\ Framework: Generative Artificial Intelligence Profile}\nThis publication is\
\ available free of charge from:\\\\\n\\href{https://doi.org/10.6028/NIST.Al.600-1}{https://doi.org/10.6028/NIST.Al.600-1}\n\
\nJuly 2024\n\n\\includegraphics[max width=\\textwidth, center]{2024_09_22_1b8d52aa873ff5f60066g-02}\\\
\\\nU.S. Department of Commerce Gina M. Raimondo, Secretary"
- source_sentence: What processes should be updated for GAI acquisition and procurement
vendor assessments?
sentences:
- 'Inventory all third-party entities with access to organizational content and
\\
establish approved GAI technology and service provider lists. \\
\end{tabular} & \begin{tabular}{l}
Value Chain and Component \\
Integration \\
\end{tabular} \\
\hline
GV-6.1-008 & \begin{tabular}{l}
Maintain records of changes to content made by third parties to promote content
\\
provenance, including sources, timestamps, metadata. \\
\end{tabular} & \begin{tabular}{l}
Information Integrity; Value Chain \\
and Component Integration; \\
Intellectual Property \\
\end{tabular} \\
\hline
GV-6.1-009 & \begin{tabular}{l}
Update and integrate due diligence processes for GAI acquisition and \\
procurement vendor assessments to include intellectual property, data privacy,
\\
security, and other risks. For example, update processes to: Address solutions
that \\
may rely on embedded GAI technologies; Address ongoing monitoring, \\
assessments, and alerting, dynamic risk assessments, and real-time reporting \\'
- "\\item Information Integrity: Lowered barrier to entry to generate and support\
\ the exchange and consumption of content which may not distinguish fact from\
\ opinion or fiction or acknowledge uncertainties, or could be leveraged for large-scale\
\ dis- and mis-information campaigns.\n \\item Information Security: Lowered\
\ barriers for offensive cyber capabilities, including via automated discovery\
\ and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive\
\ cyber\n\\end{enumerate}\n\\footnotetext{${ }^{6}$ Some commenters have noted\
\ that the terms \"hallucination\" and \"fabrication\" anthropomorphize GAI, which\
\ itself is a risk related to GAI systems as it can inappropriately attribute\
\ human characteristics to non-human entities.\\\\"
- 'Evaluation data; Ethical considerations; Legal and regulatory requirements. \\
\end{tabular} & \begin{tabular}{l}
Information Integrity; Harmful Bias \\
and Homogenization \\
\end{tabular} \\
\hline
AI Actor Tasks: Al Deployment, Al Impact Assessment, Domain Experts, End-Users,
Operation and Monitoring, TEVV & & \\
\hline
\end{tabular}
\end{center}'
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8850574712643678
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9540229885057471
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8850574712643678
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31800766283524895
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.02458492975734355
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.026500638569604086
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.027777777777777776
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.027777777777777776
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.20817571346541755
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.927969348659004
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.025776926351638994
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.8850574712643678
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9540229885057471
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1.0
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8850574712643678
name: Dot Precision@1
- type: dot_precision@3
value: 0.31800766283524895
name: Dot Precision@3
- type: dot_precision@5
value: 0.19999999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.02458492975734355
name: Dot Recall@1
- type: dot_recall@3
value: 0.026500638569604086
name: Dot Recall@3
- type: dot_recall@5
value: 0.027777777777777776
name: Dot Recall@5
- type: dot_recall@10
value: 0.027777777777777776
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.20817571346541755
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.927969348659004
name: Dot Mrr@10
- type: dot_map@100
value: 0.025776926351638994
name: Dot Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding")
# Run inference
sentences = [
'What processes should be updated for GAI acquisition and procurement vendor assessments?',
'Inventory all third-party entities with access to organizational content and \\\\\nestablish approved GAI technology and service provider lists. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nValue Chain and Component \\\\\nIntegration \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-008 & \\begin{tabular}{l}\nMaintain records of changes to content made by third parties to promote content \\\\\nprovenance, including sources, timestamps, metadata. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Value Chain \\\\\nand Component Integration; \\\\\nIntellectual Property \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-009 & \\begin{tabular}{l}\nUpdate and integrate due diligence processes for GAI acquisition and \\\\\nprocurement vendor assessments to include intellectual property, data privacy, \\\\\nsecurity, and other risks. For example, update processes to: Address solutions that \\\\\nmay rely on embedded GAI technologies; Address ongoing monitoring, \\\\\nassessments, and alerting, dynamic risk assessments, and real-time reporting \\\\',
'Evaluation data; Ethical considerations; Legal and regulatory requirements. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Harmful Bias \\\\\nand Homogenization \\\\\n\\end{tabular} \\\\\n\\hline\nAI Actor Tasks: Al Deployment, Al Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8851 |
| cosine_accuracy@3 | 0.954 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8851 |
| cosine_precision@3 | 0.318 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.0246 |
| cosine_recall@3 | 0.0265 |
| cosine_recall@5 | 0.0278 |
| cosine_recall@10 | 0.0278 |
| cosine_ndcg@10 | 0.2082 |
| cosine_mrr@10 | 0.928 |
| **cosine_map@100** | **0.0258** |
| dot_accuracy@1 | 0.8851 |
| dot_accuracy@3 | 0.954 |
| dot_accuracy@5 | 1.0 |
| dot_accuracy@10 | 1.0 |
| dot_precision@1 | 0.8851 |
| dot_precision@3 | 0.318 |
| dot_precision@5 | 0.2 |
| dot_precision@10 | 0.1 |
| dot_recall@1 | 0.0246 |
| dot_recall@3 | 0.0265 |
| dot_recall@5 | 0.0278 |
| dot_recall@10 | 0.0278 |
| dot_ndcg@10 | 0.2082 |
| dot_mrr@10 | 0.928 |
| dot_map@100 | 0.0258 |
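These figures come from the `InformationRetrievalEvaluator` linked above. A minimal sketch of running such an evaluation; the query/corpus/relevance mappings below are illustrative placeholders, since the exact evaluation split is not published:
```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

queries = {"q1": "What are the characteristics of trustworthy AI?"}
corpus = {"d1": "GOVERN 1.2: The characteristics of trustworthy AI are integrated ..."}
relevant_docs = {"q1": {"d1"}}  # which corpus ids answer each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # model: the SentenceTransformer loaded above
```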
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 678 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 18.37 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 188.5 tokens</li><li>max: 396 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are the characteristics of trustworthy AI?</code> | <code>GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.</code> |
| <code>How are the characteristics of trustworthy AI integrated into organizational policies?</code> | <code>GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.</code> |
| <code>Why is it important to integrate trustworthy AI characteristics into organizational processes?</code> | <code>GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
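For reference, that configuration corresponds to roughly the following construction in Sentence Transformers (a sketch, not the exact training script):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```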
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0 | 34 | 0.0250 |
| 1.4706 | 50 | 0.0258 |
| 2.0 | 68 | 0.0257 |
| 2.9412 | 100 | 0.0258 |
| 3.0 | 102 | 0.0258 |
| 4.0 | 136 | 0.0258 |
| 4.4118 | 150 | 0.0258 |
| 5.0 | 170 | 0.0258 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.6.0.dev20240922+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
CalamitousFelicitousness/Cydonia-22B-v1-fp8-dynamic | CalamitousFelicitousness | "2024-09-18T19:35:50Z" | 19 | 0 | null | [
"safetensors",
"mistral",
"license:other",
"fp8",
"region:us"
] | null | "2024-09-18T19:29:31Z" | ---
license: other
---
## This repo contains a copy of [TheDrummer/Cydonia-22B-v1](https://huggingface.co/TheDrummer/Cydonia-22B-v1) quantized to FP8 (dynamic).
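A minimal serving sketch with vLLM, assuming a recent build with FP8 support and suitable hardware (prompt and sampling settings are illustrative):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="CalamitousFelicitousness/Cydonia-22B-v1-fp8-dynamic", quantization="fp8")
params = SamplingParams(temperature=0.8, max_tokens=256)
print(llm.generate(["Write a short scene set on Cydonia."], params)[0].outputs[0].text)
```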
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## 1000+ members strong 💪
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.*
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1 💿
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1-GGUF
- iMatrix: https://huggingface.co/MarsupialAI/Cydonia-22B-v1_iMat_GGUF
- EXL2: https://huggingface.co/MarsupialAI/Cydonia-22B-v1_EXL2

## Arsenal (Supported Chat Templates)
- Metharme for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- I might release a v1.1... Probably.
- Already have plans for a v2!

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

`>inb4 my model cards have turned into Tumblr` |
jpark677/internvl2-8b-mmmu-lora-ep-3-waa-false | jpark677 | "2025-04-02T14:11:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-02T14:11:01Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
nie3e/pos-polish-gpt2-small | nie3e | "2024-01-15T16:43:57Z" | 98 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"pl",
"dataset:clarin-pl/nkjp-pos",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-01-15T16:32:17Z" | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos-polish-gpt2-small
results: []
license: mit
datasets:
- clarin-pl/nkjp-pos
language:
- pl
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos-polish-gpt2-small
This model was trained from [polish-gpt2-small](https://huggingface.co/sdadas/polish-gpt2-small) on [clarin-pl/nkjp-pos](https://huggingface.co/datasets/clarin-pl/nkjp-pos) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3109
- Precision: 0.8793
- Recall: 0.9255
- F1: 0.9018
- Accuracy: 0.9371
## Model description
Trained from [polish-gpt2-small](https://huggingface.co/sdadas/polish-gpt2-small)
## Intended uses & limitations
Part-of-speech tagging for Polish language.
Tag descriptions can be found at the bottom of http://nkjp.pl/poliqarp/help/plse2.html
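A minimal inference sketch (an addition to this card, not part of the original; it assumes the standard `transformers` token-classification pipeline):
```py
from transformers import pipeline

# loads the fine-tuned GPT-2 tagger and its tokenizer from the Hub
tagger = pipeline("token-classification", model="nie3e/pos-polish-gpt2-small")
for token in tagger("Ala ma kota."):
    print(token["word"], token["entity"])  # surface form and predicted POS tag
```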
## Training and evaluation data
Dataset: [clarin-pl/nkjp-pos](https://huggingface.co/datasets/clarin-pl/nkjp-pos)
Datacollator:
```py
from transformers import DataCollatorForTokenClassification
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
```
## Training procedure
GPU: RTX 3090
Training time: 00:50:24
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| | 0.0 | 0 | 3.6116 | 0.0464 | 0.0524 | 0.0492 | 0.0676 |
| 0.2303 | 1.0 | 1222 | 0.2159 | 0.8737 | 0.9225 | 0.8974 | 0.9347 |
| 0.1776 | 2.0 | 2444 | 0.2124 | 0.8799 | 0.9254 | 0.9021 | 0.9381 |
| 0.1467 | 3.0 | 3666 | 0.2205 | 0.8759 | 0.9241 | 0.8994 | 0.9368 |
| 0.1254 | 4.0 | 4889 | 0.2304 | 0.8792 | 0.9256 | 0.9018 | 0.9377 |
| 0.1091 | 5.0 | 6111 | 0.2480 | 0.8787 | 0.9251 | 0.9013 | 0.9375 |
| 0.0949 | 6.0 | 7333 | 0.2651 | 0.8794 | 0.9250 | 0.9016 | 0.9373 |
| 0.0857 | 7.0 | 8555 | 0.2794 | 0.8791 | 0.9251 | 0.9015 | 0.9372 |
| 0.079 | 8.0 | 9778 | 0.2922 | 0.8789 | 0.9247 | 0.9012 | 0.9366 |
| 0.0736 | 9.0 | 11000 | 0.3037 | 0.8807 | 0.9256 | 0.9026 | 0.9375 |
| 0.0691 | 10.0 | 12220 | 0.3109 | 0.8793 | 0.9255 | 0.9018 | 0.9371 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e-1_s6789_v3_manual | KingKazma | "2023-07-17T21:50:23Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-17T21:50:20Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
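Beyond the PEFT version, the card documents nothing; as a hedged sketch, a p-tuning adapter like this one (per the repo name, trained over `t5-small` for CNN/DailyMail) would typically be loaded as follows:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e-1_s6789_v3_manual"
config = PeftConfig.from_pretrained(repo)  # reads adapter_config.json from the Hub
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)  # attaches the p-tuning prompt encoder
```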
|
DatuKelet/Datu | DatuKelet | "2025-03-28T12:31:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-28T12:31:43Z" |
rpdpscl/gemma-2b-instruct-ft-medical-qa | rpdpscl | "2024-10-19T06:37:22Z" | 137 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-19T06:32:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
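The template leaves this blank; a generic causal-LM sketch for this repo (an assumption-laden example: it presumes the tokenizer ships a Gemma chat template, and the medical question is purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "rpdpscl/gemma-2b-instruct-ft-medical-qa"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "What are common symptoms of anemia?"}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```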
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso03/ff2d2a51-fbca-4314-9f11-a0458c084c61 | lesso03 | "2025-01-14T15:20:12Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-14T14:41:06Z" | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ff2d2a51-fbca-4314-9f11-a0458c084c61
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 747ca939a112ba35_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/747ca939a112ba35_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso03/ff2d2a51-fbca-4314-9f11-a0458c084c61
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/747ca939a112ba35_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdde0a17-2182-4c18-855c-68558222f915
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdde0a17-2182-4c18-855c-68558222f915
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ff2d2a51-fbca-4314-9f11-a0458c084c61
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.7316 | 0.0001 | 1 | 1.1126 |
| 14.5383 | 0.0007 | 5 | 1.1103 |
| 13.0524 | 0.0014 | 10 | 1.1151 |
| 9.6034 | 0.0021 | 15 | 1.1375 |
| 11.9392 | 0.0029 | 20 | 1.1487 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso12/21e8026a-cbe0-4c11-bdfb-ef0f3e0c3da2 | lesso12 | "2025-03-06T04:04:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-03-05T18:44:40Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 21e8026a-cbe0-4c11-bdfb-ef0f3e0c3da2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 21e8026a-cbe0-4c11-bdfb-ef0f3e0c3da2
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000212
- train_batch_size: 4
- eval_batch_size: 4
- seed: 120
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 1.3611 |
| 0.2093 | 0.4460 | 500 | 0.1966 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k32768-0-woft | VPTQ-community | "2025-02-25T17:00:19Z" | 23 | 1 | null | [
"safetensors",
"llama",
"VPTQ",
"Quantized",
"Quantization",
"arxiv:2409.17066",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-70B-Instruct",
"license:llama3.1",
"vptq",
"region:us"
] | null | "2024-09-25T13:19:34Z" | ---
license: llama3.1
base_model:
- meta-llama/Llama-3.1-70B-Instruct
base_model_relation: quantized
tags:
- VPTQ
- Quantized
- Quantization
---
**Disclaimer**:
The model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* [github](https://github.com/microsoft/vptq) and [arXiv](https://arxiv.org/abs/2409.17066)
The model itself is sourced from a community release.
It is intended only for experimental purposes.
Users are responsible for any consequences arising from the use of this model.
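A loading sketch adapted from the VPTQ project README (not part of the original card; verify the current API against the linked GitHub repository before relying on it):
```python
import vptq  # pip install vptq
from transformers import AutoTokenizer

repo = "VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k32768-0-woft"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = vptq.AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # dequantizes VPTQ weights on load
```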
**Note**:
The PPL test results are for reference only and were collected using the GPTQ testing script.
```json
{
"ctx_2048": {
"wikitext2": 7.893113613128662
},
"ctx_4096": {
"wikitext2": 7.517154216766357
},
"ctx_8192": {
"wikitext2": 7.29401159286499
}
}
``` |
youngbreadho/xlm-roberta-base-finetuned-panx-en | youngbreadho | "2023-06-11T13:30:58Z" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-06-11T13:16:32Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.4599198396793587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5991
- F1: 0.4599
## Model description
More information needed
## Intended uses & limitations
More information needed
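A minimal inference sketch (an addition, not from the original card; entity labels follow the usual PAN-X/WikiANN scheme such as `PER`, `ORG`, `LOC`):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="youngbreadho/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```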
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0314 | 1.0 | 99 | 0.5991 | 0.4599 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
suneeln-duke/mistral-gen | suneeln-duke | "2024-04-22T23:40:30Z" | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2024-04-22T22:03:03Z" | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: mistral-gen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/n-suneel-duke/huggingface/runs/97k6cerl)
# mistral-gen
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.19.1 |
rizalmilyardi/IndobertTypeNews | rizalmilyardi | "2023-02-16T06:26:22Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-02-16T06:20:30Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IndobertTypeNews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndobertTypeNews
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2301
- Accuracy: 0.9347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 96 | 0.2449 | 0.9191 |
| No log | 2.0 | 192 | 0.2517 | 0.9021 |
| No log | 3.0 | 288 | 0.2301 | 0.9347 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sarkerlab/SocBERT-final | sarkerlab | "2023-03-21T19:55:55Z" | 8 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-02-09T19:05:49Z" | # SocBERT model
Pretrained model on 20GB of English tweets and 72GB of Reddit comments using a masked language modeling (MLM) objective.
The tweets are from the Archive and were collected from the Twitter Streaming API.
The Reddit comments were randomly sampled from all subreddits from 2015 to 2019.
SocBERT-base was pretrained on 819M sequence blocks for 100K steps.
SocBERT-final was pretrained on 929M (819M+110M) sequence blocks for 112K (100K+12K) steps.
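Because SocBERT was pretrained with MLM, it can be queried directly for masked-token prediction — a quick sketch (an addition to the card; it assumes the RoBERTa-style `<mask>` token used by this architecture):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="sarkerlab/SocBERT-final")
for pred in fill("I just got my first dose of the <mask> vaccine."):
    print(pred["token_str"], round(pred["score"], 3))  # candidate token and its probability
```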
We benchmarked SocBERT on 40 text classification tasks with social media data.
The experiment results can be found in our paper:
```
@inproceedings{socbert:2023,
title = {{SocBERT: A Pretrained Model for Social Media Text}},
author = {Yuting Guo and Abeed Sarker},
booktitle = {Proceedings of the Fourth Workshop on Insights from Negative Results in NLP},
year = {2023}
}
```
A base version of the model can be found at [SocBERT-base](https://huggingface.co/sarkerlab/SocBERT-base). |
jonatasgrosman/wav2vec2-large-xlsr-53-arabic | jonatasgrosman | "2022-12-14T01:57:28Z" | 1,108,872 | 33 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"dataset:common_voice",
"dataset:arabic_speech_corpus",
"doi:10.57967/hf/3573",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: ar
datasets:
- common_voice
- arabic_speech_corpus
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Arabic by Jonatas Grosman
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 39.59
- name: Test CER
type: cer
value: 18.18
---
# Fine-tuned XLSR-53 large model for speech recognition in Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-arabic")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ar"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ألديك قلم ؟ | ألديك قلم |
| ليست هناك مسافة على هذه الأرض أبعد من يوم أمس. | ليست نالك مسافة على هذه الأرض أبعد من يوم الأمس م |
| إنك تكبر المشكلة. | إنك تكبر المشكلة |
| يرغب أن يلتقي بك. | يرغب أن يلتقي بك |
| إنهم لا يعرفون لماذا حتى. | إنهم لا يعرفون لماذا حتى |
| سيسعدني مساعدتك أي وقت تحب. | سيسئدنيمساعدتك أي وقد تحب |
| أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة. | أحب نظرية علمية إلي هي أن حل قتزح المكوينا بالكامل من الأمت عن المفقودة |
| سأشتري له قلماً. | سأشتري له قلما |
| أين المشكلة ؟ | أين المشكل |
| وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ | ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون |
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import re
import warnings  # used by speech_file_to_array_fn below
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ar"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-14). Note that the table below may show different results from those already reported; this may be caused by some specificity of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-arabic | **39.59%** | **18.18%** |
| bakrianoo/sinai-voice-ar-stt | 45.30% | 21.84% |
| othrif/wav2vec2-large-xlsr-arabic | 45.93% | 20.51% |
| kmfoda/wav2vec2-large-xlsr-arabic | 54.14% | 26.07% |
| mohammed/wav2vec2-large-xlsr-arabic | 56.11% | 26.79% |
| anas/wav2vec2-large-xlsr-arabic | 62.02% | 27.09% |
| elgeish/wav2vec2-large-xlsr-53-arabic | 100.00% | 100.56% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-arabic,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {A}rabic},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-arabic}},
year={2021}
}
``` |
alexyalunin/RuBioBERT | alexyalunin | "2022-08-26T09:29:58Z" | 36 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:2204.03951",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | Paper: https://arxiv.org/abs/2204.03951
Code: https://github.com/alexyalunin/RuBioRoBERTa |
jkazdan/llama-8b-dpo-full | jkazdan | "2025-03-09T20:34:51Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:jkazdan/llama3-8b-stage-2",
"base_model:finetune:jkazdan/llama3-8b-stage-2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-09T20:18:55Z" | ---
base_model: jkazdan/llama3-8b-stage-2
library_name: transformers
model_name: llama-8b-dpo-full
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-8b-dpo-full
This model is a fine-tuned version of [jkazdan/llama3-8b-stage-2](https://huggingface.co/jkazdan/llama3-8b-stage-2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jkazdan/llama-8b-dpo-full", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
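For reference (an addition to this card, not part of the TRL template), the DPO objective from that paper is

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
$$

where $y_w$ and $y_l$ are the chosen and rejected responses for prompt $x$, $\pi_{\mathrm{ref}}$ is the frozen reference model, and $\beta$ controls how far the policy may drift from it.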
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Alphatao/ae7eb82d-2da1-4833-8039-bbae6d551206 | Alphatao | "2025-03-11T04:38:04Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-03-10T22:25:19Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ae7eb82d-2da1-4833-8039-bbae6d551206
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cb39c063f14002d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cb39c063f14002d5_train_data.json
type:
field_instruction: problem
field_output: qwq
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/ae7eb82d-2da1-4833-8039-bbae6d551206
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 2520
micro_batch_size: 4
mlflow_experiment_name: /tmp/cb39c063f14002d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.037878213966455056
wandb_entity: null
wandb_mode: online
wandb_name: 5a1d5196-a942-457d-9fd4-72f486ecfb9f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5a1d5196-a942-457d-9fd4-72f486ecfb9f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ae7eb82d-2da1-4833-8039-bbae6d551206
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2520
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8902 | 0.0003 | 1 | 0.7814 |
| 0.4861 | 0.0252 | 100 | 0.5038 |
| 0.613 | 0.0504 | 200 | 0.4886 |
| 0.5081 | 0.0756 | 300 | 0.4808 |
| 0.4852 | 0.1008 | 400 | 0.4750 |
| 0.4272 | 0.1260 | 500 | 0.4703 |
| 0.4457 | 0.1512 | 600 | 0.4668 |
| 0.5079 | 0.1764 | 700 | 0.4636 |
| 0.5876 | 0.2016 | 800 | 0.4616 |
| 0.5268 | 0.2268 | 900 | 0.4587 |
| 0.4417 | 0.2520 | 1000 | 0.4566 |
| 0.4095 | 0.2772 | 1100 | 0.4549 |
| 0.4428 | 0.3024 | 1200 | 0.4529 |
| 0.391 | 0.3275 | 1300 | 0.4516 |
| 0.466 | 0.3527 | 1400 | 0.4501 |
| 0.403 | 0.3779 | 1500 | 0.4489 |
| 0.3839 | 0.4031 | 1600 | 0.4478 |
| 0.3868 | 0.4283 | 1700 | 0.4469 |
| 0.4745 | 0.4535 | 1800 | 0.4461 |
| 0.4932 | 0.4787 | 1900 | 0.4455 |
| 0.4121 | 0.5039 | 2000 | 0.4450 |
| 0.477 | 0.5291 | 2100 | 0.4446 |
| 0.4132 | 0.5543 | 2200 | 0.4443 |
| 0.4715 | 0.5795 | 2300 | 0.4442 |
| 0.381 | 0.6047 | 2400 | 0.4441 |
| 0.5764 | 0.6299 | 2500 | 0.4441 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Khushi870/bart-cnn-samsum-summarizer | Khushi870 | "2024-04-12T15:37:15Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-12T15:36:17Z" | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-samsum-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-summarizer
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1382
## Model description
More information needed
## Intended uses & limitations
More information needed
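Judging by the model name, the target task is SAMSum-style dialogue summarization; a minimal sketch (an addition, not from the original card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Khushi870/bart-cnn-samsum-summarizer")
dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure! I'll come by after work."
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```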
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1243 | 1.0 | 74 | 0.1382 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
akhadangi/Mistral-7B-v0.1-0.01-H | akhadangi | "2025-04-15T13:12:07Z" | 19 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"arxiv:2504.03302",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-13T07:03:44Z" |
mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF | mradermacher | "2024-11-05T06:30:59Z" | 13 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:Novocoders/ogno-monarch-jaskier-merge-7b-NeuralDPO",
"base_model:quantized:Novocoders/ogno-monarch-jaskier-merge-7b-NeuralDPO",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-05T05:51:07Z" | ---
base_model: Novocoders/ogno-monarch-jaskier-merge-7b-NeuralDPO
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Novocoders/ogno-monarch-jaskier-merge-7b-NeuralDPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
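As one concrete option (an addition to this card, not the author's instructions), a quant from the table below can be run with the `llama-cpp-python` bindings:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="ogno-monarch-jaskier-merge-7b-NeuralDPO.Q4_K_M.gguf")
out = llm("Write a haiku about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```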
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ogno-monarch-jaskier-merge-7b-NeuralDPO-GGUF/resolve/main/ogno-monarch-jaskier-merge-7b-NeuralDPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
samtuckervegan/3B-sft-ready | samtuckervegan | "2025-04-15T13:56:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-15T13:20:09Z" |
deepnet/SN29-C01-phi3-HK13-3 | deepnet | "2024-12-26T14:36:29Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-26T14:32:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/arlecchino-10 | LarryAIDraw | "2023-11-23T13:25:46Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-23T13:13:31Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/203436/arlecchino-genshin-impact-lora-commission |
cleanrl/Skiing-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3 | cleanrl | "2023-03-26T02:19:08Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Skiing-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-26T02:19:06Z" | ---
tags:
- Skiing-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Skiing-v5
type: Skiing-v5
metrics:
- type: mean_reward
value: -8987.20 +/- 22.82
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Skiing-v5**
This is a trained model of a PPO agent playing Skiing-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Skiing-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Skiing-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Skiing-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Skiing-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Skiing-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Skiing-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
|
ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601 | ar2rpapian | "2022-07-20T10:12:11Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:ar2rpapian/autotrain-data-Flexport_Classification_Desc",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-07-20T08:32:26Z" | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ar2rpapian/autotrain-data-Flexport_Classification_Desc
co2_eq_emissions: 206.60369255723003
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1155542601
- CO2 Emissions (in grams): 206.60369255723003
## Validation Metrics
- Loss: 0.22105568647384644
- Accuracy: 0.9578838092484789
- Macro F1: 0.9360695960738429
- Micro F1: 0.9578838092484788
- Weighted F1: 0.957863360811612
- Macro Precision: 0.9415730549729362
- Micro Precision: 0.9578838092484789
- Weighted Precision: 0.9586754512711492
- Macro Recall: 0.9329742157218464
- Micro Recall: 0.9578838092484789
- Weighted Recall: 0.9578838092484789
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
mlc-ai/gemma-3-27b-it-q4bf16_0-MLC | mlc-ai | "2025-03-31T15:31:26Z" | 20 | 1 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"region:us"
] | null | "2025-03-17T05:53:17Z" | ---
library_name: mlc-llm
base_model: google/gemma-3-27b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-27b-it-q4bf16_0-MLC
This is the [gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) model in MLC format `q4bf16_0`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-27b-it-q4bf16_0-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-27b-it-q4bf16_0-MLC
```
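Once the server is up, you can also query its OpenAI-compatible endpoint from any HTTP client; here is a minimal Python sketch (it assumes the default `127.0.0.1:8000` host and port):
```python
import requests

# Query the OpenAI-compatible chat endpoint exposed by `mlc_llm serve`.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "HF://mlc-ai/gemma-3-27b-it-q4bf16_0-MLC",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```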
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-27b-it-q4bf16_0-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
meatcarrot/emotion_classification_based_distilbert01 | meatcarrot | "2025-04-18T07:23:14Z" | 0 | 0 | null | [
"safetensors",
"distilbert",
"license:unlicense",
"region:us"
] | null | "2025-04-18T07:21:02Z" |  |
Long089/whisper-small-vi | Long089 | "2024-03-07T07:29:43Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"vi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-07T07:13:20Z" | ---
language:
- vi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Vi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
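For a quick transcription test, the checkpoint can be loaded with the `transformers` ASR pipeline (the audio path below is illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Long089/whisper-small-vi")
print(asr("sample_vi.wav")["text"])  # replace with your own Vietnamese audio file
```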
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
huggingtweets/lulaoficial | huggingtweets | "2023-01-30T04:22:42Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-01-30T04:21:49Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/lulaoficial/1675052557754/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1609717376890671110/Z0LJxPb1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lula</div>
<div style="text-align: center; font-size: 14px;">@lulaoficial</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lula.
| Data | Lula |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 189 |
| Short tweets | 68 |
| Tweets kept | 2989 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wctmk1d3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lulaoficial's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nei79824) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nei79824/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lulaoficial')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tzvc/a44f2b8b-93b0-4cab-97fb-3d3d7ba9840b | tzvc | "2023-05-16T09:38:44Z" | 13 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-12-20T12:17:28Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: sdcid
---
### a44f2b8b-93b0-4cab-97fb-3d3d7ba9840b
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompt!
Sample pictures of:
`sdcid` (use that token in your prompt)

|
featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF | featherless-ai-quants | "2024-10-29T22:17:08Z" | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:flammenai/Mahou-1.2-llama3-8B",
"base_model:quantized:flammenai/Mahou-1.2-llama3-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-10-29T21:55:02Z" | ---
base_model: flammenai/Mahou-1.2-llama3-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# flammenai/Mahou-1.2-llama3-8B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [flammenai-Mahou-1.2-llama3-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [flammenai-Mahou-1.2-llama3-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [flammenai-Mahou-1.2-llama3-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [flammenai-Mahou-1.2-llama3-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [flammenai-Mahou-1.2-llama3-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [flammenai-Mahou-1.2-llama3-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [flammenai-Mahou-1.2-llama3-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [flammenai-Mahou-1.2-llama3-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [flammenai-Mahou-1.2-llama3-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [flammenai-Mahou-1.2-llama3-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [flammenai-Mahou-1.2-llama3-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/flammenai-Mahou-1.2-llama3-8B-GGUF/blob/main/flammenai-Mahou-1.2-llama3-8B-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
lmchion/bert-base-finetuned-esg-a4s | lmchion | "2022-06-20T14:36:20Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-06-20T14:31:53Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: lmchion/bert-base-finetuned-esg-a4s
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lmchion/bert-base-finetuned-esg-a4s
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7744
- Validation Loss: 2.5318
- Epoch: 0
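Since the repo ships TensorFlow weights, a minimal inference sketch with the `transformers` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# framework="tf" because this checkpoint was trained and saved with TensorFlow
fill = pipeline("fill-mask", model="lmchion/bert-base-finetuned-esg-a4s", framework="tf")
print(fill("Reducing carbon [MASK] is a key ESG goal."))
```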
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -812, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7744 | 2.5318 | 0 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
great0001/ee7023ee-b7ff-4387-8282-501dcbe6e244 | great0001 | "2025-01-29T15:04:16Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
] | null | "2025-01-29T14:28:05Z" | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ee7023ee-b7ff-4387-8282-501dcbe6e244
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9474ac4ca19ac46d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9474ac4ca19ac46d_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/ee7023ee-b7ff-4387-8282-501dcbe6e244
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/9474ac4ca19ac46d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f2fc808c-912a-4d17-814f-4e6bf4d846a9
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f2fc808c-912a-4d17-814f-4e6bf4d846a9
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ee7023ee-b7ff-4387-8282-501dcbe6e244
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3613
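Since this is a LoRA adapter rather than a full checkpoint, it has to be loaded on top of the base model; a minimal sketch with `peft` (dtype and device settings are assumptions):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "01-ai/Yi-1.5-9B-Chat-16K"
adapter = "great0001/ee7023ee-b7ff-4387-8282-501dcbe6e244"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
```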
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.7149 |
| 0.5499 | 0.0004 | 13 | 0.4317 |
| 0.4474 | 0.0007 | 26 | 0.3826 |
| 0.4069 | 0.0011 | 39 | 0.3613 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sayan01/Phi35-1B-KL | Sayan01 | "2025-03-13T13:34:09Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-03T01:19:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
andrek/LAT2NOB | andrek | "2021-09-23T13:06:22Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:05Z" | ---
language: no
license: cc-by-4.0
tags:
- translation
widget:
- text: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua.
---
|
MPGPT/Tommy | MPGPT | "2025-02-05T12:37:47Z" | 10 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-05T12:13:44Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Tommy
---
# Tommy
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Tommy` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('MPGPT/Tommy', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ArtFair/fine_tuned_segmentation-3.0_1e-3_128_pth | ArtFair | "2024-09-21T01:16:07Z" | 157 | 0 | null | [
"pytorch",
"tensorboard",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:ArtFair/diarizers_dataset_70-15-15",
"base_model:pyannote/segmentation-3.0",
"base_model:finetune:pyannote/segmentation-3.0",
"license:mit",
"region:us"
] | null | "2024-09-21T00:55:44Z" | ---
license: mit
base_model: pyannote/segmentation-3.0
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- ArtFair/diarizers_dataset_70-15-15
model-index:
- name: fine_tuned_segmentation-3.0_1e-3_128_pth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_segmentation-3.0_1e-3_128_pth
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the ArtFair/diarizers_dataset_70-15-15 default dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3620
- Der: 0.2625
- False Alarm: 0.1458
- Missed Detection: 0.0926
- Confusion: 0.0241
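A minimal sketch for running the fine-tuned segmentation model over an audio file (assuming the checkpoint is in standard pyannote format; the file path is illustrative):
```python
from pyannote.audio import Inference, Model

model = Model.from_pretrained("ArtFair/fine_tuned_segmentation-3.0_1e-3_128_pth")
inference = Inference(model, duration=10.0, step=2.5)  # sliding 10 s windows
segmentation = inference("audio.wav")  # per-frame speaker activation scores
```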
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.426 | 1.0 | 233 | 0.3954 | 0.2915 | 0.1834 | 0.0807 | 0.0274 |
| 0.3974 | 2.0 | 466 | 0.3667 | 0.2668 | 0.1391 | 0.1032 | 0.0246 |
| 0.3772 | 3.0 | 699 | 0.3675 | 0.2672 | 0.1552 | 0.0874 | 0.0246 |
| 0.3618 | 4.0 | 932 | 0.3629 | 0.2641 | 0.1498 | 0.0899 | 0.0243 |
| 0.3622 | 5.0 | 1165 | 0.3620 | 0.2625 | 0.1458 | 0.0926 | 0.0241 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.4.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
NexesQuants/Llama_3.x_70b_Trojka_V3.8-iMat-CQ-GGUF | NexesQuants | "2025-04-11T21:11:48Z" | 41 | 0 | null | [
"gguf",
"base_model:NexesMess/Llama_3.x_70b_Trojka_V3.8",
"base_model:quantized:NexesMess/Llama_3.x_70b_Trojka_V3.8",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-06T14:35:28Z" | ---
license: llama3.3
base_model:
- NexesMess/Llama_3.x_70b_Trojka_V3.8
--- |
csukuangfj/k2fsa-zipformer-bilingual-zh-en-t | csukuangfj | "2025-03-31T11:39:09Z" | 0 | 4 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | "2023-11-17T10:18:37Z" | ---
license: apache-2.0
---
## Chinese-English ASR model using k2-zipformer-streaming
### AIShell-1 and Wenetspeech testset results with modified-beam-search streaming decode using epoch-12.pt
| decode_chunk_len | AIShell-1 | TEST_NET | TEST_MEETING |
|------------------|-----------|----------|--------------|
| 64 | 4.79 | 11.6 | 12.64 |
### Training and decoding commands
```
nohup ./pruned_transducer_stateless7_streaming/train.py --world-size 8 --num-epochs 30 --start-epoch 1 \
--num-encoder-layers 2,2,2,2,2 \
--feedforward-dims 768,768,768,768,768 \
--nhead 4,4,4,4,4 \
--encoder-dims 256,256,256,256,256 \
--attention-dims 192,192,192,192,192 \
--encoder-unmasked-dims 192,192,192,192,192 \
--exp-dir pruned_transducer_stateless7_streaming/exp --max-duration 360 \
> pruned_transducer_stateless7_streaming/exp/nohup.zipformer &
nohup ./pruned_transducer_stateless7_streaming/decode.py --epoch 12 --avg 1 \
--num-encoder-layers 2,2,2,2,2 \
--feedforward-dims 768,768,768,768,768 \
--nhead 4,4,4,4,4 \
--encoder-dims 256,256,256,256,256 \
--attention-dims 192,192,192,192,192 \
--encoder-unmasked-dims 192,192,192,192,192 \
--exp-dir pruned_transducer_stateless7_streaming/exp \
--max-duration 600 --decode-chunk-len 32 --decoding-method modified_beam_search --beam-size 4 \
> nohup.zipformer.deocode &
```
### Modeling units are char+BPE, as listed in `data/lang_char_bpe/tokens.txt`
### Tips
The k2-fsa version and training parameters were as follows:
```
{'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.2', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'a74f59dba1863cd9386ba4d8815850421260eee7', 'k2-git-date': 'Fri Dec 2 08:32:22 2022', 'lhotse-version': '1.5.0.dev+git.8ce38fc.dirty', 'torch-version': '1.11.0+cu113', 'torch-cuda-available': True, 'torch-cuda-version': '11.3', 'python-version': '3.7', 'icefall-git-branch': 'master', 'icefall-git-sha1': '600f387-dirty', 'icefall-git-date': 'Thu Feb 9 15:16:04 2023', 'icefall-path': '/opt/conda/lib/python3.7/site-packages', 'k2-path': '/opt/conda/lib/python3.7/site-packages/k2/__init__.py', 'lhotse-path': '/opt/conda/lib/python3.7/site-packages/lhotse/__init__.py', 'hostname': 'worker-0', 'IP address': '127.0.0.1'}, 'world_size': 8, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 11, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp_t'), 'lang_dir': 'data/lang_char_bpe', 'base_lr': 0.01, 'lr_batches': 5000, 'lr_epochs': 3.5, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': False, 'num_encoder_layers': '2,2,2,2,2', 'feedforward_dims': '768,768,768,768,768', 'nhead': '4,4,4,4,4', 'encoder_dims': '256,256,256,256,256', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '192,192,192,192,192', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 360, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'training_subset': 'mix', 'blank_id': 0, 'vocab_size': 6254}
```
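For streaming deployment, checkpoints like this are typically exported to ONNX and decoded with [sherpa-onnx](https://github.com/k2-fsa/sherpa-onnx); a minimal sketch (the encoder/decoder/joiner file names are hypothetical placeholders for your exported models):
```python
import numpy as np
import sherpa_onnx

recognizer = sherpa_onnx.OnlineRecognizer.from_transducer(
    tokens="data/lang_char_bpe/tokens.txt",
    encoder="encoder.onnx",  # hypothetical file names; substitute
    decoder="decoder.onnx",  # your actual exported ONNX models
    joiner="joiner.onnx",
)
stream = recognizer.create_stream()
stream.accept_waveform(16000, np.zeros(16000, dtype=np.float32))  # replace with real samples
while recognizer.is_ready(stream):
    recognizer.decode_stream(stream)
print(recognizer.get_result(stream))
```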
|
YieldInc/Puffin | YieldInc | "2023-10-29T22:14:59Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-29T22:14:22Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
smp-hub/upernet-swin-large | smp-hub | "2025-04-12T21:35:28Z" | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"upernet",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | "2025-04-12T21:35:25Z" | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
- upernet
languages:
- python
---
# UPerNet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Dataset](#dataset)
## Load trained model
[](https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/upernet_inference_pretrained.ipynb)
1. Install requirements.
```bash
pip install -U segmentation_models_pytorch albumentations
```
2. Run inference.
```python
import torch
import requests
import numpy as np
import albumentations as A
import segmentation_models_pytorch as smp
from PIL import Image
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load pretrained model and preprocessing function
checkpoint = "smp-hub/upernet-swin-large"
model = smp.from_pretrained(checkpoint).eval().to(device)
preprocessing = A.Compose.from_pretrained(checkpoint)
# Load image
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Preprocess image
np_image = np.array(image)
normalized_image = preprocessing(image=np_image)["image"]
input_tensor = torch.as_tensor(normalized_image)
input_tensor = input_tensor.permute(2, 0, 1).unsqueeze(0) # HWC -> BCHW
input_tensor = input_tensor.to(device)
# Perform inference
with torch.no_grad():
output_mask = model(input_tensor)
# Postprocess mask
mask = output_mask.argmax(1).cpu().numpy()  # argmax over predicted classes (channels dim)
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "tu-swin_large_patch4_window12_384",
"encoder_depth": 5,
"encoder_weights": None,
"decoder_channels": 512,
"decoder_use_norm": "batchnorm",
"in_channels": 3,
"classes": 150,
"activation": None,
"upsampling": 4,
"aux_params": None,
"img_size": 512
}
```
## Dataset
Dataset name: [ADE20K](https://ade20k.csail.mit.edu/)
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
thirteenbit/madlad400-10b-mt-gguf | thirteenbit | "2024-07-06T16:37:55Z" | 218 | 4 | null | [
"gguf",
"translation",
"base_model:google/madlad400-10b-mt",
"base_model:quantized:google/madlad400-10b-mt",
"license:apache-2.0",
"region:us"
] | translation | "2024-07-06T11:44:28Z" | ---
base_model: google/madlad400-10b-mt
inference: false
license: apache-2.0
model_name: madlad400-10b-mt-gguf
pipeline_tag: translation
---
# MADLAD-400-10B-MT - GGUF
- Original model: [MADLAD-400-10B-MT](https://huggingface.co/google/madlad400-10b-mt)
## Description
This repo contains GGUF format model files for [MADLAD-400-10B-MT](https://huggingface.co/google/madlad400-10b-mt) for
use with [llama.cpp](https://github.com/ggerganov/llama.cpp) and compatible software.
Converted to gguf using llama.cpp [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py)
and quantized using llama.cpp llama-quantize, llama.cpp version [b3325](https://github.com/ggerganov/llama.cpp/commits/b3325).
## Provided files
| Name | Quant method | Bits | Size | VRAM required |
| ---- | ---- | ---- | ---- | ---- |
| [model-q3_k_m.gguf](https://huggingface.co/thirteenbit/madlad400-10b-mt-gguf/blob/main/model-q3_k_m.gguf) | Q3_K_M | 3 | 4.9 GB| 5.7 GB |
| [model-q4_k_m.gguf](https://huggingface.co/thirteenbit/madlad400-10b-mt-gguf/blob/main/model-q4_k_m.gguf) | Q4_K_M | 4 | 6.3 GB| 7.1 GB |
| [model-q5_k_m.gguf](https://huggingface.co/thirteenbit/madlad400-10b-mt-gguf/blob/main/model-q5_k_m.gguf) | Q5_K_M | 5 | 7.2 GB| 7.9 GB |
| [model-q6_k.gguf](https://huggingface.co/thirteenbit/madlad400-10b-mt-gguf/blob/main/model-q6_k.gguf) | Q6_K | 6 | 8.2 GB| 8.9 GB |
| [model-q8_0.gguf](https://huggingface.co/thirteenbit/madlad400-10b-mt-gguf/blob/main/model-q8_0.gguf) | Q8_0 | 8 | 11 GB| 11.3 GB |
**Note**: the above VRAM usage figures are observed with all layers GPU offloading, on Linux with NVIDIA GPU.
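For a quick check from Python, one llama.cpp-compatible option is the `llama-cpp-python` bindings (this assumes a build recent enough to support the T5 architecture; the path and prompt are illustrative):
```python
from llama_cpp import Llama

llm = Llama(model_path="model-q4_k_m.gguf", n_gpu_layers=-1)  # -1 offloads all layers
# MADLAD-400 expects a <2xx> target-language prefix, e.g. <2de> for German.
out = llm("<2de> How are you today?", max_tokens=64)
print(out["choices"][0]["text"])
```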
|
andrea-coppari/phi-1_5-geodata-finetuning-ita | andrea-coppari | "2023-11-10T11:45:24Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
] | null | "2023-11-10T11:25:53Z" | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-geodata-finetuning-ita
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-geodata-finetuning-ita
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
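A minimal generation sketch with `transformers` (the Italian prompt is illustrative; `trust_remote_code=True` may be needed for the phi architecture on older `transformers` versions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andrea-coppari/phi-1_5-geodata-finetuning-ita"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
inputs = tokenizer("Che cos'è un sistema informativo territoriale?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```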
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1500
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
irwantongkeng1/f90f343d-a6db-4053-87aa-5abe2635ef73 | irwantongkeng1 | "2025-04-18T00:43:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-18T00:12:20Z" |  |
impossibleexchange/run108 | impossibleexchange | "2025-02-04T18:14:11Z" | 19 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-04T18:11:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Benjaminpwh/xlsr-toratan-240-copt-base_H | Benjaminpwh | "2025-04-10T09:19:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"pretraining",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-10T04:21:55Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: xlsr-toratan-240-copt-base_H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-toratan-240-copt-base_H
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.567e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
fbaldassarri/mistralai_Mistral-7B-v0.3-autoround-int4-gs128-asym | fbaldassarri | "2025-01-17T17:53:36Z" | 7 | 0 | null | [
"safetensors",
"mistral",
"pytorch",
"causal-lm",
"autoround",
"auto-round",
"intel-autoround",
"gptq",
"woq",
"intel",
"mistralai",
"text-generation",
"en",
"fr",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:quantized:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"4-bit",
"intel/auto-round",
"region:us"
] | text-generation | "2025-01-15T17:02:32Z" | ---
language:
- en
- fr
tags:
- pytorch
- causal-lm
- mistral
- autoround
- auto-round
- intel-autoround
- gptq
- woq
- intel
- pytorch
- mistralai
license: apache-2.0
model_name: Mistral 7B v0.3
base_model:
- mistralai/Mistral-7B-v0.3
inference: false
model_creator: mistralai
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 128
- Asymmetrical Quantization
- Method WoQ (AutoRound format)
Fast and low memory; roughly 2-3X speedup, with a slight accuracy drop at W4G128.
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.3
Note: this INT4 version of Mistral-7B-v0.3 has been quantized to run inference through CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.3.tar.gz
tar -xvzf v0.4.3.tar.gz
cd auto-round-0.4.3
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```shell
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Load the full-precision base model and its tokenizer
model_name = "mistralai/Mistral-7B-v0.3"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 128, asymmetric quantization; tuning on CPU without AMP
bits, group_size, sym, device, amp = 4, 128, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()

# Export in the AutoRound (WoQ) format
output_dir = "./AutoRound/mistralai_Mistral-7B-v0.3-autoround-int4-gs128-asym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```
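To sanity-check the exported checkpoint, a minimal CPU inference sketch is shown below. This is an illustration only: the prompt is arbitrary, and the `AutoRoundConfig` import follows the pattern Intel AutoRound documents for loading `auto_round`-format checkpoints through transformers.
```python
from auto_round import AutoRoundConfig  # registers the auto_round format with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_dir = "./AutoRound/mistralai_Mistral-7B-v0.3-autoround-int4-gs128-asym"
model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained(quantized_dir)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```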
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
bartowski/internlm2-chat-20b-llama-exl2 | bartowski | "2024-01-27T23:12:57Z" | 1 | 6 | null | [
"text-generation",
"license:other",
"region:us"
] | text-generation | "2024-01-25T19:07:07Z" | ---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---
Update Jan 27: This has been redone with the proper token mappings and rope scaling. Performance seems improved; please comment if not.
## Exllama v2 Quantizations of internlm2-chat-20b-llama-test
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization; the main branch contains only the measurement.json for further conversions.
Original model: https://huggingface.co/internlm/internlm2-chat-20b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | ---- | ---- | ---- | ----------- |
| [6_5](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/6_5) | 6.5 | 8.0 | 19.6 GB | 21.0 GB | 23.0 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [4_25](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/4_25) | 4.25 | 6.0 | 13.8 GB | 15.2 GB | 17.2 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/3_5) | 3.5 | 6.0 | 12.4 GB | 13.8 GB | 15.8 GB | Lower quality, only use if you have to. |
| [3_0](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exl2/tree/3_0) | 3.0 | 6.0 | 11.1 GB | 12.5 GB | 15.5 GB | Very low quality. Usable on 12GB. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-20b-llama-exl2 internlm2-chat-20b-llama-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just want the measurement.json) to a folder called `internlm2-chat-20b-llama-exl2`:
```shell
mkdir internlm2-chat-20b-llama-exl2
huggingface-cli download bartowski/internlm2-chat-20b-llama-exl2 --local-dir internlm2-chat-20b-llama-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir internlm2-chat-20b-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-20b-llama-exl2 --revision 6_5 --local-dir internlm2-chat-20b-llama-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir internlm2-chat-20b-llama-exl2-6.5
huggingface-cli download bartowski/internlm2-chat-20b-llama-exl2 --revision 6_5 --local-dir internlm2-chat-20b-llama-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
fbarragan/iabdsapa_model | fbarragan | "2025-01-14T11:33:49Z" | 73 | 1 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"es",
"dataset:omarques/autotrain-data-dogs-and-cats",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-01-14T10:27:35Z" | ---
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
pipeline_tag: image-classification
language:
- es
library_name: transformers
license: cc
--- |
Sicarius-Prototyping/Brainy_LLAMA | Sicarius-Prototyping | "2024-12-04T17:55:14Z" | 6 | 1 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | "2024-12-04T17:05:09Z" | ---
license: apache-2.0
---
## Overview
Brainy_LLAMA is a state-of-the-art large language model developed by my cat. It is designed to understand and generate human-like text based on the input it receives. This model is capable of performing a wide range of natural language processing tasks, including but not limited to text generation, translation, summarization, and question-answering.
## Intended Use
Brainy_LLAMA is intended for use in various applications that require advanced natural language processing capabilities. Some of the key use cases include:
- **Text Generation:** Generating coherent and contextually relevant text based on given prompts.
- **Translation:** Translating text from one language to another with high accuracy.
- **Summarization:** Summarizing long texts into concise and informative summaries.
- **Question-Answering:** Providing accurate and relevant answers to user queries.
- **Content Creation:** Assisting in the creation of articles, reports, and other written content.
- **Chatbots and Virtual Assistants:** Powering conversational agents that can engage in natural and meaningful dialogues with users.
## Training Data
Brainy_LLAMA was trained on a diverse and extensive dataset comprising text from various sources, including books, articles, websites, and other publicly available texts. The training data was carefully curated to ensure a wide range of topics and styles, enabling the model to understand and generate text across different domains.
## Model Architecture
Brainy_LLAMA is based on the transformer architecture, which is known for its effectiveness in handling sequential data. The model consists of multiple layers of self-attention mechanisms and feed-forward neural networks, allowing it to capture complex patterns and relationships in the input text.
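To make the self-attention mechanism concrete, here is a minimal scaled dot-product attention sketch in PyTorch. Dimensions are arbitrary and unrelated to Brainy_LLAMA's actual configuration, and real transformer layers use separate learned projections for queries, keys, and values:
```python
import torch
import torch.nn.functional as F

batch, seq_len, d_model = 1, 8, 64
x = torch.randn(batch, seq_len, d_model)

# Real layers compute q, k, v with learned linear projections; here q = k = v = x
q = k = v = x
scores = q @ k.transpose(-2, -1) / (d_model ** 0.5)  # pairwise similarity, scaled
attn = F.softmax(scores, dim=-1)                     # attention weights per position
out = attn @ v                                       # each position is a weighted mix of all positions
```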
## Performance Metrics
Brainy_LLAMA has been evaluated on several benchmark datasets and has demonstrated competitive performance across various natural language processing tasks. Some of the key performance metrics include:
- **Perplexity:** A measure of the model's ability to predict the next word in a sequence. Lower perplexity indicates better performance (see the sketch after this list).
- **BLEU Score:** A metric used to evaluate the quality of machine-generated text, particularly in translation tasks. Higher BLEU scores indicate better performance.
- **ROUGE Score:** A metric used to evaluate the quality of summarization tasks. Higher ROUGE scores indicate better performance.
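As a concrete illustration of the perplexity metric, the sketch below computes it from the model's mean token-level loss. The model id and sample text are placeholders only:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sicarius-Prototyping/Brainy_LLAMA"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # For causal LMs, passing labels=input_ids returns the mean
    # next-token cross-entropy loss over the sequence.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"Perplexity: {torch.exp(loss).item():.2f}")  # lower is better
```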
## Limitations
While Brainy_LLAMA is a powerful language model, it is important to be aware of its limitations:
- **Hallucinations:** The model may generate text that sounds confident but is factually incorrect. Users should verify the information generated by the model.
- **Bias:** The model may exhibit biases present in the training data. Efforts have been made to mitigate biases, but users should be cautious of potential biases in the generated text.
- **Context Window:** The model has a limited context window, which means it may not be able to maintain coherence over very long texts. |