modelId (string, length 5–138) | author (string, length 2–42) | last_modified (date, 2020-02-15 11:33:14 – 2025-04-20 06:26:59) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 429 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-04-20 06:26:36) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---
RichardErkhov/Digest0703_-_test_llm-gguf | RichardErkhov | "2025-04-05T01:18:43Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-05T00:05:47Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
test_llm - GGUF
- Model creator: https://huggingface.co/Digest0703/
- Original model: https://huggingface.co/Digest0703/test_llm/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [test_llm.Q2_K.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q2_K.gguf) | Q2_K | 1.27GB |
| [test_llm.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [test_llm.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [test_llm.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [test_llm.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [test_llm.Q3_K.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q3_K.gguf) | Q3_K | 1.57GB |
| [test_llm.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [test_llm.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [test_llm.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [test_llm.Q4_0.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q4_0.gguf) | Q4_0 | 1.79GB |
| [test_llm.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [test_llm.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [test_llm.Q4_K.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q4_K.gguf) | Q4_K | 1.88GB |
| [test_llm.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [test_llm.Q4_1.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q4_1.gguf) | Q4_1 | 1.95GB |
| [test_llm.Q5_0.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q5_0.gguf) | Q5_0 | 2.11GB |
| [test_llm.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [test_llm.Q5_K.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q5_K.gguf) | Q5_K | 2.16GB |
| [test_llm.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [test_llm.Q5_1.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q5_1.gguf) | Q5_1 | 2.28GB |
| [test_llm.Q6_K.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q6_K.gguf) | Q6_K | 2.46GB |
| [test_llm.Q8_0.gguf](https://huggingface.co/RichardErkhov/Digest0703_-_test_llm-gguf/blob/main/test_llm.Q8_0.gguf) | Q8_0 | 3.19GB |
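Any of the files above can be run directly with llama.cpp; a minimal sketch, assuming a recent llama.cpp build with Hub download support (the prompt is illustrative):
```bash
# Fetch and run the Q4_K_M quant straight from the Hub
llama-cli --hf-repo RichardErkhov/Digest0703_-_test_llm-gguf \
          --hf-file test_llm.Q4_K_M.gguf \
          -p "Hello, how are you?"
```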
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pfunk/CartPole-v1-CP_DQPN_x5-seed3 | pfunk | "2023-03-20T02:59:44Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-20T02:59:41Z" | ---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x5.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x5]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x5 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x5-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x5 --policy-network-frequency 100 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x5',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 100,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
andradejunior/ppo-LunarLander-v2 | andradejunior | "2022-12-20T02:34:03Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-20T02:33:35Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.51 +/- 39.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename assumed from the standard huggingface_sb3 convention
checkpoint = load_from_hub(repo_id="andradejunior/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mirella-guenther/distil-whisper-distil-large-v3-torgo-2-epochs | mirella-guenther | "2024-06-05T00:23:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T00:23:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
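The architecture is not documented in this card, but the repository name suggests a Distil-Whisper (large-v3) fine-tune on TORGO, so a generic speech-recognition sketch may apply (the task and the audio filename are assumptions):
```python
from transformers import pipeline

# Assumption: the repo name suggests an ASR (Distil-Whisper) checkpoint; "sample.wav" is a placeholder
asr = pipeline("automatic-speech-recognition",
               model="mirella-guenther/distil-whisper-distil-large-v3-torgo-2-epochs")
print(asr("sample.wav")["text"])
```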
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kondara/gemma-3-12b-it-Q4_K_M-GGUF | Kondara | "2025-03-13T04:05:18Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | "2025-03-13T04:04:41Z" | ---
base_model: google/gemma-3-12b-it
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Kondara/gemma-3-12b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-12b-it`](https://huggingface.co/google/gemma-3-12b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3-12b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Kondara/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Kondara/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Kondara/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo Kondara/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048
```
|
prushton/dreambooth-myra-3000 | prushton | "2023-12-04T01:22:16Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-03T21:41:37Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of myra
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - prushton/dreambooth-myra-3000
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained with the instance prompt "a photo of myra" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was enabled: False.
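A minimal inference sketch with 🤗 Diffusers, using the instance prompt from this card (the sampling settings are generic assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "prushton/dreambooth-myra-3000", torch_dtype=torch.float16
).to("cuda")

# "a photo of myra" is the instance prompt from the card metadata
image = pipe("a photo of myra", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("myra.png")
```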
|
TheBloke/MelloGPT-AWQ | TheBloke | "2023-12-16T15:02:11Z" | 11 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:nbertagnolli/counsel-chat",
"base_model:steve-cse/MelloGPT",
"base_model:quantized:steve-cse/MelloGPT",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-12-16T14:44:18Z" | ---
base_model: steve-cse/MelloGPT
datasets:
- nbertagnolli/counsel-chat
inference: false
license: mit
model_creator: Steve Boby George
model_name: MelloGPT
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MelloGPT - AWQ
- Model creator: [Steve Boby George](https://huggingface.co/steve-cse)
- Original model: [MelloGPT](https://huggingface.co/steve-cse/MelloGPT)
<!-- description start -->
## Description
This repo contains AWQ model files for [Steve Boby George's MelloGPT](https://huggingface.co/steve-cse/MelloGPT).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MelloGPT-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MelloGPT-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MelloGPT-GGUF)
* [Steve Boby George's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/steve-cse/MelloGPT)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/MelloGPT-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/MelloGPT-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `MelloGPT-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/MelloGPT-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# No f-prefix here: "{prompt}" must stay a literal placeholder for .format() below
prompt_template='''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/MelloGPT-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/MelloGPT-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/MelloGPT-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Steve Boby George's MelloGPT
A fine-tuned version of Mistral-7B-v0.1 on the Counsel Chat dataset for mental health conversations.
In an era where mental health support is of paramount importance, a large language model fine-tuned on mental health counseling conversations stands as a pioneering solution. This approach aims to elevate natural language understanding and generation within the realm of mental health support. Leveraging a diverse dataset of anonymized counseling sessions, the model has been trained to recognize and respond to a wide range of mental health concerns, including anxiety, depression, stress, and more. The fine-tuning process incorporates ethical considerations, privacy concerns, and sensitivity to the nuances of mental health conversations. The resulting model demonstrates an intricate understanding of mental health issues and provides empathetic and supportive responses, offering a valuable tool for individuals seeking guidance, mental health professionals, and the broader healthcare community.
|
stablediffusionapi/godhorror | stablediffusionapi | "2024-05-24T10:09:06Z" | 29 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-24T10:06:52Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# GodHorror API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to `"godhorror"`.
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/godhorror)
Model link: [View model](https://modelslab.com/models/godhorror)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "godhorror",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use coupon code **DMGG0RBN** to get 25% off. |
timmyAlvice/house_md_transfer_learning | timmyAlvice | "2025-03-23T09:42:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-11T09:00:32Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
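The tags mark this as a Gemma-3 text-generation checkpoint quantised with 4-bit bitsandbytes, so a generic sketch along these lines should apply (the prompt is illustrative):
```python
from transformers import pipeline

# Tags indicate a Gemma-3 text-generation checkpoint (4-bit bitsandbytes);
# requires a recent transformers release with Gemma-3 support.
generator = pipeline("text-generation", model="timmyAlvice/house_md_transfer_learning")
print(generator("Dr. House:", max_new_tokens=50)[0]["generated_text"])
```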
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/prefect-pony-xl-v3-sdxl | John6666 | "2024-09-11T03:32:21Z" | 388 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"animagine",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-09-11T03:22:17Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- animagine
- pony
---
The original model is [here](https://civitai.com/models/439889/prefect-pony-xl?modelVersionId=828380).
This model was created by [Goofy_Ai](https://civitai.com/user/Goofy_Ai).
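A minimal text-to-image sketch with 🤗 Diffusers (the pipeline class comes from this repo's tags; the prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Tags mark this repo as a StableDiffusionXLPipeline (SDXL) checkpoint
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/prefect-pony-xl-v3-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, solo, anime style, masterpiece", num_inference_steps=28).images[0]
image.save("output.png")
```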
|
MrinmoySaikia/t5-small-finetuned-wikisql | MrinmoySaikia | "2024-04-29T05:59:19Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-28T21:45:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-wikisql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
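Since the repo name suggests a WikiSQL text-to-SQL fine-tune, a generic inference sketch may apply (the `translate English to SQL:` prefix is an assumption based on common WikiSQL T5 fine-tunes):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("MrinmoySaikia/t5-small-finetuned-wikisql")
model = AutoModelForSeq2SeqLM.from_pretrained("MrinmoySaikia/t5-small-finetuned-wikisql")

# Prompt prefix is an assumption; common WikiSQL T5 fine-tunes use this format
inputs = tok("translate English to SQL: how many departments have more than 50 employees?", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```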
|
RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf | RichardErkhov | "2024-04-17T10:23:47Z" | 164 | 0 | null | [
"gguf",
"arxiv:2012.05628",
"endpoints_compatible",
"region:us"
] | null | "2024-04-17T10:20:46Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-small-italian-embeddings - GGUF
- Model creator: https://huggingface.co/GroNLP/
- Original model: https://huggingface.co/GroNLP/gpt2-small-italian-embeddings/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-small-italian-embeddings.Q2_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q2_K.gguf) | Q2_K | 0.06GB |
| [gpt2-small-italian-embeddings.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.IQ3_XS.gguf) | IQ3_XS | 0.06GB |
| [gpt2-small-italian-embeddings.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.IQ3_S.gguf) | IQ3_S | 0.06GB |
| [gpt2-small-italian-embeddings.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q3_K_S.gguf) | Q3_K_S | 0.06GB |
| [gpt2-small-italian-embeddings.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [gpt2-small-italian-embeddings.Q3_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q3_K.gguf) | Q3_K | 0.07GB |
| [gpt2-small-italian-embeddings.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [gpt2-small-italian-embeddings.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [gpt2-small-italian-embeddings.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [gpt2-small-italian-embeddings.Q4_0.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q4_0.gguf) | Q4_0 | 0.08GB |
| [gpt2-small-italian-embeddings.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [gpt2-small-italian-embeddings.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [gpt2-small-italian-embeddings.Q4_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q4_K.gguf) | Q4_K | 0.08GB |
| [gpt2-small-italian-embeddings.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [gpt2-small-italian-embeddings.Q4_1.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q4_1.gguf) | Q4_1 | 0.08GB |
| [gpt2-small-italian-embeddings.Q5_0.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q5_0.gguf) | Q5_0 | 0.09GB |
| [gpt2-small-italian-embeddings.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q5_K_S.gguf) | Q5_K_S | 0.09GB |
| [gpt2-small-italian-embeddings.Q5_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q5_K.gguf) | Q5_K | 0.09GB |
| [gpt2-small-italian-embeddings.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [gpt2-small-italian-embeddings.Q5_1.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q5_1.gguf) | Q5_1 | 0.1GB |
| [gpt2-small-italian-embeddings.Q6_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-embeddings-gguf/blob/main/gpt2-small-italian-embeddings.Q6_K.gguf) | Q6_K | 0.1GB |
Original model description:
---
language: it
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (small, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian-embeddings")
```
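For example, a quick generation call with the pipeline above (the prompt is illustrative and sampled output will vary):
```python
# Sample a short Italian continuation
print(pipe("La pizza italiana è", max_new_tokens=20, do_sample=True)[0]["generated_text"])
```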
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wyl88/rdt_5000 | wyl88 | "2025-03-08T14:01:37Z" | 1 | 0 | null | [
"pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-03-06T13:39:20Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://huggingface.co/robotics-diffusion-transformer/rdt-1b
- Docs: [More Information Needed] |
cleanrl/Asterix-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1 | cleanrl | "2023-03-02T23:00:22Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Asterix-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-02T23:00:21Z" | ---
tags:
- Asterix-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Asterix-v5
type: Asterix-v5
metrics:
- type: mean_reward
value: 338180.00 +/- 103580.12
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Asterix-v5**
This is a trained model of a PPO agent playing Asterix-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_machado_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_machado_atari_wrapper --env-id Asterix-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Asterix-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Asterix-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Asterix-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_machado_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Asterix-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Asterix-v5',
'exp_name': 'cleanba_ppo_envpool_machado_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
tensorblock/my_Llama-3.2-3B-Instruct-GGUF | tensorblock | "2025-01-01T17:43:21Z" | 27 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:pavan01729/my_Llama-3.2-3B-Instruct",
"base_model:quantized:pavan01729/my_Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-01T17:28:26Z" | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: pavan01729/my_Llama-3.2-3B-Instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## pavan01729/my_Llama-3.2-3B-Instruct - GGUF
This repo contains GGUF format model files for [pavan01729/my_Llama-3.2-3B-Instruct](https://huggingface.co/pavan01729/my_Llama-3.2-3B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 02 Jan 2025
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
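The template above can be filled in programmatically before passing text to llama.cpp. A minimal sketch (the system prompt and user message below are illustrative):

```python
# Fill the Llama-3.2 chat template shown above (illustrative values).
system_prompt = "You are a helpful assistant."
prompt = "What is the capital of France?"

formatted = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Cutting Knowledge Date: December 2023\n"
    "Today Date: 02 Jan 2025\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(formatted)
```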
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [my_Llama-3.2-3B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q2_K.gguf) | Q2_K | 1.364 GB | smallest, significant quality loss - not recommended for most purposes |
| [my_Llama-3.2-3B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q3_K_S.gguf) | Q3_K_S | 1.543 GB | very small, high quality loss |
| [my_Llama-3.2-3B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q3_K_M.gguf) | Q3_K_M | 1.687 GB | very small, high quality loss |
| [my_Llama-3.2-3B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q3_K_L.gguf) | Q3_K_L | 1.815 GB | small, substantial quality loss |
| [my_Llama-3.2-3B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q4_0.gguf) | Q4_0 | 1.917 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [my_Llama-3.2-3B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q4_K_S.gguf) | Q4_K_S | 1.928 GB | small, greater quality loss |
| [my_Llama-3.2-3B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q4_K_M.gguf) | Q4_K_M | 2.019 GB | medium, balanced quality - recommended |
| [my_Llama-3.2-3B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q5_0.gguf) | Q5_0 | 2.270 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [my_Llama-3.2-3B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q5_K_S.gguf) | Q5_K_S | 2.270 GB | large, low quality loss - recommended |
| [my_Llama-3.2-3B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q5_K_M.gguf) | Q5_K_M | 2.322 GB | large, very low quality loss - recommended |
| [my_Llama-3.2-3B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q6_K.gguf) | Q6_K | 2.644 GB | very large, extremely low quality loss |
| [my_Llama-3.2-3B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/my_Llama-3.2-3B-Instruct-GGUF/blob/main/my_Llama-3.2-3B-Instruct-Q8_0.gguf) | Q8_0 | 3.422 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/my_Llama-3.2-3B-Instruct-GGUF --include "my_Llama-3.2-3B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/my_Llama-3.2-3B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
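If you prefer Python over the CLI, the same file can be fetched with the `huggingface_hub` library (a minimal sketch; the filename below is one of the quants listed in the table above):

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file into a local directory.
local_path = hf_hub_download(
    repo_id="tensorblock/my_Llama-3.2-3B-Instruct-GGUF",
    filename="my_Llama-3.2-3B-Instruct-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)
```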
|
Themira/smollm-mt5-en-si | Themira | "2025-03-25T14:42:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:adapter:HuggingFaceTB/SmolLM2-135M-Instruct",
"region:us"
] | null | "2025-03-14T18:12:11Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
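Since no snippet is provided yet, here is a minimal loading sketch based on this card's metadata (assumptions: this repo hosts a PEFT/LoRA adapter for the base model listed above; the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32)

# Attach the adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "Themira/smollm-mt5-en-si")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```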
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero | OpenDILabCommunity | "2024-02-01T07:03:04Z" | 0 | 0 | pytorch | [
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"TicTacToe-play-with-bot",
"en",
"arxiv:2310.08348",
"license:apache-2.0",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-01T07:02:57Z" | ---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- TicTacToe-play-with-bot
benchmark_name: OpenAI/Gym/Atari
task_name: TicTacToe-play-with-bot
pipeline_tag: reinforcement-learning
model-index:
- name: GumbelMuZero
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: TicTacToe-play-with-bot
type: TicTacToe-play-with-bot
metrics:
- type: mean_reward
value: 0.7 +/- 0.46
name: mean_reward
---
# Play **TicTacToe-play-with-bot** with **GumbelMuZero** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This implementation applies **GumbelMuZero** to the OpenAI/Gym/Atari **TicTacToe-play-with-bot** environment using [LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine).
**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in the paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env,video]
pip3 install LightZero
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import GumbelMuZeroAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = GumbelMuZeroAgent(
env_id="TicTacToe-play-with-bot", exp_name="TicTacToe-play-with-bot-GumbelMuZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import GumbelMuZeroAgent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero")
# Instantiate the agent
agent = GumbelMuZeroAgent(
env_id="TicTacToe-play-with-bot", exp_name="TicTacToe-play-with-bot-GumbelMuZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from lzero.agent import GumbelMuZeroAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = GumbelMuZeroAgent(env_id="TicTacToe-play-with-bot", exp_name="TicTacToe-play-with-bot-GumbelMuZero")
# Train the agent
return_ = agent.train(step=int(10000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Atari",
task_name="TicTacToe-play-with-bot",
algo_name="GumbelMuZero",
github_repo_url="https://github.com/opendilab/LightZero",
github_doc_model_url=None,
github_doc_env_url=None,
installation_guide='''
pip3 install DI-engine[common_env,video]
pip3 install LightZero
''',
usage_file_by_git_clone="./gumbel_muzero/tictactoe_play_with_bot_gumbel_muzero_deploy.py",
usage_file_by_huggingface_ding="./gumbel_muzero/tictactoe_play_with_bot_gumbel_muzero_download.py",
train_file="./gumbel_muzero/tictactoe_play_with_bot_gumbel_muzero.py",
repo_id="OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero",
platform_info="[LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine)",
model_description="**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).",
create_repo=True
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'main_config': {
'exp_name': 'TicTacToe-play-with-bot-GumbelMuZero',
'seed': 0,
'env': {
'env_id': 'TicTacToe-play-with-bot',
'battle_mode': 'play_with_bot_mode',
'collector_env_num': 8,
'evaluator_env_num': 5,
'n_evaluator_episode': 5,
'manager': {
'shared_memory': False
}
},
'policy': {
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'model': {
'observation_shape': [3, 3, 3],
'action_space_size': 9,
'image_channel': 3,
'num_res_blocks': 1,
'num_channels': 16,
'fc_reward_layers': [8],
'fc_value_layers': [8],
'fc_policy_layers': [8],
'support_scale': 10,
'reward_support_size': 21,
'value_support_size': 21
},
'use_rnd_model': False,
'sampled_algo': False,
'gumbel_algo': True,
'mcts_ctree': True,
'collector_env_num': 8,
'evaluator_env_num': 5,
'env_type': 'board_games',
'action_type': 'varied_action_space',
'battle_mode': 'play_with_bot_mode',
'monitor_extra_statistics': True,
'game_segment_length': 5,
'transform2string': False,
'gray_scale': False,
'use_augmentation': False,
'augmentation': ['shift', 'intensity'],
'ignore_done': False,
'update_per_collect': 50,
'model_update_ratio': 0.1,
'batch_size': 256,
'optim_type': 'Adam',
'learning_rate': 0.003,
'target_update_freq': 100,
'target_update_freq_for_intrinsic_reward': 1000,
'weight_decay': 0.0001,
'momentum': 0.9,
'grad_clip_value': 0.5,
'n_episode': 8,
'num_simulations': 30,
'discount_factor': 1,
'td_steps': 9,
'num_unroll_steps': 3,
'reward_loss_weight': 1,
'value_loss_weight': 0.25,
'policy_loss_weight': 1,
'policy_entropy_loss_weight': 0,
'ssl_loss_weight': 0,
'lr_piecewise_constant_decay': False,
'threshold_training_steps_for_final_lr': 50000,
'manual_temperature_decay': False,
'threshold_training_steps_for_final_temperature': 100000,
'fixed_temperature_value': 0.25,
'use_ture_chance_label_in_chance_encoder': False,
'use_priority': True,
'priority_prob_alpha': 0.6,
'priority_prob_beta': 0.4,
'root_dirichlet_alpha': 0.3,
'root_noise_weight': 0.25,
'random_collect_episode_num': 0,
'eps': {
'eps_greedy_exploration_in_collect': False,
'type': 'linear',
'start': 1.0,
'end': 0.05,
'decay': 100000
},
'cfg_type': 'GumbelMuZeroPolicyDict',
'max_num_considered_actions': 3,
'reanalyze_ratio': 0.0,
'eval_freq': 2000,
'replay_buffer_size': 10000
},
'wandb_logger': {
'gradient_logger': False,
'video_logger': False,
'plot_logger': False,
'action_logger': False,
'return_logger': False
}
},
'create_config': {
'env': {
'type': 'tictactoe',
'import_names': ['zoo.board_games.tictactoe.envs.tictactoe_env']
},
'env_manager': {
'type': 'subprocess'
},
'policy': {
'type': 'gumbel_muzero',
'import_names': ['lzero.policy.gumbel_muzero']
}
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](<TODO>)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/LightZero)
- **Doc**: [Algorithm link](<TODO>)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 91.5 KB
- **Last Update Date:** 2024-02-01
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Atari
- **Task:** TicTacToe-play-with-bot
- **Gym version:** 0.25.1
- **DI-engine version:** v0.5.0
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [Environments link](<TODO>)
|
mnavas/roberta-finetuned-WebClassification-v2-smalllinguaEN | mnavas | "2023-05-05T13:32:49Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-05T12:06:16Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-finetuned-WebClassification-v2-smalllinguaEN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-WebClassification-v2-smalllinguaEN
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5844
- Accuracy: 0.7143
- F1: 0.7143
- Precision: 0.7143
- Recall: 0.7143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 7 | 2.3084 | 0.0714 | 0.0714 | 0.0714 | 0.0714 |
| No log | 2.0 | 14 | 2.2951 | 0.2857 | 0.2857 | 0.2857 | 0.2857 |
| No log | 3.0 | 21 | 2.2725 | 0.2143 | 0.2143 | 0.2143 | 0.2143 |
| No log | 4.0 | 28 | 2.0608 | 0.2143 | 0.2143 | 0.2143 | 0.2143 |
| No log | 5.0 | 35 | 1.8552 | 0.3571 | 0.3571 | 0.3571 | 0.3571 |
| No log | 6.0 | 42 | 1.6846 | 0.5714 | 0.5714 | 0.5714 | 0.5714 |
| No log | 7.0 | 49 | 1.5844 | 0.7143 | 0.7143 | 0.7143 | 0.7143 |
| No log | 8.0 | 56 | 1.4531 | 0.7143 | 0.7143 | 0.7143 | 0.7143 |
| No log | 9.0 | 63 | 1.3746 | 0.7143 | 0.7143 | 0.7143 | 0.7143 |
| No log | 10.0 | 70 | 1.3663 | 0.7143 | 0.7143 | 0.7143 | 0.7143 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
|
trapoom555/Phi-2-Text-Embedding-cft | trapoom555 | "2024-08-05T16:43:32Z" | 0 | 3 | transformers | [
"transformers",
"safetensors",
"sentence-embedding",
"sentence-similarity",
"feature-extraction",
"en",
"arxiv:2408.00690",
"license:mit",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-05-07T14:49:52Z" | ---
license: mit
language:
- en
tags:
- sentence-embedding
- sentence-similarity
- transformers
- feature-extraction
pipeline_tag: sentence-similarity
---
# Phi-2-Text-Embedding-cft
## Description
This is a fine-tuned version of [Phi-2](https://huggingface.co/microsoft/phi-2) for text embedding tasks. The model is fine-tuned using contrastive fine-tuning with LoRA on NLI datasets. The paper can be found [here](https://arxiv.org/abs/2408.00690).
## Base Model
[Phi-2](https://huggingface.co/microsoft/phi-2)
## Usage
1. Clone Phi-2 repository
```bash
git clone https://huggingface.co/microsoft/phi-2
```
2. Change a tokenizer setting in `tokenizer_config.json`
```json
"add_eos_token": true
```
3. Use the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import numpy as np
class PhiSentenceEmbedding:
    def __init__(self, model_path='microsoft/phi-2', adapter_path=None):
        self.tokenizer = AutoTokenizer.from_pretrained(model_path)
        self.model = AutoModelForCausalLM.from_pretrained(model_path,
                                                          torch_dtype=torch.bfloat16,
                                                          device_map='cuda',
                                                          trust_remote_code=True)
        if adapter_path is not None:
            # Load fine-tuned LoRA
            self.model.load_adapter(adapter_path)

    def get_last_hidden_state(self, text):
        inputs = self.tokenizer(text, return_tensors="pt").to('cuda')
        with torch.no_grad():
            out = self.model(**inputs, output_hidden_states=True).hidden_states[-1][0, -1, :]
        return out.squeeze().float().cpu().numpy()

    def encode(self, sentences: list[str], **kwargs) -> list[np.ndarray]:
        """
        Returns a list of embeddings for the given sentences.

        Args:
            sentences: List of sentences to encode

        Returns:
            List of embeddings for the given sentences
        """
        out = []
        for s in sentences:
            out.append(self.get_last_hidden_state(s))
        return out
phi_sentence_embedding = PhiSentenceEmbedding('<your-cloned-base-model-path>', 'trapoom555/Phi-2-Text-Embedding-cft')
example_sentences = ["I don't like apples", "I like apples"]
encoded_sentences = phi_sentence_embedding.encode(example_sentences)
print(encoded_sentences)
```
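To compare the two example sentences above, cosine similarity can be computed over the returned vectors (a short illustrative follow-up to the snippet above, not part of the original usage code):

```python
import numpy as np

# Cosine similarity between the two embeddings from the example above.
a, b = encoded_sentences
cos_sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity: {cos_sim:.4f}")
```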
## Training Details
| **Training Details** | **Value** |
|-------------------------|-------------------|
| Loss | InfoNCE |
| Batch Size | 60 |
| InfoNCE Temperature | 0.05 |
| Learning Rate | 5e-05 |
| Warmup Steps | 100 |
| Learning Rate Scheduler | CosineAnnealingLR |
| LoRA Rank | 8 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.1 |
| Training Precision | bf16 |
| Max Epoch | 1 |
| GPU | RTX3090 |
| Num GPUs | 4 |
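For reference, the InfoNCE objective with in-batch negatives listed above can be sketched as follows (an illustrative implementation at the stated temperature, not the project's exact training code):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch InfoNCE: the positive for row i sits at column i; other rows act as negatives."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature                     # (B, B) scaled cosine similarities
    labels = torch.arange(a.size(0), device=a.device)  # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```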
## Training Scripts
The training script for this model is written in this [Github repository](https://github.com/trapoom555/Language-Model-STS-CFT/tree/main).
## Checkpoints
We provide checkpoints every 500 training steps, which can be found [here](https://huggingface.co/trapoom555/Phi-2-Text-Embedding-cft-checkpoints).
## Evaluation Results
| **Benchmarks** | **Before cft** | **After cft** |
|----------------|----------------|---------------|
| STS12 | 23.04 | 61.62 |
| STS13 | 20.79 | 71.87 |
| STS14 | 17.06 | 60.46 |
| STS15 | 24.56 | 71.18 |
| STS16 | 48.68 | 74.77 |
| STS17 | 41.43 | 80.20 |
| STSBenchmark | 37.87 | 79.46 |
| BIOSSES        | 28.04          | 64.06         |
| SICK-R | 48.40 | 74.37 |
| **Overall** | **32.21** | **70.89** |
## Contributors
Trapoom Ukarapol, Zhicheng Lee, Amy Xin
## Footnotes
This work is the final project of the Natural Language Processing Spring 2024 course at Tsinghua University 🟣. We would like to express our sincere gratitude to this course! |
snshrivas10/tiny-chatbot-dpo | snshrivas10 | "2024-05-19T06:33:48Z" | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | "2024-05-19T06:31:42Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tiny-chatbot-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-chatbot-dpo
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
prxy5604/f04f9214-3f11-48bc-9511-fa4ab80fd7ed | prxy5604 | "2025-01-18T08:08:18Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:tlphams/gollm-12.8b-instruct-v2.3",
"base_model:adapter:tlphams/gollm-12.8b-instruct-v2.3",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-01-18T06:20:35Z" | ---
library_name: peft
license: cc-by-nc-4.0
base_model: tlphams/gollm-12.8b-instruct-v2.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f04f9214-3f11-48bc-9511-fa4ab80fd7ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tlphams/gollm-12.8b-instruct-v2.3
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 500a091925e5b6f2_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/500a091925e5b6f2_train_data.json
  type:
    field_input: selected_word
    field_instruction: original
    field_output: perturbed
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/f04f9214-3f11-48bc-9511-fa4ab80fd7ed
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/500a091925e5b6f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 68317063-c692-4732-a3b0-9be4ed3ef2e3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 68317063-c692-4732-a3b0-9be4ed3ef2e3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f04f9214-3f11-48bc-9511-fa4ab80fd7ed
This model is a fine-tuned version of [tlphams/gollm-12.8b-instruct-v2.3](https://huggingface.co/tlphams/gollm-12.8b-instruct-v2.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8424 | 0.0003 | 1 | 0.4176 |
| 1.7364 | 0.0160 | 50 | 0.1728 |
| 1.6649 | 0.0320 | 100 | 0.1228 |
| 1.8686 | 0.0480 | 150 | 0.1104 |
| 1.6596 | 0.0640 | 200 | 0.1085 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
simonycl/best_model-sst-2-64-42 | simonycl | "2023-07-26T02:15:01Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-26T02:03:35Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: best_model-sst-2-64-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_model-sst-2-64-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4849
- Accuracy: 0.8281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 1.3914 | 0.8125 |
| No log | 2.0 | 8 | 1.3910 | 0.8203 |
| 0.3843 | 3.0 | 12 | 1.3922 | 0.8203 |
| 0.3843 | 4.0 | 16 | 1.3920 | 0.8203 |
| 0.5793 | 5.0 | 20 | 1.3923 | 0.8203 |
| 0.5793 | 6.0 | 24 | 1.3989 | 0.8203 |
| 0.5793 | 7.0 | 28 | 1.4029 | 0.8281 |
| 0.3663 | 8.0 | 32 | 1.4103 | 0.8281 |
| 0.3663 | 9.0 | 36 | 1.3999 | 0.8281 |
| 0.2779 | 10.0 | 40 | 1.4010 | 0.8281 |
| 0.2779 | 11.0 | 44 | 1.3978 | 0.8281 |
| 0.2779 | 12.0 | 48 | 1.3963 | 0.8203 |
| 0.3589 | 13.0 | 52 | 1.4087 | 0.8203 |
| 0.3589 | 14.0 | 56 | 1.4067 | 0.8281 |
| 0.3185 | 15.0 | 60 | 1.4148 | 0.8281 |
| 0.3185 | 16.0 | 64 | 1.4171 | 0.8359 |
| 0.3185 | 17.0 | 68 | 1.4140 | 0.8359 |
| 0.1743 | 18.0 | 72 | 1.3982 | 0.8359 |
| 0.1743 | 19.0 | 76 | 1.3650 | 0.8359 |
| 0.1416 | 20.0 | 80 | 1.3456 | 0.8359 |
| 0.1416 | 21.0 | 84 | 1.3210 | 0.8359 |
| 0.1416 | 22.0 | 88 | 1.3070 | 0.8359 |
| 0.0354 | 23.0 | 92 | 1.3015 | 0.8359 |
| 0.0354 | 24.0 | 96 | 1.3319 | 0.8438 |
| 0.0035 | 25.0 | 100 | 1.3656 | 0.8281 |
| 0.0035 | 26.0 | 104 | 1.3587 | 0.8281 |
| 0.0035 | 27.0 | 108 | 1.3243 | 0.8359 |
| 0.0006 | 28.0 | 112 | 1.2945 | 0.8438 |
| 0.0006 | 29.0 | 116 | 1.2898 | 0.8438 |
| 0.0028 | 30.0 | 120 | 1.3066 | 0.8438 |
| 0.0028 | 31.0 | 124 | 1.3055 | 0.8438 |
| 0.0028 | 32.0 | 128 | 1.3202 | 0.8438 |
| 0.0049 | 33.0 | 132 | 1.3351 | 0.8438 |
| 0.0049 | 34.0 | 136 | 1.3190 | 0.8438 |
| 0.0102 | 35.0 | 140 | 1.3141 | 0.8438 |
| 0.0102 | 36.0 | 144 | 1.3142 | 0.8438 |
| 0.0102 | 37.0 | 148 | 1.3647 | 0.8281 |
| 0.0034 | 38.0 | 152 | 1.4250 | 0.8203 |
| 0.0034 | 39.0 | 156 | 1.4708 | 0.8203 |
| 0.0001 | 40.0 | 160 | 1.4570 | 0.8203 |
| 0.0001 | 41.0 | 164 | 1.4446 | 0.8203 |
| 0.0001 | 42.0 | 168 | 1.4345 | 0.8281 |
| 0.0001 | 43.0 | 172 | 1.4272 | 0.8281 |
| 0.0001 | 44.0 | 176 | 1.4185 | 0.8281 |
| 0.0001 | 45.0 | 180 | 1.4048 | 0.8281 |
| 0.0001 | 46.0 | 184 | 1.3962 | 0.8281 |
| 0.0001 | 47.0 | 188 | 1.4924 | 0.8203 |
| 0.0002 | 48.0 | 192 | 1.5361 | 0.8125 |
| 0.0002 | 49.0 | 196 | 1.5831 | 0.8125 |
| 0.0292 | 50.0 | 200 | 1.4789 | 0.8281 |
| 0.0292 | 51.0 | 204 | 1.2642 | 0.8359 |
| 0.0292 | 52.0 | 208 | 1.2154 | 0.8516 |
| 0.0001 | 53.0 | 212 | 1.1895 | 0.8516 |
| 0.0001 | 54.0 | 216 | 1.1775 | 0.8438 |
| 0.0001 | 55.0 | 220 | 1.1730 | 0.8438 |
| 0.0001 | 56.0 | 224 | 1.1746 | 0.8438 |
| 0.0001 | 57.0 | 228 | 1.1782 | 0.8516 |
| 0.0001 | 58.0 | 232 | 1.1838 | 0.8516 |
| 0.0001 | 59.0 | 236 | 1.2456 | 0.8281 |
| 0.025 | 60.0 | 240 | 1.3887 | 0.8281 |
| 0.025 | 61.0 | 244 | 1.4950 | 0.8125 |
| 0.025 | 62.0 | 248 | 1.5753 | 0.8047 |
| 0.0001 | 63.0 | 252 | 1.6287 | 0.8047 |
| 0.0001 | 64.0 | 256 | 1.6608 | 0.8047 |
| 0.0001 | 65.0 | 260 | 1.6803 | 0.8047 |
| 0.0001 | 66.0 | 264 | 1.6919 | 0.7969 |
| 0.0001 | 67.0 | 268 | 1.5961 | 0.8047 |
| 0.0001 | 68.0 | 272 | 1.4858 | 0.8125 |
| 0.0001 | 69.0 | 276 | 1.4104 | 0.8281 |
| 0.0001 | 70.0 | 280 | 1.3623 | 0.8281 |
| 0.0001 | 71.0 | 284 | 1.3333 | 0.8359 |
| 0.0001 | 72.0 | 288 | 1.3172 | 0.8359 |
| 0.0 | 73.0 | 292 | 1.3107 | 0.8359 |
| 0.0 | 74.0 | 296 | 1.5801 | 0.8047 |
| 0.0014 | 75.0 | 300 | 1.7857 | 0.8047 |
| 0.0014 | 76.0 | 304 | 1.8724 | 0.7969 |
| 0.0014 | 77.0 | 308 | 1.9146 | 0.7969 |
| 0.0001 | 78.0 | 312 | 1.9250 | 0.7969 |
| 0.0001 | 79.0 | 316 | 1.9265 | 0.7969 |
| 0.0001 | 80.0 | 320 | 1.9268 | 0.7969 |
| 0.0001 | 81.0 | 324 | 1.9243 | 0.7969 |
| 0.0001 | 82.0 | 328 | 1.9215 | 0.7969 |
| 0.0 | 83.0 | 332 | 1.9188 | 0.7969 |
| 0.0 | 84.0 | 336 | 1.9159 | 0.7969 |
| 0.0 | 85.0 | 340 | 1.9137 | 0.7969 |
| 0.0 | 86.0 | 344 | 1.9119 | 0.7969 |
| 0.0 | 87.0 | 348 | 1.9103 | 0.7969 |
| 0.0009 | 88.0 | 352 | 1.6541 | 0.8047 |
| 0.0009 | 89.0 | 356 | 1.2749 | 0.8438 |
| 0.0 | 90.0 | 360 | 1.2046 | 0.8438 |
| 0.0 | 91.0 | 364 | 1.1909 | 0.8438 |
| 0.0 | 92.0 | 368 | 1.1860 | 0.8594 |
| 0.0 | 93.0 | 372 | 1.1901 | 0.8594 |
| 0.0 | 94.0 | 376 | 1.1966 | 0.8516 |
| 0.0001 | 95.0 | 380 | 1.2014 | 0.8516 |
| 0.0001 | 96.0 | 384 | 1.2061 | 0.8438 |
| 0.0001 | 97.0 | 388 | 1.2109 | 0.8438 |
| 0.0 | 98.0 | 392 | 1.2170 | 0.8516 |
| 0.0 | 99.0 | 396 | 1.2210 | 0.8516 |
| 0.0 | 100.0 | 400 | 1.2237 | 0.8516 |
| 0.0 | 101.0 | 404 | 1.2258 | 0.8516 |
| 0.0 | 102.0 | 408 | 1.2276 | 0.8438 |
| 0.0 | 103.0 | 412 | 1.2290 | 0.8438 |
| 0.0 | 104.0 | 416 | 1.2301 | 0.8438 |
| 0.0 | 105.0 | 420 | 1.2313 | 0.8438 |
| 0.0 | 106.0 | 424 | 1.2324 | 0.8438 |
| 0.0 | 107.0 | 428 | 1.2334 | 0.8438 |
| 0.0 | 108.0 | 432 | 1.2345 | 0.8438 |
| 0.0 | 109.0 | 436 | 1.2356 | 0.8438 |
| 0.0 | 110.0 | 440 | 1.2366 | 0.8438 |
| 0.0 | 111.0 | 444 | 1.2375 | 0.8516 |
| 0.0 | 112.0 | 448 | 1.2384 | 0.8516 |
| 0.0 | 113.0 | 452 | 1.2400 | 0.8516 |
| 0.0 | 114.0 | 456 | 1.2415 | 0.8516 |
| 0.0 | 115.0 | 460 | 1.2428 | 0.8516 |
| 0.0 | 116.0 | 464 | 1.2439 | 0.8516 |
| 0.0 | 117.0 | 468 | 1.2450 | 0.8516 |
| 0.0 | 118.0 | 472 | 1.2459 | 0.8516 |
| 0.0 | 119.0 | 476 | 1.2467 | 0.8516 |
| 0.0 | 120.0 | 480 | 1.2476 | 0.8516 |
| 0.0 | 121.0 | 484 | 1.2485 | 0.8516 |
| 0.0 | 122.0 | 488 | 1.2495 | 0.8516 |
| 0.0 | 123.0 | 492 | 1.2495 | 0.8516 |
| 0.0 | 124.0 | 496 | 1.2491 | 0.8516 |
| 0.0 | 125.0 | 500 | 1.2491 | 0.8516 |
| 0.0 | 126.0 | 504 | 1.2494 | 0.8516 |
| 0.0 | 127.0 | 508 | 1.2498 | 0.8516 |
| 0.0 | 128.0 | 512 | 1.2503 | 0.8516 |
| 0.0 | 129.0 | 516 | 1.2509 | 0.8516 |
| 0.0 | 130.0 | 520 | 1.2514 | 0.8516 |
| 0.0 | 131.0 | 524 | 1.2519 | 0.8516 |
| 0.0 | 132.0 | 528 | 1.2527 | 0.8516 |
| 0.0 | 133.0 | 532 | 1.2535 | 0.8516 |
| 0.0 | 134.0 | 536 | 1.2542 | 0.8516 |
| 0.0 | 135.0 | 540 | 1.2549 | 0.8516 |
| 0.0 | 136.0 | 544 | 1.2554 | 0.8516 |
| 0.0 | 137.0 | 548 | 1.3879 | 0.8359 |
| 0.0001 | 138.0 | 552 | 1.6893 | 0.7969 |
| 0.0001 | 139.0 | 556 | 1.8348 | 0.7969 |
| 0.0 | 140.0 | 560 | 1.8942 | 0.7969 |
| 0.0 | 141.0 | 564 | 1.8778 | 0.7969 |
| 0.0 | 142.0 | 568 | 1.7187 | 0.8047 |
| 0.0001 | 143.0 | 572 | 1.6119 | 0.8203 |
| 0.0001 | 144.0 | 576 | 1.5523 | 0.8281 |
| 0.0 | 145.0 | 580 | 1.5189 | 0.8281 |
| 0.0 | 146.0 | 584 | 1.5008 | 0.8281 |
| 0.0 | 147.0 | 588 | 1.4916 | 0.8281 |
| 0.0 | 148.0 | 592 | 1.4872 | 0.8281 |
| 0.0 | 149.0 | 596 | 1.4854 | 0.8281 |
| 0.0 | 150.0 | 600 | 1.4849 | 0.8281 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
ilanasto/a2c-PandaReachDense-v3 | ilanasto | "2024-05-04T07:35:22Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-04T07:31:08Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (assumption: the checkpoint filename follows the usual `<algo>-<env>.zip` convention used by `huggingface_sb3`):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and load it (filename is assumed).
checkpoint = load_from_hub("ilanasto/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
SG161222/Paragon_V1.0 | SG161222 | "2023-06-03T06:19:16Z" | 103 | 54 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-03T07:17:32Z" | ---
license: creativeml-openrail-m
---
<b>Please read this!</b><br>
This model is in the testing phase. The necessary VAE is already baked into the model.<br><hr>
<b>The recommended negative prompt:</b><br><br>
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation, <a href="https://civitai.com/models/7808/easynegative">easynegative</a>, <a href="https://huggingface.co/zwv9/idk-who-is-this-model-belong-to/blob/main/bad-hands-5.pt">bad-hands-5</a><br><br>
<b>Recommended parameters for generation:</b><br><br>
<b>Sampling method:</b> Euler A<br>
<b>CFG Scale:</b> 5-12<br>
<b>Clip Skip:</b> 2<br><br>
<b>Hires.Fix Parameters:</b><br><br>
<b>Upscaler:</b> Latent or other<br>
<b>Hires Steps:</b> 0 or other<br>
<b>Denoising Strength:</b> 0.35 - 0.7<br>
<b>Upscaled by:</b> 1.1 - 2.0<br><hr>
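For diffusers users, a minimal text-to-image sketch with the Euler A scheduler is below (assumptions: this repo loads with the standard <code>StableDiffusionPipeline</code>; Clip Skip and Hires.Fix are WebUI features and omitted here):

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("SG161222/Paragon_V1.0", torch_dtype=torch.float16)
# Euler A, matching the recommended sampling method above.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe(
    "portrait photo of a young woman, detailed face",
    negative_prompt="deformed, distorted, disfigured, bad anatomy, poorly drawn",
    guidance_scale=7.0,
).images[0]
image.save("paragon_example.png")
```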
<b>Examples:</b><br><br>
<a href='https://postimg.cc/3kxXkXSJ' target='_blank'><img src='https://i.postimg.cc/0ypcHZ7m/Pic1.png' border='0' alt='Pic1'/></a>
<a href='https://postimg.cc/2qmVqr8d' target='_blank'><img src='https://i.postimg.cc/q76n5vPF/Pic2.png' border='0' alt='Pic2'/></a>
<a href='https://postimg.cc/k6GM84rS' target='_blank'><img src='https://i.postimg.cc/sX9MkQwT/Pic3.png' border='0' alt='Pic3'/></a>
<a href='https://postimg.cc/gX7zKWdT' target='_blank'><img src='https://i.postimg.cc/j2xDtx1t/Pic4.png' border='0' alt='Pic4'/></a>
<a href='https://postimg.cc/Js81xKVM' target='_blank'><img src='https://i.postimg.cc/mgztb6mz/Pic5.png' border='0' alt='Pic5'/></a>
<a href='https://postimg.cc/Pp0HwQQG' target='_blank'><img src='https://i.postimg.cc/Zn55Xf9q/Pic6.png' border='0' alt='Pic6'/></a> |
broalantap/GPT2-large-16-48000steps | broalantap | "2024-11-02T14:25:46Z" | 145 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-02T14:24:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
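As a starting point, here is a minimal loading sketch based on this repo's metadata (assumption: the checkpoint follows the standard GPT-2 causal-LM interface; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "broalantap/GPT2-large-16-48000steps"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```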
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sriram-sanjeev9s/T5_model_1 | sriram-sanjeev9s | "2024-04-02T05:48:13Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-02T05:38:46Z" | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: T5_model_1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
config: fr-en
split: validation
args: fr-en
metrics:
- name: Bleu
type: bleu
value: 8.741
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_model_1
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4948
- Bleu: 8.741
- Gen Len: 17.974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 10 | 1.5554 | 8.7554 | 17.9983 |
| No log | 2.0 | 20 | 1.4948 | 8.741 | 17.974 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.12.1
- Datasets 2.18.0
- Tokenizers 0.13.2
|
PrunaAI/NeverSleep-Llama-3-Lumimaid-8B-v0.1-bnb-8bit-smashed | PrunaAI | "2024-07-21T07:20:20Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1",
"base_model:quantized:NeverSleep/Llama-3-Lumimaid-8B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-21T07:16:10Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NeverSleep/Llama-3-Lumimaid-8B-v0.1 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/NeverSleep-Llama-3-Lumimaid-8B-v0.1-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NeverSleep/Llama-3-Lumimaid-8B-v0.1")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NeverSleep/Llama-3-Lumimaid-8B-v0.1 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
atasoglu/mbert-base-cased-nli-stsb-tr | atasoglu | "2024-04-20T18:49:12Z" | 23 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"tr",
"dataset:nli_tr",
"dataset:emrecan/stsb-mt-turkish",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-04-20T18:44:50Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
datasets:
- nli_tr
- emrecan/stsb-mt-turkish
language:
- tr
---
# atasoglu/mbert-base-cased-nli-stsb-tr
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was adapted from [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) and fine-tuned on these datasets:
- [nli_tr](https://huggingface.co/datasets/nli_tr)
- [emrecan/stsb-mt-turkish](https://huggingface.co/datasets/emrecan/stsb-mt-turkish)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('atasoglu/mbert-base-cased-nli-stsb-tr')
embeddings = model.encode(sentences)
print(embeddings)
```
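For similarity scoring between two sentences, `sentence_transformers.util` can be used directly; a minimal sketch (the Turkish sentence pair is illustrative, not from the training data):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('atasoglu/mbert-base-cased-nli-stsb-tr')
# Encode a pair of sentences and score them with cosine similarity
embeddings = model.encode(["Bu bir örnek cümledir.", "Bu cümle bir örnektir."], convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.4f}")
```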
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('atasoglu/mbert-base-cased-nli-stsb-tr')
model = AutoModel.from_pretrained('atasoglu/mbert-base-cased-nli-stsb-tr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Achieved results on the [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) test split are given below:
```txt
Cosine-Similarity : Pearson: 0.8152 Spearman: 0.8130
Manhattan-Distance: Pearson: 0.8049 Spearman: 0.8128
Euclidean-Distance: Pearson: 0.8049 Spearman: 0.8126
Dot-Product-Similarity: Pearson: 0.7878 Spearman: 0.7822
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 18,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 108,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
LoneStriker/PiVoT-0.1-Starling-LM-RP-5.0bpw-h6-exl2 | LoneStriker | "2023-11-28T16:35:59Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-28T16:33:02Z" | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# PiVoT-0.1-Starling-LM-RP

# **Model Details**
### Description
PiVoT-0.1-Starling-LM-RP is an RP-finetuned model based on Starling-LM-alpha, trained on the Synatra-RP dataset.
<!-- prompt-template start -->
## Prompt template: OpenChat
```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:
```
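Since the model expects this exact turn format, here is a minimal sketch of building the prompt string in Python (the helper name is hypothetical):
```python
def format_openchat(user_message: str) -> str:
    # Mirrors the template above; <|end_of_turn|> closes the user turn
    return f"GPT4 Correct User: {user_message}<|end_of_turn|>GPT4 Correct Assistant:"

prompt = format_openchat("Hello")
```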
Follow me on twitter: https://twitter.com/stablefluffy
Consider supporting me in making these models: https://www.buymeacoffee.com/mwell or with a Runpod credit gift 💕
Contact me on Telegram: https://t.me/AlzarTakkarsen |
sd-concepts-library/3d-female-cyborgs | sd-concepts-library | "2022-09-17T20:15:59Z" | 0 | 39 | null | [
"license:mit",
"region:us"
] | null | "2022-09-17T20:15:45Z" | ---
license: mit
---
### 3d Female Cyborgs on Stable Diffusion
This is the `<A female cyborg>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
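Besides the notebooks above, recent versions of `diffusers` can load the embedding directly; a minimal sketch (the base checkpoint choice and the prompt are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
).to("cuda")
# Load the <A female cyborg> embedding from this repository
pipe.load_textual_inversion("sd-concepts-library/3d-female-cyborgs")
image = pipe("a portrait in the style of <A female cyborg>").images[0]
image.save("female_cyborg.png")
```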
Here is the new concept you will be able to use as a `style`:





|
disanda/first_try_4 | disanda | "2023-07-09T07:21:57Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-07-09T07:20:27Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: first_try_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_try_4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5505
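Although further details are pending, the checkpoint should work with the standard fill-mask pipeline; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="disanda/first_try_4")
# DistilBERT uses [MASK] as its mask token
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```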
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7226 | 1.0 | 157 | 2.5273 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.0+cu102
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DEVECOAI/Qwen2.5-Coder-32B-Instruct-bnb-4bit_lr2e-05_r16_rsTrue | DEVECOAI | "2025-04-04T10:40:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-04T10:40:06Z" | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DEVECOAI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CyberHarem/handa_roco_theidolmstermillionlive | CyberHarem | "2023-09-25T01:03:58Z" | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/handa_roco_theidolmstermillionlive",
"license:mit",
"region:us"
] | text-to-image | "2023-09-25T00:51:04Z" | ---
license: mit
datasets:
- CyberHarem/handa_roco_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of handa_roco_theidolmstermillionlive
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2380, you need to download `2380/handa_roco_theidolmstermillionlive.pt` as the embedding and `2380/handa_roco_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2380**, with a score of 0.759. The trigger words are:
1. `handa_roco_theidolmstermillionlive`
2. `long_hair, blush, bow, yellow_eyes, smile, hair_bow, bangs, twintails, green_eyes, grey_hair`
Use of this model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.664 | [Download](5100/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](5100/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.724 | [Download](4760/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](4760/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.639 | [Download](4420/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](4420/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.681 | [Download](4080/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](4080/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.629 | [Download](3740/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](3740/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.660 | [Download](3400/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](3400/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.660 | [Download](3060/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](3060/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.679 | [Download](2720/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](2720/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| **2380** | **0.759** | [**Download**](2380/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](2380/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.755 | [Download](2040/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](2040/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.737 | [Download](1700/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](1700/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.609 | [Download](1360/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](1360/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.574 | [Download](1020/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](1020/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.514 | [Download](680/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](680/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.331 | [Download](340/handa_roco_theidolmstermillionlive.zip) | [<NSFW, click to see>](340/previews/pattern_1.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
sn56/ef3fc409-5f78-48ab-a9e3-eb62f77e5e20 | sn56 | "2025-02-07T12:46:05Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"gptj",
"axolotl",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"region:us"
] | null | "2025-02-07T12:21:57Z" | ---
library_name: peft
base_model: furiosa-ai/mlperf-gpt-j-6b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ef3fc409-5f78-48ab-a9e3-eb62f77e5e20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: furiosa-ai/mlperf-gpt-j-6b
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- a6dae33cde59515e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a6dae33cde59515e_train_data.json
type:
field_input: selftext
field_instruction: title
field_output: answers.text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: sn56/ef3fc409-5f78-48ab-a9e3-eb62f77e5e20
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 9.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/a6dae33cde59515e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 205560767
sequence_len: 1024
shuffle: true
strict: false
tf32: true
tokenizer_type: AutoTokenizer
torch_compile: true
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: sn56-miner
wandb_mode: disabled
wandb_name: null
wandb_project: god
wandb_run: vxah
wandb_runid: null
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ef3fc409-5f78-48ab-a9e3-eb62f77e5e20
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 205560767
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.9727 | 0.0010 | 1 | 3.9163 |
| 11.3203 | 0.0513 | 50 | 2.8247 |
| 11.5312 | 0.1026 | 100 | 2.7625 |
| 11.2695 | 0.1540 | 150 | 2.7405 |
| 11.3359 | 0.2053 | 200 | 2.7362 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yamatazen/Ayla-Light-12B-Stock | yamatazen | "2025-02-17T08:42:47Z" | 1 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:yamatazen/Ayla-Light-12B",
"base_model:merge:yamatazen/Ayla-Light-12B",
"base_model:yamatazen/Ayla-Light-12B-v2",
"base_model:merge:yamatazen/Ayla-Light-12B-v2",
"base_model:yamatazen/Ayla-Light-12B-v3",
"base_model:merge:yamatazen/Ayla-Light-12B-v3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T04:11:51Z" | ---
base_model:
- yamatazen/Ayla-Light-12B-v2
- yamatazen/Ayla-Light-12B
- yamatazen/Ayla-Light-12B-v3
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [yamatazen/Ayla-Light-12B](https://huggingface.co/yamatazen/Ayla-Light-12B) as a base.
### Models Merged
The following models were included in the merge:
* [yamatazen/Ayla-Light-12B-v2](https://huggingface.co/yamatazen/Ayla-Light-12B-v2)
* [yamatazen/Ayla-Light-12B-v3](https://huggingface.co/yamatazen/Ayla-Light-12B-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: yamatazen/Ayla-Light-12B
models:
- model: yamatazen/Ayla-Light-12B-v2
- model: yamatazen/Ayla-Light-12B-v3
merge_method: model_stock
dtype: bfloat16
parameters:
normalize: true
```
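To reproduce this merge locally, the configuration above can be fed to the mergekit CLI; a minimal sketch (paths are placeholders, and flags may differ across mergekit versions):
```bash
pip install mergekit
# Save the YAML above as config.yaml, then run:
mergekit-yaml config.yaml ./Ayla-Light-12B-Stock --cuda
```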
|
damgomz/ft_1_12e6_x8 | damgomz | "2024-07-14T04:16:43Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-19T16:47:07Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 153199.52336907387 |
| Emissions (Co2eq in kg) | 0.092703332620456 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.8086008221386256 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.159581013076256 |
| Consumed energy (kWh) | 1.9681818352148808 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.2949090824854672 |
| Emissions (Co2eq in kg) | 0.06000314665288726 |
## Note
July 12, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/fp_bs32_lr1e4_x8 |
| model_name | ft_1_12e6_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.707225 | 0.336807 |
| 1 | 0.272179 | 0.248273 | 0.899627 |
| 2 | 0.180017 | 0.233954 | 0.896014 |
| 3 | 0.113639 | 0.252010 | 0.919735 |
| 4 | 0.059565 | 0.308335 | 0.918050 |
| 5 | 0.032741 | 0.333578 | 0.926764 |
| 6 | 0.020775 | 0.390008 | 0.922906 |
|
SeyedAli/Melanoma-Classification | SeyedAli | "2024-03-02T16:08:40Z" | 18 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-02T10:57:00Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Melanoma-Classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Melanoma-Classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [SeyedAli/Skin-Lesion-Dataset](https://huggingface.co/datasets/SeyedAli/Skin-Lesion-Dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5750
- Accuracy: 0.8167
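The fine-tuned ViT can be queried with the standard image-classification pipeline; a minimal sketch (the image path is a placeholder, and this is not a medical diagnostic tool):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="SeyedAli/Melanoma-Classification")
predictions = classifier("skin_lesion.jpg")  # placeholder path to a lesion photo
for p in predictions:
    print(p["label"], round(p["score"], 4))
```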
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9779 | 0.08 | 100 | 1.1158 | 0.6041 |
| 0.9934 | 0.16 | 200 | 1.0227 | 0.6501 |
| 0.9562 | 0.24 | 300 | 0.9276 | 0.6748 |
| 1.0995 | 0.32 | 400 | 0.9088 | 0.6836 |
| 0.8198 | 0.39 | 500 | 0.8581 | 0.6949 |
| 0.8034 | 0.47 | 600 | 0.8444 | 0.6967 |
| 0.8319 | 0.55 | 700 | 0.8196 | 0.7148 |
| 0.787 | 0.63 | 800 | 0.8360 | 0.6975 |
| 0.8642 | 0.71 | 900 | 0.8250 | 0.7008 |
| 0.8329 | 0.79 | 1000 | 0.7939 | 0.7172 |
| 0.9678 | 0.87 | 1100 | 0.7661 | 0.7332 |
| 0.8226 | 0.95 | 1200 | 0.7284 | 0.7373 |
| 0.7967 | 1.03 | 1300 | 0.7355 | 0.7411 |
| 0.6531 | 1.1 | 1400 | 0.7561 | 0.7247 |
| 0.5719 | 1.18 | 1500 | 0.6839 | 0.7638 |
| 0.6123 | 1.26 | 1600 | 0.6857 | 0.7584 |
| 0.6504 | 1.34 | 1700 | 0.6970 | 0.7531 |
| 0.6214 | 1.42 | 1800 | 0.6841 | 0.7576 |
| 0.4925 | 1.5 | 1900 | 0.6624 | 0.7642 |
| 0.5797 | 1.58 | 2000 | 0.6287 | 0.7709 |
| 0.6018 | 1.66 | 2100 | 0.6537 | 0.7622 |
| 0.6334 | 1.74 | 2200 | 0.6413 | 0.7713 |
| 0.4111 | 1.82 | 2300 | 0.6242 | 0.7786 |
| 0.4779 | 1.89 | 2400 | 0.6260 | 0.7790 |
| 0.5488 | 1.97 | 2500 | 0.6146 | 0.7807 |
| 0.3212 | 2.05 | 2600 | 0.6975 | 0.7707 |
| 0.4282 | 2.13 | 2700 | 0.6344 | 0.7790 |
| 0.2822 | 2.21 | 2800 | 0.6985 | 0.7845 |
| 0.3003 | 2.29 | 2900 | 0.5954 | 0.7993 |
| 0.2982 | 2.37 | 3000 | 0.6156 | 0.7940 |
| 0.2628 | 2.45 | 3100 | 0.6318 | 0.7963 |
| 0.2987 | 2.53 | 3200 | 0.6495 | 0.8030 |
| 0.2714 | 2.6 | 3300 | 0.6018 | 0.8052 |
| 0.3059 | 2.68 | 3400 | 0.5944 | 0.8078 |
| 0.2762 | 2.76 | 3500 | 0.6296 | 0.7936 |
| 0.3685 | 2.84 | 3600 | 0.6277 | 0.8017 |
| 0.2299 | 2.92 | 3700 | 0.5834 | 0.8125 |
| 0.3414 | 3.0 | 3800 | 0.5750 | 0.8167 |
| 0.1082 | 3.08 | 3900 | 0.6201 | 0.8196 |
| 0.049 | 3.16 | 4000 | 0.6475 | 0.8161 |
| 0.102 | 3.24 | 4100 | 0.6791 | 0.8097 |
| 0.0483 | 3.31 | 4200 | 0.6582 | 0.8216 |
| 0.1204 | 3.39 | 4300 | 0.6603 | 0.8222 |
| 0.0611 | 3.47 | 4400 | 0.7174 | 0.8190 |
| 0.0555 | 3.55 | 4500 | 0.6841 | 0.8236 |
| 0.0188 | 3.63 | 4600 | 0.7009 | 0.8240 |
| 0.1292 | 3.71 | 4700 | 0.7040 | 0.8204 |
| 0.0661 | 3.79 | 4800 | 0.7074 | 0.8238 |
| 0.1061 | 3.87 | 4900 | 0.6984 | 0.8210 |
| 0.0861 | 3.95 | 5000 | 0.6913 | 0.8230 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
suku9/gpt2-moses-dpo | suku9 | "2025-04-10T02:40:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-10T02:40:19Z" | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF | mradermacher | "2024-12-08T18:33:10Z" | 484 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"tango",
"en",
"es",
"dataset:spanish-ir/messirve",
"dataset:tatakof/messi_mod-v0.0.2",
"base_model:sandbox-ai/Llama-3.1-Tango-8b-f16",
"base_model:quantized:sandbox-ai/Llama-3.1-Tango-8b-f16",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-12-08T14:08:32Z" | ---
base_model: sandbox-ai/Llama-3.1-Tango-8b-f16
datasets:
- spanish-ir/messirve
- tatakof/messi_mod-v0.0.2
language:
- en
- es
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- tango
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/sandbox-ai/Llama-3.1-Tango-8b-f16
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
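As a concrete starting point, a downloaded quant can be run with llama.cpp's CLI; a minimal sketch (the file name comes from the table below, and flags may differ between llama.cpp versions):
```bash
./llama-cli -m Llama-3.1-Tango-8b-f16.i1-Q4_K_M.gguf \
  -p "¿Cuál es la capital de Argentina?" -n 128
```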
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Tango-8b-f16-i1-GGUF/resolve/main/Llama-3.1-Tango-8b-f16.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KBLab/megatron.bert-large.unigram-32k-pretok.25k-steps | KBLab | "2024-04-23T08:44:03Z" | 93 | 0 | transformers | [
"transformers",
"safetensors",
"megatron-bert",
"feature-extraction",
"sv",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-04-23T08:29:42Z" | ---
language:
- sv
---
# megatron.bert-large.unigram-32k-pretok.25k-steps
This BERT model was trained using the NeMo library.
The model is a regular bert-large in size.
It was trained on more than 245 GB of data, consisting mostly of web data and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 25k training steps using a batch size of 8k.
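The checkpoint loads with the standard `transformers` auto classes; a minimal sketch for extracting a sentence embedding (the CLS-pooling choice and the Swedish example sentence are assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "KBLab/megatron.bert-large.unigram-32k-pretok.25k-steps"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Det här är en svensk mening.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Use the [CLS] token state as a simple sentence representation
sentence_embedding = outputs.last_hidden_state[:, 0]
print(sentence_embedding.shape)  # torch.Size([1, 1024]) for bert-large
```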
The model has multiple sibling models trained on the same dataset using different tokenizers or more/less parameters:
- [megatron.bert-base.bpe-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.bpe-32k-no_pretok.25k-steps)
- [megatron.bert-base.bpe-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.bpe-64k-no_pretok.25k-steps)
- [megatron.bert-base.spe-bpe-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-32k-no_pretok.25k-steps)
- [megatron.bert-base.spe-bpe-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-32k-pretok.25k-steps)
- [megatron.bert-base.spe-bpe-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-64k-no_pretok.25k-steps)
- [megatron.bert-base.spe-bpe-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.spe-bpe-64k-pretok.25k-steps)
- [megatron.bert-base.unigram-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-32k-no_pretok.25k-steps)
- [megatron.bert-base.unigram-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-32k-pretok.25k-steps)
- [megatron.bert-base.unigram-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-64k-no_pretok.25k-steps)
- [megatron.bert-base.unigram-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.unigram-64k-pretok.25k-steps)
- [megatron.bert-base.wordpiece-32k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-32k-no_pretok.25k-steps)
- [megatron.bert-base.wordpiece-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-32k-pretok.25k-steps)
- [megatron.bert-base.wordpiece-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-64k-no_pretok.25k-steps)
- [megatron.bert-base.wordpiece-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-base.wordpiece-64k-pretok.25k-steps)
- [megatron.bert-large.bpe-64k-no_pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.bpe-64k-no_pretok.25k-steps)
- [megatron.bert-large.spe-bpe-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.spe-bpe-32k-pretok.25k-steps)
- [megatron.bert-large.unigram-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.unigram-32k-pretok.25k-steps)
- [megatron.bert-large.unigram-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.unigram-64k-pretok.25k-steps)
- [megatron.bert-large.wordpiece-32k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.wordpiece-32k-pretok.25k-steps)
- [megatron.bert-large.wordpiece-64k-pretok.25k-steps](https://huggingface.co/KBLab/megatron.bert-large.wordpiece-64k-pretok.25k-steps)
## Acknowledgements
The training was performed on the Luxembourg national supercomputer MeluXina.
The authors gratefully acknowledge the LuxProvide teams for their expert support.
|
texanrangee/98daa160-8406-4bb3-98a2-7a8e6abe22d9 | texanrangee | "2025-03-22T04:37:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-22T03:21:46Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
localmodels/WizardLM-7B-Uncensored-4bit | localmodels | "2023-05-29T03:54:19Z" | 0 | 4 | null | [
"region:us"
] | null | "2023-05-28T14:23:42Z" | ## WizardLM 7B Uncensored 4-bit
From ehartford: https://huggingface.co/ehartford/WizardLM-7B-Uncensored
### Folders
**ggml:** q4_0 and q4_1
**gptq:** works with Triton and CUDA branches |
DayStay/dqn-SpaceInvadersNoFrameskip-v4 | DayStay | "2023-11-17T10:03:09Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-17T10:02:33Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 653.00 +/- 139.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DayStay -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DayStay -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DayStay
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
cs552-mlp/phi3-lora-arc3-gptq-3bits | cs552-mlp | "2024-06-12T12:21:25Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | "2024-06-12T12:20:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheBloke/Swallow-13B-AWQ | TheBloke | "2023-12-19T21:58:07Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ja",
"base_model:tokyotech-llm/Swallow-13b-hf",
"base_model:quantized:tokyotech-llm/Swallow-13b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-12-19T21:32:03Z" | ---
base_model: tokyotech-llm/Swallow-13b-hf
inference: false
language:
- en
- ja
library_name: transformers
license: llama2
model_creator: tokyotech-llm
model_name: Swallow 13B
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Swallow 13B - AWQ
- Model creator: [tokyotech-llm](https://huggingface.co/tokyotech-llm)
- Original model: [Swallow 13B](https://huggingface.co/tokyotech-llm/Swallow-13b-hf)
<!-- description start -->
## Description
This repo contains AWQ model files for [tokyotech-llm's Swallow 13B](https://huggingface.co/tokyotech-llm/Swallow-13b-hf).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Swallow-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Swallow-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Swallow-13B-GGUF)
* [tokyotech-llm's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/tokyotech-llm/Swallow-13b-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Swallow-13B-AWQ/tree/main) | 4 | 128 | [Alpaca Japanese](https://huggingface.co/datasets/fujiki/japanese_alpaca_data/viewer/) | 4096 | 7.48 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Swallow-13B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Swallow-13B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**, and the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Swallow-13B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Note: a plain (non-f) string, so that .format() below can fill in {prompt}
prompt_template='''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Swallow-13B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Swallow-13B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Swallow-13B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: tokyotech-llm's Swallow 13B
# Swallow
Our Swallow model has undergone continuous pre-training from the Llama 2 family, primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT).
Links to other models can be found in the index.
## Swallow Model Index
|Model|Swallow-hf|Swallow-instruct-hf|
|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)|
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)|

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our paper (preprint coming soon) for more details!
## Model Details
* **Model type**: Please refer to LLaMA-2 technical report for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2)
* **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process; see the sketch after this list for a quick comparison.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
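As a quick illustration of the tokenizer point above — a minimal sketch, assuming access to both the Swallow repo and the (gated) Meta Llama 2 repo; exact token counts will vary:

```python
from transformers import AutoTokenizer

text = "東京工業大学の主なキャンパスは、"

# Llama 2's original tokenizer tends to split Japanese into many byte-level pieces.
base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated repo
swallow_tok = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-13b-hf")

print("Llama 2 tokens:", len(base_tok.encode(text)))
print("Swallow tokens:", len(swallow_tok.encode(text)))  # typically fewer, hence faster inference
```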
## Base Model Performance
### Japanese version
|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
|Llama 2|7B|0.3852|0.4240|0.3410|0.7917|0.1905|0.0760|0.1783|0.1738|
|Swallow|7B|0.4808|0.5078|0.5968|0.8573|0.1830|0.1240|0.2510|0.1511|
|Llama 2|13B|0.6997|0.4415|0.4170|0.8533|0.2139|0.1320|0.2146|0.1982|
|Swallow|13B|0.7837|0.5063|0.6398|0.9005|0.2168|0.2040|0.2720|0.1771|
|Llama 2|70B|0.8686|0.4656|0.5256|0.9080|**0.2361**|0.3560|0.2643|**0.2398**|
|Swallow|70B|**0.9348**|**0.6290**|**0.6960**|**0.9176**|0.2266|**0.4840**|**0.3043**|0.2298|
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
### Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto")
PROMPT_DICT = {
"prompt_input": (
"以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
),
"prompt_no_input": (
"以下に、あるタスクを説明する指示があります。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 応答:"
),
}
def create_prompt(instruction, input=None):
"""
Generates a prompt based on the given instruction and an optional input.
If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
If no input is provided, it uses the 'prompt_no_input' template.
Args:
instruction (str): The instruction describing the task.
input (str, optional): Additional input providing context for the task. Default is None.
Returns:
str: The generated prompt.
"""
if input:
# Use the 'prompt_input' template when additional input is provided
return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
else:
# Use the 'prompt_no_input' template when no additional input is provided
return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
input_example = "東京工業大学の主なキャンパスについて教えてください"
prompt = create_prompt(instruction_example, input_example)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
### Use the base model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
prompt = "東京工業大学の主なキャンパスは、"
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- Swallow Corpus
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
### Instruction Tuning
The following datasets were used for the instruction tuning.
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Authors
Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Hiroki Iida](https://meshidenn.github.io/)
- [Mengsay Loem](https://loem-ms.github.io/)
- [Shota Hirai](https://huggingface.co/Kotemo428)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
|
sadhaklal/a-and-not-b | sadhaklal | "2024-02-21T07:45:12Z" | 0 | 0 | pytorch | [
"pytorch",
"license:apache-2.0",
"region:us"
] | null | "2024-02-21T04:31:18Z" | ---
license: apache-2.0
library_name: pytorch
---
# a-and-not-b
A neuron that performs the A AND (NOT B) logical computation. It generates the following truth table:
| A | B | C |
| - | - | - |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
It is inspired by McCulloch & Pitts' 1943 paper 'A Logical Calculus of the Ideas Immanent in Nervous Activity'.
It doesn't contain any parameters.
It takes as input two column vectors of zeros and ones. It outputs a single column vector of zeros and ones.
Its mechanism is outlined in Figure 10-3 of Aurelien Geron's book 'Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow'.

Like all the other neurons in Figure 10-3, it is activated when at least two of its input connections are active.
Code: https://github.com/sambitmukherjee/handson-ml3-pytorch/blob/main/chapter10/logical_computations_with_neurons.ipynb
## Usage
```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin
# Let's create two column vectors containing `0`s and `1`s.
batch = {'a': torch.tensor([[0], [0], [1], [1]]), 'b': torch.tensor([[0], [1], [0], [1]])}
class A_AND_NOT_B(nn.Module, PyTorchModelHubMixin):
def __init__(self):
super().__init__()
self.operation = "C = A AND (NOT B)"
def forward(self, x):
a = x['a']
b = x['b']
b = -1 * b
inputs = torch.cat([a, a, b], axis=1)
column_sum = torch.sum(inputs, dim=1, keepdim=True)
output = (column_sum >= 2).long()
return output
# Instantiate:
a_and_not_b = A_AND_NOT_B.from_pretrained("sadhaklal/a-and-not-b")
# Forward pass:
output = a_and_not_b(batch)
print(output)
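# Expected output, matching the C column of the truth table above:
# tensor([[0],
#         [0],
#         [1],
#         [0]])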
``` |
huggingtweets/th3nfthunt3r | huggingtweets | "2022-10-16T18:36:40Z" | 141 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-10-16T18:35:50Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/th3nfthunt3r/1665945395711/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1563458962158022656/CWXK4AUr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Th3 NFT Hunt3r</div>
<div style="text-align: center; font-size: 14px;">@th3nfthunt3r</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Th3 NFT Hunt3r.
| Data | Th3 NFT Hunt3r |
| --- | --- |
| Tweets downloaded | 364 |
| Retweets | 50 |
| Short tweets | 113 |
| Tweets kept | 201 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13l2dy5v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @th3nfthunt3r's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1xgt6nuf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1xgt6nuf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/th3nfthunt3r')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mradermacher/SmolLM2-MagpieUltra-8k-GGUF | mradermacher | "2025-01-29T15:46:07Z" | 221 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:HuggingFaceTB/SmolLM2-MagpieUltra-8k",
"base_model:quantized:HuggingFaceTB/SmolLM2-MagpieUltra-8k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-29T15:19:41Z" | ---
base_model: HuggingFaceTB/SmolLM2-MagpieUltra-8k
language:
- en
library_name: transformers
model_name: SmolLM2-MagpieUltra-8k
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/HuggingFaceTB/SmolLM2-MagpieUltra-8k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
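As a minimal sketch (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the filename comes from the table below), downloading and running one of these quants from Python could look like:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/SmolLM2-MagpieUltra-8k-GGUF",
    filename="SmolLM2-MagpieUltra-8k.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("Write a haiku about quantization.", max_tokens=64)["choices"][0]["text"])
```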
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-MagpieUltra-8k-GGUF/resolve/main/SmolLM2-MagpieUltra-8k.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-IT-baseline-4bits | RichardErkhov | "2025-03-04T21:51:25Z" | 0 | 0 | null | [
"safetensors",
"bloom",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-04T21:50:29Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bloom-1b7-ropes-IT-baseline - bnb 4bits
- Model creator: https://huggingface.co/alonzogarbanzo/
- Original model: https://huggingface.co/alonzogarbanzo/Bloom-1b7-ropes-IT-baseline/
Original model description:
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-1b7
tags:
- generated_from_trainer
model-index:
- name: Bloom-1b7-ropes-IT-baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bloom-1b7-ropes-IT-baseline
This model is a fine-tuned version of [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Instruction Tuned on the ropes task here: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/ropes
## Training procedure
Given a set of prompts:
``` python
prompts = [
"Given the following background and situation, answer the question: ",
"Based on the background information and the current situation, what is the answer to the question? ",
"Considering the background and the described situation, provide an answer to this question: ",
]
```
Each example is concatenated with the prompt, background, situation, question and answer:
``` python
input_text = f"{prompt}Background: {background} Situation: {situation} Question: {question} Answer: {answer_text}."
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
Final results: {'loss': 0.024, 'grad_norm': 1.3331243991851807, 'learning_rate': 8.000000000000001e-07, 'epoch': 10.0}
Average results: {'train_runtime': 862.219, 'train_samples_per_second': 2.32, 'train_steps_per_second': 0.58, 'train_loss': 0.4160269268453121, 'epoch': 10.0}
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
RogerB/afro-xlmr-small-finetuned-kintweetsD | RogerB | "2023-07-09T16:16:36Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-07-09T15:59:59Z" | ---
license: afl-3.0
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-small-finetuned-kintweetsD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-small-finetuned-kintweetsD
This model is a fine-tuned version of [Davlan/afro-xlmr-small](https://huggingface.co/Davlan/afro-xlmr-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
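As a minimal sketch only — the original training script is not published — these values map onto `transformers.TrainingArguments` roughly as follows (the output directory name is assumed):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="afro-xlmr-small-finetuned-kintweetsD",  # assumed name
    learning_rate=2e-05,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setup.
)
```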
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8179 | 1.0 | 900 | 1.6363 |
| 1.7094 | 2.0 | 1800 | 1.5927 |
| 1.6816 | 3.0 | 2700 | 1.6023 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tinycompany/BiBo-R1-v0.2 | tinycompany | "2025-03-15T21:23:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-15T21:22:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mci29/sn29_w1m1_h9i7 | mci29 | "2025-02-07T20:12:38Z" | 320 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-07T20:09:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rhplus0831/maid-yuzu-v2-mid | rhplus0831 | "2024-02-03T04:17:12Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:merge:smelborp/MixtralOrochi8x7B",
"base_model:ycros/BagelMIsteryTour-v2-8x7B",
"base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-03T03:43:41Z" | ---
base_model:
- smelborp/MixtralOrochi8x7B
- ycros/BagelMIsteryTour-v2-8x7B
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v2-mid
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
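For intuition, SLERP interpolates along the great-circle arc between two weight vectors rather than linearly. A minimal sketch (not mergekit's actual implementation) of the operation applied per tensor:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between flattened weight tensors a and b."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    # Angle between the two weight directions.
    omega = torch.arccos(torch.clamp((a_n * b_n).sum(), -1 + eps, 1 - eps))
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# t = 0.375, matching the configuration below.
merged = slerp(0.375, torch.randn(4096), torch.randn(4096))
```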
### Models Merged
The following models were included in the merge:
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: smelborp/MixtralOrochi8x7B
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.375
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: smelborp/MixtralOrochi8x7B
- layer_range: [0, 32]
model:
model:
path: ycros/BagelMIsteryTour-v2-8x7B
```
|
TheBloke/llama2_7b_merge_orcafamily-GPTQ | TheBloke | "2023-11-20T18:19:45Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:beaugogh/openorca-multiplechoice-10k",
"base_model:yeen214/llama2_7b_merge_orcafamily",
"base_model:quantized:yeen214/llama2_7b_merge_orcafamily",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-11-20T17:55:43Z" | ---
base_model: yeen214/llama2_7b_merge_orcafamily
datasets:
- Open-Orca/SlimOrca
- beaugogh/openorca-multiplechoice-10k
inference: false
language:
- en
license: mit
metrics:
- accuracy
model_creator: yeen heui yeen
model_name: Llama2 7B Merge Orcafamily
model_type: llama
prompt_template: 'Info on prompt template will be added shortly.
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 7B Merge Orcafamily - GPTQ
- Model creator: [yeen heui yeen](https://huggingface.co/yeen214)
- Original model: [Llama2 7B Merge Orcafamily](https://huggingface.co/yeen214/llama2_7b_merge_orcafamily)
<!-- description start -->
## Description
This repo contains GPTQ model files for [yeen heui yeen's Llama2 7B Merge Orcafamily](https://huggingface.co/yeen214/llama2_7b_merge_orcafamily).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GGUF)
* [yeen heui yeen's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/yeen214/llama2_7b_merge_orcafamily)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: TBC
```
Info on prompt template will be added shortly.
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [yeen heui yeen's Llama2 7B Merge Orcafamily](https://huggingface.co/yeen214/llama2_7b_merge_orcafamily).
<!-- licensing end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
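These parameters are also recorded in each branch's `quantize_config.json`, which is what loaders read at load time. If you want to inspect them programmatically, here is a minimal sketch using `huggingface_hub` (the branch name is just one example from the table above):

```python
import json

from huggingface_hub import hf_hub_download

# Download just the quantisation config from a specific branch (revision)
config_path = hf_hub_download(
    repo_id="TheBloke/llama2_7b_merge_orcafamily-GPTQ",
    filename="quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",
)
with open(config_path) as f:
    quantize_config = json.load(f)

# Typical keys include bits, group_size, desc_act and damp_percent
print(quantize_config)
```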
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/llama2_7b_merge_orcafamily-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, e.g. `TheBloke/llama2_7b_merge_orcafamily-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `llama2_7b_merge_orcafamily-GPTQ`:
```shell
mkdir llama2_7b_merge_orcafamily-GPTQ
huggingface-cli download TheBloke/llama2_7b_merge_orcafamily-GPTQ --local-dir llama2_7b_merge_orcafamily-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir llama2_7b_merge_orcafamily-GPTQ
huggingface-cli download TheBloke/llama2_7b_merge_orcafamily-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir llama2_7b_merge_orcafamily-GPTQ --local-dir-use-symlinks False
```
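The same downloads can also be scripted from Python. A minimal sketch using `snapshot_download` (same repo and branch names as in the CLI examples above):

```python
from huggingface_hub import snapshot_download

# Download the main branch into a local folder, without cache symlinks
snapshot_download(
    repo_id="TheBloke/llama2_7b_merge_orcafamily-GPTQ",
    local_dir="llama2_7b_merge_orcafamily-GPTQ",
    local_dir_use_symlinks=False,
)

# For a different branch, pass the branch name as revision
snapshot_download(
    repo_id="TheBloke/llama2_7b_merge_orcafamily-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="llama2_7b_merge_orcafamily-GPTQ",
    local_dir_use_symlinks=False,
)
```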
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir llama2_7b_merge_orcafamily-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama2_7b_merge_orcafamily-GPTQ --local-dir llama2_7b_merge_orcafamily-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/llama2_7b_merge_orcafamily-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, as it has to store the model files twice (every byte is stored both in the intended target folder, and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/llama2_7b_merge_orcafamily-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/llama2_7b_merge_orcafamily-GPTQ:gptq-4bit-32g-actorder_True`
- See Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `llama2_7b_merge_orcafamily-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/llama2_7b_merge_orcafamily-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Info on prompt template will be added shortly.
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)
print(f"Model output: {response}")
```
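`InferenceClient.text_generation` also supports token streaming, if you would rather print output as it is generated. A minimal self-contained sketch (the endpoint URL is a placeholder):

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"
client = InferenceClient(endpoint_url)

# stream=True yields the generated text piece by piece instead of one final string
for token in client.text_generation(
    "Tell me about AI",
    max_new_tokens=128,
    temperature=0.7,
    stream=True,
):
    print(token, end="", flush=True)
print()
```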
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/llama2_7b_merge_orcafamily-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",
    trust_remote_code=False,
    revision="main",
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Info on prompt template will be added shortly.
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: yeen heui yeen's Llama2 7B Merge Orcafamily
This model uses Llama 2 7B as a backbone; datasets from various Orca projects were used for fine-tuning, and the resulting models were merged.
The three models were combined, and the model with the best ARC and MMLU performance was given the highest weight.
First: llama2 7b fine-tuned on beaugogh/openorca-multiplechoice-10k, using the NEFTune method.
Second: llama2 7b fine-tuned with the SlimOrca dataset.
Third: llama2 7b fine-tuned on beaugogh/openorca-multiplechoice-10k.
We'll add the benchmark results once we have the official numbers.
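For reference, NEFTune simply adds scaled uniform noise to the token embeddings during training. A minimal sketch of the idea, illustrative only and not the exact training code used for this merge:

```python
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    # NEFTune: perturb (batch, seq_len, dim) token embeddings with uniform noise,
    # applied during training only.
    seq_len, dim = embeddings.shape[1], embeddings.shape[2]
    mag = alpha / (seq_len * dim) ** 0.5  # noise scale alpha / sqrt(L * d)
    return embeddings + torch.empty_like(embeddings).uniform_(-mag, mag)
```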
|
ClarenceDan/b1539395-e53a-46d2-bdd9-e7009913d702 | ClarenceDan | "2025-01-18T11:18:23Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-18T11:00:21Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b1539395-e53a-46d2-bdd9-e7009913d702
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4ee00b5ffc79c413_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ee00b5ffc79c413_train_data.json
type:
field_instruction: seq
field_output: labels_str
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/b1539395-e53a-46d2-bdd9-e7009913d702
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ee00b5ffc79c413_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7ffb5512-24ac-400f-b2e1-903a11f0a7da
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7ffb5512-24ac-400f-b2e1-903a11f0a7da
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b1539395-e53a-46d2-bdd9-e7009913d702
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3681 | 0.0000 | 1 | 2.1789 |
| 1.9003 | 0.0001 | 3 | 2.1549 |
| 1.6763 | 0.0002 | 6 | 1.7226 |
| 1.1284 | 0.0003 | 9 | 1.0767 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/274b9903-7e15-48a6-8eb8-6843490553e3 | shibajustfor | "2025-02-24T00:50:05Z" | 0 | 0 | peft | [
"peft",
"qwen2",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-0.5B-Instruct",
"region:us"
] | null | "2025-02-24T00:50:01Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2-0.5B-Instruct
model-index:
- name: shibajustfor/274b9903-7e15-48a6-8eb8-6843490553e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/274b9903-7e15-48a6-8eb8-6843490553e3
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Harshini2004/mbart_en_te_model | Harshini2004 | "2025-02-19T17:32:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:flores",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-19T17:26:44Z" | ---
library_name: transformers
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
datasets:
- flores
model-index:
- name: mbart_en_te_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_en_te_model
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the flores dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 100 | 0.9786 |
| No log | 2.0 | 200 | 0.8959 |
| No log | 3.0 | 300 | 0.8832 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.1
- Tokenizers 0.21.0
|
harikc456/vizdoom_health_gathering_supreme | harikc456 | "2023-03-25T03:39:54Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-25T03:39:41Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.07 +/- 5.48
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r harikc456/vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it concluded at.
|
mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF | mradermacher | "2025-01-02T23:31:44Z" | 13 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28",
"base_model:quantized:MaziyarPanahi/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-02T23:07:26Z" | ---
base_model: MaziyarPanahi/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: Experiment26Neuralsirkrishna_Strangemerges_30Experiment28
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28-GGUF/resolve/main/Experiment26Neuralsirkrishna_Strangemerges_30Experiment28.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-8_0bpw_exl2 | Zoyd | "2024-06-04T15:55:14Z" | 76 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-04T15:48:07Z" | ---
base_model:
- Nitral-AI/Poppy-1.35-Phase1
- Nitral-AI/Pp-72xra1
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_5bpw_exl2)**</center> | <center>3478 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_0bpw_exl2)**</center> | <center>3894 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_75bpw_exl2)**</center> | <center>4518 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_0bpw_exl2)**</center> | <center>6489 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_5bpw_exl2)**</center> | <center>6909 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-8_0bpw_exl2)**</center> | <center>8123 MB</center> | <center>8</center> |

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
# Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.
# [Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).
# If you want to use vision functionality: You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp). And need to load the specified **mmproj** file: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).
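The configuration below combines the two source models with mergekit's `slerp` merge method. For intuition, here is a minimal sketch of spherical linear interpolation between two weight tensors (illustrative only; mergekit's implementation additionally handles per-layer filters like the `self_attn`/`mlp` schedules shown in the config):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors of the same shape
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        out = (1 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```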
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Nitral-AI/Pp-72xra1
layer_range: [0, 32]
- model: Nitral-AI/Poppy-1.35-Phase1
layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
omersubasi/xlm-roberta-base-finetuned-panx-it | omersubasi | "2023-12-08T05:57:10Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-08T05:52:04Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8218390804597702
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2503
- F1: 0.8218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8253 | 1.0 | 70 | 0.3503 | 0.7160 |
| 0.2781 | 2.0 | 140 | 0.2643 | 0.8148 |
| 0.1871 | 3.0 | 210 | 0.2503 | 0.8218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
|
swanggl/movie-genre-classifier | swanggl | "2025-03-28T07:14:34Z" | 49 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2025-03-27T21:00:24Z" | |
fnlp/SmolLM-135M-GQA-d_kv_128 | fnlp | "2025-03-13T07:23:55Z" | 27 | 0 | null | [
"safetensors",
"llama",
"text-generation",
"dataset:HuggingFaceTB/smollm-corpus",
"arxiv:2502.14837",
"base_model:HuggingFaceTB/SmolLM-135M",
"base_model:finetune:HuggingFaceTB/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-03-04T11:30:23Z" | ---
license: apache-2.0
datasets:
- HuggingFaceTB/smollm-corpus
base_model:
- HuggingFaceTB/SmolLM-135M
pipeline_tag: text-generation
---
**Research Paper** ["Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs"](https://arxiv.org/abs/2502.14837)
## Inference
- Step 1: Download the [**monkey patch file**](https://github.com/JT-Ushio/MHA2MLA/blob/main/src/mha2mla/monkey_patch.py).
```shell
wget https://raw.githubusercontent.com/JT-Ushio/MHA2MLA/refs/heads/main/src/mha2mla/monkey_patch.py
```
- Step 2 (optional): For MHA2MLA models that use the Partial-RoPE 2-norm method, download the [**qk_2-norm file**](https://github.com/JT-Ushio/MHA2MLA/tree/main/utils).
Take `qk_tensor_135M.pth` as an example:
```shell
wget https://github.com/JT-Ushio/MHA2MLA/raw/refs/heads/main/utils/qk_tensor_135M.pth
```
- Step 3: Download the [MHA2MLA models](https://huggingface.co/fnlp/SmolLM-135M-GQA-d_kv_128) and run inference.
Take `fnlp/SmolLM-135M-GQA-d_kv_128` as an example:
```python
import torch
from transformers import AutoConfig, AutoTokenizer, LlamaForCausalLM
from monkey_patch import infer_monkey_patch
model_name = "fnlp/SmolLM-135M-GQA-d_kv_128"
# Monkey Patch: MHA -> MLA
config = AutoConfig.from_pretrained(model_name)
if "RoPE" in config:
config.RoPE["qk_tensor_path"] = "qk_tensor_135M.pth" # Configuration for Specific Models
infer_monkey_patch(config.RoPE)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.bfloat16).cuda()
# Generate
text = "Which American-born Sinclair won the Nobel Prize for Literature in 1930?"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
generation_kwargs = {"do_sample": False, "use_cache": True, "max_new_tokens": 128}
output = model.generate(**inputs, **generation_kwargs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# - Sinclair Lewis
```
## Citation
```
@misc{ji2025economicalinferenceenablingdeepseeks,
title={Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs},
author={Tao Ji and Bin Guo and Yuanbin Wu and Qipeng Guo and Lixing Shen and Zhan Chen and Xipeng Qiu and Qi Zhang and Tao Gui},
year={2025},
eprint={2502.14837},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.14837},
}
```
|
Remex23/llama-2-finetune-elsa | Remex23 | "2024-06-04T13:26:36Z" | 0 | 0 | null | [
"llama-2",
"fine-tuning",
"causal-lm",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-06-04T13:08:10Z" |
---
language: en
tags:
- llama-2
- fine-tuning
- causal-lm
license: apache-2.0
---
# Llama-2-finetune-Elsa
This is a fine-tuned version of the Llama-2-7b-chat model using the `Remex23/counselchat-llama2-full` dataset. |
qtdy/roberta-base-klue-ynat-classification | qtdy | "2025-02-13T02:07:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-13T02:07:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
minji222/mistral_lora_clm_with_added_tokens | minji222 | "2024-04-09T07:38:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-09T05:26:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aardvark-labs/stp-classifier-4-2 | aardvark-labs | "2025-03-13T12:17:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-13T10:03:48Z" | ---
library_name: transformers
--- |
RE-N-Y/cc3m-transformer_blocks.18-1 | RE-N-Y | "2024-11-18T04:19:08Z" | 5 | 0 | finebooru | [
"finebooru",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2024-11-18T04:11:47Z" | ---
library_name: finebooru
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/RE-N-Y/finebooru
- Docs: [More Information Needed] |
Ahmed007/hossam-t5 | Ahmed007 | "2024-02-18T09:09:48Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:UBC-NLP/octopus",
"base_model:finetune:UBC-NLP/octopus",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-18T01:30:28Z" | ---
base_model: UBC-NLP/octopus
tags:
- generated_from_trainer
model-index:
- name: hossam-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hossam-t5
This model is a fine-tuned version of [UBC-NLP/octopus](https://huggingface.co/UBC-NLP/octopus) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
mav23/Qwen2.5-14B-UpToDate-GGUF | mav23 | "2024-11-19T03:05:39Z" | 14 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:CultriX/Qwen2.5-14B-MegaMerge-pt2",
"base_model:quantized:CultriX/Qwen2.5-14B-MegaMerge-pt2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-19T01:06:53Z" | ---
base_model: CultriX/Qwen2.5-14B-MegaMerge-pt2
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CultriX
- **License:** apache-2.0
- **Finetuned from model :** CultriX/Qwen2.5-14B-MegaMerge-pt2
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/hentai-cinematic-pony-v2-sdxl | John6666 | "2024-07-11T22:53:44Z" | 60 | 2 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cinematic",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-11T22:49:18Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cinematic
- pony
---
Original model is [here](https://civitai.com/models/492705/hentaicinematicpony?modelVersionId=637173).
|
mradermacher/Llama-3-22B-Instruct-v0.1-GGUF | mradermacher | "2024-05-31T09:01:44Z" | 9 | 2 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-30T21:36:35Z" | ---
base_model: DataGuard/Llama-3-22B-Instruct-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DataGuard/Llama-3-22B-Instruct-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 10.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 11.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 13.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 13.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 18.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.Q8_0.gguf) | Q8_0 | 24.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sail-rvc/Principal_of_the_Thing__RVC_V2_-_500_Epochs_ | sail-rvc | "2023-07-14T07:30:06Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:29:51Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Principal_of_the_Thing__RVC_V2_-_500_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:30:06
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
jwhj/Qwen2.5-Math-1.5B-SFT | jwhj | "2024-12-10T06:28:58Z" | 158 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-10T05:37:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
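The card itself leaves this section blank; as a minimal sketch (assuming, from the tags above, that the checkpoint loads as a standard Qwen2 causal LM under this repo id):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Repo id and model type are taken from this card's metadata/tags
tok = AutoTokenizer.from_pretrained("jwhj/Qwen2.5-Math-1.5B-SFT")
model = AutoModelForCausalLM.from_pretrained("jwhj/Qwen2.5-Math-1.5B-SFT")
inputs = tok("What is 12 * 7?", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```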
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
P3ps/distilbert-amazon-shoe-reviews | P3ps | "2023-04-20T11:34:31Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-20T11:07:39Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-amazon-shoe-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-amazon-shoe-reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9519
- Accuracy: 0.5757
- F1: [0.63178677 0.45622938 0.50453543 0.55380711 0.73119358]
- Precision: [0.62256809 0.46798542 0.48583569 0.58248799 0.71751969]
- Recall: [0.64128257 0.4450495 0.52473228 0.52781809 0.74539877]
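For inference, a minimal sketch assuming the repo id above and the standard `text-classification` pipeline (the five per-class scores above suggest five rating classes, but the label mapping is not documented, so check `model.config.id2label`):
```python
from transformers import pipeline
clf = pipeline("text-classification", model="P3ps/distilbert-amazon-shoe-reviews")
print(clf("These shoes fell apart after a week."))
```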
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------------------------------------:|:--------------------------------------------------------:|
| 0.9652 | 1.0 | 2813 | 0.9519 | 0.5757 | [0.63178677 0.45622938 0.50453543 0.55380711 0.73119358] | [0.62256809 0.46798542 0.48583569 0.58248799 0.71751969] | [0.64128257 0.4450495 0.52473228 0.52781809 0.74539877] |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
sehilnlf/model_v3_v2 | sehilnlf | "2024-05-26T18:34:55Z" | 28 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-26T06:54:37Z" | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- text2text-generation
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: model_v3_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_v3_v2
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5669
- Sacrebleu: 66.8302
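A minimal usage sketch, assuming the repo id above and the `text2text-generation` task from this card's tags:
```python
from transformers import pipeline
gen = pipeline("text2text-generation", model="sehilnlf/model_v3_v2")
print(gen("Replace me by any input text you'd like.")[0]["generated_text"])
```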
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log | 0.99 | 54 | 0.6545 | 66.3234 |
| No log | 1.99 | 109 | 0.5940 | 66.8342 |
| No log | 2.96 | 162 | 0.5669 | 66.8302 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
0xagentai/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_poisonous_bear | 0xagentai | "2025-04-20T05:13:03Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am jumping poisonous bear",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-15T08:22:48Z" |
|
q631599119/model2 | q631599119 | "2025-03-21T00:50:12Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-21T00:50:11Z" | ---
license: apache-2.0
---
|
zyh571p/whisper-small-finetuned | zyh571p | "2024-05-23T18:13:32Z" | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-23T14:33:50Z" | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-finetuned
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Wer: 0.0337
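A minimal transcription sketch, assuming the repo id above and the standard speech-recognition pipeline (`sample.wav` is a placeholder for your own audio file):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="zyh571p/whisper-small-finetuned")
print(asr("sample.wav")["text"])
```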
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0 | 4.5662 | 1000 | 0.0000 | 0.1349 |
| 0.0 | 9.1324 | 2000 | 0.0000 | 0.0337 |
| 0.0 | 13.6986 | 3000 | 0.0000 | 0.0337 |
| 0.0 | 18.2648 | 4000 | 0.0000 | 0.0337 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ReLURavioli/rlur0120 | ReLURavioli | "2025-03-04T11:06:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T11:06:06Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ReLURavioli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ethicalabs/Kurtis-Qwen2.5-0.5B-Instruct-PEFT | ethicalabs | "2025-03-02T17:29:36Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"text-generation-inference",
"text-generation",
"en",
"dataset:mrs83/kurtis_mental_health_final",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:mit",
"region:us"
] | text-generation | "2025-03-02T16:56:58Z" | ---
library_name: peft
license: mit
datasets:
- mrs83/kurtis_mental_health_final
language:
- en
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# Model Card for Kurtis
Kurtis is a mental-health AI assistant designed with empathy at its core.
Unlike other AI models that aim for peak efficiency, Kurtis prioritizes understanding, emotional nuance, and meaningful conversations.
It won’t solve complex math problems or write code, nor will it generate images or videos.
Instead, Kurtis focuses on being a thoughtful companion, offering support, perspective, and human-like dialogue.
It doesn’t strive to break records or chase artificial intelligence supremacy—its goal is to create a space for genuine interaction.
Whether you need someone to talk to, reflect on ideas with, or engage in insightful discussion, Kurtis is there to listen and respond in an understanding way. |
maxfrax/distilbert-base-uncased-finetuned-emotion | maxfrax | "2024-02-16T17:32:59Z" | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-16T17:20:28Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258243133918047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2134
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3212 | 0.906 | 0.9047 |
| No log | 2.0 | 500 | 0.2134 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
genow2/123 | genow2 | "2025-04-18T21:46:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-18T21:46:05Z" |
|
tomrb/bettercallbloom-560m | tomrb | "2022-10-17T12:39:37Z" | 133 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-10-16T13:09:33Z" | ---
language: en
widget:
- text: "my example goes here in the requested language"
license: mit
---
# WORK IN PROGRESS
# bettercallbloom-560m
Fine-tuned bloom-560m model on the r/legal_advice subset of the Pile of Law.
## Model description
## Intended uses & limitations
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
# Load the tokenizer and model shipped with this repo
tokenizer = AutoTokenizer.from_pretrained('tomrb/bettercallbloom-560m')
model = AutoModel.from_pretrained('tomrb/bettercallbloom-560m')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and here is how to generate text with the `pipeline` API:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='tomrb/bettercallbloom-560m')
print(generator("Replace me by any legal question you'd like.", max_length=100)[0]['generated_text'])
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
|
oljik/tut_unsloth_3b_lora_model | oljik | "2025-02-18T23:32:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-18T23:32:44Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oljik
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Benjaminpwh/xlsr-toratan-60-copt | Benjaminpwh | "2025-03-26T17:51:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"pretraining",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-26T17:14:39Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: xlsr-toratan-60-copt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-toratan-60-copt
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
ClarenceDan/a5ce742d-11ad-4f04-9cb9-5b3888224913 | ClarenceDan | "2025-03-04T17:05:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"region:us"
] | null | "2025-03-04T16:28:54Z" | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a5ce742d-11ad-4f04-9cb9-5b3888224913
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e5bda4a33546a6d1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e5bda4a33546a6d1_train_data.json
type:
field_input: lang
field_instruction: sentence1
field_output: sentence2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/a5ce742d-11ad-4f04-9cb9-5b3888224913
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e5bda4a33546a6d1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 791b8a0f-4f7d-4883-814e-024386776e92
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 791b8a0f-4f7d-4883-814e-024386776e92
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a5ce742d-11ad-4f04-9cb9-5b3888224913
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7645 | 0.0001 | 1 | 4.7227 |
| 5.0736 | 0.0003 | 3 | 4.6863 |
| 4.0429 | 0.0006 | 6 | 3.8167 |
| 2.2522 | 0.0009 | 9 | 1.8158 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Prisma-Multimodal/imagenet-sweep-vanilla-x64-Spatial_max_0-hook_resid_post-989.203430175781-99 | Prisma-Multimodal | "2025-01-30T08:36:29Z" | 10 | 0 | null | [
"region:us"
] | null | "2025-01-30T08:36:17Z" | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 0
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: False
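To make the shapes concrete, here is a minimal sketch of the generic ReLU sparse autoencoder these hyperparameters imply (an illustration of the standard SAE recipe, not necessarily the exact training code behind this checkpoint):
```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in=768, expansion=64):  # 768 * 64 = 49152, the dictionary size above
        super().__init__()
        self.enc = nn.Linear(d_in, d_in * expansion)
        self.dec = nn.Linear(d_in * expansion, d_in)

    def forward(self, x):
        f = torch.relu(self.enc(x))  # sparse feature activations; L0 counts the nonzeros
        return self.dec(f), f

sae = SparseAutoencoder()
x = torch.randn(8, 768)              # a few residual-stream activation vectors
x_hat, f = sae(x)
l1_coeff = 0.0                       # this run used an L1 coefficient of 0.0000
loss = ((x_hat - x) ** 2).mean() + l1_coeff * f.abs().sum(-1).mean()
```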
### Training
- **Training Images**: 1299936
- **Learning Rate**: 0.0006
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 49
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 989.2034
- **Dead Features**: 0
- **Mean Passes Since Fired**: 8.6203
### Reconstruction
- **Explained Variance**: 0.9999
- **Explained Variance Std**: 0.0004
- **MSE Loss**: 0.0000
- **L1 Loss**: 2368.7610
- **Overall Loss**: 0.0000
## Training Details
- **Training Duration**: 4395 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/52afe93d-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1300020.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-Spatial_only-012-sweep/runs/ny207vem
- **Random Seed**: 42
|
dmartincc/vedt-lg | dmartincc | "2025-03-12T17:22:02Z" | 32 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-27T15:29:14Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
- accuracy
model-index:
- name: vedt-lg
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.93
- name: Accuracy
type: accuracy
value: 0.92
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vedt-lg
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1817
- F1: 0.93
- Roc Auc: 0.95
- Accuracy: 0.92
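A minimal inference sketch, assuming the repo id above and the standard image-classification pipeline (`example.jpg` is a placeholder for your own image):
```python
from transformers import pipeline
clf = pipeline("image-classification", model="dmartincc/vedt-lg")
print(clf("example.jpg"))
```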
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|:--------:|
| 0.5369 | 1.0 | 122 | 0.5339 | 0.53 | 0.67 | 0.41 |
| 0.3995 | 2.0 | 245 | 0.3591 | 0.8 | 0.84 | 0.73 |
| 0.2357 | 3.0 | 367 | 0.2492 | 0.89 | 0.92 | 0.88 |
| 0.1409 | 4.0 | 490 | 0.2015 | 0.91 | 0.93 | 0.9 |
| 0.1137 | 4.98 | 610 | 0.1817 | 0.93 | 0.95 | 0.92 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF | mradermacher | "2025-01-28T10:44:50Z" | 550 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:pankajmathur/orca_mini_v1_dataset",
"dataset:pankajmathur/orca_mini_v8_sharegpt_format",
"base_model:pankajmathur/orca_mini_v9_5_1B-Instruct_preview",
"base_model:quantized:pankajmathur/orca_mini_v9_5_1B-Instruct_preview",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-28T10:05:29Z" | ---
base_model: pankajmathur/orca_mini_v9_5_1B-Instruct_preview
datasets:
- pankajmathur/orca_mini_v1_dataset
- pankajmathur/orca_mini_v8_sharegpt_format
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pankajmathur/orca_mini_v9_5_1B-Instruct_preview
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
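As a sketch of the multi-part case: split quants can be joined byte-wise before loading (the part names below are hypothetical; substitute the actual files listed in this repo):
```python
import shutil
# Hypothetical part names; use the actual files from the repo listing
parts = ["model.i1-Q6_K.gguf.part1of2", "model.i1-Q6_K.gguf.part2of2"]
with open("model.i1-Q6_K.gguf", "wb") as merged:
    for p in parts:
        with open(p, "rb") as f:
            shutil.copyfileobj(f, merged)
```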
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ2_M.gguf) | i1-IQ2_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ3_S.gguf) | i1-IQ3_S | 0.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q4_0.gguf) | i1-Q4_0 | 0.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v9_5_1B-Instruct_preview-i1-GGUF/resolve/main/orca_mini_v9_5_1B-Instruct_preview.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Fragko/qwen2-VL-7b-instruct-leaves-from-field-diagnosis | Fragko | "2025-03-12T14:19:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-12T00:56:36Z" | ---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: qwen2-VL-7b-instruct-leaves-from-field-diagnosis
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-VL-7b-instruct-leaves-from-field-diagnosis
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Fragko/qwen2-VL-7b-instruct-leaves-from-field-diagnosis", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gfragko-technical-university-of-crete/qwen2-VL-7b-instruct-leaves-from-field-diagnosis/runs/anfp2n14)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.50.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e35_non_member_shadow11 | FounderOfHuggingface | "2023-12-07T12:01:09Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-07T12:01:07Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
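The card leaves this blank; a minimal sketch, assuming the adapter id of this repo and the `gpt2` base model named in the YAML header above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e35_non_member_shadow11"
)
tok = AutoTokenizer.from_pretrained("gpt2")
inputs = tok("Replace me by any text you'd like.", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```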
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run2_AugV5_k7_task1_organization | MayBashendy | "2025-01-14T19:11:01Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-14T18:57:36Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run2_AugV5_k7_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run2_AugV5_k7_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0657
- Qwk: 0.5355
- Mse: 1.0657
- Rmse: 1.0323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0571 | 2 | 5.3430 | -0.0131 | 5.3430 | 2.3115 |
| No log | 0.1143 | 4 | 3.1272 | 0.0649 | 3.1272 | 1.7684 |
| No log | 0.1714 | 6 | 2.6556 | -0.1003 | 2.6556 | 1.6296 |
| No log | 0.2286 | 8 | 2.4195 | -0.1156 | 2.4195 | 1.5555 |
| No log | 0.2857 | 10 | 1.7140 | 0.0267 | 1.7140 | 1.3092 |
| No log | 0.3429 | 12 | 1.2703 | 0.1423 | 1.2703 | 1.1271 |
| No log | 0.4 | 14 | 1.2489 | 0.2203 | 1.2489 | 1.1175 |
| No log | 0.4571 | 16 | 1.1863 | 0.2751 | 1.1863 | 1.0892 |
| No log | 0.5143 | 18 | 1.2975 | 0.1746 | 1.2975 | 1.1391 |
| No log | 0.5714 | 20 | 1.3932 | 0.1370 | 1.3932 | 1.1803 |
| No log | 0.6286 | 22 | 1.3680 | 0.1204 | 1.3680 | 1.1696 |
| No log | 0.6857 | 24 | 1.2413 | 0.3295 | 1.2413 | 1.1141 |
| No log | 0.7429 | 26 | 1.1591 | 0.3779 | 1.1591 | 1.0766 |
| No log | 0.8 | 28 | 1.2529 | 0.2372 | 1.2529 | 1.1193 |
| No log | 0.8571 | 30 | 1.1474 | 0.3339 | 1.1474 | 1.0712 |
| No log | 0.9143 | 32 | 1.1505 | 0.3121 | 1.1505 | 1.0726 |
| No log | 0.9714 | 34 | 1.1191 | 0.3563 | 1.1191 | 1.0579 |
| No log | 1.0286 | 36 | 1.1045 | 0.3838 | 1.1045 | 1.0510 |
| No log | 1.0857 | 38 | 1.0767 | 0.3972 | 1.0767 | 1.0376 |
| No log | 1.1429 | 40 | 1.0334 | 0.3862 | 1.0334 | 1.0166 |
| No log | 1.2 | 42 | 0.8723 | 0.4907 | 0.8723 | 0.9340 |
| No log | 1.2571 | 44 | 0.8736 | 0.5391 | 0.8736 | 0.9347 |
| No log | 1.3143 | 46 | 0.8995 | 0.5487 | 0.8995 | 0.9484 |
| No log | 1.3714 | 48 | 0.9249 | 0.5679 | 0.9249 | 0.9617 |
| No log | 1.4286 | 50 | 0.9743 | 0.5092 | 0.9743 | 0.9871 |
| No log | 1.4857 | 52 | 0.9736 | 0.4893 | 0.9736 | 0.9867 |
| No log | 1.5429 | 54 | 0.9967 | 0.4736 | 0.9967 | 0.9984 |
| No log | 1.6 | 56 | 0.9438 | 0.5147 | 0.9438 | 0.9715 |
| No log | 1.6571 | 58 | 1.0154 | 0.5401 | 1.0154 | 1.0077 |
| No log | 1.7143 | 60 | 0.9127 | 0.5038 | 0.9127 | 0.9554 |
| No log | 1.7714 | 62 | 1.0786 | 0.5033 | 1.0786 | 1.0385 |
| No log | 1.8286 | 64 | 1.0134 | 0.5134 | 1.0134 | 1.0067 |
| No log | 1.8857 | 66 | 0.8451 | 0.5707 | 0.8451 | 0.9193 |
| No log | 1.9429 | 68 | 0.9047 | 0.5867 | 0.9047 | 0.9512 |
| No log | 2.0 | 70 | 0.8410 | 0.6429 | 0.8410 | 0.9171 |
| No log | 2.0571 | 72 | 0.8599 | 0.5858 | 0.8599 | 0.9273 |
| No log | 2.1143 | 74 | 1.0333 | 0.5201 | 1.0333 | 1.0165 |
| No log | 2.1714 | 76 | 0.9665 | 0.5218 | 0.9665 | 0.9831 |
| No log | 2.2286 | 78 | 0.8244 | 0.5851 | 0.8244 | 0.9080 |
| No log | 2.2857 | 80 | 0.8443 | 0.5752 | 0.8443 | 0.9189 |
| No log | 2.3429 | 82 | 0.8400 | 0.5657 | 0.8400 | 0.9165 |
| No log | 2.4 | 84 | 0.9719 | 0.5372 | 0.9719 | 0.9858 |
| No log | 2.4571 | 86 | 0.9467 | 0.5528 | 0.9467 | 0.9730 |
| No log | 2.5143 | 88 | 0.7803 | 0.6361 | 0.7803 | 0.8834 |
| No log | 2.5714 | 90 | 0.7549 | 0.6035 | 0.7549 | 0.8688 |
| No log | 2.6286 | 92 | 0.8418 | 0.5549 | 0.8418 | 0.9175 |
| No log | 2.6857 | 94 | 0.7118 | 0.6245 | 0.7118 | 0.8437 |
| No log | 2.7429 | 96 | 0.9979 | 0.5331 | 0.9979 | 0.9989 |
| No log | 2.8 | 98 | 1.3117 | 0.4883 | 1.3117 | 1.1453 |
| No log | 2.8571 | 100 | 1.0906 | 0.5105 | 1.0906 | 1.0443 |
| No log | 2.9143 | 102 | 0.6861 | 0.6388 | 0.6861 | 0.8283 |
| No log | 2.9714 | 104 | 0.7704 | 0.5976 | 0.7704 | 0.8777 |
| No log | 3.0286 | 106 | 0.7716 | 0.6420 | 0.7716 | 0.8784 |
| No log | 3.0857 | 108 | 0.6576 | 0.6805 | 0.6576 | 0.8109 |
| No log | 3.1429 | 110 | 0.8588 | 0.5971 | 0.8588 | 0.9267 |
| No log | 3.2 | 112 | 1.0898 | 0.5213 | 1.0898 | 1.0439 |
| No log | 3.2571 | 114 | 1.1132 | 0.5205 | 1.1132 | 1.0551 |
| No log | 3.3143 | 116 | 1.0380 | 0.5333 | 1.0380 | 1.0188 |
| No log | 3.3714 | 118 | 1.0357 | 0.5401 | 1.0357 | 1.0177 |
| No log | 3.4286 | 120 | 0.9567 | 0.6132 | 0.9567 | 0.9781 |
| No log | 3.4857 | 122 | 1.0190 | 0.5522 | 1.0190 | 1.0095 |
| No log | 3.5429 | 124 | 1.2592 | 0.4356 | 1.2592 | 1.1222 |
| No log | 3.6 | 126 | 1.1839 | 0.4543 | 1.1839 | 1.0881 |
| No log | 3.6571 | 128 | 1.0883 | 0.4830 | 1.0883 | 1.0432 |
| No log | 3.7143 | 130 | 1.0148 | 0.5338 | 1.0148 | 1.0074 |
| No log | 3.7714 | 132 | 0.7750 | 0.6389 | 0.7750 | 0.8803 |
| No log | 3.8286 | 134 | 0.7688 | 0.6161 | 0.7688 | 0.8768 |
| No log | 3.8857 | 136 | 0.7777 | 0.6601 | 0.7777 | 0.8819 |
| No log | 3.9429 | 138 | 0.9604 | 0.6227 | 0.9604 | 0.9800 |
| No log | 4.0 | 140 | 1.2622 | 0.5311 | 1.2622 | 1.1235 |
| No log | 4.0571 | 142 | 1.2558 | 0.5144 | 1.2558 | 1.1206 |
| No log | 4.1143 | 144 | 1.0652 | 0.5312 | 1.0652 | 1.0321 |
| No log | 4.1714 | 146 | 0.8674 | 0.6355 | 0.8674 | 0.9313 |
| No log | 4.2286 | 148 | 0.7753 | 0.6502 | 0.7753 | 0.8805 |
| No log | 4.2857 | 150 | 0.8393 | 0.6358 | 0.8393 | 0.9161 |
| No log | 4.3429 | 152 | 0.8959 | 0.6019 | 0.8959 | 0.9465 |
| No log | 4.4 | 154 | 0.8972 | 0.5988 | 0.8972 | 0.9472 |
| No log | 4.4571 | 156 | 0.7496 | 0.6873 | 0.7496 | 0.8658 |
| No log | 4.5143 | 158 | 0.7267 | 0.6988 | 0.7267 | 0.8524 |
| No log | 4.5714 | 160 | 0.7081 | 0.6854 | 0.7081 | 0.8415 |
| No log | 4.6286 | 162 | 0.7194 | 0.6824 | 0.7194 | 0.8481 |
| No log | 4.6857 | 164 | 0.8391 | 0.6738 | 0.8391 | 0.9160 |
| No log | 4.7429 | 166 | 1.1286 | 0.5640 | 1.1286 | 1.0623 |
| No log | 4.8 | 168 | 1.0770 | 0.5831 | 1.0770 | 1.0378 |
| No log | 4.8571 | 170 | 0.8046 | 0.6916 | 0.8046 | 0.8970 |
| No log | 4.9143 | 172 | 0.7048 | 0.6692 | 0.7048 | 0.8395 |
| No log | 4.9714 | 174 | 0.7162 | 0.6755 | 0.7162 | 0.8463 |
| No log | 5.0286 | 176 | 0.7766 | 0.6634 | 0.7766 | 0.8812 |
| No log | 5.0857 | 178 | 0.9326 | 0.6175 | 0.9326 | 0.9657 |
| No log | 5.1429 | 180 | 1.0926 | 0.5442 | 1.0926 | 1.0453 |
| No log | 5.2 | 182 | 0.9450 | 0.6135 | 0.9450 | 0.9721 |
| No log | 5.2571 | 184 | 0.7525 | 0.6745 | 0.7525 | 0.8674 |
| No log | 5.3143 | 186 | 0.7507 | 0.6302 | 0.7507 | 0.8664 |
| No log | 5.3714 | 188 | 0.7302 | 0.6167 | 0.7302 | 0.8545 |
| No log | 5.4286 | 190 | 0.7342 | 0.6394 | 0.7342 | 0.8568 |
| No log | 5.4857 | 192 | 0.8565 | 0.6562 | 0.8565 | 0.9255 |
| No log | 5.5429 | 194 | 0.9119 | 0.5911 | 0.9119 | 0.9550 |
| No log | 5.6 | 196 | 0.9576 | 0.5911 | 0.9576 | 0.9786 |
| No log | 5.6571 | 198 | 0.8575 | 0.6347 | 0.8575 | 0.9260 |
| No log | 5.7143 | 200 | 0.7727 | 0.6947 | 0.7727 | 0.8790 |
| No log | 5.7714 | 202 | 0.7947 | 0.6674 | 0.7947 | 0.8915 |
| No log | 5.8286 | 204 | 0.9209 | 0.6373 | 0.9209 | 0.9597 |
| No log | 5.8857 | 206 | 1.0566 | 0.6122 | 1.0566 | 1.0279 |
| No log | 5.9429 | 208 | 0.9640 | 0.6463 | 0.9640 | 0.9819 |
| No log | 6.0 | 210 | 0.8122 | 0.6719 | 0.8122 | 0.9012 |
| No log | 6.0571 | 212 | 0.7881 | 0.6924 | 0.7881 | 0.8877 |
| No log | 6.1143 | 214 | 0.8804 | 0.6246 | 0.8804 | 0.9383 |
| No log | 6.1714 | 216 | 0.9973 | 0.5694 | 0.9973 | 0.9986 |
| No log | 6.2286 | 218 | 0.9968 | 0.5608 | 0.9968 | 0.9984 |
| No log | 6.2857 | 220 | 0.8269 | 0.6476 | 0.8269 | 0.9093 |
| No log | 6.3429 | 222 | 0.7584 | 0.6718 | 0.7584 | 0.8708 |
| No log | 6.4 | 224 | 0.6644 | 0.6765 | 0.6644 | 0.8151 |
| No log | 6.4571 | 226 | 0.6248 | 0.6804 | 0.6248 | 0.7905 |
| No log | 6.5143 | 228 | 0.7130 | 0.6969 | 0.7130 | 0.8444 |
| No log | 6.5714 | 230 | 0.9473 | 0.6137 | 0.9473 | 0.9733 |
| No log | 6.6286 | 232 | 0.9530 | 0.6280 | 0.9530 | 0.9762 |
| No log | 6.6857 | 234 | 0.7536 | 0.7182 | 0.7536 | 0.8681 |
| No log | 6.7429 | 236 | 0.7443 | 0.7166 | 0.7443 | 0.8627 |
| No log | 6.8 | 238 | 0.8401 | 0.6413 | 0.8401 | 0.9166 |
| No log | 6.8571 | 240 | 0.9515 | 0.6228 | 0.9515 | 0.9754 |
| No log | 6.9143 | 242 | 0.8301 | 0.6694 | 0.8301 | 0.9111 |
| No log | 6.9714 | 244 | 0.7428 | 0.6982 | 0.7428 | 0.8619 |
| No log | 7.0286 | 246 | 0.8660 | 0.6178 | 0.8660 | 0.9306 |
| No log | 7.0857 | 248 | 0.9069 | 0.5964 | 0.9069 | 0.9523 |
| No log | 7.1429 | 250 | 0.9427 | 0.6036 | 0.9427 | 0.9709 |
| No log | 7.2 | 252 | 0.8597 | 0.6288 | 0.8597 | 0.9272 |
| No log | 7.2571 | 254 | 0.8272 | 0.6288 | 0.8272 | 0.9095 |
| No log | 7.3143 | 256 | 0.9297 | 0.6086 | 0.9297 | 0.9642 |
| No log | 7.3714 | 258 | 0.8701 | 0.6293 | 0.8701 | 0.9328 |
| No log | 7.4286 | 260 | 0.7681 | 0.6944 | 0.7681 | 0.8764 |
| No log | 7.4857 | 262 | 0.7007 | 0.7263 | 0.7007 | 0.8371 |
| No log | 7.5429 | 264 | 0.6556 | 0.7099 | 0.6556 | 0.8097 |
| No log | 7.6 | 266 | 0.6917 | 0.7006 | 0.6917 | 0.8317 |
| No log | 7.6571 | 268 | 0.8437 | 0.6470 | 0.8437 | 0.9185 |
| No log | 7.7143 | 270 | 0.8517 | 0.6517 | 0.8517 | 0.9229 |
| No log | 7.7714 | 272 | 0.7675 | 0.7046 | 0.7675 | 0.8761 |
| No log | 7.8286 | 274 | 0.7646 | 0.7054 | 0.7646 | 0.8744 |
| No log | 7.8857 | 276 | 0.7847 | 0.6628 | 0.7847 | 0.8858 |
| No log | 7.9429 | 278 | 0.8038 | 0.6628 | 0.8038 | 0.8965 |
| No log | 8.0 | 280 | 0.7983 | 0.6527 | 0.7983 | 0.8935 |
| No log | 8.0571 | 282 | 0.7692 | 0.6788 | 0.7692 | 0.8770 |
| No log | 8.1143 | 284 | 0.7306 | 0.6965 | 0.7306 | 0.8548 |
| No log | 8.1714 | 286 | 0.7067 | 0.6590 | 0.7067 | 0.8406 |
| No log | 8.2286 | 288 | 0.7578 | 0.6885 | 0.7578 | 0.8705 |
| No log | 8.2857 | 290 | 0.8385 | 0.6512 | 0.8385 | 0.9157 |
| No log | 8.3429 | 292 | 0.9729 | 0.5842 | 0.9729 | 0.9864 |
| No log | 8.4 | 294 | 0.8527 | 0.6420 | 0.8527 | 0.9234 |
| No log | 8.4571 | 296 | 0.7301 | 0.6967 | 0.7301 | 0.8545 |
| No log | 8.5143 | 298 | 0.6768 | 0.6643 | 0.6768 | 0.8227 |
| No log | 8.5714 | 300 | 0.6640 | 0.6516 | 0.6640 | 0.8149 |
| No log | 8.6286 | 302 | 0.6898 | 0.6664 | 0.6898 | 0.8306 |
| No log | 8.6857 | 304 | 0.8909 | 0.6159 | 0.8909 | 0.9439 |
| No log | 8.7429 | 306 | 1.1835 | 0.4966 | 1.1835 | 1.0879 |
| No log | 8.8 | 308 | 1.2569 | 0.4960 | 1.2569 | 1.1211 |
| No log | 8.8571 | 310 | 1.1361 | 0.5189 | 1.1361 | 1.0659 |
| No log | 8.9143 | 312 | 1.0565 | 0.5399 | 1.0565 | 1.0279 |
| No log | 8.9714 | 314 | 0.8676 | 0.6551 | 0.8676 | 0.9315 |
| No log | 9.0286 | 316 | 0.7602 | 0.6899 | 0.7602 | 0.8719 |
| No log | 9.0857 | 318 | 0.7732 | 0.6707 | 0.7732 | 0.8793 |
| No log | 9.1429 | 320 | 0.8802 | 0.5899 | 0.8802 | 0.9382 |
| No log | 9.2 | 322 | 1.0857 | 0.5209 | 1.0857 | 1.0419 |
| No log | 9.2571 | 324 | 1.1553 | 0.5265 | 1.1553 | 1.0748 |
| No log | 9.3143 | 326 | 1.0771 | 0.5189 | 1.0771 | 1.0378 |
| No log | 9.3714 | 328 | 1.0802 | 0.5209 | 1.0802 | 1.0393 |
| No log | 9.4286 | 330 | 0.9214 | 0.5783 | 0.9214 | 0.9599 |
| No log | 9.4857 | 332 | 0.8058 | 0.6211 | 0.8058 | 0.8977 |
| No log | 9.5429 | 334 | 0.7409 | 0.6878 | 0.7409 | 0.8607 |
| No log | 9.6 | 336 | 0.7973 | 0.6509 | 0.7973 | 0.8929 |
| No log | 9.6571 | 338 | 0.9476 | 0.5867 | 0.9476 | 0.9735 |
| No log | 9.7143 | 340 | 0.8953 | 0.5806 | 0.8953 | 0.9462 |
| No log | 9.7714 | 342 | 0.7900 | 0.6858 | 0.7900 | 0.8888 |
| No log | 9.8286 | 344 | 0.7239 | 0.6774 | 0.7239 | 0.8508 |
| No log | 9.8857 | 346 | 0.7151 | 0.6791 | 0.7151 | 0.8457 |
| No log | 9.9429 | 348 | 0.7722 | 0.6819 | 0.7722 | 0.8787 |
| No log | 10.0 | 350 | 0.9402 | 0.6286 | 0.9402 | 0.9696 |
| No log | 10.0571 | 352 | 1.0030 | 0.6179 | 1.0030 | 1.0015 |
| No log | 10.1143 | 354 | 0.8716 | 0.6648 | 0.8716 | 0.9336 |
| No log | 10.1714 | 356 | 0.7326 | 0.7014 | 0.7326 | 0.8559 |
| No log | 10.2286 | 358 | 0.7226 | 0.6905 | 0.7226 | 0.8501 |
| No log | 10.2857 | 360 | 0.7907 | 0.6880 | 0.7907 | 0.8892 |
| No log | 10.3429 | 362 | 0.9463 | 0.5667 | 0.9463 | 0.9728 |
| No log | 10.4 | 364 | 1.0349 | 0.5381 | 1.0349 | 1.0173 |
| No log | 10.4571 | 366 | 0.9543 | 0.5932 | 0.9543 | 0.9769 |
| No log | 10.5143 | 368 | 0.8436 | 0.6396 | 0.8436 | 0.9185 |
| No log | 10.5714 | 370 | 0.8043 | 0.6596 | 0.8043 | 0.8968 |
| No log | 10.6286 | 372 | 0.7737 | 0.7008 | 0.7737 | 0.8796 |
| No log | 10.6857 | 374 | 0.8094 | 0.6618 | 0.8094 | 0.8996 |
| No log | 10.7429 | 376 | 0.7938 | 0.6666 | 0.7938 | 0.8909 |
| No log | 10.8 | 378 | 0.7080 | 0.7159 | 0.7080 | 0.8414 |
| No log | 10.8571 | 380 | 0.6661 | 0.6835 | 0.6661 | 0.8162 |
| No log | 10.9143 | 382 | 0.6601 | 0.6835 | 0.6601 | 0.8125 |
| No log | 10.9714 | 384 | 0.6558 | 0.6619 | 0.6558 | 0.8098 |
| No log | 11.0286 | 386 | 0.6875 | 0.7120 | 0.6875 | 0.8291 |
| No log | 11.0857 | 388 | 0.7488 | 0.6509 | 0.7488 | 0.8653 |
| No log | 11.1429 | 390 | 0.8062 | 0.6638 | 0.8062 | 0.8979 |
| No log | 11.2 | 392 | 0.7661 | 0.6570 | 0.7661 | 0.8753 |
| No log | 11.2571 | 394 | 0.7196 | 0.7086 | 0.7196 | 0.8483 |
| No log | 11.3143 | 396 | 0.7788 | 0.6609 | 0.7788 | 0.8825 |
| No log | 11.3714 | 398 | 0.8422 | 0.6489 | 0.8422 | 0.9177 |
| No log | 11.4286 | 400 | 0.9245 | 0.6167 | 0.9245 | 0.9615 |
| No log | 11.4857 | 402 | 0.8394 | 0.6395 | 0.8394 | 0.9162 |
| No log | 11.5429 | 404 | 0.8095 | 0.6505 | 0.8095 | 0.8997 |
| No log | 11.6 | 406 | 0.7775 | 0.6481 | 0.7775 | 0.8818 |
| No log | 11.6571 | 408 | 0.8326 | 0.6582 | 0.8326 | 0.9125 |
| No log | 11.7143 | 410 | 0.7396 | 0.7004 | 0.7396 | 0.8600 |
| No log | 11.7714 | 412 | 0.6857 | 0.7117 | 0.6857 | 0.8280 |
| No log | 11.8286 | 414 | 0.7590 | 0.7058 | 0.7590 | 0.8712 |
| No log | 11.8857 | 416 | 0.9415 | 0.6233 | 0.9415 | 0.9703 |
| No log | 11.9429 | 418 | 1.0937 | 0.6062 | 1.0937 | 1.0458 |
| No log | 12.0 | 420 | 1.0981 | 0.5797 | 1.0981 | 1.0479 |
| No log | 12.0571 | 422 | 1.0543 | 0.5637 | 1.0543 | 1.0268 |
| No log | 12.1143 | 424 | 1.0139 | 0.5377 | 1.0139 | 1.0069 |
| No log | 12.1714 | 426 | 0.9686 | 0.5629 | 0.9686 | 0.9842 |
| No log | 12.2286 | 428 | 1.0128 | 0.5138 | 1.0128 | 1.0064 |
| No log | 12.2857 | 430 | 1.0498 | 0.5128 | 1.0498 | 1.0246 |
| No log | 12.3429 | 432 | 0.8833 | 0.6268 | 0.8833 | 0.9398 |
| No log | 12.4 | 434 | 0.7097 | 0.6949 | 0.7097 | 0.8424 |
| No log | 12.4571 | 436 | 0.6709 | 0.6879 | 0.6709 | 0.8191 |
| No log | 12.5143 | 438 | 0.6722 | 0.6978 | 0.6722 | 0.8198 |
| No log | 12.5714 | 440 | 0.7612 | 0.7090 | 0.7612 | 0.8725 |
| No log | 12.6286 | 442 | 0.9488 | 0.6221 | 0.9488 | 0.9741 |
| No log | 12.6857 | 444 | 0.9146 | 0.6440 | 0.9146 | 0.9563 |
| No log | 12.7429 | 446 | 0.7522 | 0.6964 | 0.7522 | 0.8673 |
| No log | 12.8 | 448 | 0.6023 | 0.6598 | 0.6023 | 0.7761 |
| No log | 12.8571 | 450 | 0.6168 | 0.6413 | 0.6168 | 0.7854 |
| No log | 12.9143 | 452 | 0.6096 | 0.6681 | 0.6096 | 0.7808 |
| No log | 12.9714 | 454 | 0.6104 | 0.6865 | 0.6104 | 0.7813 |
| No log | 13.0286 | 456 | 0.6612 | 0.7128 | 0.6612 | 0.8131 |
| No log | 13.0857 | 458 | 0.6447 | 0.7036 | 0.6447 | 0.8029 |
| No log | 13.1429 | 460 | 0.6339 | 0.7079 | 0.6339 | 0.7962 |
| No log | 13.2 | 462 | 0.6410 | 0.6216 | 0.6410 | 0.8006 |
| No log | 13.2571 | 464 | 0.6306 | 0.6425 | 0.6306 | 0.7941 |
| No log | 13.3143 | 466 | 0.6213 | 0.6835 | 0.6213 | 0.7882 |
| No log | 13.3714 | 468 | 0.6497 | 0.7204 | 0.6497 | 0.8061 |
| No log | 13.4286 | 470 | 0.7284 | 0.6667 | 0.7284 | 0.8535 |
| No log | 13.4857 | 472 | 0.7837 | 0.6530 | 0.7837 | 0.8853 |
| No log | 13.5429 | 474 | 0.7644 | 0.6530 | 0.7644 | 0.8743 |
| No log | 13.6 | 476 | 0.7373 | 0.6657 | 0.7373 | 0.8586 |
| No log | 13.6571 | 478 | 0.7246 | 0.6857 | 0.7246 | 0.8512 |
| No log | 13.7143 | 480 | 0.6883 | 0.6970 | 0.6883 | 0.8297 |
| No log | 13.7714 | 482 | 0.6774 | 0.7007 | 0.6774 | 0.8230 |
| No log | 13.8286 | 484 | 0.7164 | 0.7116 | 0.7164 | 0.8464 |
| No log | 13.8857 | 486 | 0.7260 | 0.7116 | 0.7260 | 0.8521 |
| No log | 13.9429 | 488 | 0.7837 | 0.6726 | 0.7837 | 0.8852 |
| No log | 14.0 | 490 | 0.8038 | 0.6715 | 0.8038 | 0.8965 |
| No log | 14.0571 | 492 | 0.8259 | 0.6629 | 0.8259 | 0.9088 |
| No log | 14.1143 | 494 | 0.7970 | 0.6973 | 0.7970 | 0.8928 |
| No log | 14.1714 | 496 | 0.7024 | 0.7072 | 0.7024 | 0.8381 |
| No log | 14.2286 | 498 | 0.7095 | 0.6971 | 0.7095 | 0.8423 |
| 0.3637 | 14.2857 | 500 | 0.7234 | 0.6954 | 0.7234 | 0.8505 |
| 0.3637 | 14.3429 | 502 | 0.7055 | 0.7079 | 0.7055 | 0.8399 |
| 0.3637 | 14.4 | 504 | 0.7237 | 0.7030 | 0.7237 | 0.8507 |
| 0.3637 | 14.4571 | 506 | 0.7712 | 0.6960 | 0.7712 | 0.8782 |
| 0.3637 | 14.5143 | 508 | 0.7300 | 0.6960 | 0.7300 | 0.8544 |
| 0.3637 | 14.5714 | 510 | 0.6766 | 0.7263 | 0.6766 | 0.8226 |
| 0.3637 | 14.6286 | 512 | 0.6835 | 0.7247 | 0.6835 | 0.8268 |
| 0.3637 | 14.6857 | 514 | 0.7758 | 0.6851 | 0.7758 | 0.8808 |
| 0.3637 | 14.7429 | 516 | 0.8551 | 0.6773 | 0.8551 | 0.9247 |
| 0.3637 | 14.8 | 518 | 0.8200 | 0.6915 | 0.8200 | 0.9055 |
| 0.3637 | 14.8571 | 520 | 0.7410 | 0.6865 | 0.7410 | 0.8608 |
| 0.3637 | 14.9143 | 522 | 0.6627 | 0.6997 | 0.6627 | 0.8140 |
| 0.3637 | 14.9714 | 524 | 0.6722 | 0.7049 | 0.6722 | 0.8199 |
| 0.3637 | 15.0286 | 526 | 0.7373 | 0.6982 | 0.7373 | 0.8587 |
| 0.3637 | 15.0857 | 528 | 0.8253 | 0.7017 | 0.8253 | 0.9085 |
| 0.3637 | 15.1429 | 530 | 0.8180 | 0.6814 | 0.8180 | 0.9044 |
| 0.3637 | 15.2 | 532 | 0.7457 | 0.6816 | 0.7457 | 0.8636 |
| 0.3637 | 15.2571 | 534 | 0.7131 | 0.7012 | 0.7131 | 0.8445 |
| 0.3637 | 15.3143 | 536 | 0.7075 | 0.7152 | 0.7075 | 0.8411 |
| 0.3637 | 15.3714 | 538 | 0.6597 | 0.7086 | 0.6597 | 0.8122 |
| 0.3637 | 15.4286 | 540 | 0.6856 | 0.7224 | 0.6856 | 0.8280 |
| 0.3637 | 15.4857 | 542 | 0.7557 | 0.6901 | 0.7557 | 0.8693 |
| 0.3637 | 15.5429 | 544 | 0.9779 | 0.6161 | 0.9779 | 0.9889 |
| 0.3637 | 15.6 | 546 | 1.1443 | 0.5428 | 1.1443 | 1.0697 |
| 0.3637 | 15.6571 | 548 | 1.1605 | 0.5411 | 1.1605 | 1.0773 |
| 0.3637 | 15.7143 | 550 | 1.0657 | 0.5355 | 1.0657 | 1.0323 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
texanrangee/ca8a279b-d498-45c6-b1c3-e8a347ae9c4a | texanrangee | "2025-03-15T12:05:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-15T10:08:43Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Manuel221097/dije | Manuel221097 | "2024-04-11T14:06:37Z" | 0 | 0 | null | [
"license:cc-by-nc-sa-2.0",
"region:us"
] | null | "2024-04-11T14:06:36Z" | ---
license: cc-by-nc-sa-2.0
---
|
AndyJamesTurner/suicideDetector | AndyJamesTurner | "2024-04-17T13:49:57Z" | 0 | 0 | sklearn | [
"sklearn",
"skops",
"text-classification",
"license:mit",
"region:us"
] | text-classification | "2024-04-12T10:08:45Z" | ---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- text-classification
model_format: pickle
model_file: model.pkl
---
# Model description
Suicide Detection text classification model.
PYTHON 3.10 ONLY
## Training Procedure
Trained using 0.7 of the Suicide and Depression Detection dataset (https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch).
The model vectorises each text with a trained TF-IDF vectorizer and then classifies with XGBoost, as sketched below.
See main.py for further details.
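A minimal sketch of how such a pipeline could be assembled (illustrative only; the real `preprocessor` and data loading live in main.py):

```python
# Illustrative tfidf + xgboost pipeline; the actual preprocessor and
# training-data handling are defined in main.py.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

def preprocessor(text: str) -> str:
    # Placeholder for the real text cleaning in main.py.
    return text.lower()

clf = Pipeline(
    steps=[
        ("tfidf", TfidfVectorizer(min_df=100, ngram_range=(1, 3),
                                  preprocessor=preprocessor)),
        ("classifier", XGBClassifier()),
    ],
    verbose=True,
)
# clf.fit(train_texts, train_labels)  # fit on the 0.7 training split
```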
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('tfidf', TfidfVectorizer(min_df=100, ngram_range=(1, 3),<br /> preprocessor=<function preprocessor at 0x7f8d443a30a0>)), ('classifier', XGBClassifier(base_score=None, booster=None, callbacks=None,<br /> colsample_bylevel=None, colsample_bynode=None,<br /> colsample_bytree=None, device=None, early_stopping_rounds=None,<br /> enable_categorical=False, eval_metric=None, feature_types=None,<br /> gamma=None, grow_policy=None, importance_type=None,<br /> interaction_constraints=None, learning_rate=None, max_bin=None,<br /> max_cat_threshold=None, max_cat_to_onehot=None,<br /> max_delta_step=None, max_depth=None, max_leaves=None,<br /> min_child_weight=None, missing=nan, monotone_constraints=None,<br /> multi_strategy=None, n_estimators=None, n_jobs=None,<br /> num_parallel_tree=None, random_state=None, ...))] |
| verbose | True |
| tfidf | TfidfVectorizer(min_df=100, ngram_range=(1, 3),<br /> preprocessor=<function preprocessor at 0x7f8d443a30a0>) |
| classifier | XGBClassifier(base_score=None, booster=None, callbacks=None,<br /> colsample_bylevel=None, colsample_bynode=None,<br /> colsample_bytree=None, device=None, early_stopping_rounds=None,<br /> enable_categorical=False, eval_metric=None, feature_types=None,<br /> gamma=None, grow_policy=None, importance_type=None,<br /> interaction_constraints=None, learning_rate=None, max_bin=None,<br /> max_cat_threshold=None, max_cat_to_onehot=None,<br /> max_delta_step=None, max_depth=None, max_leaves=None,<br /> min_child_weight=None, missing=nan, monotone_constraints=None,<br /> multi_strategy=None, n_estimators=None, n_jobs=None,<br /> num_parallel_tree=None, random_state=None, ...) |
| tfidf__analyzer | word |
| tfidf__binary | False |
| tfidf__decode_error | strict |
| tfidf__dtype | <class 'numpy.float64'> |
| tfidf__encoding | utf-8 |
| tfidf__input | content |
| tfidf__lowercase | True |
| tfidf__max_df | 1.0 |
| tfidf__max_features | |
| tfidf__min_df | 100 |
| tfidf__ngram_range | (1, 3) |
| tfidf__norm | l2 |
| tfidf__preprocessor | <function preprocessor at 0x7f8d443a30a0> |
| tfidf__smooth_idf | True |
| tfidf__stop_words | |
| tfidf__strip_accents | |
| tfidf__sublinear_tf | False |
| tfidf__token_pattern | (?u)\b\w\w+\b |
| tfidf__tokenizer | |
| tfidf__use_idf | True |
| tfidf__vocabulary | |
| classifier__objective | binary:logistic |
| classifier__base_score | |
| classifier__booster | |
| classifier__callbacks | |
| classifier__colsample_bylevel | |
| classifier__colsample_bynode | |
| classifier__colsample_bytree | |
| classifier__device | |
| classifier__early_stopping_rounds | |
| classifier__enable_categorical | False |
| classifier__eval_metric | |
| classifier__feature_types | |
| classifier__gamma | |
| classifier__grow_policy | |
| classifier__importance_type | |
| classifier__interaction_constraints | |
| classifier__learning_rate | |
| classifier__max_bin | |
| classifier__max_cat_threshold | |
| classifier__max_cat_to_onehot | |
| classifier__max_delta_step | |
| classifier__max_depth | |
| classifier__max_leaves | |
| classifier__min_child_weight | |
| classifier__missing | nan |
| classifier__monotone_constraints | |
| classifier__multi_strategy | |
| classifier__n_estimators | |
| classifier__n_jobs | |
| classifier__num_parallel_tree | |
| classifier__random_state | |
| classifier__reg_alpha | |
| classifier__reg_lambda | |
| classifier__sampling_method | |
| classifier__scale_pos_weight | |
| classifier__subsample | |
| classifier__tree_method | |
| classifier__validate_parameters | |
| classifier__verbosity | |
</details>
### Model Plot
The fitted estimator is a two-step scikit-learn `Pipeline`: a `TfidfVectorizer(min_df=100, ngram_range=(1, 3))` with a custom `preprocessor` function, followed by an `XGBClassifier` left at its default parameters (the pipeline is constructed with `verbose=True`).
## Evaluation Results
| Metric | Value |
|----------|----------|
| accuracy | 0.910317 |
| f1 score | 0.910317 |
| ROC AUC | 0.969008 |
# How to Get Started with the Model
```python
import sklearn
import dill as pickle  # dill (a pickle extension) handles the pipeline's custom preprocessor function
from skops import hub_utils
from pathlib import Path

# Download the model repository from the Hugging Face Hub
suicide_detector_repo = Path("./suicide-detector")
hub_utils.download(
    repo_id="AndyJamesTurner/suicideDetector",
    dst=suicide_detector_repo
)

# Load the fitted tfidf + xgboost pipeline
with open(suicide_detector_repo / "model.pkl", 'rb') as file:
    clf = pickle.load(file)

classification = clf.predict(["I want to kill myself"])[0]
```
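Since the final pipeline step is an `XGBClassifier`, the loaded `clf` should also expose class probabilities; a hedged follow-up to the block above:

```python
# Batch prediction with per-class probabilities (uses clf loaded above;
# the column order of predict_proba follows clf.classes_).
texts = ["I want to kill myself", "What a lovely day"]
print(clf.predict(texts))        # predicted labels
print(clf.classes_)              # label order for the probability columns
print(clf.predict_proba(texts))  # shape: (len(texts), 2)
```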
# Model Evaluation
The model was evaluated on a 0.3 holdout split using f1 score, accuracy, confusion matrix and ROC curves.
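A hedged sketch of how these metrics could be reproduced (the `X_test`/`y_test` variables below are illustrative names for the holdout split, not taken from main.py):

```python
# Illustrative metric computation on the 0.3 holdout split; X_test and
# y_test are hypothetical names for the held-out texts and labels.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score

y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]  # probability of clf.classes_[1]

print("accuracy:", accuracy_score(y_test, y_pred))
print("f1 score:", f1_score(y_test, y_pred, average="weighted"))
print("ROC AUC :", roc_auc_score(y_test, y_score))
print(confusion_matrix(y_test, y_pred))
```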
## Confusion matrix

## ROC Curve

# Classification Report
| index | precision | recall | f1-score | support |
|--------------|-------------|----------|------------|--------------|
| not suicide | 0.891721 | 0.934126 | 0.912431 | 34824 |
| suicide | 0.930785 | 0.886491 | 0.908098 | 34799 |
| accuracy     | 0.910317    | 0.910317 | 0.910317   | 69623        |
| macro avg | 0.911253 | 0.910308 | 0.910265 | 69623 |
| weighted avg | 0.911246 | 0.910317 | 0.910265 | 69623 |
# Model Authors
This model was created by the following authors:
* Andy Turner
|
punchnami/resnet50-pothole-classification | punchnami | "2024-02-18T00:31:38Z" | 28 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-18T00:20:01Z" | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: output_resnet
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6705298013245033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_resnet
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4783
- Accuracy: 0.6705
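As a hedged usage sketch (assuming the checkpoint is public on the Hub; the image path is illustrative):

```python
# Minimal inference example with the transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="punchnami/resnet50-pothole-classification",
)
print(classifier("road.jpg"))  # list of {label, score} dicts
```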
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cpu
- Datasets 2.17.0
- Tokenizers 0.15.1
|
mradermacher/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune-i1-GGUF | mradermacher | "2025-03-02T20:38:38Z" | 57 | 0 | transformers | [
"transformers",
"gguf",
"medit-lite",
"model-pruning",
"text-generation",
"pl",
"en",
"base_model:meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune",
"base_model:quantized:meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-03-01T19:10:03Z" | ---
base_model: meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune
language:
- pl
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- medit-lite
- model-pruning
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
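As one hedged option among many GGUF runtimes, the files can also be loaded from Python via llama-cpp-python (the file name below is illustrative):

```python
# Minimal sketch with llama-cpp-python; any GGUF-capable runtime works.
from llama_cpp import Llama

llm = Llama(model_path="MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune.i1-Q4_K_S.gguf")
out = llm("Napisz krótkie powitanie.", max_tokens=64)  # "Write a short greeting."
print(out["choices"][0]["text"])
```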
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune-i1-GGUF/resolve/main/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune.i1-Q2_K.gguf) | i1-Q2_K | 5.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune-i1-GGUF/resolve/main/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune-i1-GGUF/resolve/main/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune.i1-IQ3_M.gguf) | i1-IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune-i1-GGUF/resolve/main/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.0 | optimal size/speed/quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
devasheeshG/whisper_medium_fp16_transformers | devasheeshG | "2023-07-11T21:09:33Z" | 107 | 2 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"speech",
"wav2vec2",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-07-02T11:04:37Z" | ---
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- pytorch
- audio
- speech
- automatic-speech-recognition
- whisper
- wav2vec2
model-index:
- name: whisper_medium_fp16_transformers
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
type: librispeech_asr
name: LibriSpeech (clean)
config: clean
split: test
args:
language: en
metrics:
- type: wer
value: 0
name: Test WER
description: Word Error Rate
- type: mer
value: 0
name: Test MER
description: Match Error Rate
- type: wil
value: 0
name: Test WIL
description: Word Information Lost
- type: wip
value: 0
name: Test WIP
description: Word Information Preserved
- type: cer
value: 0
name: Test CER
description: Character Error Rate
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
type: librispeech_asr
name: LibriSpeech (other)
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 0
name: Test WER
description: Word Error Rate
- type: mer
value: 0
name: Test MER
description: Match Error Rate
- type: wil
value: 0
name: Test WIL
description: Word Information Lost
- type: wip
value: 0
name: Test WIP
description: Word Information Preserved
- type: cer
value: 0
name: Test CER
description: Character Error Rate
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
type: mozilla-foundation/common_voice_14_0
name: Common Voice (14.0) (Hindi)
config: hi
split: test
args:
language: hi
metrics:
- type: wer
value: 54.97
name: Test WER
description: Word Error Rate
- type: mer
value: 47.86
name: Test MER
description: Match Error Rate
- type: wil
value: 66.83
name: Test WIL
description: Word Information Lost
- type: wip
value: 33.16
name: Test WIP
description: Word Information Preserved
- type: cer
value: 30.23
name: Test CER
description: Character Error Rate
widget:
- example_title: Hinglish Sample
src: https://huggingface.co/devasheeshG/whisper_medium_fp16_transformers/resolve/main/test.wav
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
---
## Versions:
- CUDA: 12.1
- cuDNN Version: 8.9.2.26_1.0-1_amd64
- tensorflow Version: 2.12.0
- torch Version: 2.1.0.dev20230606+cu12135
- transformers Version: 4.30.2
- accelerate Version: 0.20.3
## Model Benchmarks:
- RAM: 2.8 GB (Original_Model: 5.5 GB)
- VRAM: 1812 MB (Original_Model: 6 GB)
- test.wav: 23 s (multilingual speech, i.e. English + Hindi)
- **Time in seconds for processing by each device**
| Device Name | float32 (Original) | float16 | CudaCores | TensorCores |
| ----------------- | ------------------ | ------- | --------- | ----------- |
| 3060 | 1.7 | 1.1 | 3,584 | 112 |
| 1660 Super | OOM | 3.3 | 1,408 | N/A |
| Colab (Tesla T4)  | 2.8                | 2.2     | 2,560     | 320         |
| Colab (CPU)       | 35                 | N/A     | N/A       | N/A         |
| M1 (CPU) | - | - | - | - |
| M1 (GPU -> 'mps') | - | - | - | - |
- **NOTE: TensorCores are efficient in mixed-precision calculations**
- **NOTE: torch.float16 is not supported on CPU (AMD Ryzen 5 3600 or Colab CPU)**
- Punctuation: True
## Model Error Benchmarks:
- **WER: Word Error Rate**
- **MER: Match Error Rate**
- **WIL: Word Information Lost**
- **WIP: Word Information Preserved**
- **CER: Character Error Rate**
### Hindi to Hindi (test.tsv) [Common Voice 14.0](https://commonvoice.mozilla.org/en/datasets)
**Test done on RTX 3060 on 2557 Samples**
| | WER | MER | WIL | WIP | CER |
| ----------------------- | ----- | ----- | ----- | ----- | ----- |
| Original_Model (54 min) | 52.02 | 47.86 | 66.82 | 33.17 | 23.76 |
| This_Model (38 min) | 54.97 | 47.86 | 66.83 | 33.16 | 30.23 |
### Hindi to English (test.csv) [Custom Dataset](https://huggingface.co/datasets/devasheeshG/common_voices_14_0_hi2en_hi2hi)
**Test done on RTX 3060 on 1000 Samples**
| | WER | MER | WIL | WIP | CER |
| ----------------------- | --- | --- | --- | --- | --- |
| Original_Model (30 min) | - | - | - | - | - |
| This_Model (20 min) | - | - | - | - | - |
### English ([LibriSpeech](https://huggingface.co/datasets/librispeech_asr) -> test-clean)
**Test done on RTX 3060 on __ Samples**
| | WER | MER | WIL | WIP | CER |
| -------------- | --- | --- | --- | --- | --- |
| Original_Model | - | - | - | - | - |
| This_Model | - | - | - | - | - |
### English ([LibriSpeech](https://huggingface.co/datasets/librispeech_asr) -> test-other)
**Test done on RTX 3060 on __ Samples**
| | WER | MER | WIL | WIP | CER |
| -------------- | --- | --- | --- | --- | --- |
| Original_Model | - | - | - | - | - |
| This_Model | - | - | - | - | - |
- **'jiwer' library is used for calculations**
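A hedged sketch of the jiwer calls behind these numbers (the reference/hypothesis strings are illustrative):

```python
# Error-rate metrics with jiwer, as used for the benchmarks above.
import jiwer

reference = "this is a short multilingual test"
hypothesis = "this is short multi lingual test"

print("WER:", jiwer.wer(reference, hypothesis))
print("MER:", jiwer.mer(reference, hypothesis))
print("WIL:", jiwer.wil(reference, hypothesis))
print("WIP:", jiwer.wip(reference, hypothesis))
print("CER:", jiwer.cer(reference, hypothesis))
```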
## Code for conversion:
- ### [Will soon be uploaded to GitHub](https://github.com/devasheeshG)
## Usage
This repo contains an ``__init__.py`` file with all the code needed to use the model.
First, clone this repo and place all the files inside a folder.
### Make sure you have git-lfs installed (https://git-lfs.com)
```bash
git lfs install
git clone https://huggingface.co/devasheeshG/whisper_medium_fp16_transformers
```
**Please try it in a Jupyter notebook**
```python
# Import the Model
from whisper_medium_fp16_transformers import Model, load_audio, pad_or_trim
```
```python
# Initilise the model
model = Model(
model_name_or_path='whisper_medium_fp16_transformers',
cuda_visible_device="0",
device='cuda',
)
```
```python
# Load Audio
audio = load_audio('whisper_medium_fp16_transformers/test.wav')
audio = pad_or_trim(audio)
```
```python
# Transcribe (First transcription takes time)
model.transcribe(audio)
```
## Credits
It is the fp16 version of ``openai/whisper-medium``.
|
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0 | anas-awadalla | "2022-02-26T05:24:05Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
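As a hedged usage sketch (assuming the checkpoint is public on the Hub; the question and context are illustrative):

```python
# Minimal extractive QA example with the transformers pipeline API.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0",
)
print(qa(question="What dataset was the model fine-tuned on?",
         context="The model was fine-tuned on 256 examples from the SQuAD dataset."))
```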
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|